Reduced order surrogate modelling (ROSM) of high dimensional deterministic simulations
NASA Astrophysics Data System (ADS)
Mitry, Mina
Often, computationally expensive engineering simulations can impede the engineering design process. As a result, designers may turn to a less computationally demanding approximate, or surrogate, model to facilitate their design process. However, owing to the curse of dimensionality, classical surrogate models become too computationally expensive for high dimensional data. To address this limitation of classical methods, we develop linear and non-linear Reduced Order Surrogate Modelling (ROSM) techniques. Two algorithms are presented, which are based on a combination of linear/kernel principal component analysis and radial basis functions. These algorithms are applied to subsonic and transonic aerodynamic data, as well as a model for a chemical spill in a channel. The results of this thesis show that ROSM can provide a significant computational benefit over classical surrogate modelling, sometimes at the expense of a minor loss in accuracy.
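As a rough illustration of the linear ROSM idea described above (not the thesis code; the array shapes, the mode count, and the choice of scipy's RBFInterpolator are assumptions), PCA compresses the high-dimensional outputs and an RBF interpolant maps design inputs to the retained modal coefficients:

```python
# Minimal linear PCA + RBF surrogate sketch; illustrative only.
import numpy as np
from scipy.interpolate import RBFInterpolator

def fit_rosm(X_train, Y_train, n_modes=5):
    """X_train: (n_samples, n_inputs); Y_train: (n_samples, n_outputs)."""
    Y_mean = Y_train.mean(axis=0)
    # PCA of the output snapshots via SVD of the centered data matrix
    _, _, Vt = np.linalg.svd(Y_train - Y_mean, full_matrices=False)
    modes = Vt[:n_modes]                      # (n_modes, n_outputs)
    coeffs = (Y_train - Y_mean) @ modes.T     # reduced coordinates per sample
    rbf = RBFInterpolator(X_train, coeffs)    # inputs -> reduced coordinates
    return Y_mean, modes, rbf

def predict_rosm(model, X_new):
    Y_mean, modes, rbf = model
    return Y_mean + rbf(X_new) @ modes        # lift back to the full output space
```

The kernel variant would replace the linear SVD step with kernel PCA, which introduces a pre-image problem when reconstructing full-space outputs.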
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mueller, Juliane
MISO is an optimization framework for solving computationally expensive mixed-integer, black-box, global optimization problems. MISO uses surrogate models to approximate the computationally expensive objective function. Hence, derivative information, which is generally unavailable for black-box simulation objective functions, is not needed. MISO allows the user to choose the initial experimental design strategy, the type of surrogate model, and the sampling strategy.
Large-scale expensive black-box function optimization
NASA Astrophysics Data System (ADS)
Rashid, Kashif; Bailey, William; Couët, Benoît
2012-09-01
This paper presents the application of an adaptive radial basis function method to a computationally expensive black-box reservoir simulation model of many variables. An iterative proxy-based scheme is used to tune the control variables, distributed for finer control over a varying number of intervals covering the total simulation period, to maximize asset NPV. The method shows that large-scale simulation-based function optimization of several hundred variables is practical and effective.
Using Reconstructed POD Modes as Turbulent Inflow for LES Wind Turbine Simulations
NASA Astrophysics Data System (ADS)
Nielson, Jordan; Bhaganagar, Kiran; Juttijudata, Vejapong; Sirisup, Sirod
2016-11-01
Currently, in order to get realistic atmospheric effects of turbulence, wind turbine LES simulations require computationally expensive precursor simulations. At times, the precursor simulation is more computationally expensive than the wind turbine simulation. The precursor simulations are important because they capture turbulence in the atmosphere and, as stated above, turbulence impacts the power production estimate. On the other hand, POD analysis has been shown to be capable of capturing turbulent structures. The current study was performed to determine the plausibility of using lower-dimensional models from POD analysis of LES simulations as turbulent inflow to wind turbine LES simulations. The study will aid the wind energy community by lowering the computational cost of full-scale wind turbine LES simulations, while maintaining a high level of turbulent information and being able to quickly apply the turbulent inflow to multi-turbine wind farms. This will be done by comparing a pure LES precursor wind turbine simulation with simulations that use reduced POD mode inflow conditions. The study shows the feasibility of using lower-dimensional models as turbulent inflow for LES wind turbine simulations. Overall, the power production estimate and the velocity field of the wind turbine wake are well captured, with small errors.
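For context, the POD modes in question can be obtained from a snapshot SVD; a minimal sketch (assuming snapshots stored as a points-by-times array; not the authors' code):

```python
# Snapshot POD via SVD and low-rank reconstruction of an inflow plane; illustrative only.
import numpy as np

def pod_modes(snapshots, n_modes):
    """snapshots: (n_points, n_times) velocity samples from the precursor LES."""
    mean_field = snapshots.mean(axis=1, keepdims=True)
    U, s, Vt = np.linalg.svd(snapshots - mean_field, full_matrices=False)
    spatial_modes = U[:, :n_modes]                         # most energetic structures
    temporal_coeffs = np.diag(s[:n_modes]) @ Vt[:n_modes]  # their time histories
    return mean_field, spatial_modes, temporal_coeffs

def reconstruct_inflow(mean_field, spatial_modes, temporal_coeffs, t_index):
    # Reduced-order inflow plane at one time instant
    return mean_field[:, 0] + spatial_modes @ temporal_coeffs[:, t_index]
```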
Space-filling designs for computer experiments: A review
Joseph, V. Roshan
2016-01-29
Improving the quality of a product/process using a computer simulator is a much less expensive option than real physical testing. However, simulation using computationally intensive computer models can be time-consuming and, therefore, directly doing the optimization on the computer simulator can be infeasible. Experimental design and statistical modeling techniques can be used to overcome this problem. This article reviews experimental designs known as space-filling designs that are suitable for computer simulations. In the review, special emphasis is given to a recently developed space-filling design called the maximum projection design. Furthermore, its advantages are illustrated using a simulation conducted for optimizing a milling process.
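As a concrete, simpler example of a space-filling design, the sketch below generates Latin hypercube candidates and keeps the one with the best maximin distance; the maximum projection design highlighted in the review optimizes a different, projection-based criterion:

```python
# Maximin Latin hypercube design in [0, 1]^d via crude random restarts; illustrative only.
import numpy as np
from scipy.stats import qmc
from scipy.spatial.distance import pdist

dim, n_runs = 4, 20
best_design, best_score = None, -np.inf
for seed in range(50):
    design = qmc.LatinHypercube(d=dim, seed=seed).random(n=n_runs)
    score = pdist(design).min()        # maximin criterion: larger is more space-filling
    if score > best_score:
        best_design, best_score = design, score
print(best_score, best_design.shape)
```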
NASA Astrophysics Data System (ADS)
Miao, Linling; Young, Charles D.; Sing, Charles E.
2017-07-01
Brownian Dynamics (BD) simulations are a standard tool for understanding the dynamics of polymers in and out of equilibrium. Quantitative comparison can be made to rheological measurements of dilute polymer solutions, as well as direct visual observations of fluorescently labeled DNA. The primary computational challenge with BD is the expensive calculation of hydrodynamic interactions (HI), which are necessary to capture physically realistic dynamics. The full HI calculation, performed via a Cholesky decomposition every time step, scales with the length of the polymer as O(N^3). This limits the calculation to a few hundred simulated particles. A number of approximations in the literature can lower this scaling to O(N^2)-O(N^2.25), and explicit solvent methods scale as O(N); however, both incur a significant constant per-time-step computational cost. Despite this progress, there remains a need for new or alternative methods of calculating hydrodynamic interactions; large polymer chains or semidilute polymer solutions remain computationally expensive. In this paper, we introduce an alternative method for calculating approximate hydrodynamic interactions. Our method relies on an iterative scheme to establish self-consistency between a hydrodynamic matrix that is averaged over simulation and the hydrodynamic matrix used to run the simulation. Comparison to standard BD simulation and polymer theory results demonstrates that this method quantitatively captures both equilibrium and steady-state dynamics after only a few iterations. The use of an averaged hydrodynamic matrix allows the computationally expensive Brownian noise calculation to be performed infrequently, so that it is no longer the bottleneck of the simulation calculations. We also investigate limitations of this conformational averaging approach in ring polymers.
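The cost the authors target appears in the standard BD update with hydrodynamic interactions (written here in generic notation, not necessarily that of the paper):

\[
\Delta \mathbf{r} = \frac{\mathbf{D}\,\mathbf{F}}{k_B T}\,\Delta t + \sqrt{2\,\Delta t}\;\mathbf{B}\,\boldsymbol{\xi},
\qquad \mathbf{D} = \mathbf{B}\,\mathbf{B}^{\mathsf{T}},
\]

where D is the 3N x 3N diffusion tensor encoding HI, F the interparticle forces, and ξ a vector of independent standard normal variates. Obtaining B by Cholesky decomposition every step is the O(N^3) bottleneck; replacing the instantaneous D with a conformationally averaged matrix lets that decomposition be reused across many steps.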
Use of off-the-shelf PC-based flight simulators for aviation human factors research.
DOT National Transportation Integrated Search
1996-04-01
Flight simulation has historically been an expensive proposition, particularly if out-the-window views were desired. Advances in computer technology have allowed a modular, off-the-shelf flight simulation (based on 80486 processors or Pentiums) to be...
DOT National Transportation Integrated Search
2013-01-01
The simulator was once a very expensive, large-scale mechanical device for training military pilots or astronauts. Modern computers, linking sophisticated software and large-screen displays, have yielded simulators for the desktop or configured as sm...
NASA Astrophysics Data System (ADS)
Philip, Sajeev; Martin, Randall V.; Keller, Christoph A.
2016-05-01
Chemistry-transport models involve considerable computational expense. Fine temporal resolution offers accuracy at the expense of computation time. Assessment is needed of the sensitivity of simulation accuracy to the duration of chemical and transport operators. We conduct a series of simulations with the GEOS-Chem chemistry-transport model at different temporal and spatial resolutions to examine the sensitivity of simulated atmospheric composition to operator duration. Subsequently, we compare the species simulated with operator durations from 10 to 60 min as typically used by global chemistry-transport models, and identify the operator durations that optimize both computational expense and simulation accuracy. We find that longer continuous transport operator duration increases concentrations of emitted species such as nitrogen oxides and carbon monoxide since a more homogeneous distribution reduces loss through chemical reactions and dry deposition. The increased concentrations of ozone precursors increase ozone production with longer transport operator duration. Longer chemical operator duration decreases sulfate and ammonium but increases nitrate due to feedbacks with in-cloud sulfur dioxide oxidation and aerosol thermodynamics. The simulation duration decreases by up to a factor of 5 from fine (5 min) to coarse (60 min) operator duration. We assess the change in simulation accuracy with resolution by comparing the root mean square difference in ground-level concentrations of nitrogen oxides, secondary inorganic aerosols, ozone and carbon monoxide with a finer temporal or spatial resolution taken as "truth". Relative simulation error for these species increases by more than a factor of 5 from the shortest (5 min) to longest (60 min) operator duration. Chemical operator duration twice that of the transport operator duration offers more simulation accuracy per unit computation. However, the relative simulation error from coarser spatial resolution generally exceeds that from longer operator duration; e.g., degrading from 2° × 2.5° to 4° × 5° increases error by an order of magnitude. We recommend prioritizing fine spatial resolution before considering different operator durations in offline chemistry-transport models. We encourage chemistry-transport model users to specify in publications the durations of operators due to their effects on simulation accuracy.
NASA Astrophysics Data System (ADS)
Philip, S.; Martin, R. V.; Keller, C. A.
2015-11-01
Chemical transport models involve considerable computational expense. Fine temporal resolution offers accuracy at the expense of computation time. Assessment is needed of the sensitivity of simulation accuracy to the duration of chemical and transport operators. We conduct a series of simulations with the GEOS-Chem chemical transport model at different temporal and spatial resolutions to examine the sensitivity of simulated atmospheric composition to temporal resolution. Subsequently, we compare the tracers simulated with operator durations from 10 to 60 min as typically used by global chemical transport models, and identify the timesteps that optimize both computational expense and simulation accuracy. We found that longer transport timesteps increase concentrations of emitted species such as nitrogen oxides and carbon monoxide since a more homogeneous distribution reduces loss through chemical reactions and dry deposition. The increased concentrations of ozone precursors increase ozone production at longer transport timesteps. Longer chemical timesteps decrease sulfate and ammonium but increase nitrate due to feedbacks with in-cloud sulfur dioxide oxidation and aerosol thermodynamics. The simulation duration decreases by an order of magnitude from fine (5 min) to coarse (60 min) temporal resolution. We assess the change in simulation accuracy with resolution by comparing the root mean square difference in ground-level concentrations of nitrogen oxides, ozone, carbon monoxide and secondary inorganic aerosols with a finer temporal or spatial resolution taken as truth. Simulation error for these species increases by more than a factor of 5 from the shortest (5 min) to longest (60 min) temporal resolution. Chemical timesteps twice that of the transport timestep offer more simulation accuracy per unit computation. However, simulation error from coarser spatial resolution generally exceeds that from longer timesteps; e.g. degrading from 2° × 2.5° to 4° × 5° increases error by an order of magnitude. We recommend prioritizing fine spatial resolution before considering different temporal resolutions in offline chemical transport models. We encourage the chemical transport model users to specify in publications the durations of operators due to their effects on simulation accuracy.
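The operator-splitting loop being tuned in these two studies can be summarized schematically as follows (a toy sketch; the operator functions are placeholders, not GEOS-Chem code):

```python
# Sequential operator splitting with a chemical step spanning two transport steps,
# the ratio the studies find most cost-effective; illustrative placeholders only.
def advance(state, n_steps, dt_transport=20 * 60, chem_every=2,
            transport=lambda s, dt: s, chemistry=lambda s, dt: s):
    for step in range(1, n_steps + 1):
        state = transport(state, dt_transport)                   # advection/mixing operator
        if step % chem_every == 0:
            state = chemistry(state, chem_every * dt_transport)  # chemistry operator
    return state
```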
Improving the Aircraft Design Process Using Web-Based Modeling and Simulation
NASA Technical Reports Server (NTRS)
Reed, John A.; Follen, Gregory J.; Afjeh, Abdollah A.; Follen, Gregory J. (Technical Monitor)
2000-01-01
Designing and developing new aircraft systems is time-consuming and expensive. Computational simulation is a promising means for reducing design cycle times, but requires a flexible software environment capable of integrating advanced multidisciplinary and multifidelity analysis methods, dynamically managing data across heterogeneous computing platforms, and distributing computationally complex tasks. Web-based simulation, with its emphasis on collaborative composition of simulation models, distributed heterogeneous execution, and dynamic multimedia documentation, has the potential to meet these requirements. This paper outlines the current aircraft design process, highlighting its problems and complexities, and presents our vision of an aircraft design process using Web-based modeling and simulation.
Improving the Aircraft Design Process Using Web-based Modeling and Simulation
NASA Technical Reports Server (NTRS)
Reed, John A.; Follen, Gregory J.; Afjeh, Abdollah A.
2003-01-01
Designing and developing new aircraft systems is time-consuming and expensive. Computational simulation is a promising means for reducing design cycle times, but requires a flexible software environment capable of integrating advanced multidisciplinary and multifidelity analysis methods, dynamically managing data across heterogeneous computing platforms, and distributing computationally complex tasks. Web-based simulation, with its emphasis on collaborative composition of simulation models, distributed heterogeneous execution, and dynamic multimedia documentation, has the potential to meet these requirements. This paper outlines the current aircraft design process, highlighting its problems and complexities, and presents our vision of an aircraft design process using Web-based modeling and simulation.
Reinforce Networking Theory with OPNET Simulation
ERIC Educational Resources Information Center
Guo, Jinhua; Xiang, Weidong; Wang, Shengquan
2007-01-01
As networking systems have become more complex and expensive, hands-on experiments based on networking simulation have become essential for teaching the key computer networking topics to students. The simulation approach is the most cost-effective and highly useful because it provides a virtual environment for an assortment of desirable features…
A meta-analysis of outcomes from the use of computer-simulated experiments in science education
NASA Astrophysics Data System (ADS)
Lejeune, John Van
The purpose of this study was to synthesize the findings from existing research on the effects of computer-simulated experiments on students in science education. Results from 40 reports were integrated by the process of meta-analysis to examine the effect of computer-simulated experiments and interactive videodisc simulations on student achievement and attitudes. Findings indicated significant positive differences in both low-level and high-level achievement of students who used computer-simulated experiments and interactive videodisc simulations as compared to students who used more traditional learning activities. No significant differences were found in retention, student attitudes toward the subject, or attitudes toward the educational method. Based on the findings of this study, computer-simulated experiments and interactive videodisc simulations should be used to enhance students' learning in science, especially in cases where the use of traditional laboratory activities is expensive, dangerous, or impractical.
Molecular dynamics simulations and applications in computational toxicology and nanotoxicology.
Selvaraj, Chandrabose; Sakkiah, Sugunadevi; Tong, Weida; Hong, Huixiao
2018-02-01
Nanotoxicology studies the toxicity of nanomaterials and has been widely applied in biomedical research to explore the toxicity of various biological systems. Investigating biological systems through in vivo and in vitro methods is expensive and time-consuming. Therefore, computational toxicology, a multidisciplinary field that utilizes computational power and algorithms to examine the toxicology of biological systems, has attracted growing interest from scientists. Molecular dynamics (MD) simulations of biomolecules such as proteins and DNA are popular in computational toxicology for understanding interactions between biological systems and chemicals. In this paper, we review MD simulation methods, protocols for running MD simulations, and their applications in studies of toxicity and nanotechnology. We also briefly summarize some popular software tools for executing MD simulations. Published by Elsevier Ltd.
Adjoint-Based Aerodynamic Design of Complex Aerospace Configurations
NASA Technical Reports Server (NTRS)
Nielsen, Eric J.
2016-01-01
An overview of twenty years of adjoint-based aerodynamic design research at NASA Langley Research Center is presented. Adjoint-based algorithms provide a powerful tool for efficient sensitivity analysis of complex large-scale computational fluid dynamics (CFD) simulations. Unlike alternative approaches for which computational expense generally scales with the number of design parameters, adjoint techniques yield sensitivity derivatives of a simulation output with respect to all input parameters at the cost of a single additional simulation. With modern large-scale CFD applications often requiring millions of compute hours for a single analysis, the efficiency afforded by adjoint methods is critical in realizing a computationally tractable design optimization capability for such applications.
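The efficiency claim rests on the standard discrete-adjoint identity (generic notation, not reproduced from the overview): for an output J(u, x) constrained by the flow residual R(u, x) = 0,

\[
\frac{\mathrm{d}J}{\mathrm{d}x} = \frac{\partial J}{\partial x} - \boldsymbol{\lambda}^{\mathsf{T}} \frac{\partial R}{\partial x},
\qquad
\left(\frac{\partial R}{\partial u}\right)^{\!\mathsf{T}} \boldsymbol{\lambda} = \left(\frac{\partial J}{\partial u}\right)^{\!\mathsf{T}},
\]

so a single adjoint solve for λ, roughly as expensive as one flow solve, yields sensitivities with respect to every design variable in x.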
Paliwal, Himanshu; Shirts, Michael R
2013-11-12
Multistate reweighting methods such as the multistate Bennett acceptance ratio (MBAR) can predict free energies and expectation values of thermodynamic observables at poorly sampled or unsampled thermodynamic states using simulations performed at only a few sampled states combined with single-point energy reevaluations of these samples at the unsampled states. In this study, we demonstrate the power of this general reweighting formalism by exploring the effect of simulation parameters controlling Coulomb and Lennard-Jones cutoffs on free energy calculations and other observables. Using multistate reweighting, we can quickly identify, with very high sensitivity, the computationally least expensive nonbonded parameters required to obtain a specified accuracy in observables compared to the answer obtained using an expensive "gold standard" set of parameters. We specifically examine free energy estimates of three molecular transformations in a benchmark molecular set as well as the enthalpy of vaporization of TIP3P. The results demonstrate the power of this multistate reweighting approach for measuring changes in free energy differences or other estimators with respect to simulation or model parameters with very high precision and/or very low computational effort. The results also help to identify which simulation parameters affect free energy calculations and provide guidance to determine which simulation parameters are both appropriate and computationally efficient in general.
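For reference, the MBAR estimator underlying this study solves the self-consistent equations (standard form from Shirts and Chodera, 2008):

\[
\hat{f}_i = -\ln \sum_{j=1}^{K} \sum_{n=1}^{N_j}
\frac{\exp\!\left[-u_i(x_{jn})\right]}
{\sum_{k=1}^{K} N_k \exp\!\left[\hat{f}_k - u_k(x_{jn})\right]},
\]

where u_i is the reduced potential of state i, N_j the number of samples drawn from state j, and the f_i are dimensionless free energies. Once solved, the same weights give expectations of any observable at sampled or unsampled states, which is what allows cutoff settings to be scanned from only a handful of simulations.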
BioNetFit: a fitting tool compatible with BioNetGen, NFsim and distributed computing environments
Thomas, Brandon R.; Chylek, Lily A.; Colvin, Joshua; ...
2015-11-09
Rule-based models are analyzed with specialized simulators, such as those provided by the BioNetGen and NFsim open-source software packages. Here we present BioNetFit, a general-purpose fitting tool that is compatible with BioNetGen and NFsim. BioNetFit is designed to take advantage of distributed computing resources. This feature facilitates fitting (i.e., optimization of parameter values for consistency with data) when simulations are computationally expensive.
High-Fidelity Simulations of Electromagnetic Propagation and RF Communication Systems
2017-05-01
In addition to high-fidelity RF propagation modeling, lower-fidelity models, which are less computationally burdensome, are available via a C++ API. ... expensive to perform, requiring roughly one hour of computer time with 36 available cores and ray tracing performed by a single high-end GPU.
A Machine Learning Method for the Prediction of Receptor Activation in the Simulation of Synapses
Montes, Jesus; Gomez, Elena; Merchán-Pérez, Angel; DeFelipe, Javier; Peña, Jose-Maria
2013-01-01
Chemical synaptic transmission involves the release of a neurotransmitter that diffuses in the extracellular space and interacts with specific receptors located on the postsynaptic membrane. Computer simulation approaches provide fundamental tools for exploring various aspects of the synaptic transmission under different conditions. In particular, Monte Carlo methods can track the stochastic movements of neurotransmitter molecules and their interactions with other discrete molecules, the receptors. However, these methods are computationally expensive, even when used with simplified models, preventing their use in large-scale and multi-scale simulations of complex neuronal systems that may involve large numbers of synaptic connections. We have developed a machine-learning based method that can accurately predict relevant aspects of the behavior of synapses, such as the percentage of open synaptic receptors as a function of time since the release of the neurotransmitter, with considerably lower computational cost compared with the conventional Monte Carlo alternative. The method is designed to learn patterns and general principles from a corpus of previously generated Monte Carlo simulations of synapses covering a wide range of structural and functional characteristics. These patterns are later used as a predictive model of the behavior of synapses under different conditions without the need for additional computationally expensive Monte Carlo simulations. This is performed in five stages: data sampling, fold creation, machine learning, validation and curve fitting. The resulting procedure is accurate, automatic, and it is general enough to predict synapse behavior under experimental conditions that are different to the ones it has been trained on. Since our method efficiently reproduces the results that can be obtained with Monte Carlo simulations at a considerably lower computational cost, it is suitable for the simulation of high numbers of synapses and it is therefore an excellent tool for multi-scale simulations. PMID:23894367
Toe, Kyaw Kyar; Huang, Weimin; Yang, Tao; Duan, Yuping; Zhou, Jiayin; Su, Yi; Teo, Soo-Kng; Kumar, Selvaraj Senthil; Lim, Calvin Chi-Wan; Chui, Chee Kong; Chang, Stephen
2015-08-01
This work presents a surgical training system that incorporates cutting of soft tissue, simulated with a modified pre-computed linear elastic model in the Simulation Open Framework Architecture (SOFA) environment. A pre-computed linear elastic model for simulating soft tissue deformation involves computing the compliance matrix a priori based on the topological information of the mesh. While this process may require a few minutes to several hours, depending on the number of vertices in the mesh, it needs to be performed only once and allows real-time computation of the subsequent soft tissue deformation. However, as the compliance matrix is based on the initial topology of the mesh, it does not allow any topological changes during simulation, such as cutting or tearing of the mesh. This work proposes a way to modify the pre-computed data by correcting the topological connectivity in the compliance matrix, without re-computing the compliance matrix, which is computationally expensive.
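The precomputation being modified is the linear-elastic compliance relation (standard form, not specific to this paper):

\[
\mathbf{u} = \mathbf{C}\,\mathbf{f}, \qquad \mathbf{C} = \mathbf{K}^{-1},
\]

where K is the stiffness matrix assembled from the initial mesh, f the applied nodal forces, and u the resulting displacements. Forming C is the slow offline step; run-time deformation is only a matrix-vector product, but because C encodes the original mesh connectivity, a cut must be handled by correcting entries of C rather than by a full recomputation.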
CRITTERS! A Realistic Simulation for Teaching Evolutionary Biology
ERIC Educational Resources Information Center
Latham, Luke G., II; Scully, Erik P.
2008-01-01
Evolutionary processes can be studied in nature and in the laboratory, but time and financial constraints result in few opportunities for undergraduate and high school students to explore the agents of genetic change in populations. One alternative to time consuming and expensive teaching laboratories is the use of computer simulations. We…
Building an intelligent tutoring system for procedural domains
NASA Technical Reports Server (NTRS)
Warinner, Andrew; Barbee, Diann; Brandt, Larry; Chen, Tom; Maguire, John
1990-01-01
Jobs that require complex skills that are too expensive or dangerous to develop often use simulators in training. The strength of a simulator is its ability to mimic the 'real world', allowing students to explore and experiment. A good simulation helps the student develop a 'mental model' of the real world. The closer the simulation is to 'real life', the less difficulties there are transferring skills and mental models developed on the simulator to the real job. As graphics workstations increase in power and become more affordable they become attractive candidates for developing computer-based simulations for use in training. Computer based simulations can make training more interesting and accessible to the student.
Methods of sound simulation and applications in flight simulators
NASA Technical Reports Server (NTRS)
Gaertner, K. P.
1980-01-01
An overview of methods for electronically synthesizing sounds is presented. A given amount of hardware and computer capacity places an upper limit on the degree and fidelity of realism of sound simulation which is attainable. Good sound realism for aircraft simulators can be especially expensive because of the complexity of flight sounds and their changing patterns through time. Nevertheless, the flight simulator developed at the Research Institute for Human Engineering, West Germany, shows that it is possible to design an inexpensive sound simulator with the required acoustic properties using analog computer elements. The characteristics of the sub-sound elements produced by this sound simulator for take-off, cruise and approach are discussed.
Multidisciplinary propulsion simulation using the numerical propulsion system simulator (NPSS)
NASA Technical Reports Server (NTRS)
Claus, Russel W.
1994-01-01
Implementing new technology in aerospace propulsion systems is becoming prohibitively expensive. One of the major contributors to the high cost is the need to perform many large-scale system tests. The traditional design analysis procedure decomposes the engine into isolated components and focuses attention on each single physical discipline (e.g., fluid or structural dynamics). Consequently, the interactions that naturally occur between components and disciplines can be masked by the limited interactions that occur between the individuals or teams doing the design, and must be uncovered during expensive engine testing. This overview will discuss a cooperative effort of NASA, industry, and universities to integrate disciplines, components, and high-performance computing into the Numerical Propulsion System Simulator (NPSS).
Advanced computational simulations of water waves interacting with wave energy converters
NASA Astrophysics Data System (ADS)
Pathak, Ashish; Freniere, Cole; Raessi, Mehdi
2017-03-01
Wave energy converter (WEC) devices harness the renewable ocean wave energy and convert it into useful forms of energy, e.g. mechanical or electrical. This paper presents an advanced 3D computational framework to study the interaction between water waves and WEC devices. The computational tool solves the full Navier-Stokes equations and considers all important effects impacting the device performance. To enable large-scale simulations in fast turnaround times, the computational solver was developed in an MPI parallel framework. A fast multigrid preconditioned solver is introduced to solve the computationally expensive pressure Poisson equation. The computational solver was applied to two surface-piercing WEC geometries: bottom-hinged cylinder and flap. Their numerically simulated response was validated against experimental data. Additional simulations were conducted to investigate the applicability of Froude scaling in predicting full-scale WEC response from the model experiments.
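The Froude-scaling question examined in the last sentence uses the standard relations for a geometric scale ratio λ = L_full / L_model at matched Froude number and equal fluid density (general relations, not results from the paper):

\[
U_{\mathrm{full}} = \sqrt{\lambda}\,U_{\mathrm{model}}, \qquad
T_{\mathrm{full}} = \sqrt{\lambda}\,T_{\mathrm{model}}, \qquad
F_{\mathrm{full}} = \lambda^{3} F_{\mathrm{model}}, \qquad
P_{\mathrm{full}} = \lambda^{3.5} P_{\mathrm{model}}.
\]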
A glacier runoff extension to the Precipitation Runoff Modeling System
A. E. Van Beusekom; R. J. Viger
2016-01-01
A module to simulate glacier runoff, PRMSglacier, was added to PRMS (Precipitation Runoff Modeling System), a distributed-parameter, physical-process hydrological simulation code. The extension does not require extensive on-glacier measurements or computational expense but still relies on physical principles over empirical relations as much as is feasible while...
Simulating the fate of fall- and spring-applied poultry litter nitrogen in corn production
USDA-ARS?s Scientific Manuscript database
Monitoring the fate of N derived from manures applied to fertilize crops is difficult, time consuming, and relatively expensive. But computer simulation models can help understand the interactions among various N processes in the soil-plant system and determine the fate of applied N. The RZWQM2 was ...
Signal decomposition for surrogate modeling of a constrained ultrasonic design space
NASA Astrophysics Data System (ADS)
Homa, Laura; Sparkman, Daniel; Wertz, John; Welter, John; Aldrin, John C.
2018-04-01
The U.S. Air Force seeks to improve the methods and measures by which the lifecycle of composite structures is managed. Nondestructive evaluation of damage - particularly internal damage resulting from impact - represents a significant input to that improvement. Conventional ultrasound can detect this damage; however, full 3D characterization has not been demonstrated. A proposed approach for robust characterization uses model-based inversion through fitting of simulated results to experimental data. One challenge with this approach is the high computational expense of the forward model to simulate the ultrasonic B-scans for each damage scenario. A potential solution is to construct a surrogate model using a subset of simulated ultrasonic scans built using a highly accurate, computationally expensive forward model. However, the dimensionality of these simulated B-scans makes interpolating between them a difficult and potentially infeasible problem. Thus, we propose using the chirplet decomposition to reduce the dimensionality of the data and allow for interpolation in the chirplet parameter space. By applying the chirplet decomposition, we are able to extract the salient features in the data and construct a surrogate forward model.
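A commonly used Gaussian chirplet atom gives a sense of the low-dimensional parameterization involved (the exact form adopted in the paper may differ):

\[
g(t) = A \exp\!\left[-\frac{(t - t_c)^2}{2\sigma^2}\right]
\cos\!\left(\omega_c (t - t_c) + \tfrac{1}{2}\, c\,(t - t_c)^2 + \phi\right),
\]

so each A-scan is summarized by a few atoms with amplitude A, arrival time t_c, duration σ, center frequency ω_c, chirp rate c, and phase φ, and the surrogate interpolates these parameters rather than raw waveforms.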
Susan Hummel; Maureen Kennedy; E. Ashley Steel
2012-01-01
Given that resource managers rely on computer simulation models when it is difficult or expensive to obtain vital information directly, it is important to evaluate how well a particular model satisfies applications for which it is designed. The Forest Vegetation Simulator (FVS) is used widely for forest management in the US, and its scope and complexity continue to...
Numerical Experiments with a Turbulent Single-Mode Rayleigh-Taylor Instability
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cloutman, L.D.
2000-04-01
Direct numerical simulation is a powerful tool for studying turbulent flows. Unfortunately, it is also computationally expensive and often beyond the reach of the largest, fastest computers. Consequently, a variety of turbulence models have been devised to allow tractable and affordable simulations of averaged flow fields. Unfortunately, these present a variety of practical difficulties, including the incorporation of varying degrees of empiricism and phenomenology, which leads to a lack of universality. This unsatisfactory state of affairs has led to the speculation that one can avoid the expense and bother of using a turbulence model by relying on the grid and numerical diffusion of the computational fluid dynamics algorithm to introduce a spectral cutoff on the flow field and to provide dissipation at the grid scale, thereby mimicking two main effects of a large eddy simulation model. This paper shows numerical examples of a single-mode Rayleigh-Taylor instability in which this procedure produces questionable results. We then show a dramatic improvement when two simple subgrid-scale models are employed. This study also illustrates the extreme sensitivity to initial conditions that is a common feature of turbulent flows.
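As one example of a simple subgrid-scale closure of the kind referred to (the two models actually used in the report may differ), the classical Smagorinsky model sets the eddy viscosity from the resolved strain rate:

\[
\nu_t = (C_s \Delta)^2 \,\lvert \bar{S} \rvert, \qquad
\lvert \bar{S} \rvert = \sqrt{2\,\bar{S}_{ij}\bar{S}_{ij}},
\]

with filter width Δ and a constant C_s typically in the range 0.1-0.2.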
A glacier runoff extension to the Precipitation Runoff Modeling System
Van Beusekom, Ashley E.; Viger, Roland
2016-01-01
A module to simulate glacier runoff, PRMSglacier, was added to PRMS (Precipitation Runoff Modeling System), a distributed-parameter, physical-process hydrological simulation code. The extension does not require extensive on-glacier measurements or computational expense but still relies on physical principles over empirical relations as much as is feasible while maintaining model usability. PRMSglacier is validated on two basins in Alaska, Wolverine, and Gulkana Glacier basin, which have been studied since 1966 and have a substantial amount of data with which to test model performance over a long period of time covering a wide range of climatic and hydrologic conditions. When error in field measurements is considered, the Nash-Sutcliffe efficiencies of streamflow are 0.87 and 0.86, the absolute bias fractions of the winter mass balance simulations are 0.10 and 0.08, and the absolute bias fractions of the summer mass balances are 0.01 and 0.03, all computed over 42 years for the Wolverine and Gulkana Glacier basins, respectively. Without taking into account measurement error, the values are still within the range achieved by the more computationally expensive codes tested over shorter time periods.
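For readers unfamiliar with the metric, the Nash-Sutcliffe efficiency quoted above is

\[
\mathrm{NSE} = 1 - \frac{\sum_t \left(Q_{\mathrm{obs},t} - Q_{\mathrm{sim},t}\right)^2}
{\sum_t \left(Q_{\mathrm{obs},t} - \bar{Q}_{\mathrm{obs}}\right)^2},
\]

where Q denotes streamflow; NSE = 1 is a perfect match and NSE = 0 indicates skill no better than predicting the observed mean.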
NASA Technical Reports Server (NTRS)
Carter, Richard G.
1989-01-01
For optimization problems associated with engineering design, parameter estimation, image reconstruction, and other optimization/simulation applications, low-accuracy function and gradient values are frequently much less expensive to obtain than high-accuracy values. Here, researchers investigate the computational performance of trust region methods for nonlinear optimization when high-accuracy evaluations are unavailable or prohibitively expensive, and confirm earlier theoretical predictions that the algorithm remains convergent even with relative gradient errors of 0.5 or more. The proper choice of the amount of accuracy to use in function and gradient evaluations can result in orders-of-magnitude savings in computational cost.
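The algorithm family in question is the standard trust-region iteration (generic statement, not the report's exact formulation): each step p_k approximately solves

\[
\min_{\lVert p \rVert \le \Delta_k} \; m_k(p) = f_k + g_k^{\mathsf{T}} p + \tfrac{1}{2}\, p^{\mathsf{T}} B_k\, p,
\qquad
\rho_k = \frac{f(x_k) - f(x_k + p_k)}{m_k(0) - m_k(p_k)},
\]

with the radius Δ_k grown or shrunk according to the agreement ratio ρ_k; the cited result concerns how much error can be tolerated in the gradient g_k while preserving convergence.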
SIMULATING ATMOSPHERIC EXPOSURE USING AN INNOVATIVE METEOROLOGICAL SAMPLING SCHEME
Multimedia risk assessments require the temporal integration of atmospheric concentration and deposition estimates with other media modules. However, providing an extended time series of estimates is computationally expensive. An alternative approach is to substitute long-ter...
Faculty Flow in a Medical School: A Policy Simulator. AIR Forum 1979 Paper.
ERIC Educational Resources Information Center
Kutina, Kenneth L.; Bruss, Edward A.
A computer-based simulation model is described that can be used in an interactive mode to analyze the effects of alternative hiring, promotion, tenure granting, retirement, and salary policies on faculty size, distribution, and aggregate salary expense. The model was designed to be adequately flexible and comprehensive to incorporate the array of…
Surrogates for numerical simulations; optimization of eddy-promoter heat exchangers
NASA Technical Reports Server (NTRS)
Patera, Anthony T.
1993-01-01
Although the advent of fast and inexpensive parallel computers has rendered numerous previously intractable calculations feasible, many numerical simulations remain too resource-intensive to be directly inserted in engineering optimization efforts. An attractive alternative to direct insertion considers models for computational systems: the expensive simulation is invoked only to construct and validate a simplified, input-output model; this simplified input-output model then serves as a simulation surrogate in subsequent engineering optimization studies. A simple 'Bayesian-validated' statistical framework for the construction, validation, and purposive application of static computer simulation surrogates is presented. As an example, dissipation-transport optimization of laminar-flow eddy-promoter heat exchangers is considered: parallel spectral element Navier-Stokes calculations serve to construct and validate surrogates for the flowrate and Nusselt number; these surrogates then represent the originating Navier-Stokes equations in the ensuing design process.
Finite Element Simulation of Articular Contact Mechanics with Quadratic Tetrahedral Elements
Maas, Steve A.; Ellis, Benjamin J.; Rawlins, David S.; Weiss, Jeffrey A.
2016-01-01
Although it is easier to generate finite element discretizations with tetrahedral elements, trilinear hexahedral (HEX8) elements are more often used in simulations of articular contact mechanics. This is due to numerical shortcomings of linear tetrahedral (TET4) elements, limited availability of quadratic tetrahedron elements in combination with effective contact algorithms, and the perceived increased computational expense of quadratic finite elements. In this study we implemented both ten-node (TET10) and fifteen-node (TET15) quadratic tetrahedral elements in FEBio (www.febio.org) and compared their accuracy, robustness in terms of convergence behavior and computational cost for simulations relevant to articular contact mechanics. Suitable volume integration and surface integration rules were determined by comparing the results of several benchmark contact problems. The results demonstrated that the surface integration rule used to evaluate the contact integrals for quadratic elements affected both convergence behavior and accuracy of predicted stresses. The computational expense and robustness of both quadratic tetrahedral formulations compared favorably to the HEX8 models. Of note, the TET15 element demonstrated superior convergence behavior and lower computational cost than both the TET10 and HEX8 elements for meshes with similar numbers of degrees of freedom in the contact problems that we examined. Finally, the excellent accuracy and relative efficiency of these quadratic tetrahedral elements was illustrated by comparing their predictions with those for a HEX8 mesh for simulation of articular contact in a fully validated model of the hip. These results demonstrate that TET10 and TET15 elements provide viable alternatives to HEX8 elements for simulation of articular contact mechanics. PMID:26900037
Simulation tools for robotics research and assessment
NASA Astrophysics Data System (ADS)
Fields, MaryAnne; Brewer, Ralph; Edge, Harris L.; Pusey, Jason L.; Weller, Ed; Patel, Dilip G.; DiBerardino, Charles A.
2016-05-01
The Robotics Collaborative Technology Alliance (RCTA) program focuses on four overlapping technology areas: Perception, Intelligence, Human-Robot Interaction (HRI), and Dexterous Manipulation and Unique Mobility (DMUM). In addition, the RCTA program has a requirement to assess progress of this research in standalone as well as integrated form. Since the research is evolving and the robotic platforms with unique mobility and dexterous manipulation are in the early development stage and very expensive, an alternate approach is needed for efficient assessment. Simulation of robotic systems, platforms, sensors, and algorithms, is an attractive alternative to expensive field-based testing. Simulation can provide insight during development and debugging unavailable by many other means. This paper explores the maturity of robotic simulation systems for applications to real-world problems in robotic systems research. Open source (such as Gazebo and Moby), commercial (Simulink, Actin, LMS), government (ANVEL/VANE), and the RCTA-developed RIVET simulation environments are examined with respect to their application in the robotic research domains of Perception, Intelligence, HRI, and DMUM. Tradeoffs for applications to representative problems from each domain are presented, along with known deficiencies and disadvantages. In particular, no single robotic simulation environment adequately covers the needs of the robotic researcher in all of the domains. Simulation for DMUM poses unique constraints on the development of physics-based computational models of the robot, the environment and objects within the environment, and the interactions between them. Most current robot simulations focus on quasi-static systems, but dynamic robotic motion places an increased emphasis on the accuracy of the computational models. In order to understand the interaction of dynamic multi-body systems, such as limbed robots, with the environment, it may be necessary to build component-level computational models to provide the necessary simulation fidelity for accuracy. However, the Perception domain remains the most problematic for adequate simulation performance due to the often cartoon nature of computer rendering and the inability to model realistic electromagnetic radiation effects, such as multiple reflections, in real-time.
Towards real-time photon Monte Carlo dose calculation in the cloud
NASA Astrophysics Data System (ADS)
Ziegenhein, Peter; Kozin, Igor N.; Kamerling, Cornelis Ph; Oelfke, Uwe
2017-06-01
Near real-time application of Monte Carlo (MC) dose calculation in clinic and research is hindered by the long computational runtimes of established software. Currently, fast MC software solutions are available utilising accelerators such as graphical processing units (GPUs) or clusters based on central processing units (CPUs). Both platforms are expensive in terms of purchase costs and maintenance and, in case of the GPU, provide only limited scalability. In this work we propose a cloud-based MC solution, which offers high scalability of accurate photon dose calculations. The MC simulations run on a private virtual supercomputer that is formed in the cloud. Computational resources can be provisioned dynamically at low cost without upfront investment in expensive hardware. A client-server software solution has been developed which controls the simulations and transports data to and from the cloud efficiently and securely. The client application integrates seamlessly into a treatment planning system. It runs the MC simulation workflow automatically and securely exchanges simulation data with the server side application that controls the virtual supercomputer. Advanced encryption standards were used to add an additional security layer, which encrypts and decrypts patient data on-the-fly at the processor register level. We could show that our cloud-based MC framework enables near real-time dose computation. It delivers excellent linear scaling for high-resolution datasets with absolute runtimes of 1.1 seconds to 10.9 seconds for simulating a clinical prostate and liver case up to 1% statistical uncertainty. The computation runtimes include the transportation of data to and from the cloud as well as process scheduling and synchronisation overhead. Cloud-based MC simulations offer a fast, affordable and easily accessible alternative for near real-time accurate dose calculations to currently used GPU or cluster solutions.
NASA Astrophysics Data System (ADS)
Crowell, Andrew Rippetoe
This dissertation describes model reduction techniques for the computation of aerodynamic heat flux and pressure loads for multi-disciplinary analysis of hypersonic vehicles. NASA and the Department of Defense have expressed renewed interest in the development of responsive, reusable hypersonic cruise vehicles capable of sustained high-speed flight and access to space. However, an extensive set of technical challenges have obstructed the development of such vehicles. These technical challenges are partially due to both the inability to accurately test scaled vehicles in wind tunnels and to the time intensive nature of high-fidelity computational modeling, particularly for the fluid using Computational Fluid Dynamics (CFD). The aim of this dissertation is to develop efficient and accurate models for the aerodynamic heat flux and pressure loads to replace the need for computationally expensive, high-fidelity CFD during coupled analysis. Furthermore, aerodynamic heating and pressure loads are systematically evaluated for a number of different operating conditions, including: simple two-dimensional flow over flat surfaces up to three-dimensional flows over deformed surfaces with shock-shock interaction and shock-boundary layer interaction. An additional focus of this dissertation is on the implementation and computation of results using the developed aerodynamic heating and pressure models in complex fluid-thermal-structural simulations. Model reduction is achieved using a two-pronged approach. One prong focuses on developing analytical corrections to isothermal, steady-state CFD flow solutions in order to capture flow effects associated with transient spatially-varying surface temperatures and surface pressures (e.g., surface deformation, surface vibration, shock impingements, etc.). The second prong is focused on minimizing the computational expense of computing the steady-state CFD solutions by developing an efficient surrogate CFD model. The developed two-pronged approach is found to exhibit balanced performance in terms of accuracy and computational expense, relative to several existing approaches. This approach enables CFD-based loads to be implemented into long duration fluid-thermal-structural simulations.
cosmoabc: Likelihood-free inference for cosmology
NASA Astrophysics Data System (ADS)
Ishida, Emille E. O.; Vitenti, Sandro D. P.; Penna-Lima, Mariana; Trindade, Arlindo M.; Cisewski, Jessi; de Souza, Rafael; Cameron, Ewan; Busti, Vinicius C.
2015-05-01
Approximate Bayesian Computation (ABC) enables parameter inference for complex physical systems in cases where the true likelihood function is unknown, unavailable, or computationally too expensive. It relies on the forward simulation of mock data and comparison between observed and synthetic catalogs. cosmoabc is a Python ABC sampler featuring a Population Monte Carlo variation of the original ABC algorithm, which uses an adaptive importance sampling scheme. The code can be coupled to an external simulator to allow incorporation of arbitrary distance and prior functions. When coupled with the numcosmo library, it has been used to estimate posterior probability distributions over cosmological parameters based on measurements of galaxy cluster number counts without computing the likelihood function.
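The underlying likelihood-free idea can be illustrated with a bare-bones rejection sampler (cosmoabc itself implements the more efficient Population Monte Carlo variant; the simulator, prior, and distance below are placeholders, not its API):

```python
# Rejection ABC on a toy Gaussian-mean problem; illustrative only.
import numpy as np

rng = np.random.default_rng(1)
observed = rng.normal(loc=2.0, scale=1.0, size=200)   # stand-in "observed catalog"

def simulator(theta, size=200):
    return rng.normal(loc=theta, scale=1.0, size=size)

def distance(a, b):
    return abs(a.mean() - b.mean())                   # summary-statistic distance

def abc_rejection(n_draws=5000, tol=0.05):
    accepted = []
    for _ in range(n_draws):
        theta = rng.uniform(-5.0, 5.0)                # draw from a flat prior
        if distance(simulator(theta), observed) < tol:
            accepted.append(theta)                    # keep draws whose mocks match the data
    return np.array(accepted)

posterior_samples = abc_rejection()
print(posterior_samples.mean(), posterior_samples.size)
```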
Protein free energy landscapes from long equilibrium simulations
NASA Astrophysics Data System (ADS)
Piana-Agostinetti, Stefano
Many computational techniques based on molecular dynamics (MD) simulation can be used to generate data to aid in the construction of protein free energy landscapes with atomistic detail. Unbiased, long, equilibrium MD simulations--although computationally very expensive--are particularly appealing, as they can provide direct kinetic and thermodynamic information on the transitions between the states that populate a protein free energy surface. It can be challenging to know how to analyze and interpret even results generated by this direct technique, however. I will discuss approaches we have employed, using equilibrium MD simulation data, to obtain descriptions of the free energy landscapes of proteins ranging in size from tens to thousands of amino acids.
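The direct route from such trajectories to a landscape is Boltzmann inversion of the sampled distribution (a generic relation, not specific to this talk):

\[
F(\mathbf{s}) = -k_B T \,\ln P(\mathbf{s}) + \text{const},
\]

where P(s) is the equilibrium probability of the collective coordinates s estimated by histogramming the trajectory, and rates between basins follow from counting observed transitions per unit simulation time; the interpretive challenge noted above includes choosing s and gathering enough transitions for these estimates to converge.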
NASA Technical Reports Server (NTRS)
Kleb, William L.; Wood, William A.
2004-01-01
The computational simulation community is not routinely publishing independently verifiable tests to accompany new models or algorithms. A survey reveals that only 22% of new models published are accompanied by tests suitable for independently verifying the new model. As the community develops larger codes with increased functionality, and hence increased complexity in terms of the number of building block components and their interactions, it becomes prohibitively expensive for each development group to derive the appropriate tests for each component. Therefore, the computational simulation community is building its collective castle on a very shaky foundation of components with unpublished and unrepeatable verification tests. The computational simulation community needs to begin publishing component level verification tests before the tide of complexity undermines its foundation.
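A component-level verification test can be very small; the sketch below (an invented example, not taken from the survey) checks that a central-difference routine attains its designed second-order accuracy, the kind of independently repeatable test the authors argue should accompany new components:

```python
# Order-of-accuracy verification test for a central-difference derivative; illustrative only.
import numpy as np

def central_diff(f, x, h):
    return (f(x + h) - f(x - h)) / (2.0 * h)

def test_central_diff_is_second_order():
    x = 0.7
    errors = [abs(central_diff(np.sin, x, h) - np.cos(x)) for h in (1e-2, 5e-3)]
    observed_order = np.log2(errors[0] / errors[1])   # should be close to 2
    assert abs(observed_order - 2.0) < 0.1

test_central_diff_is_second_order()
```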
NASA Technical Reports Server (NTRS)
Westra, Doug G.; West, Jeffrey S.; Richardson, Brian R.
2015-01-01
Historically, the analysis and design of liquid rocket engines (LREs) have relied on full-scale testing and one-dimensional empirical tools. The testing is extremely expensive, and the one-dimensional tools are not designed to capture the highly complex, multi-dimensional features that are inherent to LREs. Recent advances in computational fluid dynamics (CFD) tools have made it possible to predict liquid rocket engine performance and stability, to assess the effect of complex flow features, and to evaluate injector-driven thermal environments, mitigating the cost of testing. Extensive efforts to verify and validate these CFD tools have been conducted to provide confidence for using them during the design cycle. Previous validation efforts have documented comparisons of predicted heat flux thermal environments with test data for a single-element gaseous oxygen (GO2) and gaseous hydrogen (GH2) injector. The most notable was a comprehensive validation effort conducted by Tucker et al. [1], in which a number of different groups modeled a GO2/GH2 single-element configuration studied by Pal et al. [2]. The tools used for this validation comparison employed a range of algorithms, from steady and unsteady Reynolds-Averaged Navier-Stokes (U/RANS) calculations to large-eddy simulations (LES), detached-eddy simulations (DES), and various combinations. A more recent effort by Thakur et al. [3] focused on using a state-of-the-art CFD simulation tool, Loci/STREAM, on a two-dimensional grid. Loci/STREAM was chosen because it has a unique, very efficient flamelet parameterization of combustion reactions that are too computationally expensive to simulate with conventional finite-rate chemistry calculations. The current effort focuses on further advancement of validation efforts, again using the Loci/STREAM tool with the flamelet parameterization, but this time with a three-dimensional grid. Comparisons to the Pal et al. heat flux data will be made for both RANS and hybrid RANS-LES/detached-eddy simulations (DES). Computational costs will be reported, along with comparisons of accuracy and cost to much less expensive two-dimensional RANS simulations of the same geometry.
Gaussian process regression of chirplet decomposed ultrasonic B-scans of a simulated design case
NASA Astrophysics Data System (ADS)
Wertz, John; Homa, Laura; Welter, John; Sparkman, Daniel; Aldrin, John
2018-04-01
The US Air Force seeks to implement damage-tolerant lifecycle management of composite structures. Nondestructive characterization of damage is a key input to this framework. One approach to characterization is model-based inversion of the ultrasonic response from damage features; however, the computational expense of modeling the ultrasonic waves within composites is a major hurdle to implementation. A surrogate forward model with sufficient accuracy and greater computational efficiency is therefore critical to enabling model-based inversion and damage characterization. In this work, a surrogate model is developed on the simulated ultrasonic response from delamination-like structures placed at different locations within a representative composite layup. The resulting B-scans are decomposed via the chirplet transform, and a Gaussian process model is trained on the chirplet parameters. The quality of the surrogate is tested by comparing its prediction with the simulated B-scan for a delamination configuration not represented within the training data set. The estimated B-scan has a maximum error of ~15% for an estimated reduction in computational runtime of ~95% for 200 function calls. This considerable reduction in computational expense makes full 3D characterization of impact damage tractable.
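A hedged sketch of the surrogate step with scikit-learn (placeholder arrays stand in for the chirplet-decomposed training B-scans; the parameter counts are assumptions):

```python
# Gaussian process regression from delamination parameters to chirplet coefficients;
# illustrative only.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)
X_train = rng.random((40, 3))            # e.g. delamination depth, size, lateral position
Y_train = rng.random((40, 6))            # chirplet parameters per training B-scan

kernel = ConstantKernel(1.0) * RBF(length_scale=np.ones(3))
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(X_train, Y_train)                 # multi-output regression in chirplet space

X_query = np.array([[0.5, 0.2, 0.8]])    # unseen delamination configuration
chirplet_params, std = gp.predict(X_query, return_std=True)
```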
GPSS/360 computer models to simulate aircraft passenger emergency evacuations.
DOT National Transportation Integrated Search
1972-09-01
Live tests of emergency evacuation of transport aircraft are becoming increasingly expensive as the planes grow to a size seating hundreds of passengers. Repeated tests, to cope with random variations, increase these costs, as well as risks of injuri...
Learning Reverse Engineering and Simulation with Design Visualization
NASA Technical Reports Server (NTRS)
Hemsworth, Paul J.
2018-01-01
The Design Visualization (DV) group supports work at the Kennedy Space Center by utilizing metrology data with Computer-Aided Design (CAD) models and simulations to provide accurate visual representations that aid in decision-making. The capability to measure and simulate objects in real time helps to predict and avoid potential problems before they become expensive in addition to facilitating the planning of operations. I had the opportunity to work on existing and new models and simulations in support of DV and NASA’s Exploration Ground Systems (EGS).
Methods for simulation-based analysis of fluid-structure interaction.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barone, Matthew Franklin; Payne, Jeffrey L.
2005-10-01
Methods for analysis of fluid-structure interaction using high fidelity simulations are critically reviewed. First, a literature review of modern numerical techniques for simulation of aeroelastic phenomena is presented. The review focuses on methods contained within the arbitrary Lagrangian-Eulerian (ALE) framework for coupling computational fluid dynamics codes to computational structural mechanics codes. The review treats mesh movement algorithms, the role of the geometric conservation law, time advancement schemes, wetted surface interface strategies, and some representative applications. The complexity and computational expense of coupled Navier-Stokes/structural dynamics simulations points to the need for reduced order modeling to facilitate parametric analysis. The proper orthogonal decomposition (POD)/Galerkin projection approach for building a reduced order model (ROM) is presented, along with ideas for extension of the methodology to allow construction of ROMs based on data generated from ALE simulations.
Multimedia risk assessments require the temporal integration of atmospheric concentration and deposition with other media modules. However, providing an extended time series of estimates is computationally expensive. An alternative approach is to substitute long-term average a...
Reducing the Time and Cost of Testing Engines
NASA Technical Reports Server (NTRS)
2004-01-01
Producing a new aircraft engine currently costs approximately $1 billion, with 3 years of development time for a commercial engine and 10 years for a military engine. The high development time and cost make it extremely difficult to transition advanced technologies for cleaner, quieter, and more efficient new engines. To reduce this time and cost, NASA created a vision for the future where designers would use high-fidelity computer simulations early in the design process in order to resolve critical design issues before building the expensive engine hardware. To accomplish this vision, NASA's Glenn Research Center initiated a collaborative effort with the aerospace industry and academia to develop its Numerical Propulsion System Simulation (NPSS), an advanced engineering environment for the analysis and design of aerospace propulsion systems and components. Partners estimate that using NPSS has the potential to dramatically reduce the time, effort, and expense necessary to design and test jet engines by generating sophisticated computer simulations of an aerospace object or system. These simulations will permit an engineer to test various design options without having to conduct costly and time-consuming real-life tests. By accelerating and streamlining the engine system design analysis and test phases, NPSS facilitates bringing the final product to market faster. NASA's NPSS Version (V)1.X effort was a task within the Agency's Computational Aerospace Sciences project of the High Performance Computing and Communication program, which had a mission to accelerate the availability of high-performance computing hardware and software to the U.S. aerospace community for its use in design processes. The technology brings value back to NASA by improving methods of analyzing and testing space transportation components.
Finite element simulation of articular contact mechanics with quadratic tetrahedral elements.
Maas, Steve A; Ellis, Benjamin J; Rawlins, David S; Weiss, Jeffrey A
2016-03-21
Although it is easier to generate finite element discretizations with tetrahedral elements, trilinear hexahedral (HEX8) elements are more often used in simulations of articular contact mechanics. This is due to numerical shortcomings of linear tetrahedral (TET4) elements, limited availability of quadratic tetrahedron elements in combination with effective contact algorithms, and the perceived increased computational expense of quadratic finite elements. In this study we implemented both ten-node (TET10) and fifteen-node (TET15) quadratic tetrahedral elements in FEBio (www.febio.org) and compared their accuracy, robustness in terms of convergence behavior and computational cost for simulations relevant to articular contact mechanics. Suitable volume integration and surface integration rules were determined by comparing the results of several benchmark contact problems. The results demonstrated that the surface integration rule used to evaluate the contact integrals for quadratic elements affected both convergence behavior and accuracy of predicted stresses. The computational expense and robustness of both quadratic tetrahedral formulations compared favorably to the HEX8 models. Of note, the TET15 element demonstrated superior convergence behavior and lower computational cost than both the TET10 and HEX8 elements for meshes with similar numbers of degrees of freedom in the contact problems that we examined. Finally, the excellent accuracy and relative efficiency of these quadratic tetrahedral elements was illustrated by comparing their predictions with those for a HEX8 mesh for simulation of articular contact in a fully validated model of the hip. These results demonstrate that TET10 and TET15 elements provide viable alternatives to HEX8 elements for simulation of articular contact mechanics. Copyright © 2016 Elsevier Ltd. All rights reserved.
Numerical propulsion system simulation
NASA Technical Reports Server (NTRS)
Lytle, John K.; Remaklus, David A.; Nichols, Lester D.
1990-01-01
The cost of implementing new technology in aerospace propulsion systems is becoming prohibitively expensive. One of the major contributors to the high cost is the need to perform many large scale system tests. Extensive testing is used to capture the complex interactions among the multiple disciplines and the multiple components inherent in complex systems. The objective of the Numerical Propulsion System Simulation (NPSS) is to provide insight into these complex interactions through computational simulations. This will allow for comprehensive evaluation of new concepts early in the design phase before a commitment to hardware is made. It will also allow for rapid assessment of field-related problems, particularly in cases where operational problems were encountered during conditions that would be difficult to simulate experimentally. The tremendous progress taking place in computational engineering and the rapid increase in computing power expected through parallel processing make this concept feasible within the near future. However, it is critical that the framework for such simulations be put in place now to serve as a focal point for the continued developments in computational engineering and computing hardware and software. The NPSS concept which is described will provide that framework.
Zhan, Yijian; Meschke, Günther
2017-07-08
The effective analysis of the nonlinear behavior of cement-based engineering structures not only demands physically-reliable models, but also computationally-efficient algorithms. Based on a continuum interface element formulation that is suitable to capture complex cracking phenomena in concrete materials and structures, an adaptive mesh processing technique is proposed for computational simulations of plain and fiber-reinforced concrete structures to progressively disintegrate the initial finite element mesh and to add degenerated solid elements into the interfacial gaps. In comparison with the implementation where the entire mesh is processed prior to the computation, the proposed adaptive cracking model allows simulating the failure behavior of plain and fiber-reinforced concrete structures with remarkably reduced computational expense.
NASA Astrophysics Data System (ADS)
Cai, Han-Jie; Zhang, Zhi-Lei; Fu, Fen; Li, Jian-Yang; Zhang, Xun-Chao; Zhang, Ya-Ling; Yan, Xue-Song; Lin, Ping; Xv, Jian-Ya; Yang, Lei
2018-02-01
The dense granular flow spallation target is a new target concept chosen for the Accelerator-Driven Subcritical (ADS) project in China. For the R&D of this kind of target concept, a dedicated Monte Carlo (MC) program named GMT was developed to perform the simulation study of the beam-target interaction. Owing to the complexities of the target geometry, the MC simulation of particle tracks is computationally very expensive. Thus, improvement of computational efficiency will be essential for the detailed MC simulation studies of the dense granular target. Here we present the special design of the GMT program and its high efficiency performance. In addition, the speedup potential of the GPU-accelerated spallation models is discussed.
Shao, Yu; Wang, Shumin
2016-12-01
The numerical simulation of acoustic scattering from elastic objects near a water-sand interface is critical to underwater target identification. Frequency-domain methods are computationally expensive, especially for large-scale broadband problems. A numerical technique is proposed to enable the efficient use of finite-difference time-domain method for broadband simulations. By incorporating a total-field/scattered-field boundary, the simulation domain is restricted inside a tightly bounded region. The incident field is further synthesized by the Fourier transform for both subcritical and supercritical incidences. Finally, the scattered far field is computed using a half-space Green's function. Numerical examples are further provided to demonstrate the accuracy and efficiency of the proposed technique.
Data-driven train set crash dynamics simulation
NASA Astrophysics Data System (ADS)
Tang, Zhao; Zhu, Yunrui; Nie, Yinyu; Guo, Shihui; Liu, Fengjia; Chang, Jian; Zhang, Jianjun
2017-02-01
Traditional finite element (FE) methods are computationally expensive for simulating train crashes. The high computational cost limits their direct application to investigating the dynamic behaviour of an entire train set for crashworthiness design and structural optimisation. In contrast, multi-body modelling is widely used because of its low computational cost, with a trade-off in accuracy. In this study, a data-driven train crash modelling method is proposed to improve the performance of a multi-body dynamics simulation of a train set crash without increasing the computational burden. This is achieved with a parallel random forest algorithm, a machine learning approach that extracts useful patterns from force-displacement curves and predicts the force-displacement relation for a given collision condition from a collection of offline FE simulation data covering various collision conditions, namely different crash velocities in this analysis. Using the FE simulation results as a benchmark, the data-driven method was compared with traditional multi-body modelling methods; the results show that it improves accuracy over traditional multi-body models in train crash simulation while running at the same level of efficiency.
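The core of the data-driven step described above can be sketched as follows, assuming scikit-learn; the synthetic force-displacement data and the variable names are placeholders rather than the paper's FE database.

    # Hedged sketch: learn a force-displacement relation from offline FE crash
    # data and query it inside a cheaper multi-body model (toy data only).
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(1)
    # Offline FE "database": rows are (crash velocity [m/s], displacement [m]),
    # target is the interface force [kN] (toy closure, for illustration only).
    v = rng.uniform(5.0, 25.0, 2000)
    d = rng.uniform(0.0, 0.8, 2000)
    force = 300.0 * d * (1.0 + 0.05 * v) + rng.normal(0.0, 5.0, 2000)

    X = np.column_stack([v, d])
    model = RandomForestRegressor(n_estimators=200, n_jobs=-1).fit(X, force)

    # During the multi-body time integration, the coupler force at the current
    # displacement and crash velocity would be looked up from the surrogate:
    f_pred = model.predict([[18.0, 0.35]])[0]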
Decision rules for unbiased inventory estimates
NASA Technical Reports Server (NTRS)
Argentiero, P. D.; Koch, D.
1979-01-01
An efficient and accurate procedure for estimating inventories from remote sensing scenes is presented. In place of the conventional and expensive full dimensional Bayes decision rule, a one-dimensional feature extraction and classification technique was employed. It is shown that this efficient decision rule can be used to develop unbiased inventory estimates and that for large sample sizes typical of satellite derived remote sensing scenes, resulting accuracies are comparable or superior to more expensive alternative procedures. Mathematical details of the procedure are provided in the body of the report and in the appendix. Results of a numerical simulation of the technique using statistics obtained from an observed LANDSAT scene are included. The simulation demonstrates the effectiveness of the technique in computing accurate inventory estimates.
Nonequilibrium hypersonic flows simulations with asymptotic-preserving Monte Carlo methods
NASA Astrophysics Data System (ADS)
Ren, Wei; Liu, Hong; Jin, Shi
2014-12-01
In rarefied gas dynamics, the direct simulation Monte Carlo (DSMC) method is one of the most popular numerical tools. It performs satisfactorily in simulating hypersonic flows surrounding re-entry vehicles and micro-/nano-flows. However, its computational cost is high, especially as Kn → 0. Even for flows in the near-continuum regime, pure DSMC simulations require considerable computational effort in most cases. Although several DSMC/NS hybrid methods have been proposed to deal with this, those methods still suffer from the boundary treatment, which may cause nonphysical solutions. Filbet and Jin [1] proposed a framework of new numerical methods for the Boltzmann equation, called asymptotic-preserving (AP) schemes, whose computational costs are affordable as Kn → 0. Recently, Ren et al. [2] realized the AP schemes with Monte Carlo methods (AP-DSMC), which have better performance than counterpart methods. In this paper, AP-DSMC is applied to simulating nonequilibrium hypersonic flows. Several numerical results are computed and analyzed to study the efficiency and capability of capturing complicated flow characteristics.
Parallel computing method for simulating hydrological processesof large rivers under climate change
NASA Astrophysics Data System (ADS)
Wang, H.; Chen, Y.
2016-12-01
Climate change is one of the best-known global environmental problems. It has altered the temporal and spatial distribution of watershed hydrological processes, especially in the world's large rivers. Watershed hydrological process simulation based on physically based distributed hydrological models can give better results than lumped models. However, such simulation involves a very large amount of calculation, especially for large rivers, and therefore requires substantial computing resources that may not be steadily available to researchers or are available only at high expense; this has seriously restricted research and application. Current parallel methods mostly parallelize over the space and time dimensions, processing the natural features of the distributed hydrological model grid by grid (unit or sub-basin) from upstream to downstream. This article proposes a high-performance computing method for hydrological process simulation with a high speedup ratio and parallel efficiency. It combines the temporal and spatial runoff characteristics of the distributed hydrological model with distributed data storage, an in-memory database, distributed computing, and parallel computing based on computing power units. The method is highly adaptable and extensible, meaning it can make full use of the available computing and storage resources even when those resources are limited, and its computing efficiency improves linearly as computing resources increase. The method can satisfy the parallel computing requirements of hydrological process simulation in small, medium, and large rivers.
The Self-Assembly of Particles with Multipolar Interactions
2004-01-01
the LATEX template in which this thesis has been written. I also thank Kevin Van Workum and Jack Douglas for contributing simulation work and some... of the computational expense of simulating such complex self-assembly systems at the molecular level and a desire to understand the self-assembly at... Dissertation directed by: Professor Wolfgang Losert, Department of Physics. In this thesis, we describe results from investigations of the self-assembly of
Remote control system for high-performance computer simulation of crystal growth by the PFC method
NASA Astrophysics Data System (ADS)
Pavlyuk, Evgeny; Starodumov, Ilya; Osipov, Sergei
2017-04-01
Modeling of the crystallization process by the phase field crystal (PFC) method is one of the important directions of modern computational materials science. In this paper, the practical side of the computer simulation of the crystallization process by the PFC method is investigated. To solve problems using this method, it is necessary to use high-performance computing clusters, data storage systems, and other often expensive and complex computer systems. Access to such resources is often limited, unstable, and accompanied by various administrative problems. In addition, the variety of software and settings of different computing clusters sometimes does not allow researchers to use unified program code; the code must be adapted to each configuration of the computing complex. The practical experience of the authors has shown that the creation of a special control system for computations, with the possibility of remote use, can greatly simplify the implementation of simulations and increase the performance of scientific research. In the current paper, we present the principal idea of such a system and justify its efficiency.
Finite element analysis simulations for ultrasonic array NDE inspections
NASA Astrophysics Data System (ADS)
Dobson, Jeff; Tweedie, Andrew; Harvey, Gerald; O'Leary, Richard; Mulholland, Anthony; Tant, Katherine; Gachagan, Anthony
2016-02-01
Advances in manufacturing techniques and materials have led to an increase in the demand for reliable and robust inspection techniques to maintain safety critical features. The application of modelling methods to develop and evaluate inspections is becoming an essential tool for the NDE community. Current analytical methods are inadequate for simulation of arbitrary components and heterogeneous materials, such as anisotropic welds or composite structures. Finite element analysis (FEA) software, such as PZFlex, can simulate the inspection of these arrangements, making it possible to economically prototype and evaluate improved NDE methods. FEA is often seen as computationally expensive for ultrasound problems; however, advances in computing power have made it a more viable tool. This paper aims to illustrate the capability of appropriate FEA to produce accurate simulations of ultrasonic array inspections - minimizing the requirement for expensive test-piece fabrication. Validation is afforded via corroboration of the FE derived and experimentally generated data sets for a test-block comprising 1D and 2D defects. The modelling approach is extended to consider the more troublesome aspects of heterogeneous materials where defect dimensions can be of the same length scale as the grain structure. The model is used to facilitate the implementation of new ultrasonic array inspection methods for such materials. This is exemplified by considering the simulation of ultrasonic NDE in a weld structure in order to assess new approaches to imaging such structures.
REVEAL: An Extensible Reduced Order Model Builder for Simulation and Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agarwal, Khushbu; Sharma, Poorva; Ma, Jinliang
2013-04-30
Many science domains need to build computationally efficient and accurate representations of high fidelity, computationally expensive simulations. These computationally efficient versions are known as reduced-order models. This paper presents the design and implementation of a novel reduced-order model (ROM) builder, the REVEAL toolset. This toolset generates ROMs based on science- and engineering-domain specific simulations executed on high performance computing (HPC) platforms. The toolset encompasses a range of sampling and regression methods that can be used to generate a ROM, automatically quantifies the ROM accuracy, and provides support for an iterative approach to improve ROM accuracy. REVEAL is designed to be extensible in order to utilize the core functionality with any simulator that has published input and output formats. It also defines programmatic interfaces to include new sampling and regression techniques so that users can ‘mix and match’ mathematical techniques to best suit the characteristics of their model. In this paper, we describe the architecture of REVEAL and demonstrate its usage with a computational fluid dynamics model used in carbon capture.
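The iterative sample-fit-quantify loop that such a ROM builder automates might look roughly like the following sketch, here with Latin hypercube sampling from SciPy and a Gaussian process regressor from scikit-learn standing in for REVEAL's configurable sampling and regression choices; expensive_simulator() and all thresholds are hypothetical.

    # Hedged sketch of a generic ROM-building workflow: sample the simulator,
    # fit a regression model, quantify accuracy, and refine where uncertain.
    import numpy as np
    from scipy.stats import qmc
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.model_selection import cross_val_score

    def expensive_simulator(x):
        # Placeholder for a costly simulation run with input vector x.
        return np.sin(3.0 * x[0]) + x[1] ** 2

    dim, budget = 2, 40
    sampler = qmc.LatinHypercube(d=dim, seed=0)
    X = qmc.scale(sampler.random(n=20), [0.0, 0.0], [1.0, 1.0])
    y = np.array([expensive_simulator(x) for x in X])

    rom = GaussianProcessRegressor(normalize_y=True)
    while len(X) < budget:
        # Quantify ROM accuracy by cross-validation; stop or refine accordingly.
        score = cross_val_score(rom, X, y, cv=5, scoring="r2").mean()
        if score > 0.99:
            break
        # Add a new sample where the current ROM is most uncertain.
        rom.fit(X, y)
        candidates = qmc.scale(sampler.random(n=200), [0.0, 0.0], [1.0, 1.0])
        _, std = rom.predict(candidates, return_std=True)
        x_new = candidates[np.argmax(std)]
        X = np.vstack([X, x_new])
        y = np.append(y, expensive_simulator(x_new))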
Strbac, V; Pierce, D M; Vander Sloten, J; Famaey, N
2017-12-01
Finite element (FE) simulations are increasingly valuable in assessing and improving the performance of biomedical devices and procedures. Due to high computational demands such simulations may become difficult or even infeasible, especially when considering nearly incompressible and anisotropic material models prevalent in analyses of soft tissues. Implementations of GPGPU-based explicit FEs predominantly cover isotropic materials, e.g. the neo-Hookean model. To elucidate the computational expense of anisotropic materials, we implement the Gasser-Ogden-Holzapfel dispersed, fiber-reinforced model and compare solution times against the neo-Hookean model. Implementations of GPGPU-based explicit FEs conventionally rely on single-point (under) integration. To elucidate the expense of full and selective-reduced integration (more reliable), we implement both and compare corresponding solution times against those generated using underintegration. To better understand the advancement of hardware, we compare results generated using representative Nvidia GPGPUs from three recent generations: Fermi (C2075), Kepler (K20c), and Maxwell (GTX980). We explore scaling by solving the same boundary value problem (an extension-inflation test on a segment of human aorta) with progressively larger FE meshes. Our results demonstrate substantial improvements in simulation speeds relative to two benchmark FE codes (up to 300× while maintaining accuracy), and thus open many avenues to novel applications in biomechanics and medicine.
Statistical Surrogate Modeling of Atmospheric Dispersion Events Using Bayesian Adaptive Splines
NASA Astrophysics Data System (ADS)
Francom, D.; Sansó, B.; Bulaevskaya, V.; Lucas, D. D.
2016-12-01
Uncertainty in the inputs of complex computer models, including atmospheric dispersion and transport codes, is often assessed via statistical surrogate models. Surrogate models are computationally efficient statistical approximations of expensive computer models that enable uncertainty analysis. We introduce Bayesian adaptive spline methods for producing surrogate models that capture the major spatiotemporal patterns of the parent model, while satisfying all the necessities of flexibility, accuracy and computational feasibility. We present novel methodological and computational approaches motivated by a controlled atmospheric tracer release experiment conducted at the Diablo Canyon nuclear power plant in California. Traditional methods for building statistical surrogate models often do not scale well to experiments with large amounts of data. Our approach is well suited to experiments involving large numbers of model inputs, large numbers of simulations, and functional output for each simulation. Our approach allows us to perform global sensitivity analysis with ease. We also present an approach to calibration of simulators using field data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, Yuqi; Wang, Jinan; Shao, Qiang, E-mail: qshao@mail.shcnc.ac.cn, E-mail: Jiye.Shi@ucb.com, E-mail: wlzhu@mail.shcnc.ac.cn
2015-03-28
The application of temperature replica exchange molecular dynamics (REMD) simulation to protein motion is limited by its huge requirement of computational resources, particularly when an explicit solvent model is implemented. In a previous study, we developed a velocity-scaling optimized hybrid explicit/implicit solvent REMD method with the hope of reducing the temperature (replica) number on the premise of maintaining high sampling efficiency. In this study, we utilized this method to characterize and energetically identify the conformational transition pathway of a protein model, the N-terminal domain of calmodulin. In comparison to the standard explicit solvent REMD simulation, the hybrid REMD is much less computationally expensive but, meanwhile, gives accurate evaluation of the structural and thermodynamic properties of the conformational transition, which are in good agreement with the standard REMD simulation. Therefore, the hybrid REMD could greatly increase the computational efficiency and thus expand the application of REMD simulation to larger-size protein systems.
Simulating Human Cognition in the Domain of Air Traffic Control
NASA Technical Reports Server (NTRS)
Freed, Michael; Johnston, James C.; Null, Cynthia H. (Technical Monitor)
1995-01-01
Experiments intended to assess performance in human-machine interactions are often prohibitively expensive, unethical or otherwise impractical to run. Approximations of experimental results can be obtained, in principle, by simulating the behavior of subjects using computer models of human mental behavior. Computer simulation technology has been developed for this purpose. Our goal is to produce a cognitive model suitable to guide the simulation machinery and enable it to closely approximate a human subject's performance in experimental conditions. The described model is designed to simulate a variety of cognitive behaviors involved in routine air traffic control. As the model is elaborated, our ability to predict the effects of novel circumstances on controller error rates and other performance characteristics should increase. This will enable the system to project the impact of proposed changes to air traffic control procedures and equipment on controller performance.
NASA Astrophysics Data System (ADS)
Parker, Jeffrey; Lodestro, Lynda; Told, Daniel; Merlo, Gabriele; Ricketson, Lee; Campos, Alejandro; Jenko, Frank; Hittinger, Jeffrey
2017-10-01
Predictive whole-device simulation models will play an increasingly important role in ensuring the success of fusion experiments and accelerating the development of fusion energy. In the core of tokamak plasmas, a separation of timescales between turbulence and transport makes a single direct simulation of both processes computationally expensive. We present the first demonstration of a multiple-timescale method coupling global gyrokinetic simulations with a transport solver to calculate the self-consistent, steady-state temperature profile. Initial results are highly encouraging, with the coupling method appearing robust to the difficult problem of turbulent fluctuations. The method holds potential for integrating first-principles turbulence simulations into whole-device models and advancing the understanding of global plasma behavior. Work supported by US DOE under Contract DE-AC52-07NA27344 and the Exascale Computing Project (17-SC-20-SC).
Evaluation of a grid based molecular dynamics approach for polypeptide simulations.
Merelli, Ivan; Morra, Giulia; Milanesi, Luciano
2007-09-01
Molecular dynamics is very important for biomedical research because it makes possible the simulation of the behavior of a biological macromolecule in silico. However, molecular dynamics is computationally rather expensive: the simulation of a few nanoseconds of dynamics for a large macromolecule such as a protein takes a very long time, due to the large number of operations needed to solve Newton's equations for a system of thousands of atoms. In order to obtain biologically significant data, it is desirable to use high-performance computing resources to perform these simulations. Recently, a distributed computing approach based on replacing a single long simulation with many independent short trajectories has been introduced, which in many cases provides valuable results. This study concerns the development of an infrastructure to run molecular dynamics simulations on a grid platform in a distributed way. The implemented software allows the parallel submission of different simulations that are individually short but together bring important biological information. Moreover, each simulation is divided into a chain of jobs to avoid data loss in case of system failure and to limit the size of each data transfer from the grid. The results confirm that the distributed approach on grid computing is particularly suitable for molecular dynamics simulations thanks to its high scalability.
Alimohammadi, Mona; Sherwood, Joseph M; Karimpour, Morad; Agu, Obiekezie; Balabani, Stavroula; Díaz-Zuccarini, Vanessa
2015-04-15
The management and prognosis of aortic dissection (AD) is often challenging and the use of personalised computational models is being explored as a tool to improve clinical outcome. Including vessel wall motion in such simulations can provide more realistic and potentially more accurate results, but requires significant additional computational resources, as well as expertise. With clinical translation as the final aim, trade-offs between complexity, speed and accuracy are inevitable. The present study explores whether modelling wall motion is worth the additional expense in the case of AD, by carrying out fluid-structure interaction (FSI) simulations based on a sample patient case. Patient-specific anatomical details were extracted from computed tomography images to provide the fluid domain, from which the vessel wall was extrapolated. Two-way fluid-structure interaction simulations were performed, with coupled Windkessel boundary conditions and hyperelastic wall properties. The blood was modelled using the Carreau-Yasuda viscosity model and turbulence was accounted for via a shear stress transport model. A simulation without wall motion (rigid wall) was carried out for comparison purposes. The displacement of the vessel wall was comparable to reports from imaging studies in terms of intimal flap motion and contraction of the true lumen. Analysis of the haemodynamics around the proximal and distal false lumen in the FSI model showed complex flow structures caused by the expansion and contraction of the vessel wall. These flow patterns led to significantly different predictions of wall shear stress, particularly its oscillatory component, which were not captured by the rigid wall model. Through comparison with imaging data, the results of the present study indicate that the fluid-structure interaction methodology employed herein is appropriate for simulations of aortic dissection. Regions of high wall shear stress were not significantly altered by the wall motion; however, certain collocated regions of low and oscillatory wall shear stress, which may be critical for disease progression, were only identified in the FSI simulation. We conclude that, if patient-tailored simulations of aortic dissection are to be used as an interventional planning tool, then the additional complexity, expertise and computational expense required to model wall motion is indeed justified.
Advanced Computational Methods for Thermal Radiative Heat Transfer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tencer, John; Carlberg, Kevin Thomas; Larsen, Marvin E.
2016-10-01
Participating media radiation (PMR) calculations in weapon safety analyses for abnormal thermal environments are too costly to perform routinely. This cost may be substantially reduced by applying reduced order modeling (ROM) techniques. The application of ROM to PMR is a new and unique approach for this class of problems. This approach was investigated by the authors and shown to provide significant reductions in the computational expense associated with typical PMR simulations. Once this technology is migrated into production heat transfer analysis codes, this capability will enable the routine use of PMR heat transfer in higher-fidelity simulations of weapon response in fire environments.
NASA Astrophysics Data System (ADS)
Dasgupta, Bhaskar; Nakamura, Haruki; Higo, Junichi
2016-10-01
Virtual-system coupled adaptive umbrella sampling (VAUS) enhances sampling along a reaction coordinate by using a virtual degree of freedom. However, VAUS and regular adaptive umbrella sampling (AUS) methods are still computationally expensive. To further decrease the computational burden, improvements of VAUS for all-atom explicit solvent simulation are presented here. The improvements include probability distribution calculation by a Markov approximation, parameterization of biasing forces by iterative polynomial fitting, and force scaling. When applied to study Ala-pentapeptide dimerization in explicit solvent, these improvements showed an advantage over regular AUS. Using the improved VAUS, larger biological systems become amenable to simulation.
Parameterized reduced-order models using hyper-dual numbers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fike, Jeffrey A.; Brake, Matthew Robert
2013-10-01
The goal of most computational simulations is to accurately predict the behavior of a real, physical system. Accurate predictions often require very computationally expensive analyses and so reduced order models (ROMs) are commonly used. ROMs aim to reduce the computational cost of the simulations while still providing accurate results by including all of the salient physics of the real system in the ROM. However, real, physical systems often deviate from the idealized models used in simulations due to variations in manufacturing or other factors. One approach to this issue is to create a parameterized model in order to characterize the effect of perturbations from the nominal model on the behavior of the system. This report presents a methodology for developing parameterized ROMs, which is based on Craig-Bampton component mode synthesis and the use of hyper-dual numbers to calculate the derivatives necessary for the parameterization.
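The hyper-dual derivative calculation mentioned above can be illustrated with a minimal sketch, written here in Python; it illustrates the arithmetic only and is not the report's Craig-Bampton ROM implementation.

    # Minimal hyper-dual number sketch: a + b*e1 + c*e2 + d*e1*e2 with
    # e1^2 = e2^2 = 0 and e1*e2 != 0. Evaluating f(x + e1 + e2) yields the
    # value, first derivative, and second derivative with no step-size error.
    class HyperDual:
        def __init__(self, a, b=0.0, c=0.0, d=0.0):
            self.a, self.b, self.c, self.d = a, b, c, d

        def __add__(self, o):
            o = o if isinstance(o, HyperDual) else HyperDual(o)
            return HyperDual(self.a + o.a, self.b + o.b, self.c + o.c, self.d + o.d)

        __radd__ = __add__

        def __mul__(self, o):
            o = o if isinstance(o, HyperDual) else HyperDual(o)
            return HyperDual(self.a * o.a,
                             self.a * o.b + self.b * o.a,
                             self.a * o.c + self.c * o.a,
                             self.a * o.d + self.b * o.c + self.c * o.b + self.d * o.a)

        __rmul__ = __mul__

    def f(x):
        # Example function f(x) = x^3 + 2x, using only + and * so the
        # overloaded operators above suffice.
        return x * x * x + 2.0 * x

    out = f(HyperDual(3.0, 1.0, 1.0, 0.0))
    value, first_deriv, second_deriv = out.a, out.b, out.d
    # value = 33.0, first_deriv = 3*3**2 + 2 = 29.0, second_deriv = 6*3 = 18.0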
Fast perceptual image hash based on cascade algorithm
NASA Astrophysics Data System (ADS)
Ruchay, Alexey; Kober, Vitaly; Yavtushenko, Evgeniya
2017-09-01
In this paper, we propose a perceptual image hash algorithm based on a cascade scheme, which can be applied in image authentication, retrieval, and indexing. Perceptual image hashing is used for image retrieval in the sense of human perception and must be robust against distortions caused by compression, noise, common signal processing, and geometrical modifications. The main disadvantage of perceptual hashing is its high computational cost. The proposed cascade algorithm initializes retrieval with short hashes and then applies a full hash to the surviving candidates. Computer simulation results show that the proposed hash algorithm yields good performance in terms of robustness, discriminability, and running time.
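A rough sketch of the cascade idea follows, assuming NumPy; the block-average hashes, tolerances, and two-stage sizes are illustrative assumptions, not the specific hash construction proposed in the paper.

    # Hedged sketch of a cascade perceptual-hash lookup. Images are 2-D
    # grayscale numpy arrays; database hashes would normally be precomputed.
    import numpy as np

    def block_hash(img, n):
        """Average-hash: downsample to n x n block means, threshold at the mean."""
        h, w = img.shape
        blocks = img[:h - h % n, :w - w % n].reshape(n, h // n, n, w // n).mean(axis=(1, 3))
        return (blocks > blocks.mean()).ravel()

    def hamming(a, b):
        return int(np.count_nonzero(a != b))

    def cascade_search(query, database, short_n=4, full_n=16, short_tol=3, full_tol=30):
        # Stage 1: cheap short hash prunes most of the database.
        q_short = block_hash(query, short_n)
        candidates = [img for img in database
                      if hamming(block_hash(img, short_n), q_short) <= short_tol]
        # Stage 2: more expensive full hash only on the survivors.
        q_full = block_hash(query, full_n)
        return [img for img in candidates
                if hamming(block_hash(img, full_n), q_full) <= full_tol]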
Adaptive Mesh Refinement for Microelectronic Device Design
NASA Technical Reports Server (NTRS)
Cwik, Tom; Lou, John; Norton, Charles
1999-01-01
Finite element and finite volume methods are used in a variety of design simulations when it is necessary to compute fields throughout regions that contain varying materials or geometry. Convergence of the simulation can be assessed by uniformly increasing the mesh density until an observable quantity stabilizes. Depending on the electrical size of the problem, uniform refinement of the mesh may be computationally infeasible due to memory limitations. Similarly, depending on the geometric complexity of the object being modeled, uniform refinement can be inefficient since regions that do not need refinement add to the computational expense. In either case, convergence to the correct (measured) solution is not guaranteed. Adaptive mesh refinement methods attempt to selectively refine the region of the mesh that is estimated to contain proportionally higher solution errors. The refinement may be obtained by decreasing the element size (h-refinement), by increasing the order of the element (p-refinement) or by a combination of the two (h-p refinement). A successful adaptive strategy refines the mesh to produce an accurate solution measured against the correct fields without undue computational expense. This is accomplished by the use of a) reliable a posteriori error estimates, b) hierarchal elements, and c) automatic adaptive mesh generation. Adaptive methods are also useful when problems with multi-scale field variations are encountered. These occur in active electronic devices that have thin doped layers and also when mixed physics is used in the calculation. The mesh needs to be fine at and near the thin layer to capture rapid field or charge variations, but can coarsen away from these layers where field variations smoothen and charge densities are uniform. This poster will present an adaptive mesh refinement package that runs on parallel computers and is applied to specific microelectronic device simulations. Passive sensors that operate in the infrared portion of the spectrum as well as active device simulations that model charge transport and Maxwell's equations will be presented.
NASA Astrophysics Data System (ADS)
Xue, Bo; Mao, Bingjing; Chen, Xiaomei; Ni, Guoqiang
2010-11-01
This paper presents a configurable distributed high performance computing (HPC) framework for TDI-CCD imaging simulation. It uses the strategy pattern to adapt multiple algorithms, helping to decrease simulation time at low expense. Imaging simulation for a TDI-CCD mounted on a satellite contains four processes: 1) degradation due to the atmosphere, 2) degradation due to the optical system, 3) degradation due to the TDI-CCD electronics together with re-sampling, and 4) data integration. Processes 1) to 3) use diverse data-intensive algorithms such as FFT, convolution, and Lagrange interpolation, which require substantial CPU power. Even with an Intel Xeon X5550 processor, the regular serial method takes more than 30 hours for a simulation whose resulting image size is 1500 * 1462. A literature survey found no mature distributed HPC framework in this field. Here we developed a distributed computing framework for TDI-CCD imaging simulation, which is based on WCF [1], uses a client/server (C/S) architecture, and harnesses the idle CPU resources in a LAN. The server pushes the tasks of processes 1) to 3) to this free computing capacity. Ultimately we achieved HPC at low cost. In a computing experiment with 4 symmetric nodes and 1 server, this framework reduced simulation time by about 74%. Adding more asymmetric nodes to the computing network decreased the time further. In conclusion, this framework can provide virtually unlimited computation capacity provided that the network and the task management server are affordable, and it is a brand new HPC solution for TDI-CCD imaging simulation and similar applications.
Parallel Optimization of 3D Cardiac Electrophysiological Model Using GPU.
Xia, Yong; Wang, Kuanquan; Zhang, Henggui
2015-01-01
Large-scale 3D virtual heart model simulations are highly demanding in computational resources. This imposes a big challenge to the traditional computation resources based on CPU environment, which already cannot meet the requirement of the whole computation demands or are not easily available due to expensive costs. GPU as a parallel computing environment therefore provides an alternative to solve the large-scale computational problems of whole heart modeling. In this study, using a 3D sheep atrial model as a test bed, we developed a GPU-based simulation algorithm to simulate the conduction of electrical excitation waves in the 3D atria. In the GPU algorithm, a multicellular tissue model was split into two components: one is the single cell model (ordinary differential equation) and the other is the diffusion term of the monodomain model (partial differential equation). Such a decoupling enabled realization of the GPU parallel algorithm. Furthermore, several optimization strategies were proposed based on the features of the virtual heart model, which enabled a 200-fold speedup as compared to a CPU implementation. In conclusion, an optimized GPU algorithm has been developed that provides an economic and powerful platform for 3D whole heart simulations.
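The ODE/PDE decoupling described above can be sketched on a CPU with NumPy; a generic FitzHugh-Nagumo cell model on a 2-D grid stands in for the detailed 3-D sheep atrial model, and all parameter values are illustrative assumptions rather than the paper's.

    # Hedged sketch of monodomain operator splitting: at each time step the
    # diffusion term (PDE) and the cell kinetics (ODE) are advanced separately.
    # In the paper each part maps to its own GPU kernel.
    import numpy as np

    nx, ny, dt, dx, D = 100, 100, 0.05, 0.25, 0.1
    v = np.zeros((nx, ny))           # transmembrane potential
    w = np.zeros((nx, ny))           # recovery variable
    v[:10, :10] = 1.0                # initial stimulus in one corner

    def laplacian(u):
        # 5-point stencil with no-flux (replicated) boundaries.
        up = np.pad(u, 1, mode="edge")
        return (up[:-2, 1:-1] + up[2:, 1:-1] + up[1:-1, :-2] + up[1:-1, 2:] - 4.0 * u) / dx**2

    for step in range(2000):
        # PDE part: explicit diffusion update of the potential.
        v += dt * D * laplacian(v)
        # ODE part: pointwise FitzHugh-Nagumo kinetics (embarrassingly parallel,
        # which is what makes the ODE/PDE decoupling effective on a GPU).
        dv = v * (v - 0.1) * (1.0 - v) - w
        dw = 0.01 * (0.5 * v - w)
        v += dt * dv
        w += dt * dw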
Reduced order models for assessing CO2 impacts in shallow unconfined aquifers
Keating, Elizabeth H.; Harp, Dylan H.; Dai, Zhenxue; ...
2016-01-28
Risk assessment studies of potential CO2 sequestration projects consider many factors, including the possibility of brine and/or CO2 leakage from the storage reservoir. Detailed multiphase reactive transport simulations have been developed to predict the impact of such leaks on shallow groundwater quality; however, these simulations are computationally expensive and thus difficult to embed directly in a probabilistic risk assessment analysis. Here we present a process for developing computationally fast reduced-order models which emulate key features of the more detailed reactive transport simulations. A large ensemble of simulations that take into account uncertainty in aquifer characteristics and CO2/brine leakage scenarios was performed. Twelve simulation outputs of interest were used to develop response surfaces (RSs) using a MARS (multivariate adaptive regression splines) algorithm (Milborrow, 2015). A key part of this study is to compare different measures of ROM accuracy. We then show that for some computed outputs, MARS performs very well in matching the simulation data. The capability of the RS to predict simulation outputs for parameter combinations not used in RS development was tested using cross-validation. Again, for some outputs, these results were quite good. For other outputs, however, the method performs relatively poorly. Performance was best for predicting the volume of depressed-pH plumes, and was relatively poor for predicting organic and trace metal plume volumes. We believe several factors, including the non-linearity of the problem, the complexity of the geochemistry, and granularity in the simulation results, contribute to this varied performance. The reduced order models were developed principally to be used in probabilistic performance analysis where a large range of scenarios are considered and ensemble performance is calculated. We demonstrate that they effectively predict the ensemble behavior. But the performance of the RSs is much less accurate when used to predict time-varying outputs from a single simulation. If an analysis requires only a small number of scenarios to be investigated, computationally expensive physics-based simulations would likely provide more reliable results. Finally, if the aggregate behavior of a large number of realizations is the focus, as will be the case in probabilistic quantitative risk assessment, the methodology presented here is relatively robust.
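The RS-building and cross-validation workflow described above might be sketched as follows; it assumes the py-earth package as a Python MARS implementation (the cited Milborrow reference is the R earth package), and the ensemble inputs and response are synthetic placeholders, not the study's reactive transport outputs.

    # Hedged sketch: fit a MARS response surface to an ensemble of simulation
    # runs and check its predictive skill by cross-validation.
    import numpy as np
    from pyearth import Earth
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(2)
    # Ensemble inputs, e.g. (log10 aquifer permeability, CO2 leak rate, brine leak rate).
    X = rng.uniform([-14.0, 0.0, 0.0], [-11.0, 1.0, 1.0], size=(500, 3))
    # One output of interest, e.g. depressed-pH plume volume (toy response).
    y = 10.0 ** X[:, 0] * 1e12 + 5.0 * X[:, 1] + 2.0 * X[:, 2] ** 2 + rng.normal(0, 0.1, 500)

    rs = Earth(max_degree=2)                 # allow pairwise interactions
    scores = cross_val_score(rs, X, y, cv=10, scoring="r2")
    print("cross-validated R^2: %.3f +/- %.3f" % (scores.mean(), scores.std()))
    rs.fit(X, y)                             # final response surface on all runs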
Crashworthiness: Planes, trains, and automobiles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Logan, R.W.; Tokarz, F.J.; Whirley, R.G.
A powerful DYNA3D computer code simulates the dynamic effects of stress traveling through structures. It is the most advanced modeling tool available to study crashworthiness problems and to analyze impacts. Now used by some 1000 companies, government research laboratories, and universities in the U.S. and abroad, DYNA3D is also a preeminent example of successful technology transfer. The initial interest in such a code was to simulate the structural response of weapons systems. The need was to model not the explosive or nuclear events themselves but rather the impacts of weapons systems with the ground, tracking the stress waves as they move through the object. This type of computer simulation augmented or, in certain cases, reduced the need for expensive and time-consuming crash testing.
NASA Astrophysics Data System (ADS)
Paganini, Michela; de Oliveira, Luke; Nachman, Benjamin
2018-01-01
Physicists at the Large Hadron Collider (LHC) rely on detailed simulations of particle collisions to build expectations of what experimental data may look like under different theoretical modeling assumptions. Petabytes of simulated data are needed to develop analysis techniques, though they are expensive to generate using existing algorithms and computing resources. The modeling of detectors and the precise description of particle cascades as they interact with the material in the calorimeter are the most computationally demanding steps in the simulation pipeline. We therefore introduce a deep neural network-based generative model to enable high-fidelity, fast, electromagnetic calorimeter simulation. There are still challenges for achieving precision across the entire phase space, but our current solution can reproduce a variety of particle shower properties while achieving speedup factors of up to 100 000 × . This opens the door to a new era of fast simulation that could save significant computing time and disk space, while extending the reach of physics searches and precision measurements at the LHC and beyond.
Parallelized direct execution simulation of message-passing parallel programs
NASA Technical Reports Server (NTRS)
Dickens, Phillip M.; Heidelberger, Philip; Nicol, David M.
1994-01-01
As massively parallel computers proliferate, there is growing interest in finding ways by which the performance of massively parallel codes can be efficiently predicted. This problem arises in diverse contexts such as parallelizing compilers, parallel performance monitoring, and parallel algorithm development. In this paper we describe one solution where one directly executes the application code, but uses a discrete-event simulator to model details of the presumed parallel machine such as operating system and communication network behavior. Because this approach is computationally expensive, we are interested in its own parallelization, specifically the parallelization of the discrete-event simulator. We describe methods suitable for parallelized direct execution simulation of message-passing parallel programs, and report on the performance of such a system, the Large Application Parallel Simulation Environment (LAPSE), which we have built on the Intel Paragon. On all codes measured to date, LAPSE predicts performance well, typically within 10 percent relative error. Depending on the nature of the application code, we have observed low slowdowns (relative to natively executing code) and high relative speedups using up to 64 processors.
Stereoscopic, Force-Feedback Trainer For Telerobot Operators
NASA Technical Reports Server (NTRS)
Kim, Won S.; Schenker, Paul S.; Bejczy, Antal K.
1994-01-01
Computer-controlled simulator for training technicians to operate remote robots provides both visual and kinesthetic virtual reality. Used during initial stage of training; saves time and expense, increases operational safety, and prevents damage to robots by inexperienced operators. Computes virtual contact forces and torques of compliant robot in real time, providing operator with feel of forces experienced by manipulator as well as view in any of three modes: single view, two split views, or stereoscopic view. From keyboard, user specifies force-reflection gain and stiffness of manipulator hand for three translational and three rotational axes. System offers two simulated telerobotic tasks: insertion of peg in hole in three dimensions, and removal and insertion of drawer.
Simulating electric field interactions with polar molecules using spectroscopic databases
NASA Astrophysics Data System (ADS)
Owens, Alec; Zak, Emil J.; Chubb, Katy L.; Yurchenko, Sergei N.; Tennyson, Jonathan; Yachmenev, Andrey
2017-03-01
Ro-vibrational Stark-associated phenomena of small polyatomic molecules are modelled using extensive spectroscopic data generated as part of the ExoMol project. The external field Hamiltonian is built from the computed ro-vibrational line list of the molecule in question. The Hamiltonian we propose is general and suitable for any polar molecule in the presence of an electric field. By exploiting precomputed data, the often prohibitively expensive computations associated with high accuracy simulations of molecule-field interactions are avoided. Applications to strong terahertz field-induced ro-vibrational dynamics of PH3 and NH3, and spontaneous emission data for optoelectrical Sisyphus cooling of H2CO and CH3Cl are discussed.
Efficient Constant-Time Complexity Algorithm for Stochastic Simulation of Large Reaction Networks.
Thanh, Vo Hong; Zunino, Roberto; Priami, Corrado
2017-01-01
Exact stochastic simulation is an indispensable tool for a quantitative study of biochemical reaction networks. The simulation realizes the time evolution of the model by randomly choosing a reaction to fire, with a probability proportional to the reaction's propensity, and updating the system state accordingly. Two computationally expensive tasks in simulating large biochemical networks are the selection of next reaction firings and the update of reaction propensities due to state changes. We present in this work a new exact algorithm to optimize both of these simulation bottlenecks. Our algorithm employs composition-rejection sampling on the propensity bounds of reactions to select the next reaction firing. The selection of next reaction firings is independent of the number of reactions, while the update of propensities is skipped and performed only when necessary. It therefore provides favorable scaling of the computational complexity when simulating large reaction networks. We benchmark our new algorithm against state-of-the-art algorithms available in the literature to demonstrate its applicability and efficiency.
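The composition-rejection selection step referenced above can be sketched as follows; for simplicity the sketch groups reactions by their current propensities, whereas the paper's algorithm works with propensity bounds so that most propensity updates can be skipped.

    # Hedged sketch of composition-rejection selection of the next reaction.
    import math
    import random

    def select_next_reaction(propensities):
        """Return (reaction_index, waiting_time) for one SSA step."""
        a0 = sum(propensities)
        tau = -math.log(random.random()) / a0          # exponential waiting time
        # Composition: bin reaction j by the power-of-two interval containing a_j.
        groups, totals = {}, {}
        for j, a in enumerate(propensities):
            if a <= 0.0:
                continue
            g = math.frexp(a)[1]                       # exponent with 2^(g-1) <= a_j < 2^g
            groups.setdefault(g, []).append(j)
            totals[g] = totals.get(g, 0.0) + a
        # Pick a group with probability proportional to its total propensity.
        r = random.random() * a0
        for g, tot in totals.items():
            if r < tot:
                break
            r -= tot
        # Rejection within the group: candidates accepted with probability a_j / 2^g.
        upper = 2.0 ** g
        while True:
            j = random.choice(groups[g])
            if random.random() * upper <= propensities[j]:
                return j, tau

    # Example: four reactions with these current propensities.
    idx, tau = select_next_reaction([0.3, 1.2, 5.0, 0.7])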
NASA Technical Reports Server (NTRS)
2008-01-01
NASA's advanced visual simulations are essential for analyses associated with life cycle planning, design, training, testing, operations, and evaluation. Kennedy Space Center, in particular, uses simulations for ground services and space exploration planning in an effort to reduce risk and costs while improving safety and performance. However, it has been difficult to circulate and share the results of simulation tools among the field centers, and distance and travel expenses have made timely collaboration even harder. In response, NASA joined with Valador Inc. to develop the Distributed Observer Network (DON), a collaborative environment that leverages game technology to bring 3-D simulations to conventional desktop and laptop computers. DON enables teams of engineers working on design and operations to view and collaborate on 3-D representations of data generated by authoritative tools. DON takes models and telemetry from these sources and, using commercial game engine technology, displays the simulation results in a 3-D visual environment. Multiple widely dispersed users, working individually or in groups, can view and analyze simulation results on desktop and laptop computers in real time.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaurov, Alexander A., E-mail: kaurov@uchicago.edu
The methods for studying the epoch of cosmic reionization vary from full radiative transfer simulations to purely analytical models. While numerical approaches are computationally expensive and are not suitable for generating many mock catalogs, analytical methods are based on assumptions and approximations. We explore the interconnection between both methods. First, we ask how the analytical framework of excursion set formalism can be used for statistical analysis of numerical simulations and visual representation of the morphology of ionization fronts. Second, we explore the methods of training the analytical model on a given numerical simulation. We present a new code which emerged from this study. Its main application is to match the analytical model with a numerical simulation. Then, it allows one to generate mock reionization catalogs with volumes exceeding the original simulation quickly and computationally inexpensively, meanwhile reproducing large-scale statistical properties. These mock catalogs are particularly useful for cosmic microwave background polarization and 21 cm experiments, where large volumes are required to simulate the observed signal.
Human dynamic orientation model applied to motion simulation. M.S. Thesis
NASA Technical Reports Server (NTRS)
Borah, J. D.
1976-01-01
The Ormsby model of dynamic orientation, in the form of a discrete time computer program was used to predict non-visually induced sensations during an idealized coordinated aircraft turn. To predict simulation fidelity, the Ormsby model was used to assign penalties for incorrect attitude and angular rate perceptions. It was determined that a three rotational degree of freedom simulation should remain faithful to attitude perception even at the expense of incorrect angular rate sensations. Implementing this strategy, a simulation profile for the idealized turn was designed for a Link GAT-1 trainer. A simple optokinetic display was added to improve the fidelity of roll rate sensations.
FUX-Sim: Implementation of a fast universal simulation/reconstruction framework for X-ray systems.
Abella, Monica; Serrano, Estefania; Garcia-Blas, Javier; García, Ines; de Molina, Claudia; Carretero, Jesus; Desco, Manuel
2017-01-01
The availability of digital X-ray detectors, together with advances in reconstruction algorithms, creates an opportunity for bringing 3D capabilities to conventional radiology systems. The downside is that reconstruction algorithms for non-standard acquisition protocols are generally based on iterative approaches that involve a high computational burden. The development of new flexible X-ray systems could benefit from computer simulations, which may enable performance to be checked before expensive real systems are implemented. The development of simulation/reconstruction algorithms in this context poses three main difficulties. First, the algorithms deal with large data volumes and are computationally expensive, thus leading to the need for hardware and software optimizations. Second, these optimizations are limited by the high flexibility required to explore new scanning geometries, including fully configurable positioning of source and detector elements. And third, the evolution of the various hardware setups increases the effort required for maintaining and adapting the implementations to current and future programming models. Previous works lack support for completely flexible geometries and/or compatibility with multiple programming models and platforms. In this paper, we present FUX-Sim, a novel X-ray simulation/reconstruction framework that was designed to be flexible and fast. Optimized implementation for different families of GPUs (CUDA and OpenCL) and multi-core CPUs was achieved thanks to a modularized approach based on a layered architecture and parallel implementation of the algorithms for both architectures. A detailed performance evaluation demonstrates that for different system configurations and hardware platforms, FUX-Sim maximizes performance with the CUDA programming model (5 times faster than other state-of-the-art implementations). Furthermore, the CPU and OpenCL programming models allow FUX-Sim to be executed over a wide range of hardware platforms.
Monte Carlo errors with less errors
NASA Astrophysics Data System (ADS)
Wolff, Ulli; Alpha Collaboration
2004-01-01
We explain in detail how to estimate mean values and assess statistical errors for arbitrary functions of elementary observables in Monte Carlo simulations. The method is to estimate and sum the relevant autocorrelation functions, which is argued to produce more certain error estimates than binning techniques and hence to help toward a better exploitation of expensive simulations. An effective integrated autocorrelation time is computed which is suitable to benchmark efficiencies of simulation algorithms with regard to specific observables of interest. A Matlab code is offered for download that implements the method. It can also combine independent runs (replicas), allowing one to judge their consistency.
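The Matlab implementation referred to above is not reproduced here; the following is only a minimal Python sketch of the underlying idea for a single scalar observable: sum the normalized autocorrelation function up to a self-consistent window rather than binning. The windowing constant `c` and the AR(1) test chain are illustrative assumptions, not part of the paper.

```python
import numpy as np

def integrated_autocorr_time(x, c=6.0):
    """Estimate the integrated autocorrelation time of a 1-D Monte Carlo series
    by summing the normalized autocorrelation function up to a window W chosen
    self-consistently so that W >= c * tau_int (a common rule of thumb)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    xc = x - x.mean()
    acov = np.correlate(xc, xc, mode="full")[n - 1:] / n   # autocovariance via lag sums
    rho = acov / acov[0]                                   # normalized autocorrelation
    tau = 0.5
    for w in range(1, n // 2):
        tau += rho[w]
        if w >= c * tau:          # self-consistent window cutoff
            break
    return tau, w

# toy usage: an AR(1) chain with a known integrated autocorrelation time
rng = np.random.default_rng(0)
phi, n = 0.9, 100_000
x = np.empty(n); x[0] = 0.0
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal()
tau, w = integrated_autocorr_time(x)
print(f"tau_int ~ {tau:.1f} (exact {(1 + phi) / (2 * (1 - phi)):.1f}), window {w}")
```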
NASA Technical Reports Server (NTRS)
Fowell, Richard A.
1989-01-01
Most simulation plots are heavily oversampled. Ignoring unnecessary data points dramatically reduces plot time with imperceptible effect on quality. The technique is suited to most plot devices. Data thinning tripled the speed of the department's laser printer for large simulation plots. This reduced printer delays without the expense of a faster laser printer. Surprisingly, it saved computer time as well. All plot data are now thinned, including PostScript and terminal plots. The problem, solution, and conclusions are described. The thinning algorithm is described and performance studies are presented. To obtain FORTRAN 77 or C source listings, mail a SASE to the author.
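The report's exact thinning rule is not given in the abstract, so the sketch below shows only one plausible approach under the assumption of strictly increasing x values: greedily extend a chord from the last kept point and keep a new point only when some skipped sample would deviate from that chord by more than a tolerance.

```python
import numpy as np

def thin_polyline(x, y, tol):
    """Greedy thinning of an oversampled trace: keep indices such that every
    dropped point lies within `tol` (vertically) of the chord joining the
    surrounding kept points.  Assumes x is strictly increasing."""
    n = len(x)
    keep = [0]
    i = 0
    while i < n - 1:
        j = i + 2
        # push j forward while the chord (i, j) still represents all points between
        while j < n and all(
            abs(y[i] + (y[j] - y[i]) * (x[k] - x[i]) / (x[j] - x[i]) - y[k]) <= tol
            for k in range(i + 1, j)
        ):
            j += 1
        keep.append(min(j - 1, n - 1))   # last endpoint whose chord was acceptable
        i = keep[-1]
    return keep

# toy usage: a heavily oversampled smooth signal
t = np.linspace(0.0, 10.0, 5000)
sig = np.sin(t) + 0.5 * np.sin(3.0 * t)
idx = thin_polyline(t, sig, tol=1e-3)
print(f"kept {len(idx)} of {len(t)} points")
```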
Atomistic Modeling of Pd Site Preference in NiTi
NASA Technical Reports Server (NTRS)
Bozzolo, Guillermo; Noebe, Ronald D.; Mosca, Hugo O.
2004-01-01
An analysis of the site substitution behavior of Pd in NiTi was performed using the BFS method for alloys. Through a combination of Monte Carlo simulations and detailed atom-by-atom energetic analyses of various computational cells, representing compositions of NiTi with up to 10 at% Pd, a detailed understanding of site occupancy of Pd in NiTi was revealed. Pd substituted at the expense of Ni in a NiTi alloy will prefer the Ni-sites. Pd substituted at the expense of Ti shows a very weak preference for Ti-sites that diminishes as the amount of Pd in the alloy increases and as the temperature increases.
On Using Surrogates with Genetic Programming.
Hildebrandt, Torsten; Branke, Jürgen
2015-01-01
One way to accelerate evolutionary algorithms with expensive fitness evaluations is to combine them with surrogate models. Surrogate models are efficiently computable approximations of the fitness function, derived by means of statistical or machine learning techniques from samples of fully evaluated solutions. But these models usually require a numerical representation, and therefore cannot be used with the tree representation of genetic programming (GP). In this paper, we present a new way to use surrogate models with GP. Rather than using the genotype directly as input to the surrogate model, we propose using a phenotypic characterization. This phenotypic characterization can be computed efficiently and allows us to define approximate measures of equivalence and similarity. Using a stochastic, dynamic job shop scenario as an example of simulation-based GP with an expensive fitness evaluation, we show how these ideas can be used to construct surrogate models and improve the convergence speed and solution quality of GP.
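As a rough illustration of this idea (with hypothetical names and a deliberately tiny setup, not the authors' job-shop code), a dispatching rule evolved by GP can be characterized by the decisions it makes on a fixed set of reference situations, and a cheap nearest-neighbour surrogate can then reuse archived fitness values of phenotypically similar rules to pre-screen offspring before any expensive simulation is run.

```python
import numpy as np

def phenotypic_characterization(rule, reference_jobs):
    """Rank vector of the priorities a rule assigns to fixed reference jobs;
    using ranks makes the characterization invariant to monotone rescaling."""
    scores = np.array([rule(job) for job in reference_jobs])
    return scores.argsort().argsort()

def surrogate_fitness(candidate_pc, archive_pcs, archive_fit):
    """1-nearest-neighbour surrogate: reuse the simulated fitness of the most
    phenotypically similar, already fully evaluated rule."""
    dists = np.abs(np.asarray(archive_pcs) - candidate_pc).sum(axis=1)
    return archive_fit[int(dists.argmin())]

# toy usage with two hand-written "rules" standing in for GP trees
reference_jobs = [dict(proc=p, due=d) for p, d in [(3, 9), (5, 7), (2, 12), (8, 8)]]
spt = lambda j: j["proc"]                  # shortest processing time
edd = lambda j: j["due"]                   # earliest due date
archive_pcs = [phenotypic_characterization(r, reference_jobs) for r in (spt, edd)]
archive_fit = np.array([105.0, 98.0])      # pretend these came from full simulations
new_rule = lambda j: 2.0 * j["proc"] + 0.01 * j["due"]
pc = phenotypic_characterization(new_rule, reference_jobs)
print("predicted fitness:", surrogate_fitness(pc, archive_pcs, archive_fit))
```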
Condor-COPASI: high-throughput computing for biochemical networks
2012-01-01
Background Mathematical modelling has become a standard technique to improve our understanding of complex biological systems. As models become larger and more complex, simulations and analyses require increasing amounts of computational power. Clusters of computers in a high-throughput computing environment can help to provide the resources required for computationally expensive model analysis. However, exploiting such a system can be difficult for users without the necessary expertise. Results We present Condor-COPASI, a server-based software tool that integrates COPASI, a biological pathway simulation tool, with Condor, a high-throughput computing environment. Condor-COPASI provides a web-based interface, which makes it extremely easy for a user to run a number of model simulation and analysis tasks in parallel. Tasks are transparently split into smaller parts, and submitted for execution on a Condor pool. Result output is presented to the user in a number of formats, including tables and interactive graphical displays. Conclusions Condor-COPASI can effectively use a Condor high-throughput computing environment to provide significant gains in performance for a number of model simulation and analysis tasks. Condor-COPASI is free, open source software, released under the Artistic License 2.0, and is suitable for use by any institution with access to a Condor pool. Source code is freely available for download at http://code.google.com/p/condor-copasi/, along with full instructions on deployment and usage. PMID:22834945
BioNetFit: a fitting tool compatible with BioNetGen, NFsim and distributed computing environments
Thomas, Brandon R.; Chylek, Lily A.; Colvin, Joshua; Sirimulla, Suman; Clayton, Andrew H.A.; Hlavacek, William S.; Posner, Richard G.
2016-01-01
Summary: Rule-based models are analyzed with specialized simulators, such as those provided by the BioNetGen and NFsim open-source software packages. Here, we present BioNetFit, a general-purpose fitting tool that is compatible with BioNetGen and NFsim. BioNetFit is designed to take advantage of distributed computing resources. This feature facilitates fitting (i.e. optimization of parameter values for consistency with data) when simulations are computationally expensive. Availability and implementation: BioNetFit can be used on stand-alone Mac, Windows/Cygwin, and Linux platforms and on Linux-based clusters running SLURM, Torque/PBS, or SGE. The BioNetFit source code (Perl) is freely available (http://bionetfit.nau.edu). Supplementary information: Supplementary data are available at Bioinformatics online. Contact: bionetgen.help@gmail.com PMID:26556387
A simple parameterization of aerosol emissions in RAMS
NASA Astrophysics Data System (ADS)
Letcher, Theodore
Throughout the past decade, a high degree of attention has been focused on determining the microphysical impact of anthropogenically enhanced concentrations of Cloud Condensation Nuclei (CCN) on orographic snowfall in the mountains of the western United States. This area has garnered a lot of attention due to the implications this effect may have on local water resource distribution within the region. Recent advances in computing power and the development of highly advanced microphysical schemes within numerical models have provided an estimation of the sensitivity that orographic snowfall has to changes in atmospheric CCN concentrations. However, what is still lacking is a coupling between these advanced microphysical schemes and a real-world representation of CCN sources. Previously, an attempt to represent the heterogeneous evolution of aerosol was made by coupling three-dimensional aerosol output from the WRF Chemistry model to the Colorado State University (CSU) Regional Atmospheric Modeling System (RAMS) (Ward et al. 2011). The biggest problem associated with this scheme was its computational expense, which was so high that it was prohibitive for simulations with fine enough resolution to accurately represent microphysical processes. To improve upon this method, a new parameterization for aerosol emission was developed in such a way that it was fully contained within RAMS. Several assumptions went into generating a computationally efficient aerosol emissions parameterization in RAMS. The most notable assumption was the decision to neglect the chemical processes involved in the formation of Secondary Aerosol (SA), and instead treat SA as primary aerosol via short-term WRF-CHEM simulations. While SA makes up a substantial portion of the total aerosol burden (much of which is made up of organic material), the representation of this process is highly complex and highly expensive within a numerical model. Furthermore, SA formation is greatly reduced during the winter months due to the lack of naturally produced organic VOCs. For these reasons, it was felt that neglecting SA formation within the model was the best course of action. The actual parameterization uses a prescribed source map to add aerosol to the model at two vertical levels that surround an arbitrary height decided by the user. To best represent the real world, the WRF Chemistry model was run using the National Emissions Inventory (NEI2005) to represent anthropogenic emissions and the Model of Emissions of Gases and Aerosols from Nature (MEGAN) to represent natural contributions to aerosol. WRF Chemistry was run for one hour, after which the aerosol output along with the hygroscopicity parameter (κ) were saved into a data file that could be interpolated to an arbitrary grid used in RAMS. The comparison of this parameterization to observations collected at Mesa Verde National Park (MVNP) during the Inhibition of Snowfall from Pollution Aerosol (ISPA-III) field campaign yielded promising results. The model was able to simulate the variability in near-surface aerosol concentration with reasonable accuracy, though with a general low bias. Furthermore, this model compared much better to the observations than did the WRF Chemistry model, at a fraction of the computational expense.
This emissions scheme produced reasonable aerosol concentrations and can therefore be used to provide an estimate of the seasonal impact of increased CCN on water resources in Western Colorado with relatively low computational expense.
Benchmark tests for a Formula SAE Student car prototyping
NASA Astrophysics Data System (ADS)
Mariasiu, Florin
2011-12-01
Aerodynamic characteristics of a vehicle are important elements in its design and construction. A low drag coefficient brings significant fuel savings and increased engine power efficiency. When designing and developing vehicles through computer simulation, dedicated CFD (Computational Fluid Dynamics) software packages are used to determine the vehicle's aerodynamic characteristics. However, the results obtained by this faster and cheaper method are validated by experiments in wind tunnel tests, which are expensive and require complex testing equipment operated at relatively high cost. Therefore, the emergence and development of new low-cost testing methods to validate CFD simulation results would bring great economic benefits to the vehicle prototyping process. This paper presents the initial development process of a Formula SAE Student race-car prototype using CFD simulation and also presents a measurement system based on low-cost sensors through which the CFD simulation results were experimentally validated. The CFD software package used for simulation was SolidWorks with the FloXpress add-on, and the experimental measurement system was built using four piezoresistive FlexiForce force sensors.
Application of multi-grid method on the simulation of incremental forging processes
NASA Astrophysics Data System (ADS)
Ramadan, Mohamad; Khaled, Mahmoud; Fourment, Lionel
2016-10-01
Numerical simulation has become essential in manufacturing large parts by incremental forging processes. It is a splendid tool for revealing physical phenomena; behind the scenes, however, an expensive bill must be paid in the form of computational time. That is why many techniques have been developed to decrease the computational time of numerical simulation. The Multi-Grid method is a numerical procedure that reduces the computational time of numerical calculation by performing the resolution of the system of equations on several meshes of decreasing size, which allows both the low- and high-frequency components of the solution to be smoothed faster. In this paper a Multi-Grid method is applied to a cogging process in the software Forge 3. The study is carried out using an increasing number of degrees of freedom. The results show that the calculation time is divided by two for a mesh of 39,000 nodes. The method is promising, especially if coupled with a Multi-Mesh method.
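Forge 3's actual multi-mesh implementation is of course far more involved; the sketch below only illustrates the underlying two-grid idea on a 1-D Poisson problem, where cheap smoothing on a coarser grid removes the low-frequency error that the fine-grid smoother handles poorly. Grid sizes and sweep counts are arbitrary choices for the example.

```python
import numpy as np

def jacobi(u, f, h, sweeps, w=0.67):
    """Damped Jacobi smoothing for -u'' = f with homogeneous Dirichlet ends;
    efficient at damping the high-frequency error on the current grid."""
    for _ in range(sweeps):
        u[1:-1] += w * (0.5 * (u[:-2] + u[2:] + h * h * f[1:-1]) - u[1:-1])
    return u

def two_grid_cycle(u, f, h):
    """Smooth, restrict the residual to a grid twice as coarse, smooth the
    error equation there cheaply, prolong the correction back, re-smooth."""
    u = jacobi(u, f, h, sweeps=3)
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] + (u[:-2] - 2.0 * u[1:-1] + u[2:]) / (h * h)   # residual of -u'' = f
    rc = r[::2].copy()                                               # restriction (injection)
    ec = jacobi(np.zeros_like(rc), rc, 2.0 * h, sweeps=40)           # cheap coarse "solve"
    fine_idx, coarse_idx = np.arange(len(u)), np.arange(0, len(u), 2)
    u += np.interp(fine_idx, coarse_idx, ec)                         # prolongation + correction
    return jacobi(u, f, h, sweeps=3)

# toy usage on 129 nodes so the coarse grid (65 nodes) includes both boundaries
n = 129
h = 1.0 / (n - 1)
f = np.sin(np.pi * np.linspace(0.0, 1.0, n))
u = np.zeros(n)
for _ in range(10):
    u = two_grid_cycle(u, f, h)
res = f[1:-1] + (u[:-2] - 2.0 * u[1:-1] + u[2:]) / (h * h)
print("residual norm after 10 cycles:", np.linalg.norm(res))
```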
The ReaxFF reactive force-field: Development, applications, and future directions
Senftle, Thomas; Hong, Sungwook; Islam, Md Mahbubul; ...
2016-03-04
The reactive force-field (ReaxFF) interatomic potential is a powerful computational tool for exploring, developing and optimizing material properties. Methods based on the principles of quantum mechanics (QM), while offering valuable theoretical guidance at the electronic level, are often too computationally intense for simulations that consider the full dynamic evolution of a system. Alternatively, empirical interatomic potentials that are based on classical principles require significantly fewer computational resources, which enables simulations to better describe dynamic processes over longer timeframes and on larger scales. Such methods, however, typically require a predefined connectivity between atoms, precluding simulations that involve reactive events. The ReaxFF method was developed to help bridge this gap. Approaching the gap from the classical side, ReaxFF casts the empirical interatomic potential within a bond-order formalism, thus implicitly describing chemical bonding without expensive QM calculations. This article provides an overview of the development, application, and future directions of the ReaxFF method.
A computational approach for coupled 1D and 2D/3D CFD modelling of pulse Tube cryocoolers
NASA Astrophysics Data System (ADS)
Fang, T.; Spoor, P. S.; Ghiaasiaan, S. M.
2017-12-01
The physics behind Stirling-type cryocoolers is complicated. One-dimensional (1D) simulation tools offer limited details and accuracy, in particular for cryocoolers that have non-linear configurations. Multi-dimensional Computational Fluid Dynamic (CFD) methods are useful but are computationally expensive in simulating cryocooler systems in their entirety. In view of the fact that some components of a cryocooler, e.g., inertance tubes and compliance tanks, can be modelled as 1D components with little loss of critical information, a 1D-2D/3D coupled model was developed. Accordingly, one-dimensional-like components are represented by specifically developed routines. These routines can be coupled to CFD codes and provide boundary conditions for 2D/3D CFD simulations. The developed coupled model, while preserving sufficient flow field details, is two orders of magnitude faster than equivalent 2D/3D CFD models. The predictions show good agreement with experimental data and the 2D/3D CFD model.
Loeffler, Johannes R; Ehmki, Emanuel S R; Fuchs, Julian E; Liedl, Klaus R
2016-05-01
Urea derivatives are ubiquitously found in many chemical disciplines. N,N'-substituted ureas may show different conformational preferences depending on their substitution pattern. The high energetic barrier for isomerization of the cis and trans state poses additional challenges on computational simulation techniques aiming at a reproduction of the biological properties of urea derivatives. Herein, we investigate energetics of urea conformations and their interconversion using a broad spectrum of methodologies ranging from data mining, via quantum chemistry to molecular dynamics simulation and free energy calculations. We find that the inversion of urea conformations is inherently slow and beyond the time scale of typical simulation protocols. Therefore, extra care needs to be taken by computational chemists to work with appropriate model systems. We find that both knowledge-driven approaches as well as physics-based methods may guide molecular modelers towards accurate starting structures for expensive calculations to ensure that conformations of urea derivatives are modeled as adequately as possible.
Human Machine Interfaces for Teleoperators and Virtual Environments Conference
NASA Technical Reports Server (NTRS)
1990-01-01
In a teleoperator system the human operator senses, moves within, and operates upon a remote or hazardous environment by means of a slave mechanism (a mechanism often referred to as a teleoperator). In a virtual environment system the interactive human machine interface is retained, but the slave mechanism and its environment are replaced by a computer simulation. Video is replaced by computer graphics. The auditory and force sensations imparted to the human operator are similarly computer generated. In contrast to a teleoperator system, where the purpose is to extend the operator's sensorimotor system in a manner that facilitates exploration and manipulation of the physical environment, in a virtual environment system the purpose is to train, inform, alter, or study the human operator, or to modify the state of the computer and the information environment. A major application in which the human operator is the target is that of flight simulation. Although flight simulators have been around for more than a decade, they have had little impact outside aviation, presumably because the application was so specialized and so expensive.
Statistical Emulator for Expensive Classification Simulators
NASA Technical Reports Server (NTRS)
Ross, Jerret; Samareh, Jamshid A.
2016-01-01
Expensive simulators prevent any kind of meaningful analysis from being performed on the phenomena they model. To get around this problem, the concept of using a statistical emulator as a surrogate representation of the simulator was introduced in the 1980s. Presently, simulators have become more and more complex, and as a result running a single example on these simulators is very expensive and can take days to weeks or even months. Many new techniques, termed criteria, have been introduced which sequentially select the next best (most informative to the emulator) point that should be run on the simulator. These criteria allow for the creation of an emulator with only a small number of simulator runs. We follow and extend this framework to expensive classification simulators.
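A minimal sketch of the sequential-design idea using scikit-learn's Gaussian process classifier and a generic uncertainty (entropy) criterion; the toy `expensive_simulator`, the candidate pool, and the budget are assumptions for illustration and do not reproduce the specific criteria extended in the paper.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier

def expensive_simulator(x):
    """Stand-in for a long-running simulation with a binary outcome
    (e.g. feasible / infeasible design); here just a unit-circle test."""
    return int(x[0] ** 2 + x[1] ** 2 < 1.0)

rng = np.random.default_rng(1)
X = np.vstack([[0.0, 0.0], rng.uniform(-2, 2, size=(9, 2))])   # small initial design
y = np.array([expensive_simulator(x) for x in X])

candidates = rng.uniform(-2, 2, size=(2000, 2))                # cheap-to-propose pool
for _ in range(20):                                            # sequential design loop
    emulator = GaussianProcessClassifier().fit(X, y)
    p = emulator.predict_proba(candidates)[:, 1]
    # criterion: run the simulator where the emulator is least certain
    entropy = -(p * np.log(p + 1e-12) + (1 - p) * np.log(1 - p + 1e-12))
    x_new = candidates[np.argmax(entropy)]
    X = np.vstack([X, x_new])
    y = np.append(y, expensive_simulator(x_new))

print("emulator trained with only", len(y), "simulator runs")
```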
NASA Astrophysics Data System (ADS)
Caciuffo, Roberto; Esposti, Alessandra Degli; Deleuze, Michael S.; Leigh, David A.; Murphy, Aden; Paci, Barbara; Parker, Stewart F.; Zerbetto, Francesco
1998-12-01
The inelastic neutron scattering (INS) spectrum of the original benzylic amide [2]catenane is recorded and simulated by a semiempirical quantum chemical procedure coupled with the most comprehensive approach available to date, the CLIMAX program. The successful simulation of the spectrum indicates that the modified neglect of differential overlap (MNDO) model can reproduce the intramolecular vibrations of a molecular system as large as a catenane (136 atoms). Because of the computational costs involved and some numerical instabilities, a less expensive approach is attempted which involves the molecular mechanics-based calculation of the INS response in terms of the most basic formulation for the scattering activity. The encouraging results obtained validate the less computationally intensive procedure and allow its extension to the calculation of the INS spectrum for a second, theoretical, co-conformer, which, although structurally and energetically reasonable, is not, in fact, found in the solid state. The second structure was produced by a Monte Carlo simulated annealing method run in the conformational space (a procedure that would have been prohibitively expensive at the semiempirical level) and is characterized by a higher degree of intramolecular hydrogen bonding than the x-ray structure. The two alternative structures yield different simulated spectra, only one of which, the authentic one, is compatible with the experimental data. Comparison of the two simulated and experimental spectra affords the identification of an inelastic neutron scattering spectral signature of the correct hydrogen bonding motif in the region slightly above 700 cm-1. The study illustrates that combinations of simulated INS data and experimental results can be successfully used to discriminate between different proposed structures or possible hydrogen bonding motifs in large functional molecular systems.
NASA Astrophysics Data System (ADS)
Srinath, Srikar; Poyneer, Lisa A.; Rudy, Alexander R.; Ammons, S. M.
2014-08-01
The advent of expensive, large-aperture telescopes and complex adaptive optics (AO) systems has strengthened the need for detailed simulation of such systems from the top of the atmosphere to the control algorithms. The credibility of any simulation is underpinned by the quality of the atmosphere model used for introducing phase variations into the incident photons. Hitherto, simulations which incorporate wind layers have relied upon phase screen generation methods that tax the computation and memory capacities of the platforms on which they run. This places limits on parameters of a simulation, such as exposure time or resolution, thus compromising its utility. As aperture sizes and fields of view increase, the problem will only get worse. We present an autoregressive method for evolving atmospheric phase that is efficient in its use of computational resources and allows for variability in the power contained in the frozen flow or stochastic components of the atmosphere. Users have the flexibility of generating atmosphere datacubes in advance of runs where memory constraints allow, to save on computation time, or of computing the phase at each time step for long exposure times. Preliminary tests of model atmospheres generated using this method show power spectral density and rms phase in accordance with established metrics for Kolmogorov models.
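The sketch below is only a crude illustration of autoregressive phase evolution, not the authors' method: it uses a single global boiling coefficient `alpha` rather than a per-spatial-frequency weighting, the spectral normalization is approximate, and all parameter values (grid size, pixel scale, r0, alpha) are invented for the example.

```python
import numpy as np

def kolmogorov_screen(n, dx, r0, rng):
    """One random phase screen with an approximately Kolmogorov spectrum,
    generated by shaping complex white noise with a k^(-11/3) PSD in the
    frequency domain (overall normalization is only approximate here)."""
    f = np.fft.fftfreq(n, d=dx)
    kx, ky = np.meshgrid(f, f)
    k = np.hypot(kx, ky)
    k[0, 0] = np.inf                       # suppress the undefined piston mode
    amp = np.sqrt(0.023 * r0 ** (-5.0 / 3.0)) * k ** (-11.0 / 6.0)
    noise = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return np.fft.ifft2(amp * noise).real * n / dx

def ar1_phase_sequence(n_steps, n=128, dx=0.02, r0=0.15, alpha=0.995, seed=0):
    """First-order autoregressive evolution: each new screen keeps a fraction
    `alpha` of the previous one and injects fresh Kolmogorov noise scaled by
    sqrt(1 - alpha^2), so the marginal statistics stay stationary.  A
    per-frequency alpha plus a bulk shift for frozen flow would be closer to
    the method the abstract describes."""
    rng = np.random.default_rng(seed)
    phase = kolmogorov_screen(n, dx, r0, rng)
    for _ in range(n_steps):
        phase = alpha * phase + np.sqrt(1.0 - alpha ** 2) * kolmogorov_screen(n, dx, r0, rng)
        yield phase

for screen in ar1_phase_sequence(5):
    print(f"rms phase: {screen.std():.2f} rad")
```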
[Cost analysis for navigation in knee endoprosthetics].
Cerha, O; Kirschner, S; Günther, K-P; Lützner, J
2009-12-01
Total knee arthroplasty (TKA) is one of the most frequent procedures in orthopaedic surgery. The outcome depends on a range of factors including alignment of the leg and the positioning of the implant in addition to patient-associated factors. Computer-assisted navigation systems can improve the restoration of a neutral leg alignment. This procedure has been established especially in Europe and North America. The additional expenses are not reimbursed in the German DRG system (Diagnosis Related Groups). In the present study a cost analysis of computer-assisted TKA compared to the conventional technique was performed. The acquisition expenses of various navigation systems (5 and 10 year depreciation), annual costs for maintenance and software updates as well as the accompanying costs per operation (consumables, additional operating time) were considered. The additional operating time was determined on the basis of a meta-analysis according to the current literature. Situations with 25, 50, 100, 200 and 500 computer-assisted TKAs per year were simulated. The amount of the incremental costs of the computer-assisted TKA depends mainly on the annual volume and the additional operating time. A relevant decrease of the incremental costs was detected between 50 and 100 procedures per year. In a model with 100 computer-assisted TKAs per year an additional operating time of 14 mins and a 10 year depreciation of the investment costs, the incremental expenses amount to
Computationally Efficient Multiconfigurational Reactive Molecular Dynamics
Yamashita, Takefumi; Peng, Yuxing; Knight, Chris; Voth, Gregory A.
2012-01-01
It is a computationally demanding task to explicitly simulate the electronic degrees of freedom in a system to observe the chemical transformations of interest, while at the same time sampling the time and length scales required to converge statistical properties and thus reduce artifacts due to initial conditions, finite-size effects, and limited sampling. One solution that significantly reduces the computational expense consists of molecular models in which effective interactions between particles govern the dynamics of the system. If the interaction potentials in these models are developed to reproduce calculated properties from electronic structure calculations and/or ab initio molecular dynamics simulations, then one can calculate accurate properties at a fraction of the computational cost. Multiconfigurational algorithms model the system as a linear combination of several chemical bonding topologies to simulate chemical reactions, also sometimes referred to as “multistate”. These algorithms typically utilize energy and force calculations already found in popular molecular dynamics software packages, thus facilitating their implementation without significant changes to the structure of the code. However, the evaluation of energies and forces for several bonding topologies per simulation step can lead to poor computational efficiency if redundancy is not efficiently removed, particularly with respect to the calculation of long-ranged Coulombic interactions. This paper presents accurate approximations (effective long-range interaction and resulting hybrid methods) and multiple-program parallelization strategies for the efficient calculation of electrostatic interactions in reactive molecular simulations. PMID:25100924
Chang, Ching-I; Yan, Huey-Yeu; Sung, Wen-Hsu; Shen, Shu-Cheng; Chuang, Pao-Yu
2006-01-01
The purpose of this research was to develop a computer-aided instruction system for intra-aortic balloon pumping (IABP) skills in clinical nursing with virtual instrument (VI) concepts. Computer graphic technologies were incorporated to provide not only static clinical nursing education, but also the simulated function of operating an expensive medical instrument with VI techniques. The content of nursing knowledge was adapted from current well-accepted clinical training materials. The VI functions were developed using computer graphic technology with photos of real medical instruments taken by digital camera. We hope the system can provide important teaching assistance to beginners in nursing education.
A Zonal Approach for Prediction of Jet Noise
NASA Technical Reports Server (NTRS)
Shih, S. H.; Hixon, D. R.; Mankbadi, Reda R.
1995-01-01
A zonal approach for direct computation of sound generation and propagation from a supersonic jet is investigated. The present work splits the computational domain into a nonlinear, acoustic-source regime and a linear acoustic wave propagation regime. In the nonlinear regime, the unsteady flow is governed by the large-scale equations, which are the filtered compressible Navier-Stokes equations. In the linear acoustic regime, the sound wave propagation is described by the linearized Euler equations. Computational results are presented for a supersonic jet at M = 2.1. It is demonstrated that no spurious modes are generated in the matching region and the computational expense is reduced substantially as opposed to a fully large-scale simulation.
Martin, Rob; Rojas, David; Cheung, Jeffrey J H; Weber, Bryce; Kapralos, Bill; Dubrowski, Adam
2013-01-01
Simulation-augmented education and training (SAET) is an expensive educational tool that may be facilitated through social networking technologies or Computer Supported Collaborative Learning (CSCL). This study examined the perceptions of medical undergraduates participating in SAET for knot tying skills to identify perceptions of, and barriers to, the implementation of social networking technologies within a broader medical education curriculum. The majority of participants (89%) found CSCL aided their learning of the technical skill and identified privacy and accessibility as major barriers to the tool's implementation.
Virtual worlds and team training.
Dev, Parvati; Youngblood, Patricia; Heinrichs, W Leroy; Kusumoto, Laura
2007-06-01
An important component of all emergency medicine residency programs is managing trauma effectively as a member of an emergency medicine team, but practice on live patients is often impractical and mannequin-based simulators are expensive and require all trainees to be physically present at the same location. This article describes a project to develop and evaluate a computer-based simulator (the Virtual Emergency Department) for distance training in teamwork and leadership in trauma management. The virtual environment provides repeated practice opportunities with life-threatening trauma cases in a safe and reproducible setting.
AN OVERVIEW OF REDUCED ORDER MODELING TECHNIQUES FOR SAFETY APPLICATIONS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mandelli, D.; Alfonsi, A.; Talbot, P.
2016-10-01
The RISMC project is developing new advanced simulation-based tools to perform Computational Risk Analysis (CRA) for the existing fleet of U.S. nuclear power plants (NPPs). These tools numerically model not only the thermal-hydraulic behavior of the reactor's primary and secondary systems, but also external event temporal evolution and component/system ageing. Thus, this is not only a multi-physics problem being addressed, but also a multi-scale problem (both spatial, µm-mm-m, and temporal, seconds-hours-years). As part of the RISMC CRA approach, a large number of computationally expensive simulation runs may be required. An important aspect is that even though computational power is growing, the overall computational cost of a RISMC analysis using brute-force methods may be not viable for certain cases. A solution that is being evaluated to assist with the computational issue is the use of reduced order modeling techniques. During FY2015, we investigated and applied reduced order modeling techniques to decrease the RISMC analysis computational cost by decreasing the number of simulation runs; for this analysis improvement we used surrogate models instead of the actual simulation codes. This article focuses on the use of reduced order modeling techniques that can be applied to RISMC analyses in order to generate, analyze, and visualize data. In particular, we focus on surrogate models that approximate the simulation results but in a much faster time (microseconds instead of hours/days).
Pronk, Sander; Pouya, Iman; Lundborg, Magnus; Rotskoff, Grant; Wesén, Björn; Kasson, Peter M; Lindahl, Erik
2015-06-09
Computational chemistry and other simulation fields are critically dependent on computing resources, but few problems scale efficiently to the hundreds of thousands of processors available in current supercomputers, particularly for molecular dynamics. This has turned into a bottleneck as new hardware generations primarily provide more processing units rather than making individual units much faster, which simulation applications are addressing by increasingly focusing on sampling with algorithms such as free-energy perturbation, Markov state modeling, metadynamics, or milestoning. All these rely on combining results from multiple simulations into a single observation. They are potentially powerful approaches that aim to predict experimental observables directly, but this comes at the expense of added complexity in selecting sampling strategies and keeping track of dozens to thousands of simulations and their dependencies. Here, we describe how the distributed execution framework Copernicus allows the expression of such algorithms in generic workflows: dataflow programs. Because dataflow algorithms explicitly state the dependencies of each constituent part, algorithms only need to be described on a conceptual level, after which the execution is maximally parallel. The fully automated execution facilitates the optimization of these algorithms with adaptive sampling, where undersampled regions are automatically detected and targeted without user intervention. We show how several such algorithms can be formulated for computational chemistry problems, and how they are executed efficiently with many loosely coupled simulations using either distributed or parallel resources with Copernicus.
Using Agent Base Models to Optimize Large Scale Network for Large System Inventories
NASA Technical Reports Server (NTRS)
Shameldin, Ramez Ahmed; Bowling, Shannon R.
2010-01-01
The aim of this paper is to use Agent-Based Models (ABM) to optimize large-scale network handling capabilities for large system inventories and to implement strategies for the purpose of reducing capital expenses. The models used in this paper employ computational algorithms and procedures implemented in Matlab to simulate agent-based models, and they are run on clusters that act as a high-performance computing platform for parallel execution. In both cases, a model is defined as a compilation of a set of structures and processes assumed to underlie the behavior of a network system.
Adjoint sensitivity analysis of plasmonic structures using the FDTD method.
Zhang, Yu; Ahmed, Osman S; Bakr, Mohamed H
2014-05-15
We present an adjoint variable method for estimating the sensitivities of arbitrary responses with respect to the parameters of dispersive discontinuities in nanoplasmonic devices. Our theory is formulated in terms of the electric field components in the vicinity of the perturbed discontinuities. The adjoint sensitivities are computed using at most one extra finite-difference time-domain (FDTD) simulation regardless of the number of parameters. Our approach is illustrated through the sensitivity analysis of an add-drop coupler consisting of a square ring resonator between two parallel waveguides. The computed adjoint sensitivities of the scattering parameters are compared with those obtained using the accurate but computationally expensive central finite difference approach.
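The FDTD formulation itself is not reproduced here; the short sketch below only illustrates the generic adjoint principle the abstract relies on, for a steady linear model A(p)x = b with response F = cᵀx: a single extra adjoint solve yields the sensitivities with respect to all parameters at once, in contrast to one finite-difference perturbation per parameter. All matrices and parameter derivatives are invented for the example.

```python
import numpy as np

def adjoint_sensitivities(A, dA_dp, b, c):
    """Sensitivities dF/dp_k of the response F = c^T x, where A(p) x = b:
    one forward solve plus one adjoint solve give every parameter sensitivity,
    since dF/dp_k = -lam^T (dA/dp_k) x with A^T lam = c."""
    x = np.linalg.solve(A, b)        # the one "forward simulation"
    lam = np.linalg.solve(A.T, c)    # the single extra "adjoint simulation"
    return np.array([-lam @ (dAk @ x) for dAk in dA_dp])

rng = np.random.default_rng(3)
n, n_params = 50, 8
A = np.eye(n) + 0.1 * rng.normal(size=(n, n))
dA_dp = [rng.normal(size=(n, n)) for _ in range(n_params)]   # dA/dp_k, illustrative
b, c = rng.normal(size=n), rng.normal(size=n)
print(adjoint_sensitivities(A, dA_dp, b, c).round(3))
```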
48 CFR 9904.410-60 - Illustrations.
Code of Federal Regulations, 2012 CFR
2012-10-01
... budgets for the other segment should be removed from B's G&A expense pool and transferred to the other...; all home office expenses allocated to Segment H are included in Segment H's G&A expense pool. (2) This... cost of scientific computer operations in its G&A expense pool. The scientific computer is used...
Evolutionary Development of the Simulation by Logical Modeling System (SIBYL)
NASA Technical Reports Server (NTRS)
Wu, Helen
1995-01-01
Through the evolutionary development of the Simulation by Logical Modeling System (SIBYL) we have re-engineered the expensive and complex IBM mainframe-based Long-term Hardware Projection Model (LHPM) into a robust, cost-effective computer-based model that is easy to use. We achieved significant cost reductions and improved productivity in preparing long-term forecasts of Space Shuttle Main Engine (SSME) hardware. The LHPM for the SSME is a stochastic simulation model that projects the hardware requirements over 10 years. SIBYL is now the primary modeling tool for developing SSME logistics proposals and the Program Operating Plan (POP) for NASA and divisional marketing studies.
NASA Astrophysics Data System (ADS)
Akhtar, Taimoor; Shoemaker, Christine
2016-04-01
Watershed model calibration is inherently a multi-criteria problem. Conflicting trade-offs exist between different quantifiable calibration criteria, indicating the non-existence of a single optimal parameterization. Hence, many experts prefer a manual approach to calibration where the inherent multi-objective nature of the calibration problem is addressed through an interactive, subjective, time-intensive and complex decision making process. Multi-objective optimization can be used to efficiently identify multiple plausible calibration alternatives and assist calibration experts during the parameter estimation process. However, there are key challenges to the use of multi-objective optimization in the parameter estimation process, which include: 1) multi-objective optimization usually requires many model simulations, which is difficult for complex simulation models that are computationally expensive; and 2) selection of one from the numerous calibration alternatives provided by multi-objective optimization is non-trivial. This study proposes a "Hybrid Automatic Manual Strategy" (HAMS) for watershed model calibration to specifically address the above-mentioned challenges. HAMS employs a 3-stage framework for parameter estimation. Stage 1 incorporates the use of an efficient surrogate multi-objective algorithm, GOMORS, for identification of numerous calibration alternatives within a limited simulation evaluation budget. The novelty of HAMS is embedded in Stages 2 and 3, where an interactive visual and metric based analytics framework is available as a decision support tool to choose a single calibration from the numerous alternatives identified in Stage 1. Stage 2 of HAMS provides a goodness-of-fit measure / metric based interactive framework for identification of a small, meaningful and diverse subset (typically fewer than 10) of calibration alternatives from the numerous alternatives obtained in Stage 1. Stage 3 incorporates the use of an interactive visual analytics framework for decision support in selection of one parameter combination from the alternatives identified in Stage 2. HAMS is applied for calibration of the flow parameters of a SWAT (Soil and Water Assessment Tool) model designed to simulate flow in the Cannonsville watershed in upstate New York. Results from the application of HAMS to Cannonsville indicate that efficient multi-objective optimization and interactive visual and metric based analytics can bridge the gap between the effective use of both automatic and manual strategies for parameter estimation of computationally expensive watershed models.
BioNetFit: a fitting tool compatible with BioNetGen, NFsim and distributed computing environments.
Thomas, Brandon R; Chylek, Lily A; Colvin, Joshua; Sirimulla, Suman; Clayton, Andrew H A; Hlavacek, William S; Posner, Richard G
2016-03-01
Rule-based models are analyzed with specialized simulators, such as those provided by the BioNetGen and NFsim open-source software packages. Here, we present BioNetFit, a general-purpose fitting tool that is compatible with BioNetGen and NFsim. BioNetFit is designed to take advantage of distributed computing resources. This feature facilitates fitting (i.e. optimization of parameter values for consistency with data) when simulations are computationally expensive. BioNetFit can be used on stand-alone Mac, Windows/Cygwin, and Linux platforms and on Linux-based clusters running SLURM, Torque/PBS, or SGE. The BioNetFit source code (Perl) is freely available (http://bionetfit.nau.edu). Supplementary data are available at Bioinformatics online. Contact: bionetgen.help@gmail.com
Distributed computing for membrane-based modeling of action potential propagation.
Porras, D; Rogers, J M; Smith, W M; Pollard, A E
2000-08-01
Action potential propagation simulations with physiologic membrane currents and macroscopic tissue dimensions are computationally expensive. We, therefore, analyzed distributed computing schemes to reduce execution time in workstation clusters by parallelizing solutions with message passing. Four schemes were considered in two-dimensional monodomain simulations with the Beeler-Reuter membrane equations. Parallel speedups measured with each scheme were compared to theoretical speedups, recognizing the relationship between speedup and code portions that executed serially. A data decomposition scheme based on total ionic current provided the best performance. Analysis of communication latencies in that scheme led to a load-balancing algorithm in which measured speedups at 89 +/- 2% and 75 +/- 8% of theoretical speedups were achieved in homogeneous and heterogeneous clusters of workstations. Speedups in this scheme with the Luo-Rudy dynamic membrane equations exceeded 3.0 with eight distributed workstations. Cluster speedups were comparable to those measured during parallel execution on a shared memory machine.
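The abstract's point about the relationship between speedup and serially executing code portions is Amdahl's law; the tiny sketch below makes that relationship concrete. The 5% serial fraction and 8 workstations are invented illustrative numbers; only the ~89% efficiency figure comes from the abstract.

```python
def amdahl_speedup(serial_fraction, n_workers):
    """Theoretical parallel speedup when a fraction of the code must run serially."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_workers)

# Illustrative numbers only: an assumed 5% serial portion on 8 workstations.
ideal = amdahl_speedup(0.05, 8)
# The paper reports reaching ~89% of the theoretical speedup in a homogeneous cluster.
print(f"theoretical speedup {ideal:.2f}x, ~{0.89 * ideal:.2f}x at 89% efficiency")
```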
Modeling the Hydration Layer around Proteins: Applications to Small- and Wide-Angle X-Ray Scattering
Virtanen, Jouko Juhani; Makowski, Lee; Sosnick, Tobin R.; Freed, Karl F.
2011-01-01
Small-/wide-angle x-ray scattering (SWAXS) experiments can aid in determining the structures of proteins and protein complexes, but success requires accurate computational treatment of solvation. We compare two methods by which to calculate SWAXS patterns. The first approach uses all-atom explicit-solvent molecular dynamics (MD) simulations. The second, far less computationally expensive method involves prediction of the hydration density around a protein using our new HyPred solvation model, which is applied without the need for additional MD simulations. The SWAXS patterns obtained from the HyPred model compare well to both experimental data and the patterns predicted by the MD simulations. Both approaches exhibit advantages over existing methods for analyzing SWAXS data. The close correspondence between calculated and observed SWAXS patterns provides strong experimental support for the description of hydration implicit in the HyPred model. PMID:22004761
Three-dimensional simulation of helix traveling-wave tube cold-test characteristics using MAFIA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kory, C.L.
1996-12-31
A critically important step in the traveling-wave tube (TWT) design process is the cold-testing of the slow-wave circuit for dispersion, beam interaction impedance and RF losses. Experimental cold-tests can be very time-consuming and expensive, thus limiting the freedom to examine numerous variations to the test circuit. This makes the need for computational methods crucial as they can lower cost, reduce tube development time and allow the freedom to introduce novel and improved designs. The cold-test parameters have been calculated for a C-Band Northrop-Grumman helix TWT slow-wave circuit using MAFIA, the three-dimensional electromagnetic finite-integration computer code. Measured and simulated cold-test data for the Northrop-Grumman helix TWT, including dispersion, impedance and attenuation, will be presented. Close agreement between simulated and measured values of the dispersion, impedance and attenuation has been obtained.
Realistic natural atmospheric phenomena and weather effects for interactive virtual environments
NASA Astrophysics Data System (ADS)
McLoughlin, Leigh
Clouds and the weather are important aspects of any natural outdoor scene, but existing dynamic techniques within computer graphics only offer the simplest of cloud representations. The problem that this work looks to address is how to provide a means of simulating clouds and weather features, such as precipitation, that is suitable for virtual environments. Techniques for cloud simulation are available within the area of meteorology, but numerical weather prediction systems are computationally expensive, give more numerical accuracy than we require for graphics and are restricted to the laws of physics. Within computer graphics, we often need to direct and adjust physical features or to bend reality to meet artistic goals, which is a key difference between the subjects of computer graphics and physical science. Pure physically-based simulations, however, evolve their solutions according to pre-set rules and are notoriously difficult to control. The challenge then is for the solution to be computationally lightweight and able to be directed in some measure while at the same time producing believable results. This work presents a lightweight physically-based cloud simulation scheme that simulates the dynamic properties of cloud formation and weather effects. The system simulates water vapour, cloud water, cloud ice, rain, snow and hail. The water model incorporates control parameters, and the cloud model uses an arbitrary vertical temperature profile, with a tool described to allow the user to define this. The result of this work is that clouds can now be simulated in near real-time, complete with precipitation. The temperature profile and tool then provide a means of directing the resulting formation.
Improving finite element results in modeling heart valve mechanics.
Earl, Emily; Mohammadi, Hadi
2018-06-01
Finite element analysis is a well-established computational tool which can be used for the analysis of soft tissue mechanics. Due to the structural complexity of the leaflet tissue of the heart valve, the currently available finite element models do not adequately represent the leaflet tissue. A method of addressing this issue is to implement computationally expensive finite element models, characterized by precise constitutive models including high-order and high-density mesh techniques. In this study, we introduce a novel numerical technique that enhances the results obtained from coarse mesh finite element models to provide accuracy comparable to that of fine mesh finite element models while maintaining a relatively low computational cost. Introduced in this study is a method by which the computational expense required to solve linear and nonlinear constitutive models, commonly used in heart valve mechanics simulations, is reduced while continuing to account for large and infinitesimal deformations. This continuum model is developed based on the least square algorithm procedure coupled with the finite difference method adhering to the assumption that the components of the strain tensor are available at all nodes of the finite element mesh model. The suggested numerical technique is easy to implement, practically efficient, and requires less computational time compared to currently available commercial finite element packages such as ANSYS and/or ABAQUS.
NASA Astrophysics Data System (ADS)
Kim, Jeonglae; Pope, Stephen B.
2014-05-01
A turbulent lean-premixed propane-air flame stabilised by a triangular cylinder as a flame-holder is simulated to assess the accuracy and computational efficiency of combined dimension reduction and tabulation of chemistry. The computational condition matches the Volvo rig experiments. For the reactive simulation, the Lagrangian Large-Eddy Simulation/Probability Density Function (LES/PDF) formulation is used. A novel two-way coupling approach between LES and PDF is applied to obtain resolved density to reduce its statistical fluctuations. Composition mixing is evaluated by the modified Interaction-by-Exchange with the Mean (IEM) model. A baseline case uses In Situ Adaptive Tabulation (ISAT) to calculate chemical reactions efficiently. Its results demonstrate good agreement with the experimental measurements in turbulence statistics, temperature, and minor species mass fractions. For dimension reduction, 11 and 16 represented species are chosen and a variant of Rate Controlled Constrained Equilibrium (RCCE) is applied in conjunction with ISAT to each case. All the quantities in the comparison are indistinguishable from the baseline results using ISAT only. The combined use of RCCE/ISAT reduces the computational time for chemical reaction by more than 50%. However, for the current turbulent premixed flame, chemical reaction takes only a minor portion of the overall computational cost, in contrast to non-premixed flame simulations using LES/PDF, presumably due to the restricted manifold of purely premixed flame in the composition space. Instead, composition mixing is the major contributor to cost reduction since the mean-drift term, which is computationally expensive, is computed for the reduced representation. Overall, a reduction of more than 15% in the computational cost is obtained.
Effect of Turbulence Modeling on an Excited Jet
NASA Technical Reports Server (NTRS)
Brown, Clifford A.; Hixon, Ray
2010-01-01
The flow dynamics in a high-speed jet are dominated by unsteady turbulent flow structures in the plume. Jet excitation seeks to control these flow structures through the natural instabilities present in the initial shear layer of the jet. Understanding and optimizing the excitation input, for jet noise reduction or plume mixing enhancement, requires many trials that may be done experimentally or, at a significant cost savings, computationally. Numerical simulations, which model various parts of the unsteady dynamics to reduce the computational expense of the simulation, must adequately capture the unsteady flow dynamics in the excited jet if the results are to be used. Four CFD methods are considered for use in an excited jet problem, including two turbulence models with an Unsteady Reynolds Averaged Navier-Stokes (URANS) solver, one Large Eddy Simulation (LES) solver, and one URANS/LES hybrid method. Each method is used to simulate a simplified excited jet and the results are evaluated based on the flow data, computation time, and numerical stability. The knowledge gained about the effect of turbulence modeling and CFD methods from these basic simulations will guide and assist future three-dimensional (3-D) simulations that will be used to understand and optimize a realistic excited jet for a particular application.
Symplectic multi-particle tracking on GPUs
NASA Astrophysics Data System (ADS)
Liu, Zhicong; Qiang, Ji
2018-05-01
A symplectic multi-particle tracking model is implemented on the Graphic Processing Units (GPUs) using the Compute Unified Device Architecture (CUDA) language. The symplectic tracking model can preserve phase space structure and reduce non-physical effects in long term simulation, which is important for beam property evaluation in particle accelerators. Though this model is computationally expensive, it is very suitable for parallelization and can be accelerated significantly by using GPUs. In this paper, we optimized the implementation of the symplectic tracking model on both single GPU and multiple GPUs. Using a single GPU processor, the code achieves a factor of 2-10 speedup for a range of problem sizes compared with the time on a single state-of-the-art Central Processing Unit (CPU) node with similar power consumption and semiconductor technology. It also shows good scalability on a multi-GPU cluster at Oak Ridge Leadership Computing Facility. In an application to beam dynamics simulation, the GPU implementation helps save more than a factor of two total computing time in comparison to the CPU implementation.
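The paper's CUDA kernels are not shown in the abstract; the NumPy sketch below only illustrates the kind of split-operator symplectic map (drifts composed with thin-lens kicks) that such tracking codes apply turn after turn. The lattice, particle distribution, and turn count are illustrative assumptions.

```python
import numpy as np

def drift(x, px, y, py, length):
    """Drift map: positions advance along the momenta, momenta unchanged (symplectic)."""
    return x + length * px, px, y + length * py, py

def thin_quad(x, px, y, py, k1l):
    """Thin-lens quadrupole kick: momenta change, positions unchanged (symplectic)."""
    return x, px - k1l * x, y, py + k1l * y

def track(x, px, y, py, lattice, n_turns):
    """Compose the element maps turn after turn; since every map is symplectic,
    the composition preserves phase-space structure over long-term tracking."""
    for _ in range(n_turns):
        for kind, val in lattice:
            if kind == "drift":
                x, px, y, py = drift(x, px, y, py, val)
            else:
                x, px, y, py = thin_quad(x, px, y, py, val)
    return x, px, y, py

rng = np.random.default_rng(0)
x, px, y, py = (rng.normal(scale=1e-3, size=100_000) for _ in range(4))
fodo = [("drift", 1.0), ("quad", 0.5), ("drift", 1.0), ("quad", -0.5)]   # illustrative cell
x, px, y, py = track(x, px, y, py, fodo, n_turns=1000)
print(f"rms x after tracking: {x.std():.3e} m")
```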
COSMOABC: Likelihood-free inference via Population Monte Carlo Approximate Bayesian Computation
NASA Astrophysics Data System (ADS)
Ishida, E. E. O.; Vitenti, S. D. P.; Penna-Lima, M.; Cisewski, J.; de Souza, R. S.; Trindade, A. M. M.; Cameron, E.; Busti, V. C.; COIN Collaboration
2015-11-01
Approximate Bayesian Computation (ABC) enables parameter inference for complex physical systems in cases where the true likelihood function is unknown, unavailable, or computationally too expensive. It relies on the forward simulation of mock data and comparison between observed and synthetic catalogues. Here we present COSMOABC, a Python ABC sampler featuring a Population Monte Carlo variation of the original ABC algorithm, which uses an adaptive importance sampling scheme. The code is very flexible and can be easily coupled to an external simulator, while allowing the user to incorporate arbitrary distance and prior functions. As an example of practical application, we coupled COSMOABC with the NUMCOSMO library and demonstrate how it can be used to estimate posterior probability distributions over cosmological parameters based on measurements of galaxy cluster number counts without computing the likelihood function. COSMOABC is published under the GPLv3 license on PyPI and GitHub and documentation is available at http://goo.gl/SmB8EX.
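COSMOABC itself implements a Population Monte Carlo variant with adaptive importance sampling; the sketch below shows only the basic rejection-ABC building block that such samplers refine, with an invented Gaussian toy simulator, summary statistic, flat prior, and tolerance.

```python
import numpy as np

def simulator(theta, rng, n=200):
    """Hypothetical forward model: a mock 'catalogue' drawn from N(theta, 1)."""
    return rng.normal(theta, 1.0, size=n)

def distance(sim, obs):
    """Distance between summary statistics of the mock and observed catalogues."""
    return abs(sim.mean() - obs.mean())

def abc_rejection(obs, n_accept=500, eps=0.05, seed=0):
    """Plain rejection ABC: keep prior draws whose simulated summary lands
    within eps of the observation, without ever evaluating a likelihood."""
    rng = np.random.default_rng(seed)
    accepted = []
    while len(accepted) < n_accept:
        theta = rng.uniform(-5.0, 5.0)          # flat prior, illustrative
        if distance(simulator(theta, rng), obs) < eps:
            accepted.append(theta)
    return np.array(accepted)

obs = np.random.default_rng(42).normal(1.3, 1.0, size=200)
posterior = abc_rejection(obs)
print(f"approximate posterior: {posterior.mean():.2f} +/- {posterior.std():.2f}")
```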
Efficient Strategies for Estimating the Spatial Coherence of Backscatter
Hyun, Dongwoon; Crowley, Anna Lisa C.; Dahl, Jeremy J.
2017-01-01
The spatial coherence of ultrasound backscatter has been proposed to reduce clutter in medical imaging, to measure the anisotropy of the scattering source, and to improve the detection of blood flow. These techniques rely on correlation estimates that are obtained using computationally expensive strategies. In this study, we assess existing spatial coherence estimation methods and propose three computationally efficient modifications: a reduced kernel, a downsampled receive aperture, and the use of an ensemble correlation coefficient. The proposed methods are implemented in simulation and in vivo studies. Reducing the kernel to a single sample improved computational throughput and improved axial resolution. Downsampling the receive aperture was found to have negligible effect on estimator variance, and improved computational throughput by an order of magnitude for a downsample factor of 4. The ensemble correlation estimator demonstrated lower variance than the currently used average correlation. Combining the three methods, the throughput was improved 105-fold in simulation with a downsample factor of 4 and 20-fold in vivo with a downsample factor of 2. PMID:27913342
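A sketch of the ensemble correlation idea applied to aperture-domain (channel) data, assuming `rf` is an (elements x samples) array; the paper's reduced single-sample kernel and aperture downsampling are noted in comments rather than implemented, and the toy data are invented.

```python
import numpy as np

def ensemble_coherence(rf, max_lag):
    """Spatial coherence vs. element lag for aperture-domain data `rf` of shape
    (n_elements, n_samples), using an ensemble correlation coefficient: the
    numerator and denominator are summed over all element pairs (and the kernel
    samples) *before* dividing, rather than averaging per-pair coefficients.
    Here the kernel spans the whole sample axis; a reduced single-sample kernel
    would loop over samples instead, and downsampling the receive aperture
    would simply be rf[::2] before calling this."""
    rf = rf - rf.mean(axis=1, keepdims=True)
    n_el = rf.shape[0]
    coh = np.zeros(max_lag + 1)
    for lag in range(max_lag + 1):
        num = den_a = den_b = 0.0
        for i in range(n_el - lag):
            a, b = rf[i], rf[i + lag]
            num += np.dot(a, b)
            den_a += np.dot(a, a)
            den_b += np.dot(b, b)
        coh[lag] = num / np.sqrt(den_a * den_b)
    return coh

# toy usage: 64 channels of partially correlated noise
rng = np.random.default_rng(0)
common = rng.normal(size=(1, 512))
rf = 0.7 * common + 0.7 * rng.normal(size=(64, 512))
print(ensemble_coherence(rf, max_lag=5).round(2))
```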
Automation of a Wave-Optics Simulation and Image Post-Processing Package on Riptide
NASA Astrophysics Data System (ADS)
Werth, M.; Lucas, J.; Thompson, D.; Abercrombie, M.; Holmes, R.; Roggemann, M.
Detailed wave-optics simulations and image post-processing algorithms are computationally expensive and benefit from the massively parallel hardware available at supercomputing facilities. We created an automated system that interfaces with the Maui High Performance Computing Center (MHPCC) Distributed MATLAB® Portal interface to submit massively parallel wave-optics simulations to the IBM iDataPlex (Riptide) supercomputer. This system subsequently post-processes the output images with an improved version of physically constrained iterative deconvolution (PCID) and analyzes the results using a series of modular algorithms written in Python. With this architecture, a single person can simulate thousands of unique scenarios and produce analyzed, archived, and briefing-compatible output products with very little effort. This research was developed with funding from the Defense Advanced Research Projects Agency (DARPA). The views, opinions, and/or findings expressed are those of the author(s) and should not be interpreted as representing the official views or policies of the Department of Defense or the U.S. Government.
Sub-grid drag model for immersed vertical cylinders in fluidized beds
Verma, Vikrant; Li, Tingwen; Dietiker, Jean -Francois; ...
2017-01-03
Immersed vertical cylinders are often used as heat exchangers in gas-solid fluidized beds. Computational Fluid Dynamics (CFD) simulations are computationally expensive for large-scale systems with bundles of cylinders. Therefore, sub-grid models are required to facilitate simulations on a coarse grid, where internal cylinders are treated as a porous medium. The influence of cylinders on the gas-solid flow tends to enhance segregation and affect the gas-solid drag. A correction to the gas-solid drag must be modeled using a suitable sub-grid constitutive relationship. In the past, Sarkar et al. developed a sub-grid drag model for horizontal cylinder arrays based on 2D simulations. However, the effect of a vertical cylinder arrangement was not considered due to computational complexities. In this study, highly resolved 3D simulations with vertical cylinders were performed in small periodic domains. These simulations were filtered to construct a sub-grid drag model which can then be implemented in coarse-grid simulations. The gas-solid drag was filtered for different solids fractions, and a significant reduction in drag was identified when compared with the simulation without cylinders and the simulation with horizontal cylinders. Slip velocities significantly increase when vertical cylinders are present. Lastly, the vertical suspension drag due to vertical cylinders is insignificant; however, substantial horizontal suspension drag is observed, which is consistent with the finding for horizontal cylinders.
Computationally efficient optimization of radiation drives
NASA Astrophysics Data System (ADS)
Zimmerman, George; Swift, Damian
2017-06-01
For many applications of pulsed radiation, the temporal pulse shape is designed to induce a desired time-history of conditions. This optimization is normally performed using multi-physics simulations of the system, adjusting the shape until the desired response is induced. These simulations may be computationally intensive, and iterative forward optimization is then expensive and slow. In principle, a simulation program could be modified to adjust the radiation drive automatically until the desired instantaneous response is achieved, but this may be impracticable in a complicated multi-physics program. However, the computational time increment is typically much shorter than the time scale of changes in the desired response, so the radiation intensity can be adjusted so that the response tends toward the desired value. This relaxed in-situ optimization method can give an adequate design for a pulse shape in a single forward simulation, giving a typical gain in computational efficiency of tens to thousands. This approach was demonstrated for the design of laser pulse shapes to induce ramp loading to high pressure in target assemblies where different components had significantly different mechanical impedance, requiring careful pulse shaping. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
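A toy sketch of the relaxed in-situ adjustment described above: a surrogate one-variable "simulation" whose drive is nudged at every cheap time step toward whatever closes the gap to the desired response history, so an adequate pulse shape emerges from a single forward run instead of an outer iterative optimization loop. The lag model, gain, time constants, and target ramp are all invented for illustration.

```python
import numpy as np

def design_drive(target, n_steps=2000, dt=1e-3, gain=0.5, tau=0.05):
    """Relaxed in-situ drive adjustment in a toy scalar 'simulation': the drive
    is corrected each computational step so the simulated response relaxes
    toward the desired history, yielding a usable pulse shape in one pass."""
    drive = response = 0.0
    drives = np.zeros(n_steps)
    for i in range(n_steps):
        t = i * dt
        # toy physics: the response lags the drive with time constant tau
        response += dt * (drive - response) / tau
        # relaxed correction: push the drive toward closing the current error
        drive += gain * (target(t) - response)
        drives[i] = drive
    return drives

# desired response: a smooth ramp to 1.0 over 1 s (a stand-in for, e.g., a
# ramp-compression pressure history)
target = lambda t: min(t, 1.0)
pulse_shape = design_drive(target)
print(f"final drive value: {pulse_shape[-1]:.3f}")
```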
Enabling Earth Science: The Facilities and People of the NCCS
NASA Technical Reports Server (NTRS)
2002-01-01
The NCCS's mass data storage system allows scientists to store and manage the vast amounts of data generated by these computations, and its high-speed network connections allow the data to be accessed quickly from the NCCS archives. Some NCCS users perform studies whose scope is directly tied to their ability to run computationally expensive and data-intensive simulations. Because the number and type of questions scientists research often are limited by computing power, the NCCS continually pursues the latest technologies in computing, mass storage, and networking. Just as important as the processors, tapes, and routers of the NCCS are the personnel who administer this hardware, create and manage accounts, maintain security, and assist the scientists, often working one on one with them.
Cyclone Simulation via Action Minimization
NASA Astrophysics Data System (ADS)
Plotkin, D. A.; Weare, J.; Abbot, D. S.
2016-12-01
A postulated impact of climate change is an increase in the intensity of tropical cyclones (TCs). This hypothesized effect results from the fact that TCs are powered by subsaturated boundary layer air picking up water vapor from the surface ocean as it flows inwards towards the eye. This water vapor serves as the energy input for TCs, which can be idealized as heat engines. The inflowing air has a temperature nearly identical to that of the surface ocean; therefore, warming of the surface leads to a warmer atmospheric boundary layer. By the Clausius-Clapeyron relationship, warmer boundary layer air can hold more water vapor and thus produces more energetic storms. Changes in TC intensity are difficult to predict due to the presence of fine structures (e.g. convective structures and rainbands) with length scales of less than 1 km, while general circulation models (GCMs) generally have horizontal resolutions of tens of kilometers. The models are therefore unable to capture these features, which are critical to accurately simulating cyclone structure and intensity. Further, strong TCs are rare events, meaning that long multi-decadal simulations are necessary to generate meaningful statistics about intense TC activity. This adds to the computational expense, making it yet more difficult to generate accurate statistics about long-term changes in TC intensity due to global warming via direct simulation. We take an alternative approach, applying action minimization techniques developed in molecular dynamics to the WRF weather/climate model. We construct artificial model trajectories that lead from quiescent (TC-free) states to TC states, then minimize the deviation of these trajectories from true model dynamics. We can thus create Monte Carlo model ensembles that are biased towards cyclogenesis, which reduces computational expense by limiting time spent in non-TC states. This allows for: 1) selective interrogation of model states with TCs; 2) finding the likeliest paths for transitions between TC-free and TC states; and 3) an increase in horizontal resolution due to computational savings achieved by reducing time spent simulating TC-free states. This increase in resolution, coupled with a decrease in simulation time, allows for prediction of the change in TC frequency and intensity distributions resulting from climate change.
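To make the action-minimization idea concrete, the sketch below minimizes the squared deviation of a discrete path from a toy model's dynamics with fixed quiescent and "cyclone" endpoints; the dynamics, dimensions, and step sizes are invented for illustration and stand in for the WRF trajectories described above.

```python
# Hedged sketch: action minimization on a toy dynamical system (not WRF).
# We fix the endpoints (a "quiescent" start and a "TC-like" end state) and
# minimize the squared deviation of the discrete path from the model dynamics,
#   A(x) = sum_k || x_{k+1} - F(x_k) ||^2,
# which is the same idea the abstract applies to full model trajectories.
import numpy as np
from scipy.optimize import minimize

def F(x, dt=0.05):
    """Toy 'model dynamics': one step of a double-well system."""
    return x + dt * (x - x**3)

def action(path_interior, x_start, x_end, n_steps, dim):
    # Reassemble the full path with the fixed endpoints.
    path = np.vstack([x_start,
                      path_interior.reshape(n_steps - 1, dim),
                      x_end])
    deviations = path[1:] - F(path[:-1])
    return np.sum(deviations**2)

dim, n_steps = 2, 40
x_start = np.full(dim, -1.0)          # quiescent basin
x_end = np.full(dim, +1.0)            # "cyclone" basin
# Initial guess: straight-line interpolation between the two states.
init = np.linspace(x_start, x_end, n_steps + 1)[1:-1].ravel()

res = minimize(action, init, args=(x_start, x_end, n_steps, dim),
               method="L-BFGS-B")
print("minimized action:", res.fun)
```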
Ciobanu, O
2009-01-01
The objective of this study was to obtain three-dimensional (3D) images and to perform biomechanical simulations starting from DICOM images obtained by computed tomography (CT). Open-source software was used to prepare digitized 2D images of tissue sections and to create 3D reconstructions from the segmented structures. Finally, the 3D images were used in open-source software to perform biomechanical simulations. This study demonstrates the applicability and feasibility of currently available open-source software for 3D reconstruction and biomechanical simulation. The use of open-source software may improve the efficiency of investments in imaging technologies and in CAD/CAM technologies for implant and prosthesis fabrication, which otherwise require expensive specialized software.
TTVFaster: First order eccentricity transit timing variations (TTVs)
NASA Astrophysics Data System (ADS)
Agol, Eric; Deck, Katherine
2016-04-01
TTVFaster implements analytic formulae for transit time variations (TTVs) that are accurate to first order in the planet-star mass ratios and in the orbital eccentricities; the implementations are available in several languages, including IDL, Julia, Python and C. These formulae compare well with more computationally expensive N-body integrations in the low-eccentricity, low mass-ratio regime when applied to simulated and to actual multi-transiting Kepler planet systems.
Unsteady Aerodynamic Modeling of A Maneuvering Aircraft Using Indicial Functions
2016-03-30
indicial functions are directly calculated using the results of unsteady Reynolds-averaged Navier-Stokes simulation and a grid-movement tool. Results are...but meanwhile, the full-order model based on the Unsteady Reynolds-averaged Navier-Stokes (URANS) equations is too computationally expensive to be used...The flow solver used in this study solves the unsteady, three-dimensional and compressible Navier-Stokes equations. The equations in terms of
European Scientific Notes. Volume 37, Number 1.
1983-01-31
instantaneous sea-state conditions can be computed from a special data base coded...simulations vary widely in their realism, with some producing dynamic color pictures...between the variables of accuracy, practicality, realism, and expense. Because the...approach channels, the alignment of jetties, and the establishment of...tidal current variables have been played into some of the simulator runs...The system certainly seems to be valid, and the smooth dynamics, realism, and
Gur, Sourav; Frantziskonis, George N.; Univ. of Arizona, Tucson, AZ; ...
2017-02-16
Here, we report results from a numerical study of multi-time-scale bistable dynamics for CO oxidation on a catalytic surface in a flowing, well-mixed gas stream. The problem is posed in terms of surface and gas-phase submodels that dynamically interact in the presence of stochastic perturbations, reflecting the impact of molecular-scale fluctuations on the surface and turbulence in the gas. Wavelet-based methods are used to encode and characterize the temporal dynamics produced by each submodel and detect the onset of sudden state shifts (bifurcations) caused by nonlinear kinetics. When impending state shifts are detected, a more accurate but computationally expensive integration scheme can be used. This appears to make it possible, at least in some cases, to decrease the net computational burden associated with simulating multi-time-scale, nonlinear reacting systems by limiting the amount of time in which the more expensive integration schemes are required. Critical to achieving this is being able to detect unstable temporal transitions such as the bistable shifts in the example problem considered here. Lastly, our results indicate that a unique wavelet-based algorithm based on the Lipschitz exponent is capable of making such detections, even under noisy conditions, and may find applications in critical transition detection problems beyond catalysis.
A Simplified Model for Multiphase Leakage through Faults with Applications for CO2 Storage
NASA Astrophysics Data System (ADS)
Watson, F. E.; Doster, F.
2017-12-01
In the context of geological CO2 storage, faults in the subsurface could affect storage security by acting as high-permeability pathways which allow CO2 to flow upwards and away from the storage formation. To assess the likelihood of leakage through faults and the impacts faults might have on storage security, numerical models are required. However, faults are complex geological features, usually consisting of a fault core surrounded by a highly fractured damage zone. A direct representation of these in a numerical model would require very fine grid resolution and would be computationally expensive. Here, we present the development of a reduced-complexity model for fault flow using the vertically integrated formulation. This model captures the main features of the flow but does not require us to resolve the vertical dimension, nor the fault in the horizontal dimension, explicitly. It is thus less computationally expensive than fully resolved models. Consequently, we can quickly model many realisations for parameter uncertainty studies of CO2 injection into faulted reservoirs. We develop the model based on explicitly simulating local 3D representations of faults for characteristic scenarios using the Matlab Reservoir Simulation Toolbox (MRST). We have assessed the impact of variables such as fault geometry, porosity and permeability on multiphase leakage rates.
The use of perturbed physics ensembles and emulation in palaeoclimate reconstruction (Invited)
NASA Astrophysics Data System (ADS)
Edwards, T. L.; Rougier, J.; Collins, M.
2010-12-01
Climate is a coherent process, with correlations and dependencies across space, time, and climate variables. However, reconstructions of palaeoclimate traditionally consider individual pieces of information independently, rather than making use of this covariance structure. Such reconstructions are at risk of being unphysical or at least implausible. Climate simulators such as General Circulation Models (GCMs), on the other hand, contain climate system theory in the form of dynamical equations describing physical processes, but are imperfect and computationally expensive. These two datasets - pointwise palaeoclimate reconstructions and climate simulator evaluations - contain complementary information, and a statistical synthesis can produce a palaeoclimate reconstruction that combines them while not ignoring their limitations. We use an ensemble of simulators with perturbed parameterisations, to capture the uncertainty about the simulator variant, and our method also accounts for structural uncertainty. The resulting reconstruction contains a full expression of climate uncertainty, not just pointwise but also jointly over locations. Such joint information is crucial in determining spatially extensive features such as isotherms, or the location of the tree-line. A second outcome of the statistical analysis is a refined distribution for the simulator parameters. In this way, information from palaeoclimate observations can be used directly in quantifying uncertainty in future climate projections. The main challenge is the expense of running a large scale climate simulator: each evaluation of an atmosphere-ocean GCM takes several months of computing time. The solution is to interpret the ensemble of evaluations within an 'emulator', which is a statistical model of the simulator. This technique has been used fruitfully in the statistical field of Computer Models for two decades, and has recently been applied in estimating uncertainty in future climate predictions in the UKCP09 (http://ukclimateprojections.defra.gov.uk). But only in the last couple of years has it developed to the point where it can be applied to large-scale spatial fields. We construct an emulator for the mid-Holocene (6000 calendar years BP) temperature anomaly over North America, at the resolution of our simulator (2.5° latitude by 3.75° longitude). This allows us to explore the behaviour of simulator variants that we could not afford to evaluate directly. We introduce the technique of 'co-emulation' of two versions of the climate simulator: the coupled atmosphere-ocean model HadCM3, and an equivalent with a simplified ocean, HadSM3. Running two different versions of a simulator is a powerful tool for increasing the information yield from a fixed budget of computer time, but the results must be combined statistically to account for the reduced fidelity of the quicker version. Emulators provide the appropriate framework.
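As a rough illustration of the emulation step, the sketch below fits a Gaussian-process emulator to an ensemble of runs of a stand-in simulator (a simple analytic function, not HadCM3 or HadSM3) and then predicts, with uncertainty, at parameter settings that were never evaluated; the parameter names, ranges, and kernel choices are hypothetical.

```python
# Hedged sketch: a Gaussian-process emulator of an expensive simulator,
# in the spirit of the perturbed-physics-ensemble approach described above
# (the simulator here is a cheap stand-in, not a climate model).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def expensive_simulator(params):
    """Stand-in for a simulator: maps 3 parameters to a scalar output
    (e.g. a regional mean temperature anomaly)."""
    a, b, c = params
    return np.sin(3 * a) + 0.5 * b**2 - 0.3 * a * c

rng = np.random.default_rng(0)
design = rng.uniform(0, 1, size=(40, 3))        # ensemble of parameter settings
outputs = np.array([expensive_simulator(p) for p in design])

kernel = ConstantKernel(1.0) * RBF(length_scale=[0.3, 0.3, 0.3])
emulator = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
emulator.fit(design, outputs)

# The emulator now predicts (with uncertainty) at parameter settings that
# were never run, which is what makes calibration against palaeodata feasible.
new_params = rng.uniform(0, 1, size=(5, 3))
mean, std = emulator.predict(new_params, return_std=True)
print(np.c_[mean, std])
```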
NASA Astrophysics Data System (ADS)
Li, Xuesong; Northrop, William F.
2016-04-01
This paper describes a quantitative approach to approximating multiple scattering through an isotropic turbid slab based on Markov chain theory. There is an increasing need to utilize multiple scattering for optical diagnostic purposes; however, existing methods are either inaccurate or computationally expensive. Here, we develop a novel Markov chain approximation approach to solve for the multiple scattering angular distribution (AD) that can accurately calculate the AD while significantly reducing computational cost compared to Monte Carlo simulation. We expect this work to stimulate ongoing multiple scattering research and deterministic reconstruction algorithm development with AD measurements.
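A minimal sketch of the Markov-chain idea, assuming a planar (2D) toy medium rather than the paper's turbid slab: the propagation direction is the Markov state, a single-scatter phase function supplies the transition matrix, and scattering orders are weighted by a Poisson distribution with mean equal to an assumed optical depth. All numbers below are illustrative.

```python
# Hedged sketch: a Markov-chain estimate of a multiple-scattering angular
# distribution for a planar (2D) toy medium, not the authors' slab method.
import numpy as np

n_bins = 360
phi = np.linspace(0.0, 2 * np.pi, n_bins, endpoint=False)

def phase_function(dphi, g=0.9):
    """2D Henyey-Greenstein-like kernel for the deflection angle dphi."""
    return (1 - g**2) / (2 * np.pi * (1 + g**2 - 2 * g * np.cos(dphi)))

# Transition matrix: probability of deflecting from angle j to angle i.
kernel = phase_function(phi)
kernel /= kernel.sum()
T = np.array([np.roll(kernel, i) for i in range(n_bins)]).T

v0 = np.zeros(n_bins)
v0[0] = 1.0                      # pencil beam entering along phi = 0

tau = 3.0                        # assumed optical depth of the slab
max_order = 30
ad = np.zeros(n_bins)
vk = v0.copy()
poisson = np.exp(-tau)
for k in range(max_order + 1):
    ad += poisson * vk           # weight the k-times-scattered distribution
    vk = T @ vk                  # one more scattering event
    poisson *= tau / (k + 1)

print("total weight captured:", ad.sum())
```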
Down to the roughness scale assessment of piston-ring/liner contacts
NASA Astrophysics Data System (ADS)
Checo, H. M.; Jaramillo, A.; Ausas, R. F.; Jai, M.; Buscaglia, G. C.
2017-02-01
The effects of surface roughness in hydrodynamic bearings have been accounted for through several approaches, the most widely used being averaging or stochastic techniques. With these, the surface is not treated “as it is”, but by means of an assumed probability distribution for the roughness. The so-called direct, deterministic, or measured-surface simulations solve the lubrication problem with realistic surfaces down to the roughness scale. This leads to expensive computational problems. Most researchers have tackled this problem considering non-moving surfaces and neglecting the ring dynamics to reduce the computational burden. What is proposed here is to solve the fully deterministic simulation both in space and in time, so that the actual movement of the surfaces and the ring dynamics are taken into account. This simulation is much more complex than previous ones, as it is intrinsically transient. The feasibility of these fully deterministic simulations is illustrated in two cases: fully deterministic simulation of liner surfaces with diverse finishes (honed and coated bores) under constant piston velocity and ring load, and also under real engine conditions.
Efficient and Robust Optimization for Building Energy Simulation
Pourarian, Shokouh; Kearsley, Anthony; Wen, Jin; Pertzborn, Amanda
2016-01-01
Efficiently, robustly and accurately solving large sets of structured, non-linear algebraic and differential equations is one of the most computationally expensive steps in the dynamic simulation of building energy systems. Here, the efficiency, robustness and accuracy of two commonly employed solution methods are compared. The comparison is conducted using the HVACSIM+ software package, a component-based building system simulation tool. The HVACSIM+ software presently employs Powell’s Hybrid method to solve systems of nonlinear algebraic equations that model the dynamics of energy states and interactions within buildings. It is shown here that Powell’s method does not always converge to a solution. Since a myriad of other numerical methods are available, the question arises as to which method is most appropriate for building energy simulation. This paper finds considerable computational benefits result from replacing the Powell’s Hybrid method solver in HVACSIM+ with a solver more appropriate for the challenges particular to numerical simulations of buildings. Evidence is provided that a variant of the Levenberg-Marquardt solver has superior accuracy and robustness compared to the Powell’s Hybrid method presently used in HVACSIM+. PMID:27325907
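For readers who want to reproduce the comparison in miniature, the sketch below solves a small invented nonlinear system with SciPy's MINPACK wrappers, where method="hybr" is Powell's hybrid method and method="lm" is Levenberg-Marquardt; HVACSIM+ uses its own Fortran solvers, so this only mirrors the idea, and the equations and coefficients are made up.

```python
# Hedged sketch: Powell's hybrid method ("hybr") versus Levenberg-Marquardt
# ("lm") on a toy coupled system standing in for a building-component balance.
import numpy as np
from scipy.optimize import root

def residuals(x):
    T1, T2, m = x
    return [
        1000.0 * m * (T1 - 20.0) - 5000.0,          # heat balance, component 1
        1000.0 * m * (T2 - T1) + 2500.0,            # heat balance, component 2
        m - 0.05 * np.sqrt(abs(T2 - 10.0)),         # mildly nonlinear flow law
    ]

x0 = np.array([25.0, 25.0, 0.1])
for method in ("hybr", "lm"):
    sol = root(residuals, x0, method=method)
    print(method, sol.success, sol.x, np.max(np.abs(residuals(sol.x))))
```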
Discrete Event-based Performance Prediction for Temperature Accelerated Dynamics
NASA Astrophysics Data System (ADS)
Junghans, Christoph; Mniszewski, Susan; Voter, Arthur; Perez, Danny; Eidenbenz, Stephan
2014-03-01
We present an example of a new class of tools that we call application simulators, parameterized fast-running proxies of large-scale scientific applications using parallel discrete event simulation (PDES). We demonstrate our approach with a TADSim application simulator that models the Temperature Accelerated Dynamics (TAD) method, which is an algorithmically complex member of the Accelerated Molecular Dynamics (AMD) family. The essence of the TAD application is captured without the computational expense and resource usage of the full code. We use TADSim to quickly characterize the runtime performance and algorithmic behavior for the otherwise long-running simulation code. We further extend TADSim to model algorithm extensions to standard TAD, such as speculative spawning of the compute-bound stages of the algorithm, and predict performance improvements without having to implement such a method. Focused parameter scans have allowed us to study algorithm parameter choices over far more scenarios than would be possible with the actual simulation. This has led to interesting performance-related insights into the TAD algorithm behavior and suggested extensions to the TAD method.
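The application-simulator concept can be illustrated with a very small discrete-event sketch: simulated wall-clock time advances through parameterized stage durations and a transition-acceptance probability, so runtime trends can be scanned without running any physics. The stage names, durations, and probabilities below are invented, and the event engine is sequential rather than parallel, unlike TADSim's PDES core.

```python
# Hedged sketch: a tiny event-queue proxy of a staged algorithm's runtime.
import heapq
import random

def simulate_runtime(n_events, t_md_block, t_detect, t_refine,
                     p_transition=0.1, seed=0):
    rng = random.Random(seed)
    clock = 0.0
    events = [(0.0, "md_block")]          # (simulated time, event type)
    done = 0
    while events and done < n_events:
        clock, kind = heapq.heappop(events)
        if kind == "md_block":
            heapq.heappush(events, (clock + t_md_block, "detect"))
        elif kind == "detect":
            if rng.random() < p_transition:
                heapq.heappush(events, (clock + t_detect + t_refine, "accept"))
            else:
                heapq.heappush(events, (clock + t_detect, "md_block"))
        elif kind == "accept":
            done += 1
            heapq.heappush(events, (clock, "md_block"))
    return clock

# Scan a stage-cost parameter far faster than the real code could be profiled.
for t_refine in (1.0, 5.0, 25.0):
    print(t_refine, simulate_runtime(50, t_md_block=2.0,
                                     t_detect=0.5, t_refine=t_refine))
```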
Using Adaptive Mesh Refinement to Simulate Storm Surge
NASA Astrophysics Data System (ADS)
Mandli, K. T.; Dawson, C.
2012-12-01
Coastal hazards related to strong storms such as hurricanes and typhoons are among the most frequently recurring and widespread hazards to coastal communities. Storm surges are among the most devastating effects of these storms, and their prediction and mitigation through numerical simulations is of great interest to coastal communities that need to plan for the subsequent rise in sea level during these storms. Unfortunately these simulations require a large amount of resolution in regions of interest to capture relevant effects, resulting in a computational cost that may be intractable. This problem is exacerbated in situations where a large number of similar runs is needed, such as in design of infrastructure or forecasting with ensembles of probable storms. One solution to address the problem of computational cost is to employ adaptive mesh refinement (AMR) algorithms. AMR functions by decomposing the computational domain into regions which may vary in resolution as time proceeds. Decomposing the domain as the flow evolves makes this class of methods effective at ensuring that computational effort is spent only where it is needed. AMR also allows for placement of computational resolution independent of user interaction and expectation of the dynamics of the flow as well as particular regions of interest such as harbors. The simulation of many different applications has only been made possible by using AMR-type algorithms, which have allowed otherwise impractical simulations to be performed for much less computational expense. Our work involves studying how storm surge simulations can be improved with AMR algorithms. We have implemented relevant storm surge physics in the GeoClaw package and tested how Hurricane Ike's surge into Galveston Bay and up the Houston Ship Channel compares to available tide gauge data. We will also discuss issues dealing with refinement criteria, optimal resolution and refinement ratios, and inundation.
NASA Technical Reports Server (NTRS)
Mendoza, John Cadiz
1995-01-01
The computational fluid dynamics code, PARC3D, is tested to see if its use of non-physical artificial dissipation affects the accuracy of its results. This is accomplished by simulating a shock-laminar boundary layer interaction and several hypersonic flight conditions of the Pegasus(TM) launch vehicle using full artificial dissipation, low artificial dissipation, and the Engquist filter. Before the filter is applied to the PARC3D code, it is validated in one-dimensional and two-dimensional form in a MacCormack scheme against the Riemann and convergent duct problem. For this explicit scheme, the filter shows great improvements in accuracy and computational time as opposed to the nonfiltered solutions. However, for the implicit PARC3D code it is found that the best estimate of the Pegasus experimental heat fluxes and surface pressures is the simulation utilizing low artificial dissipation and no filter. The filter does improve accuracy over the artificially dissipative case but at a computational expense greater than that achieved by the low artificial dissipation case which has no computational time penalty and shows better results. For the shock-boundary layer simulation, the filter does well in terms of accuracy for a strong impingement shock but not as well for weaker shock strengths. Furthermore, for the latter problem the filter reduces the required computational time to convergence by 18.7 percent.
Symplectic modeling of beam loading in electromagnetic cavities
Abell, Dan T.; Cook, Nathan M.; Webb, Stephen D.
2017-05-22
Simulating beam loading in radio frequency accelerating structures is critical for understanding higher-order mode effects on beam dynamics, such as beam break-up instability in energy recovery linacs. Full-wave simulations of beam loading in radio frequency structures are computationally expensive, while reduced models can ignore essential physics and can be difficult to generalize. Here, we present a self-consistent algorithm derived from the least-action principle which can model an arbitrary number of cavity eigenmodes and a generic beam distribution. It has been implemented in our new Open Library for Investigating Vacuum Electronics (OLIVE).
NASA Technical Reports Server (NTRS)
Kory, Carol L.; Wilson, Jeffrey D.
1993-01-01
The three-dimensional, electromagnetic circuit analysis code, Micro-SOS, can be used to reduce expensive time-consuming experimental 'cold-testing' of traveling-wave tube (TWT) circuits. The frequency-phase dispersion characteristics and beam interaction impedance of a TunneLadder traveling-wave tube slow-wave structure were simulated using the code. When reasonable dimensional adjustments are made, computer results agree closely with experimental data. Modifications to the circuit geometry that would make the TunneLadder TWT easier to fabricate for higher frequency operation are explored.
Large-Scale Simulations of Plastic Neural Networks on Neuromorphic Hardware
Knight, James C.; Tully, Philip J.; Kaplan, Bernhard A.; Lansner, Anders; Furber, Steve B.
2016-01-01
SpiNNaker is a digital, neuromorphic architecture designed for simulating large-scale spiking neural networks at speeds close to biological real-time. Rather than using bespoke analog or digital hardware, the basic computational unit of a SpiNNaker system is a general-purpose ARM processor, allowing it to be programmed to simulate a wide variety of neuron and synapse models. This flexibility is particularly valuable in the study of biological plasticity phenomena. A recently proposed learning rule based on the Bayesian Confidence Propagation Neural Network (BCPNN) paradigm offers a generic framework for modeling the interaction of different plasticity mechanisms using spiking neurons. However, it can be computationally expensive to simulate large networks with BCPNN learning since it requires multiple state variables for each synapse, each of which needs to be updated every simulation time-step. We discuss the trade-offs in efficiency and accuracy involved in developing an event-based BCPNN implementation for SpiNNaker based on an analytical solution to the BCPNN equations, and detail the steps taken to fit this within the limited computational and memory resources of the SpiNNaker architecture. We demonstrate this learning rule by learning temporal sequences of neural activity within a recurrent attractor network which we simulate at scales of up to 2.0 × 10⁴ neurons and 5.1 × 10⁷ plastic synapses: the largest plastic neural network ever to be simulated on neuromorphic hardware. We also run a comparable simulation on a Cray XC-30 supercomputer system and find that, if it is to match the run-time of our SpiNNaker simulation, the supercomputer system uses approximately 45× more power. This suggests that cheaper, more power efficient neuromorphic systems are becoming useful discovery tools in the study of plasticity in large-scale brain models. PMID:27092061
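The event-driven principle mentioned above can be sketched with a single synaptic trace: between spikes the trace obeys dz/dt = -z/τ, so it can be decayed in closed form only when an event arrives instead of at every time-step. The full BCPNN rule maintains several coupled traces per synapse; the class below is a simplified stand-in with made-up constants.

```python
# Hedged sketch: one event-driven synaptic trace with an analytical decay
# between events, in place of per-time-step updates.
import math

class EventDrivenTrace:
    def __init__(self, tau, increment=1.0):
        self.tau = tau
        self.increment = increment
        self.value = 0.0
        self.last_update = 0.0

    def on_spike(self, t):
        # Closed-form decay since the previous event, then add the spike's kick.
        self.value *= math.exp(-(t - self.last_update) / self.tau)
        self.value += self.increment
        self.last_update = t

    def read(self, t):
        return self.value * math.exp(-(t - self.last_update) / self.tau)

trace = EventDrivenTrace(tau=20.0)          # time constant in ms (illustrative)
for spike_time in (5.0, 12.0, 40.0, 41.0):  # only 4 updates, not thousands
    trace.on_spike(spike_time)
print(trace.read(60.0))
```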
Temperature scaling method for Markov chains.
Crosby, Lonnie D; Windus, Theresa L
2009-01-22
The use of ab initio potentials in Monte Carlo simulations aimed at investigating the nucleation kinetics of water clusters is complicated by the computational expense of the potential energy determinations. Furthermore, the common desire to investigate the temperature dependence of kinetic properties leads to an urgent need to reduce the expense of performing simulations at many different temperatures. A method is detailed that allows a Markov chain (obtained via Monte Carlo) at one temperature to be scaled to other temperatures of interest without the need to perform additional large simulations. This Markov chain temperature-scaling (TeS) can be generally applied to simulations geared for numerous applications. This paper shows the quality of results which can be obtained by TeS and the possible quantities which may be extracted from scaled Markov chains. Results are obtained for a 1-D analytical potential for which the exact solutions are known. Also, this method is applied to water clusters consisting of between 2 and 5 monomers, using Dynamical Nucleation Theory to determine the evaporation rate constant for monomer loss. Although ab initio potentials are not utilized in this paper, the benefit of this method is made apparent by using the Dang-Chang polarizable classical potential for water to obtain statistical properties at various temperatures.
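A minimal sketch of the underlying reweighting identity, assuming exact canonical samples of a 1-D harmonic oscillator in reduced units: samples drawn at temperature T1 are Boltzmann-reweighted to estimate an average at a nearby T2. The TeS method operates on the Markov chain itself and handles more than this; the sketch only shows why such temperature scaling is possible.

```python
# Hedged sketch: standard Boltzmann reweighting of canonical samples from T1
# to a nearby T2 (not the TeS algorithm itself).
import numpy as np

kB = 1.0  # reduced units

def reweighted_average(observable, energies, T1, T2):
    dbeta = 1.0 / (kB * T2) - 1.0 / (kB * T1)
    # Shift by the minimum energy for numerical stability of the exponentials.
    w = np.exp(-dbeta * (energies - energies.min()))
    return np.sum(w * observable) / np.sum(w)

# Fake "chain" from a 1-D harmonic potential sampled at T1 (for illustration).
rng = np.random.default_rng(1)
T1, T2 = 1.0, 0.9
x = rng.normal(0.0, np.sqrt(kB * T1), size=200_000)   # exact canonical samples
energies = 0.5 * x**2

print("reweighted <E> at T2:", reweighted_average(energies, energies, T1, T2))
print("exact      <E> at T2:", 0.5 * kB * T2)
```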
2015-12-02
simplification of the equations but at the expense of introducing modeling errors. We have shown that the Wick solutions have accuracy comparable to...the system of equations for the coefficients of formal power series solutions. Moreover, the structure of this propagator is seemingly universal, i.e. ...the problem of computing the numerical solution to kinetic partial differential equations involving many phase variables. These types of equations
Thermochemical Modeling of Nonequilibrium Oxygen Flows
NASA Astrophysics Data System (ADS)
Neitzel, Kevin Joseph
The development of hypersonic vehicles leans heavily on computational simulation due to the high-enthalpy flow conditions that are expensive and technically challenging to replicate experimentally. The accuracy of the nonequilibrium modeling in the computer simulations dictates the design margin that is required for the thermal protection system and flight dynamics. Previous hypersonic vehicles, such as Apollo and the Space Shuttle, were primarily concerned with re-entry TPS design. The strong flow conditions of re-entry, involving Mach numbers of 25, quickly dissociate the oxygen molecules in air. Sustained-flight hypersonic vehicles will be designed to operate in the Mach 5 to 10 range. The oxygen molecules will not quickly dissociate and will play an important role in the flow field behavior. The development of nonequilibrium models of oxygen is crucial for limiting modeling uncertainty. Thermochemical nonequilibrium modeling is investigated for oxygen flows. Specifically, the vibrational relaxation and dissociation behavior that dominate the nonequilibrium physics in this flight regime are studied in detail. The widely used two-temperature (2T) approach is compared to the higher fidelity and more computationally expensive state-to-state (STS) approach. This dissertation utilizes a wide range of rate sources, including newly available STS rates, to conduct a comprehensive study of modeling approaches for hypersonic nonequilibrium thermochemical modeling. Additionally, the physical accuracy of the computational methods is assessed by comparing the numerical results with available experimental data. The numerical results and experimental measurements show strong nonequilibrium, and even non-Boltzmann, behavior in the vibrational energy mode for the sustained hypersonic flight regime. The STS approach is better able to capture the behavior observed in the experimental data, especially for stronger nonequilibrium conditions. Additionally, a reduced order model (ROM) modification to the 2T model is developed to improve the capability of the 2T approach framework.
Bui, Huu Phuoc; Tomar, Satyendra; Courtecuisse, Hadrien; Audette, Michel; Cotin, Stéphane; Bordas, Stéphane P A
2018-05-01
An error-controlled mesh refinement procedure for needle insertion simulations is presented. As an example, the procedure is applied to simulations of electrode implantation for deep brain stimulation. We take into account the brain shift phenomena occurring when a craniotomy is performed. We observe that the error in the computation of the displacement and stress fields is localised around the needle tip and the needle shaft during needle insertion simulation. By suitably and adaptively refining the mesh in this region, our approach enables us to control, and thus to reduce, the error whilst maintaining a coarser mesh in other parts of the domain. Through academic and practical examples we demonstrate that our adaptive approach, as compared with a uniform coarse mesh, increases the accuracy of the displacement and stress fields around the needle shaft and, for a given accuracy, saves computational time with respect to a uniformly finer mesh. This facilitates real-time simulations. The proposed methodology has direct implications in increasing the accuracy, and controlling the computational expense, of the simulation of percutaneous procedures such as biopsy, brachytherapy, regional anaesthesia, or cryotherapy. Moreover, the proposed approach can be helpful in the development of robotic surgeries because the simulation taking place in the control loop of a robot needs to be accurate, and to occur in real time. Copyright © 2018 John Wiley & Sons, Ltd.
Stochastic optimization of GeantV code by use of genetic algorithms
Amadio, G.; Apostolakis, J.; Bandieramonte, M.; ...
2017-10-01
GeantV is a complex system based on the interaction of different modules needed for detector simulation, which include transport of particles in fields, physics models simulating their interactions with matter and a geometrical modeler library for describing the detector and locating the particles and computing the path length to the current volume boundary. The GeantV project is recasting the classical simulation approach to get maximum benefit from SIMD/MIMD computational architectures and highly massive parallel systems. This involves finding the appropriate balance between several aspects influencing computational performance (floating-point performance, usage of off-chip memory bandwidth, specification of cache hierarchy, etc.) and handling a large number of program parameters that have to be optimized to achieve the best simulation throughput. This optimization task can be treated as a black-box optimization problem, which requires searching the optimum set of parameters using only point-wise function evaluations. Here, the goal of this study is to provide a mechanism for optimizing complex systems (high energy physics particle transport simulations) with the help of genetic algorithms and evolution strategies as tuning procedures for massive parallel simulations. One of the described approaches is based on introducing a specific multivariate analysis operator that could be used in case of resource expensive or time consuming evaluations of fitness functions, in order to speed-up the convergence of the black-box optimization problem.
Stochastic optimization of GeantV code by use of genetic algorithms
NASA Astrophysics Data System (ADS)
Amadio, G.; Apostolakis, J.; Bandieramonte, M.; Behera, S. P.; Brun, R.; Canal, P.; Carminati, F.; Cosmo, G.; Duhem, L.; Elvira, D.; Folger, G.; Gheata, A.; Gheata, M.; Goulas, I.; Hariri, F.; Jun, S. Y.; Konstantinov, D.; Kumawat, H.; Ivantchenko, V.; Lima, G.; Nikitina, T.; Novak, M.; Pokorski, W.; Ribon, A.; Seghal, R.; Shadura, O.; Vallecorsa, S.; Wenzel, S.
2017-10-01
GeantV is a complex system based on the interaction of different modules needed for detector simulation, which include transport of particles in fields, physics models simulating their interactions with matter and a geometrical modeler library for describing the detector and locating the particles and computing the path length to the current volume boundary. The GeantV project is recasting the classical simulation approach to get maximum benefit from SIMD/MIMD computational architectures and highly massive parallel systems. This involves finding the appropriate balance between several aspects influencing computational performance (floating-point performance, usage of off-chip memory bandwidth, specification of cache hierarchy, etc.) and handling a large number of program parameters that have to be optimized to achieve the best simulation throughput. This optimization task can be treated as a black-box optimization problem, which requires searching the optimum set of parameters using only point-wise function evaluations. The goal of this study is to provide a mechanism for optimizing complex systems (high energy physics particle transport simulations) with the help of genetic algorithms and evolution strategies as tuning procedures for massive parallel simulations. One of the described approaches is based on introducing a specific multivariate analysis operator that could be used in case of resource expensive or time consuming evaluations of fitness functions, in order to speed-up the convergence of the black-box optimization problem.
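As a hedged illustration of the tuning procedure described above, the sketch below runs a plain genetic algorithm on an invented black-box "throughput" objective; the real study also employs evolution strategies and a multivariate-analysis operator, which are not reproduced here, and none of the parameter names correspond to actual GeantV settings.

```python
# Hedged sketch: a simple genetic algorithm tuning a toy black-box objective.
import numpy as np

rng = np.random.default_rng(42)

def throughput(params):
    """Toy black-box objective with a single optimum (higher is better)."""
    return -np.sum((params - np.array([0.3, 0.7, 0.5, 0.2]))**2)

def genetic_search(pop_size=40, n_params=4, generations=60,
                   mutation_scale=0.05):
    pop = rng.uniform(0, 1, size=(pop_size, n_params))
    for _ in range(generations):
        fitness = np.array([throughput(p) for p in pop])
        # Tournament selection of parents.
        idx = rng.integers(0, pop_size, size=(pop_size, 2))
        parents = pop[np.where(fitness[idx[:, 0]] > fitness[idx[:, 1]],
                               idx[:, 0], idx[:, 1])]
        # Uniform crossover followed by Gaussian mutation.
        mates = parents[rng.permutation(pop_size)]
        mask = rng.random((pop_size, n_params)) < 0.5
        children = np.where(mask, parents, mates)
        children += rng.normal(0, mutation_scale, children.shape)
        pop = np.clip(children, 0, 1)
    best = pop[np.argmax([throughput(p) for p in pop])]
    return best

print(genetic_search())
```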
Ensemble Sampling vs. Time Sampling in Molecular Dynamics Simulations of Thermal Conductivity
Gordiz, Kiarash; Singh, David J.; Henry, Asegun
2015-01-29
In this report we compare time sampling and ensemble averaging as two different methods available for phase space sampling. For the comparison, we calculate thermal conductivities of solid argon and silicon structures, using equilibrium molecular dynamics. We introduce two different schemes for the ensemble averaging approach, and show that both can reduce the total simulation time as compared to time averaging. It is also found that velocity rescaling is an efficient mechanism for phase space exploration. Although our methodology is tested using classical molecular dynamics, the ensemble generation approaches may find their greatest utility in computationally expensive simulations such as first principles molecular dynamics. For such simulations, where each time step is costly, time sampling can require long simulation times because each time step must be evaluated sequentially and therefore phase space averaging is achieved through sequential operations. On the other hand, with ensemble averaging, phase space sampling can be achieved through parallel operations, since each ensemble is independent. For this reason, particularly when using massively parallel architectures, ensemble sampling can result in much shorter simulation times and exhibits similar overall computational effort.
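The ensemble-averaging idea can be sketched with the Green-Kubo route to thermal conductivity: several independent, shorter heat-flux records each contribute an autocorrelation function, and those are averaged across members instead of relying on one long time average. The flux below is a synthetic correlated signal in reduced units, not output from a real molecular dynamics run, and all constants are illustrative.

```python
# Hedged sketch: ensemble averaging of a Green-Kubo heat-flux autocorrelation.
import numpy as np

rng = np.random.default_rng(7)

def synthetic_flux(n_steps, corr_time=50.0, dt=1.0):
    """Ornstein-Uhlenbeck-like surrogate for one heat-flux component."""
    a = np.exp(-dt / corr_time)
    noise = rng.normal(0, np.sqrt(1 - a**2), n_steps)
    j = np.empty(n_steps)
    j[0] = rng.normal()
    for i in range(1, n_steps):
        j[i] = a * j[i - 1] + noise[i]
    return j

def autocorrelation(j, max_lag):
    n = len(j)
    return np.array([np.mean(j[:n - lag] * j[lag:]) for lag in range(max_lag)])

n_members, n_steps, max_lag, dt = 16, 20_000, 500, 1.0
acf = np.mean([autocorrelation(synthetic_flux(n_steps), max_lag)
               for _ in range(n_members)], axis=0)

V, kB, T = 1.0, 1.0, 1.0                        # reduced units
kappa = V / (kB * T**2) * acf.sum() * dt        # rectangular Green-Kubo integral
print("estimated conductivity (reduced units):", kappa)
```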
24 CFR 990.170 - Computation of utilities expense level (UEL): Overview.
Code of Federal Regulations, 2010 CFR
2010-04-01
... level (UEL): Overview. 990.170 Section 990.170 Housing and Urban Development Regulations Relating to... Expenses § 990.170 Computation of utilities expense level (UEL): Overview. (a) General. The UEL for each... by the payable consumption level multiplied by the inflation factor. The UEL is expressed in terms of...
Molléro, Roch; Pennec, Xavier; Delingette, Hervé; Garny, Alan; Ayache, Nicholas; Sermesant, Maxime
2018-02-01
Personalised computational models of the heart are of increasing interest for clinical applications due to their discriminative and predictive abilities. However, the simulation of a single heartbeat with a 3D cardiac electromechanical model can be long and computationally expensive, which makes some practical applications, such as the estimation of model parameters from clinical data (the personalisation), very slow. Here we introduce an original multifidelity approach between a 3D cardiac model and a simplified "0D" version of this model, which makes it possible to obtain reliable (and extremely fast) approximations of the global behaviour of the 3D model using 0D simulations. We then use this multifidelity approximation to speed up an efficient parameter estimation algorithm, leading to a fast and computationally efficient personalisation method for the 3D model. In particular, we show results on a cohort of 121 different heart geometries and measurements. Finally, an exploitable code of the 0D model with scripts to perform parameter estimation will be released to the community.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cawkwell, Marc Jon
2016-09-09
The MC3 code is used to perform Monte Carlo simulations in the isothermal-isobaric ensemble (constant number of particles, temperature, and pressure) on molecular crystals. The molecules within the periodic simulation cell are treated as rigid bodies, alleviating the requirement for a complex interatomic potential. Intermolecular interactions are described using generic, atom-centered pair potentials whose parameterization is taken from the literature [D. E. Williams, J. Comput. Chem., 22, 1154 (2001)] and electrostatic interactions arising from atom-centered, fixed, point partial charges. The primary uses of the MC3 code are the computation of i) the temperature and pressure dependence of lattice parameters and thermal expansion coefficients, ii) tensors of elastic constants and compliances via Parrinello and Rahman's fluctuation formula [M. Parrinello and A. Rahman, J. Chem. Phys., 76, 2662 (1982)], and iii) the investigation of polymorphic phase transformations. The MC3 code is written in Fortran90 and requires LAPACK and BLAS linear algebra libraries to be linked during compilation. Computationally expensive loops are accelerated using OpenMP.
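For orientation, the sketch below writes out the standard Metropolis acceptance rules for an isothermal-isobaric rigid-molecule Monte Carlo move set (a displacement/rotation move and a volume move); the energy evaluation is left abstract, the example numbers are hypothetical, and nothing here is taken from the MC3 source itself.

```python
# Hedged sketch: standard NPT Metropolis acceptance rules for rigid molecules.
import math
import random

kB = 1.380649e-23  # J/K

def accept_configuration_move(delta_U, T):
    """Rigid-body translation or rotation: accept with min(1, exp(-dU/kBT))."""
    return delta_U <= 0.0 or random.random() < math.exp(-delta_U / (kB * T))

def accept_volume_move(delta_U, P, V_old, V_new, n_molecules, T):
    """Volume move in the NPT ensemble: the ln(V) Jacobian enters the rule."""
    beta = 1.0 / (kB * T)
    arg = (-beta * delta_U
           - beta * P * (V_new - V_old)
           + n_molecules * math.log(V_new / V_old))
    return arg >= 0.0 or random.random() < math.exp(arg)

# Example numbers (hypothetical): a 256-molecule cell at 300 K and 1 atm.
print(accept_volume_move(delta_U=2.0e-21, P=101325.0,
                         V_old=1.00e-25, V_new=1.01e-25,
                         n_molecules=256, T=300.0))
```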
Development of soft-sphere contact models for thermal heat conduction in granular flows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morris, A. B.; Pannala, S.; Ma, Z.
2016-06-08
Conductive heat transfer to flowing particles occurs when two particles (or a particle and wall) come into contact. The direct conduction between the two bodies depends on the collision dynamics, namely the size of the contact area and the duration of contact. For soft-sphere discrete-particle simulations, it is computationally expensive to resolve the true collision time because doing so would require a restrictively small numerical time step. To improve the computational speed, it is common to increase the 'softness' of the material to artificially increase the collision time, but doing so affects the heat transfer. In this work, two physically-based correction terms are derived to compensate for the increased contact area and time stemming from artificial particle softening. By including both correction terms, the impact that artificial softening has on the conductive heat transfer is removed, thus enabling simulations at greatly reduced computational times without sacrificing physical accuracy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fath, L., E-mail: lukas.fath@kit.edu; Hochbruck, M., E-mail: marlis.hochbruck@kit.edu; Singh, C.V., E-mail: chandraveer.singh@utoronto.ca
Classical integration methods for molecular dynamics are inherently limited due to resonance phenomena occurring at certain time-step sizes. The mollified impulse method can partially avoid this problem by using appropriate filters based on averaging or projection techniques. However, existing filters are computationally expensive and tedious in implementation since they require either analytical Hessians or they need to solve nonlinear systems from constraints. In this work we follow a different approach based on corotation for the construction of a new filter for (flexible) biomolecular simulations. The main advantages of the proposed filter are its excellent stability properties and ease of implementation in standard software without Hessians or solving constraint systems. By simulating multiple realistic examples such as peptide, protein, ice equilibrium and ice–ice friction, the new filter is shown to speed up the computations of long-range interactions by approximately 20%. The proposed filtered integrators allow step sizes as large as 10 fs while keeping the energy drift less than 1% on a 50 ps simulation.
Efficient Calculation of Exact Exchange Within the Quantum Espresso Software Package
NASA Astrophysics Data System (ADS)
Barnes, Taylor; Kurth, Thorsten; Carrier, Pierre; Wichmann, Nathan; Prendergast, David; Kent, Paul; Deslippe, Jack
Accurate simulation of condensed matter at the nanoscale requires careful treatment of the exchange interaction between electrons. In the context of plane-wave DFT, these interactions are typically represented through the use of approximate functionals. Greater accuracy can often be obtained through the use of functionals that incorporate some fraction of exact exchange; however, evaluation of the exact exchange potential is often prohibitively expensive. We present an improved algorithm for the parallel computation of exact exchange in Quantum Espresso, an open-source software package for plane-wave DFT simulation. Through the use of aggressive load balancing and on-the-fly transformation of internal data structures, our code exhibits speedups of approximately an order of magnitude for practical calculations. Additional optimizations are presented targeting the many-core Intel Xeon-Phi "Knights Landing" architecture, which largely powers NERSC's new Cori system. We demonstrate the successful application of the code to difficult problems, including simulation of water at a platinum interface and computation of the X-ray absorption spectra of transition metal oxides.
Fast and accurate mock catalogue generation for low-mass galaxies
NASA Astrophysics Data System (ADS)
Koda, Jun; Blake, Chris; Beutler, Florian; Kazin, Eyal; Marin, Felipe
2016-06-01
We present an accurate and fast framework for generating mock catalogues including low-mass haloes, based on an implementation of the COmoving Lagrangian Acceleration (COLA) technique. Multiple realisations of mock catalogues are crucial for analyses of large-scale structure, but conventional N-body simulations are too computationally expensive for the production of thousands of realisations. We show that COLA simulations can produce accurate mock catalogues with a moderate computation resource for low- to intermediate-mass galaxies in 10¹² M⊙ haloes, both in real and redshift space. COLA simulations have accurate peculiar velocities, without systematic errors in the velocity power spectra for k ≤ 0.15 h Mpc⁻¹, and with only 3 per cent error for k ≤ 0.2 h Mpc⁻¹. We use COLA with 10 time steps and a Halo Occupation Distribution to produce 600 mock galaxy catalogues of the WiggleZ Dark Energy Survey. Our parallelized code for efficient generation of accurate halo catalogues is publicly available at github.com/junkoda/cola_halo.
Khan, Niaz Bahadur; Ibrahim, Zainah; Nguyen, Linh Tuan The; Javed, Muhammad Faisal; Jameel, Mohammed
2017-01-01
This study numerically investigates the vortex-induced vibration (VIV) of an elastically mounted rigid cylinder by using Reynolds-averaged Navier-Stokes (RANS) equations with computational fluid dynamic (CFD) tools. CFD analysis is performed for a fixed-cylinder case with Reynolds number (Re) = 10⁴ and for a cylinder that is free to oscillate in the transverse direction and possesses a low mass-damping ratio and Re = 10⁴. Previously, similar studies have been performed with 3-dimensional and comparatively expensive turbulent models. In the current study, the capability and accuracy of the RANS model are validated, and the results of this model are compared with those of detached eddy simulation, direct numerical simulation, and large eddy simulation models. All three response branches and the maximum amplitude are well captured. The 2-dimensional case with the RANS shear-stress transport k-ω model, which involves minimal computational cost, is reliable and appropriate for analyzing the characteristics of VIV.
Response Matrix Monte Carlo for electron transport
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ballinger, C.T.; Nielsen, D.E. Jr.; Rathkopf, J.A.
1990-11-01
A Response Matrix Monte Carlo (RMMC) method has been developed for solving electron transport problems. This method was born of the need to have a reliable, computationally efficient transport method for low energy electrons (below a few hundred keV) in all materials. Today, condensed history methods are used which reduce the computation time by modeling the combined effect of many collisions but fail at low energy because of the assumptions required to characterize the electron scattering. Analog Monte Carlo simulations are prohibitively expensive since electrons undergo coulombic scattering with little state change after a collision. The RMMC method attempts to combine the accuracy of an analog Monte Carlo simulation with the speed of the condensed history methods. The combined effect of many collisions is modeled, like condensed history, except it is precalculated via an analog Monte Carlo simulation. This avoids the scattering kernel assumptions associated with condensed history methods. Results show good agreement between the RMMC method and analog Monte Carlo. 11 refs., 7 figs., 1 tab.
Towards data warehousing and mining of protein unfolding simulation data.
Berrar, Daniel; Stahl, Frederic; Silva, Candida; Rodrigues, J Rui; Brito, Rui M M; Dubitzky, Werner
2005-10-01
The prediction of protein structure and the precise understanding of protein folding and unfolding processes remains one of the greatest challenges in structural biology and bioinformatics. Computer simulations based on molecular dynamics (MD) are at the forefront of the effort to gain a deeper understanding of these complex processes. Currently, these MD simulations are usually on the order of tens of nanoseconds, generate a large amount of conformational data and are computationally expensive. More and more groups run such simulations and generate a myriad of data, which raises new challenges in managing and analyzing these data. Because of the vast range of proteins researchers want to study and simulate, the computational effort needed to generate the data, the large data volumes involved, and the different types of analyses scientists need to perform, it is desirable to provide a public repository allowing researchers to pool and share protein unfolding data. To adequately organize, manage, and analyze the data generated by unfolding simulation studies, we designed a data warehouse system that is embedded in a grid environment to facilitate the seamless sharing of available computer resources and thus enable many groups to share complex molecular dynamics simulations on a more regular basis. To gain insight into the conformational fluctuations and stability of the monomeric forms of the amyloidogenic protein transthyretin (TTR), molecular dynamics unfolding simulations of the monomer of human TTR have been conducted. Trajectory data and meta-data of the wild-type (WT) protein and the highly amyloidogenic variant L55P-TTR represent the test case for the data warehouse. Web and grid services, especially pre-defined data mining services that can run on or 'near' the data repository of the data warehouse, are likely to play a pivotal role in the analysis of molecular dynamics unfolding data.
Large Eddy/Reynolds-Averaged Navier-Stokes Simulations of CUBRC Base Heating Experiments
NASA Technical Reports Server (NTRS)
Salazar, Giovanni; Edwards, Jack R.; Amar, Adam J.
2012-01-01
Even with great advances in computational techniques and computing power during recent decades, the modeling of unsteady separated flows, such as those encountered in the wake of a re-entry vehicle, continues to be one of the most challenging problems in CFD. Of most interest to the aerothermodynamics community is accurately predicting transient heating loads on the base of a blunt body, which would result in reduced uncertainties and safety margins when designing a re-entry vehicle. However, the prediction of heat transfer can vary widely depending on the turbulence model employed. Therefore, selecting a turbulence model which realistically captures as much of the flow physics as possible will yield improved predictions. Reynolds Averaged Navier Stokes (RANS) models have become increasingly popular due to their good performance with attached flows, and the relatively quick turnaround time to obtain results. However, RANS methods cannot accurately simulate unsteady separated wake flows, and running direct numerical simulation (DNS) on such complex flows is currently too computationally expensive. Large Eddy Simulation (LES) techniques allow for the computation of the large eddies, which contain most of the Reynolds stress, while modeling the smaller (subgrid) eddies. This results in models which are more computationally expensive than RANS methods, but not as prohibitive as DNS. By complementing an LES approach with a RANS model, a hybrid LES/RANS method resolves the larger turbulent scales away from surfaces with LES, and switches to a RANS model inside boundary layers. As pointed out by Bertin et al., this type of hybrid approach has shown a lot of promise for predicting turbulent flows, but work is needed to verify that these models work well in hypersonic flows. The very limited amount of flight and experimental data available presents an additional challenge for researchers. Recently, a joint study by NASA and CUBRC has focused on collecting heat transfer data on the backshell of a scaled model of the Orion Multi-Purpose Crew Vehicle (MPCV). Heat augmentation effects due to the presence of cavities and RCS jet firings were also investigated. The high quality data produced by this effort presents a new set of data which can be used to assess the performance of CFD methods. In this work, a hybrid LES/RANS model developed at North Carolina State University (NCSU) is used to simulate several runs from these experiments, and evaluate the performance of high fidelity methods as compared to more typical RANS models.
Database Driven 6-DOF Trajectory Simulation for Debris Transport Analysis
NASA Technical Reports Server (NTRS)
West, Jeff
2008-01-01
Debris mitigation and risk assessment have been carried out by NASA and its contractors supporting Space Shuttle Return-To-Flight (RTF). As a part of this assessment, analysis of transport potential for debris that may be liberated from the vehicle or from pad facilities prior to tower clear (Lift-Off Debris) is being performed by MSFC. This class of debris includes plume driven and wind driven sources for which lift as well as drag are critical for the determination of the debris trajectory. As a result, NASA MSFC has a need for a debris transport or trajectory simulation that supports the computation of lift effect in addition to drag without the computational expense of fully coupled CFD with 6-DOF. A database driven 6-DOF simulation that uses aerodynamic force and moment coefficients for the debris shape that are interpolated from a database has been developed to meet this need. The design, implementation, and verification of the database driven six degree of freedom (6-DOF) simulation addition to the Lift-Off Debris Transport Analysis (LODTA) software are discussed in this paper.
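A reduced sketch of the database-driven idea, in planar 3-DOF form: lift and drag coefficients are interpolated from a precomputed table versus angle of attack instead of being produced by coupled CFD, and the equations of motion are integrated directly. The table values, mass properties, and fixed attitude below are made up, and the real LODTA capability integrates full 6-DOF rigid-body dynamics with moment coefficients as well.

```python
# Hedged sketch: trajectory integration with table-interpolated lift and drag.
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical coefficient database: angle of attack (deg) -> CL, CD.
alpha_tab = np.array([-90, -45, 0, 45, 90], dtype=float)
cl_tab = np.array([0.0, -0.8, 0.0, 0.8, 0.0])
cd_tab = np.array([1.2, 0.9, 0.4, 0.9, 1.2])

rho, S, m, g = 1.225, 0.02, 0.05, 9.81   # air density, ref. area, mass, gravity
body_pitch = np.radians(20.0)            # fixed attitude for this reduced model

def rhs(t, state):
    x, z, vx, vz = state
    v = np.hypot(vx, vz)
    flight_path = np.arctan2(vz, vx)
    alpha = np.degrees(body_pitch - flight_path)       # angle of attack
    cl = np.interp(alpha, alpha_tab, cl_tab)
    cd = np.interp(alpha, alpha_tab, cd_tab)
    q = 0.5 * rho * v**2
    drag = q * S * cd                    # drag opposes the velocity
    lift = q * S * cl                    # lift is perpendicular to it
    ax = (-drag * np.cos(flight_path) - lift * np.sin(flight_path)) / m
    az = (-drag * np.sin(flight_path) + lift * np.cos(flight_path)) / m - g
    return [vx, vz, ax, az]

sol = solve_ivp(rhs, (0.0, 5.0), [0.0, 30.0, 10.0, 0.0], max_step=0.01)
print("downrange distance at t = 5 s (m):", sol.y[0, -1])
```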
A Deterministic Computational Procedure for Space Environment Electron Transport
NASA Technical Reports Server (NTRS)
Nealy, John E.; Chang, C. K.; Norman, Ryan B.; Blattnig, Steve R.; Badavi, Francis F.; Adamcyk, Anne M.
2010-01-01
A deterministic computational procedure for describing the transport of electrons in condensed media is formulated to simulate the effects and exposures from spectral distributions typical of electrons trapped in planetary magnetic fields. The primary purpose for developing the procedure is to provide a means of rapidly performing numerous repetitive transport calculations essential for electron radiation exposure assessments for complex space structures. The present code utilizes well-established theoretical representations to describe the relevant interactions and transport processes. A combined mean free path and average trajectory approach is used in the transport formalism. For typical space environment spectra, several favorable comparisons with Monte Carlo calculations are made which have indicated that accuracy is not compromised at the expense of the computational speed.
NASA Astrophysics Data System (ADS)
Kadoura, Ahmad; Sun, Shuyu; Salama, Amgad
2014-08-01
Accurate determination of thermodynamic properties of petroleum reservoir fluids is of great interest to many applications, especially in petroleum engineering and chemical engineering. Molecular simulation has many appealing features, especially its requirement of fewer tuned parameters yet better predictive capability; however, it is well known that molecular simulation is very CPU expensive, as compared to equation of state approaches. We have recently introduced an efficient thermodynamically consistent technique to regenerate rapidly Monte Carlo Markov Chains (MCMCs) at different thermodynamic conditions from the existing data points that have been pre-computed with expensive classical simulation. This technique can speed up the simulation more than a million times, making the regenerated molecular simulation almost as fast as equation of state approaches. In this paper, this technique is first briefly reviewed and then numerically investigated in its capability of predicting ensemble averages of primary quantities at different neighboring thermodynamic conditions to the original simulated MCMCs. Moreover, this extrapolation technique is extended to predict second derivative properties (e.g. heat capacity and fluid compressibility). The method works by reweighting and reconstructing generated MCMCs in the canonical ensemble for Lennard-Jones particles. In this paper, the system's potential energy, pressure, isochoric heat capacity and isothermal compressibility were extrapolated along isochors, isotherms and paths of changing temperature and density from the original simulated points. Finally, an optimized set of Lennard-Jones parameters (ε, σ) for single-site models is proposed for methane, nitrogen and carbon monoxide.
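One of the second-derivative properties mentioned above follows from a fluctuation formula, C_V = (⟨E²⟩ − ⟨E⟩²)/(k_B T²), which the sketch below checks on exact canonical samples of a 1-D harmonic oscillator in reduced units; the paper applies such estimators to reweighted chains, which is not reproduced here.

```python
# Hedged sketch: heat capacity from canonical energy fluctuations.
import numpy as np

def heat_capacity_from_fluctuations(energies, T, kB=1.0):
    return energies.var() / (kB * T**2)

# Illustration on exact canonical samples of a 1-D harmonic oscillator,
# for which the fluctuation formula gives C_V = kB / 2.
rng = np.random.default_rng(3)
T = 1.2
x = rng.normal(0.0, np.sqrt(T), size=500_000)
print(heat_capacity_from_fluctuations(0.5 * x**2, T))   # ~0.5
```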
NASA Astrophysics Data System (ADS)
Woldegiorgis, Befekadu Taddesse; van Griensven, Ann; Pereira, Fernando; Bauwens, Willy
2017-06-01
Most common numerical solutions used in CSTR-based in-stream water quality simulators are susceptible to instabilities and/or solution inconsistencies. Usually, they cope with instability problems by adopting computationally expensive small time steps. However, some simulators use fixed computation time steps and hence do not have the flexibility to do so. This paper presents a novel quasi-analytical solution for CSTR-based water quality simulators of an unsteady system. The robustness of the new method is compared with the commonly used fourth-order Runge-Kutta methods, the Euler method and three versions of the SWAT model (SWAT2012, SWAT-TCEQ, and ESWAT). The performance of each method is tested for different hypothetical experiments. Besides the hypothetical data, a real case study is used for comparison. The growth factors we derived as stability measures for the different methods, together with the R-factor used as a consistency measure, turned out to be very useful for determining the most robust method. The new method outperformed all the numerical methods used in the hypothetical comparisons. The application to the Zenne River (Belgium) shows that the new method provides stable and consistent BOD simulations, whereas the SWAT2012 model is shown to be unstable for the standard daily computation time step. The new method yields robust solutions unconditionally. Therefore, it is a reliable scheme for CSTR-based water quality simulators that use first-order reaction formulations.
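The stability contrast at the heart of this comparison can be seen on a single CSTR with a first-order reaction, dC/dt = (Q/V)(C_in - C) - kC, which has an exact one-step update when the inflow is held constant over the step. A minimal sketch under that assumption (parameter values in the comment are illustrative, not taken from the paper):

```python
import numpy as np

def cstr_step_exact(C, dt, Q, V, C_in, k):
    """Exact one-step update of dC/dt = (Q/V)*(C_in - C) - k*C,
    assuming inflow and parameters are constant over the step."""
    a = Q / V + k               # total first-order loss rate
    b = (Q / V) * C_in          # constant source term
    return b / a + (C - b / a) * np.exp(-a * dt)

def cstr_step_euler(C, dt, Q, V, C_in, k):
    """Explicit Euler update; unstable when dt > 2 / (Q/V + k)."""
    return C + dt * ((Q / V) * (C_in - C) - k * C)

# With Q/V = 2/day and k = 3/day, the Euler update diverges for dt = 1 day
# (stability requires dt < 0.4 day), while the exact update is stable for any dt.
```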
47 CFR 32.6124 - General purpose computers expense.
Code of Federal Regulations, 2013 CFR
2013-10-01
... is the physical operation of general purpose computers and the maintenance of operating systems. This... UNIFORM SYSTEM OF ACCOUNTS FOR TELECOMMUNICATIONS COMPANIES Instructions for Expense Accounts § 32.6124... application systems and databases for general purpose computers. (See also § 32.6720, General and...
Lu, Chun-Yaung; Voter, Arthur F; Perez, Danny
2014-01-28
Deposition of solid material from solution is ubiquitous in nature. However, due to the inherent complexity of such systems, this process is comparatively much less understood than deposition from a gas or vacuum. Further, the accurate atomistic modeling of such systems is computationally expensive, therefore leaving many intriguing long-timescale phenomena out of reach. We present an atomistic/continuum hybrid method for extending the simulation timescales of dynamics at solid/liquid interfaces. We demonstrate the method by simulating the deposition of Ag on Ag (001) from solution with a significant speedup over standard MD. The results reveal specific features of diffusive deposition dynamics, such as a dramatic increase in the roughness of the film.
CABS-flex 2.0: a web server for fast simulations of flexibility of protein structures.
Kuriata, Aleksander; Gierut, Aleksandra Maria; Oleniecki, Tymoteusz; Ciemny, Maciej Pawel; Kolinski, Andrzej; Kurcinski, Mateusz; Kmiecik, Sebastian
2018-05-14
Classical simulations of protein flexibility remain computationally expensive, especially for large proteins. A few years ago, we developed a fast method for predicting protein structure fluctuations that uses a single protein model as the input. The method has been made available as the CABS-flex web server and applied in numerous studies of protein structure-function relationships. Here, we present a major update of the CABS-flex web server to version 2.0. The new features include: extension of the method to significantly larger and multimeric proteins, customizable distance restraints and simulation parameters, contact maps and a new, enhanced web server interface. CABS-flex 2.0 is freely available at http://biocomp.chem.uw.edu.pl/CABSflex2.
Graphical Models for Ordinal Data
Guo, Jian; Levina, Elizaveta; Michailidis, George; Zhu, Ji
2014-01-01
A graphical model for ordinal variables is considered, where it is assumed that the data are generated by discretizing the marginal distributions of a latent multivariate Gaussian distribution. The relationships between these ordinal variables are then described by the underlying Gaussian graphical model and can be inferred by estimating the corresponding concentration matrix. Direct estimation of the model is computationally expensive, but an approximate EM-like algorithm is developed to provide an accurate estimate of the parameters at a fraction of the computational cost. Numerical evidence based on simulation studies shows the strong performance of the algorithm, which is also illustrated on data sets on movie ratings and an educational survey. PMID:26120267
Parameterizing correlations between hydrometeor species in mixed-phase Arctic clouds
NASA Astrophysics Data System (ADS)
Larson, Vincent E.; Nielsen, Brandon J.; Fan, Jiwen; Ovchinnikov, Mikhail
2011-01-01
Mixed-phase Arctic clouds, like other clouds, contain small-scale variability in hydrometeor fields, such as cloud water or snow mixing ratio. This variability may be worth parameterizing in coarse-resolution numerical models. In particular, for modeling multispecies processes such as accretion and aggregation, it would be useful to parameterize subgrid correlations among hydrometeor species. However, one difficulty is that there exist many hydrometeor species and many microphysical processes, leading to complexity and computational expense. Existing lower and upper bounds on linear correlation coefficients are too loose to serve directly as a method to predict subgrid correlations. Therefore, this paper proposes an alternative method that begins with the spherical parameterization framework of Pinheiro and Bates (1996), which expresses the correlation matrix in terms of its Cholesky factorization. The values of the elements of the Cholesky matrix are populated here using a "cSigma" parameterization that we introduce based on the aforementioned bounds on correlations. The method has three advantages: (1) the computational expense is tolerable; (2) the correlations are, by construction, guaranteed to be consistent with each other; and (3) the methodology is fairly general and hence may be applicable to other problems. The method is tested noninteractively using simulations of three Arctic mixed-phase cloud cases from two field experiments: the Indirect and Semi-Direct Aerosol Campaign and the Mixed-Phase Arctic Cloud Experiment. Benchmark simulations are performed using a large-eddy simulation (LES) model that includes a bin microphysical scheme. The correlations estimated by the new method satisfactorily approximate the correlations produced by the LES.
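The spherical parameterization of Pinheiro and Bates writes each row of the Cholesky factor in spherical coordinates, so that any choice of angles yields a valid correlation matrix with unit diagonal. A minimal sketch of that construction, assuming the angles have already been produced by something like the paper's "cSigma" rule (names and the example values are illustrative):

```python
import numpy as np

def correlation_from_angles(theta):
    """Build an n x n correlation matrix from n*(n-1)/2 angles in (0, pi)
    via the spherical parameterization of its Cholesky factor L (C = L L^T).
    Each row of L has unit norm by construction, so diag(C) = 1 and C is PSD."""
    m = len(theta)
    n = int((1 + np.sqrt(1 + 8 * m)) / 2)   # recover n from the angle count
    L = np.zeros((n, n))
    L[0, 0] = 1.0
    idx = 0
    for i in range(1, n):
        angles = theta[idx: idx + i]
        idx += i
        s = 1.0
        for j in range(i):
            L[i, j] = s * np.cos(angles[j])
            s *= np.sin(angles[j])
        L[i, i] = s                          # remaining factor keeps the row unit norm
    return L @ L.T

# example: three hydrometeor species, three angles
# C = correlation_from_angles(np.array([0.4, 1.2, 0.9]) * np.pi / 2)
```

Because consistency is enforced by construction, no posterior check for positive definiteness is needed, which is one of the advantages listed above.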
Multiobjective Optimal Control Methodology for the Analysis of Certain Sociodynamic Problems
2009-03-01
Efficient Monte Carlo Estimation of the Expected Value of Sample Information Using Moment Matching.
Heath, Anna; Manolopoulou, Ioanna; Baio, Gianluca
2018-02-01
The Expected Value of Sample Information (EVSI) is used to calculate the economic value of a new research strategy. Although this value would be important to both researchers and funders, there are very few practical applications of the EVSI. This is due to computational difficulties associated with calculating the EVSI in practical health economic models using nested simulations. We present an approximation method for the EVSI that is framed in a Bayesian setting and is based on estimating the distribution of the posterior mean of the incremental net benefit across all possible future samples, known as the distribution of the preposterior mean. Specifically, this distribution is estimated using moment matching coupled with simulations that are available for probabilistic sensitivity analysis, which is typically mandatory in health economic evaluations. This novel approximation method is applied to a health economic model that has previously been used to assess the performance of other EVSI estimators and accurately estimates the EVSI. The computational time for this method is competitive with other methods. We have developed a new calculation method for the EVSI which is computationally efficient and accurate. This novel method relies on some additional simulation, so it can be expensive in models with a large computational cost.
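The computational difficulty being avoided is the standard nested Monte Carlo EVSI estimator, in which every simulated future data set requires its own inner posterior expectation. The sketch below shows that baseline structure for a toy conjugate normal decision model; the model, parameter values and function names are illustrative assumptions, not the paper's health economic model or its moment-matching estimator.

```python
import numpy as np

rng = np.random.default_rng(0)

def net_benefit(theta):
    """Toy incremental net benefit of a new treatment versus standard care."""
    return 1000.0 * theta - 50.0

def evsi_nested(n_outer=2000, n_inner=2000, n_trial=50,
                mu0=0.1, sd0=0.2, sd_obs=1.0):
    """Nested Monte Carlo EVSI for a toy conjugate normal model.
    Outer loop: simulate future trial results; inner loop: posterior expectation."""
    # value of deciding now, with prior information only
    prior_theta = rng.normal(mu0, sd0, n_outer * n_inner)
    value_now = max(0.0, net_benefit(prior_theta).mean())

    se = sd_obs / np.sqrt(n_trial)            # standard error of the trial mean
    value_with_data = 0.0
    for _ in range(n_outer):
        theta_true = rng.normal(mu0, sd0)
        xbar = rng.normal(theta_true, se)     # simulated future trial result
        # conjugate normal posterior given the simulated data
        post_var = 1.0 / (1.0 / sd0**2 + 1.0 / se**2)
        post_mu = post_var * (mu0 / sd0**2 + xbar / se**2)
        post_theta = rng.normal(post_mu, np.sqrt(post_var), n_inner)
        value_with_data += max(0.0, net_benefit(post_theta).mean())
    return value_with_data / n_outer - value_now

# print(evsi_nested())
```

In a realistic health economic model the inner expectation is not available in closed form and each evaluation requires running the full model, which is exactly the cost the moment-matching approach above is designed to avoid.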
NASA Astrophysics Data System (ADS)
Destefano, Anthony; Heerikhuisen, Jacob
2015-04-01
Fully 3D particle simulations can be a computationally and memory expensive task, especially when high resolution grid cells are required. The problem becomes further complicated when parallelization is needed. In this work we focus on computational methods to solve these difficulties. Hilbert curves are used to map the 3D particle space to the 1D contiguous memory space. This method of organization allows for minimized cache misses on the GPU as well as a sorted structure that is equivalent to an octal tree data structure. This type of sorted structure is attractive for uses in adaptive mesh implementations due to the logarithm search time. Implementations using the Message Passing Interface (MPI) library and NVIDIA's parallel computing platform CUDA will be compared, as MPI is commonly used on server nodes with many CPU's. We will also compare static grid structures with those of adaptive mesh structures. The physical test bed will be simulating heavy interstellar atoms interacting with a background plasma, the heliosphere, simulated from fully consistent coupled MHD/kinetic particle code. It is known that charge exchange is an important factor in space plasmas, specifically it modifies the structure of the heliosphere itself. We would like to thank the Alabama Supercomputer Authority for the use of their computational resources.
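The work above uses Hilbert curves; a Z-order (Morton) code is a simpler space-filling mapping that illustrates the same idea of converting 3D cell indices into a single key so that spatially nearby particles end up nearby in memory. A minimal sketch of that stand-in (the Hilbert ordering used in the paper has better locality properties):

```python
import numpy as np

def part1by2(x):
    """Spread the lower 10 bits of x so two zero bits separate each bit."""
    x &= 0x000003FF
    x = (x | (x << 16)) & 0xFF0000FF
    x = (x | (x << 8)) & 0x0300F00F
    x = (x | (x << 4)) & 0x030C30C3
    x = (x | (x << 2)) & 0x09249249
    return x

def morton3d(ix, iy, iz):
    """Interleave three 10-bit cell indices into one 30-bit Z-order key."""
    return part1by2(ix) | (part1by2(iy) << 1) | (part1by2(iz) << 2)

def sort_particles(pos, box, n_cells=1024):
    """Sort particle positions so that spatial neighbours become memory neighbours."""
    cells = np.clip((pos / box * n_cells).astype(np.int64), 0, n_cells - 1)
    keys = np.array([morton3d(i, j, k) for i, j, k in cells])
    return pos[np.argsort(keys)]
```

Sorting by such a key is also what makes the octree-like hierarchy implicit: contiguous ranges of keys correspond to cubic sub-volumes of the domain.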
Computer Simulation of Microwave Devices
NASA Technical Reports Server (NTRS)
Kory, Carol L.
1997-01-01
The accurate simulation of cold-test results including dispersion, on-axis beam interaction impedance, and attenuation of a helix traveling-wave tube (TWT) slow-wave circuit using the three-dimensional code MAFIA (Maxwell's Equations Solved by the Finite Integration Algorithm) was demonstrated for the first time. Obtaining these results is a critical step in the design of TWT's. A well-established procedure to acquire these parameters is to actually build and test a model or a scale model of the circuit. However, this procedure is time-consuming and expensive, and it limits freedom to examine new variations to the basic circuit. These limitations make the need for computational methods crucial since they can lower costs, reduce tube development time, and lessen limitations on novel designs. Computer simulation has been used to accurately obtain cold-test parameters for several slow-wave circuits. Although the helix slow-wave circuit remains the mainstay of the TWT industry because of its exceptionally wide bandwidth, until recently it has been impossible to accurately analyze a helical TWT using its exact dimensions because of the complexity of its geometrical structure. A new computer modeling technique developed at the NASA Lewis Research Center overcomes these difficulties. The MAFIA three-dimensional mesh for a C-band helix slow-wave circuit is shown.
A Lumped Computational Model for Sodium Sulfur Battery Analysis
NASA Astrophysics Data System (ADS)
Wu, Fan
Due to the cost of materials and time-consuming testing procedures, development of new batteries is a slow and expensive practice. The purpose of this study is to develop a computational model and assess the capabilities of such a model designed to aid in the design process and control of sodium sulfur batteries. To this end, a transient lumped computational model derived from an integral analysis of the transport of species, energy and charge throughout the battery has been developed. The computation processes are coupled with the use of Faraday's law, and solutions for the species concentrations, electrical potential and current are produced in a time marching fashion. Properties required for solving the governing equations are calculated and updated as a function of time based on the composition of each control volume. The proposed model is validated against multi-dimensional simulations and experimental results from the literature, and simulation results using the proposed model are presented and analyzed. The computational model and electrochemical model used to solve the equations for the lumped model are compared with similar ones found in the literature. The results obtained from the current model compare favorably with those from experiments and other models.
Komarov, Ivan; D'Souza, Roshan M
2012-01-01
The Gillespie Stochastic Simulation Algorithm (GSSA) and its variants are cornerstone techniques to simulate reaction kinetics in situations where the concentration of the reactant is too low to allow deterministic techniques such as differential equations. The inherent limitations of the GSSA include the time required for executing a single run and the need for multiple runs for parameter sweep exercises due to the stochastic nature of the simulation. Even with very efficient variants of the GSSA, computing single runs and performing parameter sweeps remains prohibitively expensive. Here we present a novel variant of the exact GSSA that is amenable to acceleration by using graphics processing units (GPUs). We parallelize the execution of a single realization across threads in a warp (fine-grained parallelism). A warp is a collection of threads that are executed synchronously on a single multi-processor. Warps executing in parallel on different multi-processors (coarse-grained parallelism) simultaneously generate multiple trajectories. Novel data structures and algorithms reduce memory traffic, which is the bottleneck in computing the GSSA. Our benchmarks show an 8×-120× performance gain over various state-of-the-art serial algorithms when simulating different types of models.
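For reference, the serial baseline being accelerated here is the direct-method GSSA, which draws an exponential waiting time from the total propensity and then selects which reaction fires. A minimal single-trajectory sketch (the toy reaction network in the usage comment is illustrative):

```python
import numpy as np

def gillespie_direct(x0, stoich, rates, propensity, t_end,
                     rng=np.random.default_rng()):
    """Direct-method Gillespie SSA producing one trajectory.

    x0         : initial copy numbers, shape (n_species,)
    stoich     : state-change vectors, shape (n_reactions, n_species)
    rates      : rate constants passed to the propensity function
    propensity : callable(x, rates) -> array of reaction propensities
    """
    t, x = 0.0, np.array(x0, dtype=float)
    times, states = [t], [x.copy()]
    while t < t_end:
        a = propensity(x, rates)
        a0 = a.sum()
        if a0 <= 0.0:
            break                              # no reaction can fire
        t += rng.exponential(1.0 / a0)         # waiting time to the next reaction
        j = rng.choice(len(a), p=a / a0)       # which reaction fires
        x += stoich[j]
        times.append(t)
        states.append(x.copy())
    return np.array(times), np.array(states)

# toy example: A -> B with propensity c * A
# ts, xs = gillespie_direct([100, 0], np.array([[-1, 1]]), 0.1,
#                           lambda x, c: np.array([c * x[0]]), t_end=50.0)
```

The per-step work (propensity evaluation, selection and state update) is what the fine-grained warp-level parallelization above distributes across GPU threads.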
Efficient Simulation of Tropical Cyclone Pathways with Stochastic Perturbations
NASA Astrophysics Data System (ADS)
Webber, R.; Plotkin, D. A.; Abbot, D. S.; Weare, J.
2017-12-01
Global Climate Models (GCMs) are known to statistically underpredict intense tropical cyclones (TCs) because they fail to capture the rapid intensification and high wind speeds characteristic of the most destructive TCs. Stochastic parametrization schemes have the potential to improve the accuracy of GCMs. However, current analysis of these schemes through direct sampling is limited by the computational expense of simulating a rare weather event at fine spatial gridding. The present work introduces a stochastically perturbed parametrization tendency (SPPT) scheme to increase simulated intensity of TCs. We adapt the Weighted Ensemble algorithm to simulate the distribution of TCs at a fraction of the computational effort required in direct sampling. We illustrate the efficiency of the SPPT scheme by comparing simulations at different spatial resolutions and stochastic parameter regimes. Stochastic parametrization and rare event sampling strategies have great potential to improve TC prediction and aid understanding of tropical cyclogenesis. Since rising sea surface temperatures are postulated to increase the intensity of TCs, these strategies can also improve predictions about climate change-related weather patterns. The rare event sampling strategies used in the current work are not only a novel tool for studying TCs, but they may also be applied to sampling any range of extreme weather events.
Accelerating rejection-based simulation of biochemical reactions with bounded acceptance probability
NASA Astrophysics Data System (ADS)
Thanh, Vo Hong; Priami, Corrado; Zunino, Roberto
2016-06-01
Stochastic simulation of large biochemical reaction networks is often computationally expensive due to the disparate reaction rates and high variability in the populations of chemical species. An approach to accelerate the simulation is to allow multiple reaction firings before performing an update, by assuming that reaction propensities change by a negligible amount during a time interval. Species with small populations involved in the firings of fast reactions significantly affect both the performance and accuracy of this simulation approach. It is even worse when these small-population species are involved in a large number of reactions. We present in this paper a new approximate algorithm to cope with this problem. It is based on bounding the acceptance probability of a reaction selected by the exact rejection-based simulation algorithm, which employs propensity bounds of reactions and a rejection-based mechanism to select the next reaction firings. The reaction is guaranteed to be selected to fire with an acceptance rate greater than a predefined probability, and the selection becomes exact if this probability is set to one. Our new algorithm reduces the computational cost of selecting the next reaction firing and of updating the reaction propensities.
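The exact rejection-based selection that this algorithm builds on can be sketched as follows: a candidate reaction is drawn in proportion to its propensity upper bound and is accepted by comparing a uniform draw first against the cheap lower bound and only then against the exact propensity. This is a minimal sketch of that selection step only; the paper's contribution of bounding the acceptance probability, which makes the selection approximate but faster, is not shown.

```python
import numpy as np

def select_reaction_rejection(a_low, a_high, exact_propensity, rng):
    """Select the next reaction using propensity bounds and rejection.

    a_low, a_high    : lower/upper propensity bounds for each reaction, valid
                       while species stay inside their fluctuation intervals
    exact_propensity : callable(j) -> exact propensity of reaction j, evaluated
                       only when the cheap lower-bound test fails
    Returns the index of the accepted reaction.
    """
    total_high = a_high.sum()
    while True:
        j = rng.choice(len(a_high), p=a_high / total_high)  # candidate reaction
        u = rng.random() * a_high[j]
        if u <= a_low[j]:
            return j            # accepted without computing the exact propensity
        if u <= exact_propensity(j):
            return j            # accepted after a single exact evaluation
        # otherwise rejected: draw a new candidate
```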
NASA Astrophysics Data System (ADS)
Kerschbaum, M.; Hopmann, C.
2016-06-01
The computationally efficient simulation of the progressive damage behaviour of continuous fibre reinforced plastics is still a challenging task with currently available computer aided engineering methods. This paper presents an original approach for an energy based continuum damage model which accounts for stress/strain nonlinearities, transverse and shear stress interaction phenomena, quasi-plastic shear strain components, strain rate effects, regularised damage evolution and consideration of load reversal effects. The physically based modelling approach enables experimental determination of all parameters at the ply level, avoiding expensive inverse analysis procedures. The modelling strategy, implementation and verification of this model using commercially available explicit finite element software are detailed. The model is then applied to simulate the impact and penetration of carbon fibre reinforced cross-ply specimens with variation of the impact speed. The simulation results show that the presented approach enables a good representation of the force/displacement curves and especially good agreement with the experimentally observed fracture patterns. In addition, the mesh dependency of the results was assessed for one impact case, showing only very little change in the simulation results, which emphasises the general applicability of the presented method.
NASA Astrophysics Data System (ADS)
Paganini, Michela; de Oliveira, Luke; Nachman, Benjamin
2018-01-01
The precise modeling of subatomic particle interactions and propagation through matter is paramount for the advancement of nuclear and particle physics searches and precision measurements. The most computationally expensive step in the simulation pipeline of a typical experiment at the Large Hadron Collider (LHC) is the detailed modeling of the full complexity of physics processes that govern the motion and evolution of particle showers inside calorimeters. We introduce CaloGAN, a new fast simulation technique based on generative adversarial networks (GANs). We apply these neural networks to the modeling of electromagnetic showers in a longitudinally segmented calorimeter and achieve speedup factors comparable to or better than existing full simulation techniques on CPU (100× to 1000×) and even faster on GPU (up to ~10^5×). There are still challenges for achieving precision across the entire phase space, but our solution can reproduce a variety of geometric shower shape properties of photons, positrons, and charged pions. This represents a significant stepping stone toward a full neural network-based detector simulation that could save significant computing time and enable many analyses now and in the future.
Dynamic adaptive chemistry for turbulent flame simulations
NASA Astrophysics Data System (ADS)
Yang, Hongtao; Ren, Zhuyin; Lu, Tianfeng; Goldin, Graham M.
2013-02-01
The use of large chemical mechanisms in flame simulations is computationally expensive due to the large number of chemical species and the wide range of chemical time scales involved. This study investigates the use of dynamic adaptive chemistry (DAC) for efficient chemistry calculations in turbulent flame simulations. DAC is achieved through the directed relation graph (DRG) method, which is invoked for each computational fluid dynamics cell/particle to obtain a small skeletal mechanism that is valid for the local thermochemical condition. Consequently, during reaction fractional steps, one needs to solve a smaller set of ordinary differential equations governing chemical kinetics. Test calculations are performed in a partially-stirred reactor (PaSR) involving both methane/air premixed and non-premixed combustion with chemistry described by the 53-species GRI-Mech 3.0 mechanism and the 129-species USC-Mech II mechanism augmented with recently updated NOx pathways, respectively. Results show that, in the DAC approach, the DRG reduction threshold effectively controls the incurred errors in the predicted temperature and species concentrations. The computational saving achieved by DAC increases with the size of the chemical kinetic mechanism. For the PaSR simulations, DAC achieves a speedup factor of up to three for GRI-Mech 3.0 and up to six for USC-Mech II in simulation time, while at the same time maintaining good accuracy in temperature and species concentration predictions.
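The per-cell DRG reduction invoked above can be sketched as a graph search: direct interaction coefficients r_AB measure how much removing species B would perturb the production or consumption of species A, and species reachable from a set of target species through edges with r_AB above the threshold are retained. A minimal sketch of that species-selection step; the function and variable names are illustrative, and the coefficient definition follows the basic DRG formulation rather than any code-specific variant.

```python
import numpy as np
from collections import deque

def drg_skeletal_species(nu, omega, species, targets, eps):
    """Directed relation graph (DRG) species selection for one local state.

    nu      : stoichiometric coefficients, shape (n_reactions, n_species)
    omega   : net reaction rates at the current thermochemical state
    species : list of species names
    targets : species the skeletal mechanism must retain (e.g. fuel, oxidizer)
    eps     : reduction threshold controlling the incurred error
    """
    participates = nu != 0
    # denominator: total production/consumption flux of each species A
    denom = np.abs(nu * omega[:, None]).sum(axis=0)
    keep = set(targets)
    queue = deque(targets)
    while queue:
        A = species.index(queue.popleft())
        for B, name in enumerate(species):
            if name in keep or A == B:
                continue
            shared = participates[:, A] & participates[:, B]
            num = np.abs(nu[shared, A] * omega[shared]).sum()
            if denom[A] > 0 and num / denom[A] > eps:
                keep.add(name)          # B matters for A: keep it and expand from it
                queue.append(name)
    return keep
```

Only the ODEs of the returned species are then integrated during the reaction fractional step, which is where the reported speedup comes from.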
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kadoura, Ahmad, E-mail: ahmad.kadoura@kaust.edu.sa; Sun, Shuyu, E-mail: shuyu.sun@kaust.edu.sa; Siripatana, Adil, E-mail: adil.siripatana@kaust.edu.sa
In this work, two Polynomial Chaos (PC) surrogates were generated to reproduce Monte Carlo (MC) molecular simulation results of the canonical (single-phase) and the NVT-Gibbs (two-phase) ensembles for a system of normalized structureless Lennard-Jones (LJ) particles. The main advantage of such surrogates, once generated, is the capability of accurately computing the needed thermodynamic quantities in a few seconds, thus efficiently replacing the computationally expensive MC molecular simulations. Benefiting from the tremendous computational time reduction, the PC surrogates were used to conduct large-scale optimization in order to propose single-site LJ models for several simple molecules. Experimental data of several pure components, a set of supercritical isotherms and part of the two-phase envelope, were used for tuning the LJ parameters (ε, σ). Based on the conducted optimization, excellent fit was obtained for different noble gases (Ar, Kr, and Xe) and other small molecules (CH4, N2, and CO). On the other hand, due to the simplicity of the LJ model used, dramatic deviations between simulation and experimental data were observed, especially in the two-phase region, for more complex molecules such as CO2 and C2H6.
Simulation of 2D Kinetic Effects in Plasmas using the Grid Based Continuum Code LOKI
NASA Astrophysics Data System (ADS)
Banks, Jeffrey; Berger, Richard; Chapman, Tom; Brunner, Stephan
2016-10-01
Kinetic simulation of multi-dimensional plasma waves through direct discretization of the Vlasov equation is a useful tool to study many physical interactions and is particularly attractive for situations where minimal fluctuation levels are desired, for instance, when measuring growth rates of plasma wave instabilities. However, direct discretization of phase space can be computationally expensive, and as a result there are few examples of published results using Vlasov codes in more than a single configuration space dimension. In an effort to fill this gap we have developed the Eulerian-based kinetic code LOKI that evolves the Vlasov-Poisson system in 2+2-dimensional phase space. The code is designed to reduce the cost of phase-space computation by using fully fourth-order accurate conservative finite differencing, while retaining excellent parallel scalability that efficiently uses large-scale computing resources. In this poster I will discuss the algorithms used in the code as well as some aspects of their parallel implementation using MPI. I will also give an overview of simulation results of basic plasma wave instabilities relevant to laser-plasma interaction, which have been obtained using the code.
Amisaki, Takashi; Toyoda, Shinjiro; Miyagawa, Hiroh; Kitamura, Kunihiro
2003-04-15
Evaluation of long-range Coulombic interactions still represents a bottleneck in the molecular dynamics (MD) simulations of biological macromolecules. Despite the advent of sophisticated fast algorithms, such as the fast multipole method (FMM), accurate simulations still demand a great amount of computation time due to the accuracy/speed trade-off inherently involved in these algorithms. Unless higher-order multipole expansions, which are extremely expensive to evaluate, are employed, a large amount of the execution time is still spent in directly calculating particle-particle interactions within the nearby region of each particle. To reduce this execution time for pair interactions, we developed a computation unit (board), called MD-Engine II, that calculates nonbonded pairwise interactions using specially designed hardware. Four custom arithmetic processors and a processor for memory manipulation ("particle processor") are mounted on the computation board. The arithmetic processors are responsible for calculation of the pair interactions. The particle processor plays a central role in realizing efficient cooperation with the FMM. The results of a series of 50-ps MD simulations of a protein-water system (50,764 atoms) indicated that a more stringent setting of accuracy in FMM computation, compared with those previously reported, was required for accurate simulations over long time periods. Such a level of accuracy was efficiently achieved using the cooperative calculations of the FMM and MD-Engine II. On an Alpha 21264 PC, the FMM computation at a moderate but tolerable level of accuracy was accelerated by a factor of 16.0 using three boards. At a high level of accuracy, the cooperative calculation achieved a 22.7-fold acceleration over the corresponding conventional FMM calculation. In the cooperative calculations of the FMM and MD-Engine II, it was possible to achieve more accurate computation at a comparable execution time by incorporating larger nearby regions. J Comput Chem 24: 582-592, 2003.
49 CFR 1242.46 - Computers and data processing equipment (account XX-27-46).
Code of Federal Regulations, 2012 CFR
2012-10-01
... REPORTS SEPARATION OF COMMON OPERATING EXPENSES BETWEEN FREIGHT SERVICE AND PASSENGER SERVICE FOR RAILROADS 1 Operating Expenses-Equipment § 1242.46 Computers and data processing equipment (account XX-27-46... 49 Transportation 9 2012-10-01 2012-10-01 false Computers and data processing equipment (account...
SubspaceEM: A Fast Maximum-a-posteriori Algorithm for Cryo-EM Single Particle Reconstruction
Dvornek, Nicha C.; Sigworth, Fred J.; Tagare, Hemant D.
2015-01-01
Single particle reconstruction methods based on the maximum-likelihood principle and the expectation-maximization (E–M) algorithm are popular because of their ability to produce high resolution structures. However, these algorithms are computationally very expensive, requiring a network of computational servers. To overcome this computational bottleneck, we propose a new mathematical framework for accelerating maximum-likelihood reconstructions. The speedup is by orders of magnitude and the proposed algorithm produces similar quality reconstructions compared to the standard maximum-likelihood formulation. Our approach uses subspace approximations of the cryo-electron microscopy (cryo-EM) data and projection images, greatly reducing the number of image transformations and comparisons that are computed. Experiments using simulated and actual cryo-EM data show that speedup in overall execution time compared to traditional maximum-likelihood reconstruction reaches factors of over 300. PMID:25839831
NASA Technical Reports Server (NTRS)
Minnetyan, Levon; Chamis, Christos C. (Technical Monitor)
2003-01-01
Computational simulation results can give the prediction of damage growth and progression and fracture toughness of composite structures. Experimental data from the literature provide environmental effects on the fracture behavior of metallic or fiber composite structures. However, the traditional experimental methods to analyze the influence of the imposed conditions are expensive and time consuming. This research used the CODSTRAN code to model the temperature effects, scaling effects and the loading effects of fiber/braided composite specimens with and without fiber-optic sensors on the damage initiation and energy release rates. The load-displacement relationship and fracture toughness assessment approach is compared with test results from the literature, and it is verified that the computational simulation, with the use of established material modeling and finite element modules, adequately tracks the changes of fracture toughness and subsequent fracture propagation for any fiber/braided composite structure due to the change of fiber orientations, presence of large diameter optical fibers, and any loading conditions.
Hardware-in-the-Loop Testing of Utility-Scale Wind Turbine Generators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schkoda, Ryan; Fox, Curtiss; Hadidi, Ramtin
2016-01-26
Historically, wind turbine prototypes were tested in the field, which was, and continues to be, a slow and expensive process. As a result, wind turbine dynamometer facilities were developed to provide a more cost-effective alternative to field testing. New turbine designs were tested and the design models were validated using dynamometers to drive the turbines in a controlled environment. Over the years, both wind turbine dynamometer testing and computer technology have matured and improved, and the two are now being joined to provide hardware-in-the-loop (HIL) testing. This type of testing uses a computer to simulate the items that are missing from a dynamometer test, such as grid stiffness, voltage, frequency, rotor, and hub. Furthermore, wind input and changing electric grid conditions can now be simulated in real time. This recent advance has greatly increased the utility of dynamometer testing for the development of wind turbine systems.
A Machine Learns to Predict the Stability of Tightly Packed Planetary Systems
NASA Astrophysics Data System (ADS)
Tamayo, Daniel; Silburt, Ari; Valencia, Diana; Menou, Kristen; Ali-Dib, Mohamad; Petrovich, Cristobal; Huang, Chelsea X.; Rein, Hanno; van Laerhoven, Christa; Paradise, Adiv; Obertas, Alysa; Murray, Norman
2016-12-01
The requirement that planetary systems be dynamically stable is often used to vet new discoveries or set limits on unconstrained masses or orbital elements. This is typically carried out via computationally expensive N-body simulations. We show that characterizing the complicated and multi-dimensional stability boundary of tightly packed systems is amenable to machine-learning methods. We find that training an XGBoost machine-learning algorithm on physically motivated features yields an accurate classifier of stability in packed systems. On the stability timescale investigated (10^7 orbits), it is three orders of magnitude faster than direct N-body simulations. Optimized machine-learning classifiers for dynamical stability may thus prove useful across the discipline, e.g., to characterize the exoplanet sample discovered by the upcoming Transiting Exoplanet Survey Satellite. This proof of concept motivates investing computational resources to train algorithms capable of predicting stability over longer timescales and over broader regions of phase space.
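The training setup can be sketched with the standard scikit-learn-style XGBoost interface; the synthetic placeholder data, feature choices and hyperparameters below are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np
import xgboost as xgb
from sklearn.model_selection import train_test_split

# Placeholder data standing in for the real training set: each row would hold
# physically motivated features of a packed planetary system (e.g. minimum
# pairwise separation in mutual Hill radii, eccentricities, proximity to
# resonance) measured from a short N-body integration; the label is 1 if the
# system survived the long integration.
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 10))
y = (X[:, 0] + 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=5000) > 0.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = xgb.XGBClassifier(n_estimators=300, max_depth=5, learning_rate=0.05)
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
# Once trained, classifying a new system takes microseconds, versus hours of
# direct N-body integration over 10^7 orbits.
```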
A Coarse-Grained Protein Model in a Water-like Solvent
NASA Astrophysics Data System (ADS)
Sharma, Sumit; Kumar, Sanat K.; Buldyrev, Sergey V.; Debenedetti, Pablo G.; Rossky, Peter J.; Stanley, H. Eugene
2013-05-01
Simulations employing an explicit atom description of proteins in solvent can be computationally expensive. On the other hand, coarse-grained protein models in implicit solvent miss essential features of the hydrophobic effect, especially its temperature dependence, and have limited ability to capture the kinetics of protein folding. We propose a free space two-letter protein ("H-P") model in a simple, but qualitatively accurate description for water, the Jagla model, which coarse-grains water into an isotropically interacting sphere. Using Monte Carlo simulations, we design protein-like sequences that can undergo a collapse, exposing the "Jagla-philic" monomers to the solvent, while maintaining a "hydrophobic" core. This protein-like model manifests heat and cold denaturation in a manner that is reminiscent of proteins. While this protein-like model lacks the details that would introduce secondary structure formation, we believe that these ideas represent a first step in developing a useful, but computationally expedient, means of modeling proteins.
Al-Sadoon, Mohammed A G; Ali, Nazar T; Dama, Yousf; Zuid, Abdulkareim; Jones, Stephen M R; Abd-Alhameed, Raed A; Noras, James M
2017-11-15
This paper proposes a new low complexity angle of arrival (AOA) method for signal direction estimation in multi-element smart wireless communication systems. The new method estimates the AOAs of the received signals directly from the received signals with significantly reduced complexity since it does not need to construct the correlation matrix, invert the matrix or apply eigen-decomposition, which are computationally expensive. A mathematical model of the proposed method is illustrated and then verified using extensive computer simulations. Both linear and circular sensors arrays are studied using various numerical examples. The method is systematically compared with other common and recently introduced AOA methods over a wide range of scenarios. The simulated results show that the new method has several advantages in terms of reduced complexity and improved accuracy under the assumptions of correlated signals and limited numbers of snapshots.
PIC Simulations of Hypersonic Plasma Instabilities
NASA Astrophysics Data System (ADS)
Niehoff, D.; Ashour-Abdalla, M.; Niemann, C.; Decyk, V.; Schriver, D.; Clark, E.
2013-12-01
The plasma sheaths formed around hypersonic aircraft (Mach number, M > 10) are relatively unexplored and of interest today to both further the development of new technologies and solve long-standing engineering problems. Both laboratory experiments and analytical/numerical modeling are required to advance the understanding of these systems; it is advantageous to perform these tasks in tandem. There has already been some work done to study these plasmas by experiments that create a rapidly expanding plasma through ablation of a target with a laser. In combination with a preformed magnetic field, this configuration leads to a magnetic "bubble" formed behind the front as particles travel at about Mach 30 away from the target. Furthermore, the experiment was able to show the generation of fast electrons which could be due to instabilities on electron scales. To explore this, future experiments will have more accurate diagnostics capable of observing time- and length-scales below typical ion scales, but simulations are a useful tool to explore these plasma conditions theoretically. Particle in Cell (PIC) simulations are necessary when phenomena are expected to be observed at these scales, and also have the advantage of being fully kinetic with no fluid approximations. However, if the scales of the problem are not significantly below the ion scales, then the initialization of the PIC simulation must be very carefully engineered to avoid unnecessary computation and to select the minimum window where structures of interest can be studied. One method of doing this is to seed the simulation with either experiment or ion-scale simulation results. Previous experiments suggest that a useful configuration for studying hypersonic plasma configurations is a ring of particles rapidly expanding transverse to an external magnetic field, which has been simulated on the ion scale with an ion-hybrid code. This suggests that the PIC simulation should have an equivalent configuration; however, modeling a plasma expanding radially in every direction is computationally expensive. In order to reduce the computational expense, we use a radial density profile from the hybrid simulation results to seed a self-consistent PIC simulation in one direction (x), while creating a current in the direction (y) transverse to both the drift velocity and the magnetic field (z) to create the magnetic bubble observed in experiment. The simulation will be run in two spatial dimensions but retain three velocity dimensions, and the results will be used to explore the growth of micro-instabilities present in hypersonic plasmas in the high-density region as it moves through the simulation box. This will still require a significantly large box in order to compare with experiment, as the experiments are being performed over distances of 10^4 λDe and durations of 10^5 ωpe^-1.
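Seeding a PIC run from a tabulated density profile produced by another code is commonly done by inverse-transform sampling along the seeded direction. A minimal sketch of that step, with hypothetical array names for the hybrid-run output and assuming a one-dimensional (slab-like) profile as described above:

```python
import numpy as np

def seed_from_profile(x_grid, density, n_particles, rng=np.random.default_rng()):
    """Draw particle positions along x from a tabulated (non-negative) density
    profile, e.g. the output of an ion-hybrid run, by inverse-transform sampling."""
    cdf = np.cumsum(density)          # cumulative distribution on the grid
    cdf = cdf / cdf[-1]
    u = rng.random(n_particles)
    return np.interp(u, cdf, x_grid)  # invert the CDF by linear interpolation

# x_positions = seed_from_profile(x_hybrid, n_hybrid, 10_000_000)
```

Velocities would be drawn separately (e.g. from drifting Maxwellians consistent with the hybrid results), and the transverse current that forms the magnetic bubble is imposed as described above.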
Multiple point statistical simulation using uncertain (soft) conditional data
NASA Astrophysics Data System (ADS)
Hansen, Thomas Mejer; Vu, Le Thanh; Mosegaard, Klaus; Cordua, Knud Skou
2018-05-01
Geostatistical simulation methods have been used to quantify spatial variability of reservoir models since the 1980s. In the last two decades, state-of-the-art simulation methods have changed from being based on covariance-based two-point statistics to multiple-point statistics (MPS), which allow simulation of more realistic Earth structures. In addition, increasing amounts of geo-information (geophysical, geological, etc.) from multiple sources are being collected. This poses the problem of integrating these different sources of information, such that decisions related to reservoir models can be taken on as informed a basis as possible. In principle, though difficult in practice, this can be achieved using computationally expensive Monte Carlo methods. Here we investigate the use of sequential simulation based MPS methods conditional to uncertain (soft) data as a computationally efficient alternative. First, it is demonstrated that current implementations of sequential simulation based on MPS (e.g. SNESIM, ENESIM and Direct Sampling) do not account properly for uncertain conditional information, due to a combination of using only co-located information and a random simulation path. Then, we suggest two approaches that better account for the available uncertain information. The first makes use of a preferential simulation path, where more informed model parameters are visited preferentially to less informed ones. The second approach involves using non-co-located uncertain information. For different types of available data, these approaches are demonstrated to produce simulation results similar to those obtained by the general Monte Carlo based approach. These methods allow MPS simulation to condition properly to uncertain (soft) data, and hence provide a computationally attractive approach for integration of information about a reservoir model.
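One simple way to realize a preferential simulation path is to visit nodes in order of how informative their soft data are, for instance ranked by the entropy of the local soft probabilities. The sketch below illustrates that ordering idea only; it is an assumption for illustration, not necessarily the exact rule used in the paper.

```python
import numpy as np

def preferential_path(soft_probs, rng=np.random.default_rng()):
    """Return a visiting order for sequential simulation in which the most
    informed nodes (lowest entropy of their soft data) are simulated first.

    soft_probs : array (n_nodes, n_categories); rows with uniform probabilities
                 represent nodes carrying no soft information.
    """
    p = np.clip(soft_probs, 1e-12, 1.0)
    entropy = -(p * np.log(p)).sum(axis=1)
    # small random jitter breaks ties so equally informed nodes are shuffled
    return np.argsort(entropy + 1e-6 * rng.random(len(entropy)))
```

Visiting well-informed nodes first lets their soft data propagate into the conditioning of the remaining, less informed nodes, which is the effect described above.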
Systems modeling and simulation applications for critical care medicine
2012-01-01
Critical care delivery is a complex, expensive, error-prone medical specialty and remains the focal point of major improvement efforts in healthcare delivery. Various modeling and simulation techniques offer unique opportunities to better understand the interactions between clinical physiology and care delivery. The novel insights gained from the systems perspective can then be used to develop and test new treatment strategies and make critical care delivery more efficient and effective. However, modeling and simulation applications in critical care remain underutilized. This article provides an overview of major computer-based simulation techniques as applied to critical care medicine. We provide three application examples of different simulation techniques, including a) a pathophysiological model of acute lung injury, b) process modeling of critical care delivery, and c) an agent-based model to study the interaction between pathophysiology and healthcare delivery. Finally, we identify certain challenges to, and opportunities for, future research in the area. PMID:22703718
Economical Unsteady High-Fidelity Aerodynamics for Structural Optimization with a Flutter Constraint
NASA Technical Reports Server (NTRS)
Bartels, Robert E.; Stanford, Bret K.
2017-01-01
Structural optimization with a flutter constraint for a vehicle designed to fly in the transonic regime is a particularly difficult task. In this speed range, the flutter boundary is very sensitive to aerodynamic nonlinearities, typically requiring high-fidelity Navier-Stokes simulations. However, the repeated application of unsteady computational fluid dynamics to guide an aeroelastic optimization process is very computationally expensive. This expense has motivated the development of methods that incorporate aspects of the aerodynamic nonlinearity, classical tools of flutter analysis, and more recent methods of optimization. While it is possible to use doublet lattice method aerodynamics, this paper focuses on the use of an unsteady high-fidelity aerodynamic reduced order model combined with successive transformations that allows for an economical way of utilizing high-fidelity aerodynamics in the optimization process. This approach is applied to the Common Research Model wing structural design. As might be expected, the high-fidelity aerodynamics produces a heavier wing than that optimized with doublet lattice aerodynamics. It is found that the optimized lower skin of the wing using high-fidelity aerodynamics differs significantly from that using doublet lattice aerodynamics.
Telehealth innovations in health education and training.
Conde, José G; De, Suvranu; Hall, Richard W; Johansen, Edward; Meglan, Dwight; Peng, Grace C Y
2010-01-01
Telehealth applications are increasingly important in many areas of health education and training. In addition, they will play a vital role in biomedical research and research training by facilitating remote collaborations and providing access to expensive or remote instrumentation. In order to fulfill their true potential to leverage education, training, and research activities, innovations in telehealth applications should be fostered across a range of technology fronts, including online, on-demand computational models for simulation; simplified interfaces for software and hardware; software frameworks for simulations; portable telepresence systems; artificial intelligence applications to be applied when simulated human patients are not an option; and the development of more simulator applications. This article presents the results of discussions on potential areas of future development, barriers to overcome, and suggestions to translate the promise of telehealth applications into a transformed environment of training, education, and research in the health sciences.
Soapy: an adaptive optics simulation written purely in Python for rapid concept development
NASA Astrophysics Data System (ADS)
Reeves, Andrew
2016-07-01
Soapy is a newly developed Adaptive Optics (AO) simulation which aims to be a flexible and fast-to-use tool-kit for many applications in the field of AO. It is written purely in the Python language, adding to and taking advantage of the already rich ecosystem of scientific libraries and programs. The simulation has been designed to be extremely modular, such that each component can be used stand-alone for projects which do not require a full end-to-end simulation. Ease of use, modularity and code clarity have been prioritised at the expense of computational performance. Though this means the code is not yet suitable for large studies of Extremely Large Telescope AO systems, it is well suited to education, exploration of new AO concepts and investigations of current-generation telescopes.
Time-Spectral Rotorcraft Simulations on Overset Grids
NASA Technical Reports Server (NTRS)
Leffell, Joshua I.; Murman, Scott M.; Pulliam, Thomas H.
2014-01-01
The Time-Spectral method is derived as a Fourier collocation scheme and applied to NASA's overset Reynolds-averaged Navier-Stokes (RANS) solver OVERFLOW. The paper outlines the Time-Spectral OVERFLOW implementation. Successful low-speed laminar plunging NACA 0012 airfoil simulations demonstrate the capability of the Time-Spectral method to resolve the highly vortical wakes typical of more expensive three-dimensional rotorcraft configurations. Dealiasing, in the form of spectral vanishing viscosity (SVV), facilitates the convergence of Time-Spectral calculations of high-frequency flows. Finally, simulations of the isolated V-22 Osprey tiltrotor for both hover and forward (edgewise) flight validate the three-dimensional Time-Spectral OVERFLOW implementation. The Time-Spectral hover simulation matches the time-accurate calculation using a single harmonic. Significantly more temporal modes and SVV are required to accurately compute the forward flight case because of its more active, high-frequency wake.
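In a Fourier collocation (Time-Spectral) setting, the time derivative at the stored time instances of a periodic flow is evaluated by differentiating in frequency space instead of marching in time. A minimal scalar sketch of that operation, one common way to evaluate it, with a built-in check against an analytic derivative (the solver's actual implementation couples this operator into the RANS residual):

```python
import numpy as np

def spectral_time_derivative(q, period):
    """Time derivative of one period of a periodic signal sampled at N
    equispaced instances, computed by Fourier collocation."""
    n = len(q)
    k = np.fft.fftfreq(n, d=period / n)       # frequencies in cycles per unit time
    dq_hat = 2j * np.pi * k * np.fft.fft(q)   # d/dt <-> multiplication by i*omega
    return np.real(np.fft.ifft(dq_hat))

# check: d/dt sin(2*pi*t/T) = (2*pi/T) * cos(2*pi*t/T), using an odd number of instances
T = 2.0
t = np.linspace(0.0, T, 9, endpoint=False)
q = np.sin(2 * np.pi * t / T)
err = spectral_time_derivative(q, T) - (2 * np.pi / T) * np.cos(2 * np.pi * t / T)
print("max error:", np.max(np.abs(err)))      # ~ machine precision
```

Because the periodic solution is represented by only a handful of time instances (harmonics), the cost scales with the number of retained modes rather than with thousands of physical time steps, which is the economy exploited above.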
Plasmonic resonances of nanoparticles from large-scale quantum mechanical simulations
NASA Astrophysics Data System (ADS)
Zhang, Xu; Xiang, Hongping; Zhang, Mingliang; Lu, Gang
2017-09-01
Plasmonic resonance of metallic nanoparticles results from the coherent motion of their conduction electrons, driven by incident light. For nanoparticles less than 10 nm in diameter, localized surface plasmonic resonances become sensitive to the quantum nature of the conduction electrons. Unfortunately, quantum mechanical simulations based on time-dependent Kohn-Sham density functional theory are computationally too expensive to tackle metal particles larger than 2 nm. Herein, we introduce the recently developed time-dependent orbital-free density functional theory (TD-OFDFT) approach which enables large-scale quantum mechanical simulations of the plasmonic responses of metallic nanostructures. Using TD-OFDFT, we have performed quantum mechanical simulations to understand the size-dependent plasmonic response of Na nanoparticles and plasmonic responses in Na nanoparticle dimers and trimers. An outlook on future development of the TD-OFDFT method is also presented.
Comparison of DAC and MONACO DSMC Codes with Flat Plate Simulation
NASA Technical Reports Server (NTRS)
Padilla, Jose F.
2010-01-01
Various implementations of the direct simulation Monte Carlo (DSMC) method exist in academia, government and industry. By comparing implementations, deficiencies and merits of each can be discovered. This document reports comparisons between DSMC Analysis Code (DAC) and MONACO. DAC is NASA's standard DSMC production code and MONACO is a research DSMC code developed in academia. These codes have various differences; in particular, they employ distinct computational grid definitions. In this study, DAC and MONACO are compared by having each simulate a blunted flat plate wind tunnel test, using an identical volume mesh. Simulation expense and DSMC metrics are compared. In addition, flow results are compared with available laboratory data. Overall, this study revealed that both codes, excluding grid adaptation, performed similarly. For parallel processing, DAC was generally more efficient. As expected, code accuracy was mainly dependent on physical models employed.
NASA Astrophysics Data System (ADS)
Heberling, Brian
Computational fluid dynamics (CFD) simulations can offer a detailed view of the complex flow fields within an axial compressor and greatly aid the design process. However, the desire for quick turnaround times raises the question of how exact the model must be. At design conditions, steady CFD simulating an isolated blade row can accurately predict the performance of a rotor. However, as a compressor is throttled and the mass flow rate decreased, axial flow becomes weaker, making the capturing of unsteadiness, wakes, or other flow features more important. The unsteadiness of the tip clearance flow and the upstream blade wake can have a significant impact on a rotor. At off-design conditions, time-accurate simulations or modeling multiple blade rows can become necessary in order to obtain accurate performance predictions. Unsteady and multi-bladerow simulations are computationally expensive, especially when used in conjunction. It is important to understand which features are important to model in order to accurately capture a compressor's performance. CFD simulations of a transonic axial compressor throttling from the design point to stall are presented. The importance of capturing the unsteadiness of the rotor tip clearance flow versus capturing upstream blade-row interactions is examined through steady and unsteady, single- and multi-bladerow computations. It is shown that there are significant differences at near-stall conditions between the different types of simulations.
HIGH-FIDELITY SIMULATION-DRIVEN MODEL DEVELOPMENT FOR COARSE-GRAINED COMPUTATIONAL FLUID DYNAMICS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hanna, Botros N.; Dinh, Nam T.; Bolotnov, Igor A.
Nuclear reactor safety analysis requires identifying various credible accident scenarios and determining their consequences. For full-scale nuclear power plant system behavior, it is impossible to obtain sufficient experimental data for a broad range of risk-significant accident scenarios. In single-phase flow convective problems, Direct Numerical Simulation (DNS) and Large Eddy Simulation (LES) can provide high-fidelity results when physical data are unavailable. However, these methods are computationally expensive and cannot be afforded for the simulation of long transient scenarios in nuclear accidents, despite extraordinary advances in high-performance scientific computing over the past decades. The major issue is the inability to parallelize the transient computation, which makes the number of time steps required by high-fidelity methods unaffordable for long transients. In this work, we propose to apply a high-fidelity simulation-driven approach to model sub-grid scale (SGS) effects in Coarse Grained Computational Fluid Dynamics (CG-CFD). This approach aims to develop a statistical surrogate model instead of a deterministic SGS model. We chose to start with a turbulent natural convection case with volumetric heating in a horizontal fluid layer with a rigid, insulated lower boundary and an isothermal (cold) upper boundary. This scenario of unstable stratification is relevant to turbulent natural convection in a molten corium pool during a severe nuclear reactor accident, as well as to containment mixing and passive cooling. The presented approach demonstrates how to create a correction for the CG-CFD solution by modifying the energy balance equation. A global correction for the temperature equation proves to achieve a significant improvement in the prediction of the steady-state temperature distribution through the fluid layer.
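The following is a minimal sketch of the statistical-surrogate idea described above, not the authors' implementation: a regression model is trained to map coarse-grid features (extracted from filtered high-fidelity data) to a sub-grid correction that the CG-CFD solver would add to its energy equation. The feature names, the random-forest choice, and the synthetic target are all illustrative assumptions.

```python
# Sketch only: learn a sub-grid correction for a coarse-grid temperature equation
# from filtered high-fidelity data (synthetic stand-in data, assumed features).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

n = 2000
features = np.column_stack([
    rng.uniform(0.0, 1.0, n),      # normalized coarse-cell temperature (assumed)
    rng.uniform(-5.0, 5.0, n),     # coarse vertical temperature gradient (assumed)
    rng.uniform(0.01, 0.1, n),     # coarse-grid spacing (assumed)
])
# Synthetic stand-in for the "true" SGS correction these features would predict.
target = 0.3 * features[:, 1] * features[:, 2] + 0.05 * rng.normal(size=n)

surrogate = RandomForestRegressor(n_estimators=200, random_state=0)
surrogate.fit(features, target)

# At run time, the CG-CFD solver would query the surrogate each step and add the
# predicted correction as a source term in the coarse energy balance.
print(surrogate.predict(features[:5]))
```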
VCSim3: a VR simulator for cardiovascular interventions.
Korzeniowski, Przemyslaw; White, Ruth J; Bello, Fernando
2018-01-01
Effective and safe performance of cardiovascular interventions requires excellent catheter/guidewire manipulation skills. These skills are currently gained mainly through an apprenticeship on real patients, which may not be safe or cost-effective. Computer simulation offers an alternative for core skills training. However, replicating the physical behaviour of real instruments navigated through blood vessels is a challenging task. We have developed VCSim3, a virtual reality simulator for cardiovascular interventions. The simulator leverages an inextensible Cosserat rod to model virtual catheters and guidewires. Their mechanical properties were optimized with respect to their real counterparts scanned in a silicone phantom using X-ray CT imaging. The instruments are manipulated via a VSP haptic device. Supporting solutions, such as fluoroscopic visualization, contrast flow propagation, cardiac motion, balloon inflation, and stent deployment, enable performing a complete angioplasty procedure. We present detailed results on the simulation accuracy of the virtual instruments, along with their computational performance. In addition, the results of a preliminary face and content validation study conducted with a group of 17 interventional radiologists are given. VR simulation of cardiovascular procedures can contribute to surgical training and improve the educational experience without putting patients at risk, raising ethical issues, or requiring expensive animal or cadaver facilities. VCSim3 is still a prototype, yet the initial results indicate that it provides promising foundations for further development.
Stochastic hybrid systems for studying biochemical processes.
Singh, Abhyudai; Hespanha, João P
2010-11-13
Many protein and mRNA species occur at low molecular counts within cells, and hence are subject to large stochastic fluctuations in copy numbers over time. Development of computationally tractable frameworks for modelling stochastic fluctuations in population counts is essential to understand how noise at the cellular level affects biological function and phenotype. We show that stochastic hybrid systems (SHSs) provide a convenient framework for modelling the time evolution of population counts of different chemical species involved in a set of biochemical reactions. We illustrate recently developed techniques that allow fast computations of the statistical moments of the population count, without having to run computationally expensive Monte Carlo simulations of the biochemical reactions. Finally, we review different examples from the literature that illustrate the benefits of using SHSs for modelling biochemical processes.
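As a minimal illustration of the moment-based idea described above, the sketch below integrates the exact moment equations of the simplest possible reaction set, a birth-death process, instead of running Monte Carlo simulations. The SHS machinery in the paper handles general reaction networks and moment closures; the rate constants here are arbitrary illustrative values.

```python
# Sketch only: exact mean/second-moment dynamics of a birth-death process
# (production at rate k, degradation at rate g per molecule).
import numpy as np
from scipy.integrate import solve_ivp

k, g = 10.0, 0.5   # molecules/time and 1/time (assumed values)

def moment_odes(t, m):
    mean, second = m
    dmean = k - g * mean
    dsecond = k + (2.0 * k + g) * mean - 2.0 * g * second
    return [dmean, dsecond]

sol = solve_ivp(moment_odes, (0.0, 30.0), [0.0, 0.0])
mean, second = sol.y[:, -1]
variance = second - mean**2
print(f"steady-state mean ~ {mean:.2f}, variance ~ {variance:.2f}")  # both ~ k/g (Poisson)
```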
RTSPM: real-time Linux control software for scanning probe microscopy.
Chandrasekhar, V; Mehta, M M
2013-01-01
Real time computer control is an essential feature of scanning probe microscopes, which have become important tools for the characterization and investigation of nanometer scale samples. Most commercial (and some open-source) scanning probe data acquisition software uses digital signal processors to handle the real time data processing and control, which adds to the expense and complexity of the control software. We describe here scan control software that uses a single computer and a data acquisition card to acquire scan data. The computer runs an open-source real time Linux kernel, which permits fast acquisition and control while maintaining a responsive graphical user interface. Images from a simulated tuning-fork based microscope as well as a standard topographical sample are also presented, showing some of the capabilities of the software.
Reduced Order Model Implementation in the Risk-Informed Safety Margin Characterization Toolkit
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mandelli, Diego; Smith, Curtis L.; Alfonsi, Andrea
2015-09-01
The RISMC project aims to develop new advanced simulation-based tools to perform Probabilistic Risk Analysis (PRA) for the existing fleet of U.S. nuclear power plants (NPPs). These tools numerically model not only the thermo-hydraulic behavior of the reactor primary and secondary systems but also the temporal evolution of external events and component/system ageing. Thus, this is not only a multi-physics problem but also a multi-scale problem (both spatial, µm-mm-m, and temporal, ms-s-minutes-years). As part of the RISMC PRA approach, a large number of computationally expensive simulation runs are required. An important aspect is that, even though computational power is regularly growing, the overall computational cost of a RISMC analysis may not be viable for certain cases. A solution that is being evaluated is the use of reduced order modeling techniques. During FY2015, we investigated and applied reduced order modeling techniques to decrease the RISMC analysis computational cost by decreasing the number of simulation runs to perform and by employing surrogate models instead of the actual simulation codes. This report focuses on the use of reduced order modeling techniques that can be applied to any RISMC analysis to generate, analyze and visualize data. In particular, we focus on surrogate models that approximate the simulation results but in a much faster time (µs instead of hours/days). We apply reduced order and surrogate modeling techniques to several RISMC types of analyses using RAVEN and RELAP-7 and show the advantages that can be gained.
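A minimal sketch of the surrogate idea, assumed rather than taken from the RAVEN/RELAP-7 implementation: fit a radial basis function interpolant to a small number of expensive code runs, then evaluate it cheaply for large batches of query points. The stand-in "expensive_simulation" function and input ranges are hypothetical.

```python
# Sketch only: RBF surrogate trained on a handful of expensive simulation runs.
import numpy as np
from scipy.interpolate import RBFInterpolator

def expensive_simulation(x):
    # Hypothetical stand-in for a full code run; x could be (power level, failure time)
    return np.sin(3.0 * x[:, 0]) * np.exp(-x[:, 1])

rng = np.random.default_rng(1)
train_x = rng.uniform(0.0, 1.0, size=(50, 2))      # 50 affordable code runs
train_y = expensive_simulation(train_x)

surrogate = RBFInterpolator(train_x, train_y, kernel="thin_plate_spline")

query = rng.uniform(0.0, 1.0, size=(100000, 2))     # cheap to evaluate in bulk
predictions = surrogate(query)
print(predictions[:5])
```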
Study on photon transport problem based on the platform of molecular optical simulation environment.
Peng, Kuan; Gao, Xinbo; Liang, Jimin; Qu, Xiaochao; Ren, Nunu; Chen, Xueli; Ma, Bin; Tian, Jie
2010-01-01
As an important molecular imaging modality, optical imaging has attracted increasing attention in recent years. Since physical experiments are usually complicated and expensive, research methods based on simulation platforms have received extensive attention. We developed a simulation platform named Molecular Optical Simulation Environment (MOSE) to simulate photon transport in both biological tissues and free space for optical imaging based on noncontact measurement. In this platform, the Monte Carlo (MC) method and the hybrid radiosity-radiance theorem are used to simulate photon transport in biological tissues and free space, respectively, so both contact and noncontact measurement modes of optical imaging can be simulated properly. In addition, a parallelization strategy for the MC method is employed to improve the computational efficiency. In this paper, we study photon transport problems in both biological tissues and free space using MOSE. The results are compared with Tracepro, the simplified spherical harmonics method (SP(n)), and physical measurement to verify the performance of our method in terms of both accuracy and efficiency.
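For readers unfamiliar with the MC kernel such platforms rely on, the sketch below shows the two sampling steps at its core: an exponentially distributed free path and a Henyey-Greenstein scattering angle. The optical properties are illustrative values, not MOSE's, and a full code would also track photon position, direction rotation, and boundary crossings.

```python
# Sketch only: the two basic sampling kernels of tissue Monte Carlo photon transport.
import numpy as np

rng = np.random.default_rng(0)
mu_a, mu_s, g = 0.1, 10.0, 0.9        # absorption, scattering (1/mm), anisotropy (assumed)
mu_t = mu_a + mu_s                    # total interaction coefficient

def sample_free_path(size):
    """Distance to the next interaction, s = -ln(xi) / mu_t."""
    return -np.log(rng.random(size)) / mu_t

def sample_hg_cosine(size):
    """Cosine of the scattering angle from the Henyey-Greenstein phase function."""
    xi = rng.random(size)
    tmp = (1 - g * g) / (1 - g + 2 * g * xi)
    return (1 + g * g - tmp * tmp) / (2 * g)

steps = sample_free_path(100000)
cosines = sample_hg_cosine(100000)
print(f"mean free path ~ {steps.mean():.3f} mm (expected {1/mu_t:.3f})")
print(f"mean scattering cosine ~ {cosines.mean():.3f} (expected g = {g})")
```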
Lithographic image simulation for the 21st century with 19th-century tools
NASA Astrophysics Data System (ADS)
Gordon, Ronald L.; Rosenbluth, Alan E.
2004-01-01
Simulation of lithographic processes in semiconductor manufacturing has gone from a crude learning tool 20 years ago to a critical part of yield enhancement strategy today. Although many disparate models, championed by equally disparate communities, exist to describe various photoresist development phenomena, these communities would all agree that the one piece of the simulation picture that can, and must, be computed accurately is the image intensity in the photoresist. The imaging of a photomask onto a thin-film stack is one of the only phenomena in the lithographic process that is described fully by well-known, definitive physical laws. Although many approximations are made in the derivation of the Fourier transform relations between the mask object, the pupil, and the image, these and their impacts are well-understood and need little further investigation. The imaging process in optical lithography is modeled as a partially-coherent, Kohler illumination system. As Hopkins has shown, we can separate the computation into 2 pieces: one that takes information about the illumination source, the projection lens pupil, the resist stack, and the mask size or pitch, and the other that only needs the details of the mask structure. As the latter piece of the calculation can be expressed as a fast Fourier transform, it is the first piece that dominates. This piece involves computation of a potentially large number of numbers called Transmission Cross-Coefficients (TCCs), which are correlations of the pupil function weighted with the illumination intensity distribution. The advantage of performing the image calculations this way is that the computation of these TCCs represents an up-front cost, not to be repeated if one is only interested in changing the mask features, which is the case in Model-Based Optical Proximity Correction (MBOPC). The down side, however, is that the number of these expensive double integrals that must be performed increases as the square of the mask unit cell area; this number can cause even the fastest computers to balk if one needs to study medium- or long-range effects. One can reduce this computational burden by approximating with a smaller area, but accuracy is usually a concern, especially when building a model that will purportedly represent a manufacturing process. This work will review the current methodologies used to simulate the intensity distribution in air above the resist and address the above problems. More to the point, a methodology has been developed to eliminate the expensive numerical integrations in the TCC calculations, as the resulting integrals in many cases of interest can be either evaluated analytically, or replaced by analytical functions accurate to within machine precision. With the burden of computing these numbers lightened, more accurate representations of the image field can be realized, and better overall models are then possible.
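For reference, the Hopkins decomposition referred to above has the following schematic form; the notation (J for the effective source, P for the pupil, M-tilde for the mask spectrum) is common usage and may differ from the authors' symbols.

```latex
% Schematic Hopkins partially coherent imaging sum (notation assumed, not the authors')
\begin{align}
I(\mathbf{x}) &= \iint \mathrm{TCC}(\mathbf{f},\mathbf{f}')\,
                 \tilde{M}(\mathbf{f})\,\tilde{M}^{*}(\mathbf{f}')\,
                 e^{\,2\pi i(\mathbf{f}-\mathbf{f}')\cdot\mathbf{x}}
                 \,d\mathbf{f}\,d\mathbf{f}' , \\
\mathrm{TCC}(\mathbf{f},\mathbf{f}') &= \iint J(\mathbf{g})\,
                 P(\mathbf{g}+\mathbf{f})\,P^{*}(\mathbf{g}+\mathbf{f}')\,d\mathbf{g} .
\end{align}
```

The second integral is the mask-independent piece whose evaluation the paper seeks to make analytic; only the first sum needs to be recomputed when the mask changes.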
Current CFD Practices in Launch Vehicle Applications
NASA Technical Reports Server (NTRS)
Kwak, Dochan; Kiris, Cetin
2012-01-01
The quest for sustained space exploration will require the development of advanced launch vehicles and efficient and reliable operating systems. Development of launch vehicles via a test-fail-fix approach is very expensive and time consuming. For decision making, modeling and simulation (M&S) has played an increasingly important role in many aspects of launch vehicle development. It is therefore essential to develop and maintain the most advanced M&S capability. More specifically, computational fluid dynamics (CFD) has been providing critical data for developing launch vehicles, complementing expensive testing. During the past three decades CFD capability has increased remarkably along with advances in computer hardware and computing technology. However, most of the fundamental CFD capability in launch vehicle applications is derived from past advances. Specific gaps in the solution procedures are being filled primarily through "piggy-backed" efforts on various projects while solving today's problems. Therefore, some of the advanced capabilities are not readily available for various new tasks, and mission-support problems are often analyzed using ad hoc approaches. The current report is intended to present our view on the state-of-the-art (SOA) in CFD and its shortcomings in support of space transport vehicle development. Best practices in solving current issues will be discussed using examples from ascending launch vehicles. Some of the pacing issues will be discussed in conjunction with these examples.
A universal preconditioner for simulating condensed phase materials.
Packwood, David; Kermode, James; Mones, Letif; Bernstein, Noam; Woolley, John; Gould, Nicholas; Ortner, Christoph; Csányi, Gábor
2016-04-28
We introduce a universal sparse preconditioner that accelerates geometry optimisation and saddle point search tasks that are common in the atomic scale simulation of materials. Our preconditioner is based on the neighbourhood structure and we demonstrate the gain in computational efficiency in a wide range of materials that include metals, insulators, and molecular solids. The simple structure of the preconditioner means that the gains can be realised in practice not only when using expensive electronic structure models but also for fast empirical potentials. Even for relatively small systems of a few hundred atoms, we observe speedups of a factor of two or more, and the gain grows with system size. An open source Python implementation within the Atomic Simulation Environment is available, offering interfaces to a wide range of atomistic codes.
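The sketch below illustrates one plausible neighbourhood-based construction in the spirit of the description above: a Laplacian-like sparse matrix whose off-diagonal weights decay with interatomic distance. The exact weighting formula, the cutoff, and the stabilizing diagonal shift are assumptions for illustration, not the authors' published form.

```python
# Sketch only: a Laplacian-like sparse preconditioner built from a neighbour list.
import numpy as np
from scipy.sparse import lil_matrix
from scipy.spatial import cKDTree

def neighbour_preconditioner(positions, r_cut=4.0, A=3.0, mu=1.0):
    n = len(positions)
    tree = cKDTree(positions)
    P = lil_matrix((n, n))
    for i, j in tree.query_pairs(r_cut):
        r = np.linalg.norm(positions[i] - positions[j])
        w = -mu * np.exp(-A * (r / r_cut - 1.0))   # stronger coupling for closer pairs
        P[i, j] = P[j, i] = w
        P[i, i] -= w
        P[j, j] -= w
    P.setdiag(P.diagonal() + 0.1 * mu)             # small shift keeps P positive definite
    return P.tocsr()

positions = np.random.default_rng(0).uniform(0.0, 10.0, size=(200, 3))
P = neighbour_preconditioner(positions)
print(P.shape, P.nnz)                              # sparse: cheap to build and to apply
```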
Massively parallel simulations of relativistic fluid dynamics on graphics processing units with CUDA
NASA Astrophysics Data System (ADS)
Bazow, Dennis; Heinz, Ulrich; Strickland, Michael
2018-04-01
Relativistic fluid dynamics is a major component in dynamical simulations of the quark-gluon plasma created in relativistic heavy-ion collisions. Simulations of the full three-dimensional dissipative dynamics of the quark-gluon plasma with fluctuating initial conditions are computationally expensive and typically require some degree of parallelization. In this paper, we present a GPU implementation of the Kurganov-Tadmor algorithm which solves the 3 + 1d relativistic viscous hydrodynamics equations including the effects of both bulk and shear viscosities. We demonstrate that the resulting CUDA-based GPU code is approximately two orders of magnitude faster than the corresponding serial implementation of the Kurganov-Tadmor algorithm. We validate the code using (semi-)analytic tests such as the relativistic shock-tube and Gubser flow.
Graphics Processing Unit Acceleration of Gyrokinetic Turbulence Simulations
NASA Astrophysics Data System (ADS)
Hause, Benjamin; Parker, Scott
2012-10-01
We find a substantial increase in on-node performance using Graphics Processing Unit (GPU) acceleration in gyrokinetic delta-f particle-in-cell simulation. Optimization is performed on a two-dimensional slab gyrokinetic particle simulation using the Portland Group Fortran compiler with the GPU accelerator compiler directives. We have implemented the GPU acceleration on a Core i7 gaming PC with an NVIDIA GTX 580 GPU. We find comparable, or better, acceleration relative to the NERSC DIRAC cluster with the NVIDIA Tesla C2050 computing processor. The Tesla C2050 is about 2.6 times more expensive than the GTX 580 gaming GPU. Optimization strategies and comparisons between DIRAC and the gaming PC will be presented. We will also discuss progress on optimizing the comprehensive three-dimensional general geometry GEM code.
Zhao, Dong; Sakoda, Hideyuki; Sawyer, W Gregory; Banks, Scott A; Fregly, Benjamin J
2008-02-01
Wear of ultrahigh molecular weight polyethylene remains a primary factor limiting the longevity of total knee replacements (TKRs). However, wear testing on a simulator machine is time consuming and expensive, making it impractical for iterative design purposes. The objectives of this paper were first, to evaluate whether a computational model using a wear factor consistent with the TKR material pair can predict accurate TKR damage measured in a simulator machine, and second, to investigate how choice of surface evolution method (fixed or variable step) and material model (linear or nonlinear) affect the prediction. An iterative computational damage model was constructed for a commercial knee implant in an AMTI simulator machine. The damage model combined a dynamic contact model with a surface evolution model to predict how wear plus creep progressively alter tibial insert geometry over multiple simulations. The computational framework was validated by predicting wear in a cylinder-on-plate system for which an analytical solution was derived. The implant damage model was evaluated for 5 million cycles of simulated gait using damage measurements made on the same implant in an AMTI machine. Using a pin-on-plate wear factor for the same material pair as the implant, the model predicted tibial insert wear volume to within 2% error and damage depths and areas to within 18% and 10% error, respectively. Choice of material model had little influence, while inclusion of surface evolution affected damage depth and area but not wear volume predictions. Surface evolution method was important only during the initial cycles, where variable step was needed to capture rapid geometry changes due to the creep. Overall, our results indicate that accurate TKR damage predictions can be made with a computational model using a constant wear factor obtained from pin-on-plate tests for the same material pair, and furthermore, that surface evolution method matters only during the initial "break in" period of the simulation.
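The damage update implied above can be sketched as an Archard-type wear law (depth proportional to wear factor, contact pressure, and sliding distance) plus a simple creep term, accumulated over blocks of gait cycles. All material constants below are assumed illustrative values, not the paper's calibrated ones.

```python
# Sketch only: per-block nodal damage (wear + creep) accumulation over gait cycles.
import numpy as np

k_wear = 1.0e-10     # mm^3/(N mm), i.e. ~1e-7 mm^3/(N m); assumed pin-on-plate value
creep_rate = 1.0e-10 # mm per (MPa * cycle); assumed

def update_surface(depth, pressure, slide, cycles):
    """Advance nodal damage depth over a block of gait cycles."""
    wear = k_wear * pressure * slide * cycles        # Archard wear depth per node
    creep = creep_rate * pressure * cycles           # simple linear creep term
    return depth + wear + creep

# Hypothetical nodal contact results from one simulated gait cycle
pressure = np.array([12.0, 8.5, 3.0])   # MPa
slide = np.array([15.0, 10.0, 5.0])     # mm of sliding per cycle
depth = np.zeros(3)

for _ in range(10):                      # 10 blocks of 0.5 million cycles = 5 Mc total
    depth = update_surface(depth, pressure, slide, 500_000)
print(depth)                             # cumulative damage depth per node (mm)
```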
Regression with Small Data Sets: A Case Study using Code Surrogates in Additive Manufacturing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kamath, C.; Fan, Y. J.
There has been an increasing interest in recent years in the mining of massive data sets whose sizes are measured in terabytes. While it is easy to collect such large data sets in some application domains, there are others where collecting even a single data point can be very expensive, so the resulting data sets have only tens or hundreds of samples. For example, when complex computer simulations are used to understand a scientific phenomenon, we want to run the simulation for many different values of the input parameters and analyze the resulting output. The data set relating the simulation inputs and outputs is typically quite small, especially when each run of the simulation is expensive. However, regression techniques can still be used on such data sets to build an inexpensive "surrogate" that can provide an approximate output for a given set of inputs. A good surrogate can be very useful in sensitivity analysis, uncertainty analysis, and in designing experiments. In this paper, we compare different regression techniques to determine how well they predict melt-pool characteristics in the problem domain of additive manufacturing. Our analysis indicates that some of the commonly used regression methods perform quite well even on small data sets.
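The kind of comparison described can be sketched as cross-validated scoring of several off-the-shelf regressors on a small data set. The melt-pool data are not reproduced here, so a synthetic stand-in (assumed inputs such as laser power, speed, spot size) is used; the model list is illustrative.

```python
# Sketch only: compare regression surrogates on a small synthetic data set.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
X = rng.uniform(size=(60, 3))                            # e.g. power, speed, spot size
y = 2 * X[:, 0] - X[:, 1] ** 2 + 0.1 * rng.normal(size=60)   # melt-pool width proxy

models = {
    "linear": LinearRegression(),
    "random forest": RandomForestRegressor(n_estimators=100, random_state=0),
    "Gaussian process": GaussianProcessRegressor(),
    "k-nearest neighbours": KNeighborsRegressor(n_neighbors=5),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(f"{name:>22s}: R^2 = {scores.mean():.2f} +/- {scores.std():.2f}")
```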
NASA Astrophysics Data System (ADS)
Peter, Daniel; Videau, Brice; Pouget, Kevin; Komatitsch, Dimitri
2015-04-01
Improving the resolution of tomographic images is crucial to answer important questions on the nature of Earth's subsurface structure and internal processes. Seismic tomography is the most prominent approach where seismic signals from ground-motion records are used to infer physical properties of internal structures such as compressional- and shear-wave speeds, anisotropy and attenuation. Recent advances in regional- and global-scale seismic inversions move towards full-waveform inversions which require accurate simulations of seismic wave propagation in complex 3D media, providing access to the full 3D seismic wavefields. However, these numerical simulations are computationally very expensive and need high-performance computing (HPC) facilities for further improving the current state of knowledge. During recent years, many-core architectures such as graphics processing units (GPUs) have been added to available large HPC systems. Such GPU-accelerated computing together with advances in multi-core central processing units (CPUs) can greatly accelerate scientific applications. There are mainly two possible choices of language support for GPU cards, the CUDA programming environment and OpenCL language standard. CUDA software development targets NVIDIA graphic cards while OpenCL was adopted mainly by AMD graphic cards. In order to employ such hardware accelerators for seismic wave propagation simulations, we incorporated a code generation tool BOAST into an existing spectral-element code package SPECFEM3D_GLOBE. This allows us to use meta-programming of computational kernels and generate optimized source code for both CUDA and OpenCL languages, running simulations on either CUDA or OpenCL hardware accelerators. We show here applications of forward and adjoint seismic wave propagation on CUDA/OpenCL GPUs, validating results and comparing performances for different simulations and hardware usages.
About Distributed Simulation-based Optimization of Forming Processes using a Grid Architecture
NASA Astrophysics Data System (ADS)
Grauer, Manfred; Barth, Thomas
2004-06-01
Permanently increasing complexity of products and their manufacturing processes combined with a shorter "time-to-market" leads to more and more use of simulation and optimization software systems for product design. Finding a "good" design of a product implies the solution of computationally expensive optimization problems based on the results of simulation. Due to the computational load caused by the solution of these problems, the requirements on the Information&Telecommunication (IT) infrastructure of an enterprise or research facility are shifting from stand-alone resources towards the integration of software and hardware resources in a distributed environment for high-performance computing. Resources can either comprise software systems, hardware systems, or communication networks. An appropriate IT-infrastructure must provide the means to integrate all these resources and enable their use even across a network to cope with requirements from geographically distributed scenarios, e.g. in computational engineering and/or collaborative engineering. Integrating expert's knowledge into the optimization process is inevitable in order to reduce the complexity caused by the number of design variables and the high dimensionality of the design space. Hence, utilization of knowledge-based systems must be supported by providing data management facilities as a basis for knowledge extraction from product data. In this paper, the focus is put on a distributed problem solving environment (PSE) capable of providing access to a variety of necessary resources and services. A distributed approach integrating simulation and optimization on a network of workstations and cluster systems is presented. For geometry generation the CAD-system CATIA is used which is coupled with the FEM-simulation system INDEED for simulation of sheet-metal forming processes and the problem solving environment OpTiX for distributed optimization.
Lemkul, Justin A; Roux, Benoît; van der Spoel, David; MacKerell, Alexander D
2015-07-15
Explicit treatment of electronic polarization in empirical force fields used for molecular dynamics simulations represents an important advancement in simulation methodology. A straightforward means of treating electronic polarization in these simulations is the inclusion of Drude oscillators, which are auxiliary, charge-carrying particles bonded to the cores of atoms in the system. The additional degrees of freedom make these simulations more computationally expensive relative to simulations using traditional fixed-charge (additive) force fields. Thus, efficient tools are needed for conducting these simulations. Here, we present the implementation of highly scalable algorithms in the GROMACS simulation package that allow for the simulation of polarizable systems using extended Lagrangian dynamics with a dual Nosé-Hoover thermostat as well as simulations using a full self-consistent field treatment of polarization. The performance of systems of varying size is evaluated, showing that the present code parallelizes efficiently and is the fastest implementation of the extended Lagrangian methods currently available for simulations using the Drude polarizable force field. © 2015 Wiley Periodicals, Inc.
Sensitivity Analysis for Coupled Aero-structural Systems
NASA Technical Reports Server (NTRS)
Giunta, Anthony A.
1999-01-01
A novel method has been developed for calculating gradients of aerodynamic force and moment coefficients for an aeroelastic aircraft model. This method uses the Global Sensitivity Equations (GSE) to account for the aero-structural coupling, and a reduced-order modal analysis approach to condense the coupling bandwidth between the aerodynamic and structural models. Parallel computing is applied to reduce the computational expense of the numerous high fidelity aerodynamic analyses needed for the coupled aero-structural system. Good agreement is obtained between aerodynamic force and moment gradients computed with the GSE/modal analysis approach and the same quantities computed using brute-force, computationally expensive, finite difference approximations. A comparison between the computational expense of the GSE/modal analysis method and a pure finite difference approach is presented. These results show that the GSE/modal analysis approach is the more computationally efficient technique if sensitivity analysis is to be performed for two or more aircraft design parameters.
The design of a light aircraft automated dropsonde launcher
NASA Astrophysics Data System (ADS)
Pasken, Gregory R.
The use of the National Center for Atmospheric Research's dropsonde system is currently limited to large NASA, NSF and NOAA operated research aircraft, which are expensive to fly and are over-subscribed. Designing a new dropsonde system for a smaller, less expensive to operate light aircraft will make the dropsonde system available to a much wider research community. To test this concept, a dropsonde launch system designed to fit in the cargo door of a twin engine Piper Seminole is developed and tested. Although the launch system for the light aircraft dropsonde launcher has gone through many designs, a prototype is built and tested from the final design using Tetra for the computational fluid dynamics and stress testing, as Tetra has material properties for solids as well as fluids. The design is further tested in the wind tunnel. These tests show that the new design is a viable alternative for light aircraft, thus allowing dropsondes to be more widely used. The ABAQUS and SC Tetra simulation results, along with the wind tunnel results for the final design, are covered and discussed. The settings used for the ABAQUS and SC Tetra simulations are described in detail. ABAQUS simulations are conducted to perform stress testing and SC Tetra is used for CFD simulations. The SC Tetra simulations provide a more comprehensive picture of the design, as SC Tetra is able to perform the stress testing as well as pressure testing, allowing for more accurate results. The limitations of ABAQUS simulations require numerous assumptions for loading that may or may not be realistic.
NASA Astrophysics Data System (ADS)
Hoang, Tuan L.; Nazarov, Roman; Kang, Changwoo; Fan, Jiangyuan
2018-07-01
Under the multi-ion irradiation conditions present in accelerated material-testing facilities or fission/fusion nuclear reactors, the combined effects of atomic displacements and radiation products may induce complex synergies in structural materials. However, limited access to multi-ion irradiation facilities and the lack of computational models capable of simulating the evolution of complex defects and their synergies make it difficult to understand the actual physical processes taking place in materials under these extreme conditions. In this paper, we propose the application of pulsed single/dual-beam irradiation as a replacement for expensive steady triple-beam irradiation to study radiation damage in materials under multi-ion irradiation.
Ferruleless coupled-cavity traveling-wave tube cold-test characteristics simulated with micro-SOS
NASA Technical Reports Server (NTRS)
Schroeder, Dana L.; Wilson, Jeffrey D.
1993-01-01
The three-dimensional, electromagnetic circuit analysis code, Micro-SOS, can be used to reduce expensive and time consuming experimental 'cold-testing' of traveling-wave tube (TWT) circuits. The frequency-phase dispersion and beam interaction impedance characteristics of a ferruleless coupled-cavity traveling-wave tube slow-wave circuit were simulated using the code. Computer results agree closely with experimental data. Variations in the cavity geometry dimensions of period length and gap-to-period ratio were modeled. These variations can be used in velocity taper designs to reduce the radiofrequency (RF) phase velocity in synchronism with the decelerating electron beam. Such circuit designs can result in enhanced TWT power and efficiency.
Tao, Ran; Zeng, Donglin; Lin, Dan-Yu
2017-01-01
In modern epidemiological and clinical studies, the covariates of interest may involve genome sequencing, biomarker assay, or medical imaging and thus are prohibitively expensive to measure on a large number of subjects. A cost-effective solution is the two-phase design, under which the outcome and inexpensive covariates are observed for all subjects during the first phase and that information is used to select subjects for measurements of expensive covariates during the second phase. For example, subjects with extreme values of quantitative traits were selected for whole-exome sequencing in the National Heart, Lung, and Blood Institute (NHLBI) Exome Sequencing Project (ESP). Herein, we consider general two-phase designs, where the outcome can be continuous or discrete, and inexpensive covariates can be continuous and correlated with expensive covariates. We propose a semiparametric approach to regression analysis by approximating the conditional density functions of expensive covariates given inexpensive covariates with B-spline sieves. We devise a computationally efficient and numerically stable EM-algorithm to maximize the sieve likelihood. In addition, we establish the consistency, asymptotic normality, and asymptotic efficiency of the estimators. Furthermore, we demonstrate the superiority of the proposed methods over existing ones through extensive simulation studies. Finally, we present applications to the aforementioned NHLBI ESP.
Parameterizing correlations between hydrometeor species in mixed-phase Arctic clouds
DOE Office of Scientific and Technical Information (OSTI.GOV)
Larson, Vincent E.; Nielsen, Brandon J.; Fan, Jiwen
2011-08-16
Mixed-phase Arctic clouds, like other clouds, contain small-scale variability in hydrometeor fields, such as cloud water or snow mixing ratio. This variability may be worth parameterizing in coarse-resolution numerical models. In particular, for modeling processes such as accretion and aggregation, it would be useful to parameterize subgrid correlations among hydrometeor species. However, one difficulty is that there exist many hydrometeor species and many microphysical processes, leading to complexity and computational expense. Existing lower and upper bounds (inequalities) on linear correlation coefficients provide useful guidance, but these bounds are too loose to serve directly as a method to predict subgrid correlations. Therefore, this paper proposes an alternative method that is based on a blend of theory and empiricism. The method begins with the spherical parameterization framework of Pinheiro and Bates (1996), which expresses the correlation matrix in terms of its Cholesky factorization. The values of the elements of the Cholesky matrix are parameterized here using a cosine row-wise formula that is inspired by the aforementioned bounds on correlations. The method has three advantages: 1) the computational expense is tolerable; 2) the correlations are, by construction, guaranteed to be consistent with each other; and 3) the methodology is fairly general and hence may be applicable to other problems. The method is tested non-interactively using simulations of three Arctic mixed-phase cloud cases from two different field experiments: the Indirect and Semi-Direct Aerosol Campaign (ISDAC) and the Mixed-Phase Arctic Cloud Experiment (M-PACE). Benchmark simulations are performed using a large-eddy simulation (LES) model that includes a bin microphysical scheme. The correlations estimated by the new method satisfactorily approximate the correlations produced by the LES.
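The key property of the spherical (Pinheiro-Bates) parameterization used above is that any set of angles yields a Cholesky factor with unit-norm rows, so the resulting matrix is automatically a valid correlation matrix. The sketch below demonstrates that property with an arbitrary angle choice; it does not reproduce the paper's cosine row-wise formula.

```python
# Sketch only: build a guaranteed-valid correlation matrix from Cholesky-factor angles.
import numpy as np

def correlation_from_angles(theta):
    """theta: (n, n) array; only the strict lower triangle (angles in (0, pi)) is used."""
    n = theta.shape[0]
    L = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1):
            prod_sines = np.prod(np.sin(theta[i, :j])) if j > 0 else 1.0
            if j < i:
                L[i, j] = np.cos(theta[i, j]) * prod_sines   # off-diagonal entry
            else:
                L[i, i] = prod_sines                          # diagonal closes the row norm
    return L @ L.T

n = 4                                   # e.g. cloud water, rain, ice, snow mixing ratios
theta = np.full((n, n), np.pi / 3)      # illustrative angles only
C = correlation_from_angles(theta)
print(np.round(C, 3))
print("positive semidefinite:", bool(np.all(np.linalg.eigvalsh(C) >= -1e-12)))
```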
Energy Efficiency Challenges of 5G Small Cell Networks.
Ge, Xiaohu; Yang, Jing; Gharavi, Hamid; Sun, Yang
2017-05-01
The deployment of a large number of small cells poses new challenges to energy efficiency, which has often been ignored in fifth generation (5G) cellular networks. While massive multiple-input multiple-output (MIMO) will reduce the transmission power at the expense of higher computational cost, the question remains as to whether computation or transmission power is more important for the energy efficiency of 5G small cell networks. Thus, the main objective of this paper is to investigate the computation power based on the Landauer principle. Simulation results reveal that more than 50% of the energy is consumed by the computation power at 5G small cell base stations (BSs). Moreover, the computation power of a 5G small cell BS can approach 800 watts when massive MIMO (e.g., 128 antennas) is deployed to transmit high-volume traffic. This clearly indicates that computation power optimization can play a major role in the energy efficiency of small cell networks.
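For context, a back-of-the-envelope Landauer calculation (illustrative numbers only, not the paper's model) gives the thermodynamic floor for energy per bit operation and the corresponding ideal throughput of an 800 W computation budget; real baseband hardware needs many orders of magnitude more energy per operation, which is why computation power dominates.

```python
# Sketch only: Landauer-limit arithmetic for the 800 W figure quoted above.
import math

k_B = 1.380649e-23                           # J/K
T = 300.0                                    # K (room temperature)
E_landauer = k_B * T * math.log(2)           # ~2.9e-21 J per erased bit
print(f"Landauer limit: {E_landauer:.2e} J/bit")

P_compute = 800.0                            # W, computation power of a loaded small-cell BS
print(f"Ideal bit operations per second at the limit: {P_compute / E_landauer:.2e}")
```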
Ground Contact Modeling for the Morpheus Test Vehicle Simulation
NASA Technical Reports Server (NTRS)
Cordova, Luis
2014-01-01
The Morpheus vertical test vehicle is an autonomous robotic lander being developed at Johnson Space Center (JSC) to test hazard detection technology. Because the initial ground contact simulation model was not very realistic, it was decided to improve the model without making it too computationally expensive. The first development cycle added capability to define vehicle attachment points (AP) and to keep track of their states in the lander reference frame (LFRAME). These states are used with a spring damper model to compute an AP contact force. The lateral force is then overwritten, if necessary, by the Coulomb static or kinetic friction force. The second development cycle added capability to use the PolySurface class as the contact surface. The class can load CAD data in STL (Stereo Lithography) format, and use the data to compute line of sight (LOS) intercepts. A polygon frame (PFRAME) is computed from the facet intercept normal and used to convert the AP state to PFRAME. Three flat plane tests validate the transitions from kinetic to static, static to kinetic, and vertical impact. The hazardous terrain test will be used to test for visual reasonableness. The improved model is numerically inexpensive, robust, and produces results that are reasonable.
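A minimal sketch of the contact model described, not the flight code: a spring-damper normal force at each attachment point, with the lateral force limited by a Coulomb static/kinetic friction cone. The stiffness, damping, friction coefficients, and the stick/slip speed threshold are assumed values.

```python
# Sketch only: per-attachment-point ground contact force in a local surface frame.
import numpy as np

K, C = 5.0e4, 2.0e3          # spring stiffness (N/m) and damping (N s/m), assumed
MU_S, MU_K = 0.8, 0.6        # static and kinetic friction coefficients, assumed
V_STICK = 1.0e-3             # below this slip speed (m/s), treat the point as sticking

def contact_force(penetration, penetration_rate, applied_lateral, slip_velocity):
    """Force on one attachment point (x, y lateral; z normal). penetration_rate > 0
    means the point is sinking further into the surface."""
    if penetration <= 0.0:
        return np.zeros(3)
    normal = max(K * penetration + C * penetration_rate, 0.0)
    speed = np.linalg.norm(slip_velocity)
    if speed < V_STICK:
        # sticking: keep the applied lateral force unless it exceeds the friction cone
        limit = MU_S * normal
        mag = np.linalg.norm(applied_lateral)
        lateral = applied_lateral if mag <= limit else applied_lateral * (limit / mag)
    else:
        # sliding: kinetic friction opposes the slip direction
        lateral = -MU_K * normal * slip_velocity / speed
    return np.array([lateral[0], lateral[1], normal])

# Example: 2 mm penetration, sinking at 0.05 m/s, 30 N applied laterally, no slip
print(contact_force(0.002, 0.05, np.array([30.0, 0.0]), np.array([0.0, 0.0])))
```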
47 CFR 69.156 - Marketing expenses.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 47 Telecommunication 3 2014-10-01 2014-10-01 false Marketing expenses. 69.156 Section 69.156... Computation of Charges for Price Cap Local Exchange Carriers § 69.156 Marketing expenses. Effective July 1, 2000, the marketing expenses formerly allocated to the common line and traffic sensitive baskets, and...
47 CFR 69.156 - Marketing expenses.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 47 Telecommunication 3 2012-10-01 2012-10-01 false Marketing expenses. 69.156 Section 69.156... Computation of Charges for Price Cap Local Exchange Carriers § 69.156 Marketing expenses. Effective July 1, 2000, the marketing expenses formerly allocated to the common line and traffic sensitive baskets, and...
47 CFR 69.156 - Marketing expenses.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 47 Telecommunication 3 2011-10-01 2011-10-01 false Marketing expenses. 69.156 Section 69.156... Computation of Charges for Price Cap Local Exchange Carriers § 69.156 Marketing expenses. Effective July 1, 2000, the marketing expenses formerly allocated to the common line and traffic sensitive baskets, and...
47 CFR 69.156 - Marketing expenses.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 47 Telecommunication 3 2010-10-01 2010-10-01 false Marketing expenses. 69.156 Section 69.156... Computation of Charges for Price Cap Local Exchange Carriers § 69.156 Marketing expenses. Effective July 1, 2000, the marketing expenses formerly allocated to the common line and traffic sensitive baskets, and...
47 CFR 69.156 - Marketing expenses.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 47 Telecommunication 3 2013-10-01 2013-10-01 false Marketing expenses. 69.156 Section 69.156... Computation of Charges for Price Cap Local Exchange Carriers § 69.156 Marketing expenses. Effective July 1, 2000, the marketing expenses formerly allocated to the common line and traffic sensitive baskets, and...
Approximate Bayesian computation for spatial SEIR(S) epidemic models.
Brown, Grant D; Porter, Aaron T; Oleson, Jacob J; Hinman, Jessica A
2018-02-01
Approximate Bayesian Computation (ABC) provides an attractive approach to estimation in complex Bayesian inferential problems for which evaluation of the kernel of the posterior distribution is impossible or computationally expensive. These highly parallelizable techniques have been successfully applied in many fields, particularly in cases where more traditional approaches such as Markov chain Monte Carlo (MCMC) are impractical. In this work, we demonstrate the application of approximate Bayesian inference to spatially heterogeneous Susceptible-Exposed-Infectious-Removed (SEIR) stochastic epidemic models. These models have a tractable posterior distribution; however, MCMC techniques nevertheless become computationally infeasible for moderately sized problems. We discuss the practical implementation of these techniques via the open source ABSEIR package for R. The performance of ABC relative to traditional MCMC methods in a small problem is explored under simulation, as well as in the spatially heterogeneous context of the 2014 epidemic of Chikungunya in the Americas.
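The basic ABC idea can be sketched with a rejection sampler for a non-spatial, discrete-time SEIR model: draw candidate parameters from the prior, simulate, and keep draws whose simulated summary statistic lies close to the observed one. This only illustrates the principle; the ABSEIR package uses far more sophisticated spatial machinery, and the rates, tolerance, and summary statistic below are assumptions.

```python
# Sketch only: ABC rejection sampling for a chain-binomial SEIR model.
import numpy as np

rng = np.random.default_rng(0)
N, E0, I0, T = 10000, 5, 1, 150          # population, initial exposed/infectious, days

def simulate(beta, sigma=0.25, gamma=0.2):
    """Returns the daily count of newly infectious cases."""
    S, E, I = N - E0 - I0, E0, I0
    cases = []
    for _ in range(T):
        new_E = rng.binomial(S, 1.0 - np.exp(-beta * I / N))
        new_I = rng.binomial(E, 1.0 - np.exp(-sigma))
        new_R = rng.binomial(I, 1.0 - np.exp(-gamma))
        S, E, I = S - new_E, E + new_E - new_I, I + new_I - new_R
        cases.append(new_I)
    return np.array(cases)

observed = simulate(beta=0.45)                    # pretend these are the observed data
obs_total = observed.sum()                        # summary statistic: epidemic final size

accepted = []
for _ in range(5000):
    beta = rng.uniform(0.1, 1.0)                  # draw from the prior
    if abs(simulate(beta).sum() - obs_total) < 500:   # tolerance chosen loosely here
        accepted.append(beta)

print(f"accepted {len(accepted)} of 5000 draws; posterior mean beta ~ {np.mean(accepted):.2f}")
```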
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mattsson, Ann E.
Density Functional Theory (DFT) based Equation of State (EOS) construction is a prominent part of Sandia's capabilities to support engineering sciences. This capability is based on augmenting experimental data with information gained from computational investigations, especially in those parts of the phase space where experimental data are hard, dangerous, or expensive to obtain. A key part of the success of the Sandia approach is the fundamental science work supporting the computational capability. Not only does this work enhance the capability to perform highly accurate calculations, but it also provides crucial insight into the limitations of the computational tools, providing high confidence in the results even where results cannot be, or have not yet been, validated by experimental data. This report concerns the key ingredient of projector augmented-wave (PAW) potentials for use in pseudo-potential computational codes. Using the tools discussed in SAND2012-7389, we assess the standard Vienna Ab-initio Simulation Package (VASP) PAWs for Molybdenum.
The Direct Lighting Computation in Global Illumination Methods
NASA Astrophysics Data System (ADS)
Wang, Changyaw Allen
1994-01-01
Creating realistic images is a computationally expensive process, but it is very important for applications such as interior design, product design, education, virtual reality, and movie special effects. To generate realistic images, state-of-the-art rendering techniques are employed to simulate global illumination, which accounts for the interreflection of light among objects. In this document, we formalize the global illumination problem as an eight-dimensional integral and discuss various methods that can accelerate the process of approximating this integral. We focus on the direct lighting computation, which accounts for the light reaching the viewer from the emitting sources after exactly one reflection; on Monte Carlo sampling methods; and on light source simplification. Results include a new sample generation method, a framework for predicting the total number of samples used in a solution, and a generalized Monte Carlo approach for computing the direct lighting from an environment, which for the first time makes ray tracing feasible for highly complex environments.
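A minimal sketch of a direct-lighting Monte Carlo estimator for a single area light follows: sample points on the emitter, test visibility, and average the unoccluded contributions. The scene, the Lambertian BRDF, and the trivial visibility test are deliberate simplifications; a real renderer traces shadow rays and handles many lights.

```python
# Sketch only: area-sampled direct lighting at one Lambertian surface point.
import numpy as np

rng = np.random.default_rng(0)

LIGHT_CORNER = np.array([0.0, 0.0, 2.0])   # axis-aligned square emitter in a z=2 plane
LIGHT_EDGE = 0.5                           # side length (assumed)
LIGHT_RADIANCE = 10.0
ALBEDO = 0.7                               # Lambertian surface reflectance

def visible(point, light_point):
    return True                            # stand-in: a real renderer traces a shadow ray

def direct_lighting(point, normal, n_samples=64):
    total, area = 0.0, LIGHT_EDGE * LIGHT_EDGE
    light_normal = np.array([0.0, 0.0, -1.0])        # emitter faces downward
    for _ in range(n_samples):
        lp = LIGHT_CORNER + np.array([rng.random() * LIGHT_EDGE,
                                      rng.random() * LIGHT_EDGE, 0.0])
        if not visible(point, lp):
            continue
        wi = lp - point
        dist2 = wi @ wi
        wi = wi / np.sqrt(dist2)
        cos_surf = max(wi @ normal, 0.0)
        cos_light = max(light_normal @ (-wi), 0.0)
        # area-sampling estimator: Le * BRDF * geometry term / pdf, with pdf = 1/area
        total += LIGHT_RADIANCE * (ALBEDO / np.pi) * cos_surf * cos_light * area / dist2
    return total / n_samples

print(direct_lighting(np.array([0.2, 0.2, 0.0]), np.array([0.0, 0.0, 1.0])))
```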
BHR equations re-derived with immiscible particle effects
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schwarzkopf, John Dennis; Horwitz, Jeremy A.
2015-05-01
Compressible and variable density turbulent flows with dispersed phase effects are found in many applications ranging from combustion to cloud formation. These types of flows are among the most challenging to simulate. While the exact equations governing a system of particles and fluid are known, computational resources limit the scale and detail that can be simulated in this type of problem. Therefore, a common method is to simulate averaged versions of the flow equations, which still capture the salient physics and are relatively less computationally expensive. Besnard developed such a model for variable density miscible turbulence, where ensemble-averaging was applied to the flow equations to yield a set of filtered equations. Besnard further derived transport equations for the Reynolds stresses, the turbulent mass flux, and the density-specific volume covariance, to help close the filtered momentum and continuity equations. We re-derive the exact BHR closure equations, which include integral terms owing to immiscible effects. Physical interpretations of the additional terms are proposed along with simple models. The goal of this work is to extend the BHR model to allow for the simulation of turbulent flows where an immiscible dispersed phase is non-trivially coupled with the carrier phase.
Do dichromats see colours in this way? Assessing simulation tools without colorimetric measurements.
Lillo Jover, Julio A; Álvaro Llorente, Leticia; Moreira Villegas, Humberto; Melnikova, Anna
2016-11-01
Simulcheck evaluates Colour Simulation Tools (CSTs), which transform colours to mimic those seen by colour vision deficient observers. Two CSTs (Variantor and Coblis) were used to determine whether the standard Simulcheck version (direct measurement based, DMB) can be substituted by another (RGB values based) that does not require sophisticated measurement instruments. Ten normal trichromats performed the two psychophysical tasks included in the Simulcheck method. The Pseudoachromatic Stimuli Identification task provided the h_uv (hue angle) values of the pseudoachromatic stimuli: colours seen as red or green by normal trichromats but as grey by colour deficient people. The Minimum Achromatic Contrast task was used to compute the L_R (relative luminance) values of the pseudoachromatic stimuli. The Simulcheck DMB version showed that Variantor was accurate in simulating protanopia, but neither Variantor nor Coblis was accurate in simulating deuteranopia. The Simulcheck RGB version provided accurate h_uv values, so this variable can be adequately estimated when a colorimeter, an expensive and uncommon instrument, is not available. In contrast, the inaccuracy of the L_R estimations provided by the Simulcheck RGB version makes it advisable to compute this variable from measurements performed with a photometer, a cheap and easy-to-find instrument.
Simulation Testing of Embedded Flight Software
NASA Technical Reports Server (NTRS)
Shahabuddin, Mohammad; Reinholtz, William
2004-01-01
Virtual Real Time (VRT) is a computer program for testing embedded flight software by computational simulation in a workstation, in contradistinction to testing it in its target central processing unit (CPU). The disadvantages of testing in the target CPU include the need for an expensive test bed, the necessity for testers and programmers to take turns using the test bed, and the lack of software tools for debugging in a real-time environment. By virtue of its architecture, most of the flight software of the type in question is amenable to development and testing on workstations, for which there is an abundance of commercially available debugging and analysis software tools. Unfortunately, the timing of a workstation differs from that of a target CPU in a test bed. VRT, in conjunction with closed-loop simulation software, provides a capability for executing embedded flight software on a workstation in a close-to-real-time environment. A scale factor is used to convert between execution time in VRT on a workstation and execution on a target CPU. VRT includes high-resolution operating- system timers that enable the synchronization of flight software with simulation software and ground software, all running on different workstations.
Dynamic modeling of Tampa Bay urban development using parallel computing
Xian, G.; Crane, M.; Steinwand, D.
2005-01-01
Urban land use and land cover has changed significantly in the environs of Tampa Bay, Florida, over the past 50 years. Extensive urbanization has created substantial change to the region's landscape and ecosystems. This paper uses a dynamic urban-growth model, SLEUTH, which applies six geospatial data themes (slope, land use, exclusion, urban extent, transportation, hillshade), to study the process of urbanization and associated land use and land cover change in the Tampa Bay area. To reduce processing time and complete the modeling process within an acceptable period, the model is recoded and ported to a Beowulf cluster. The parallel-processing computer system accomplishes the massive amount of computation the modeling simulation requires. The SLEUTH calibration process for the Tampa Bay urban growth simulation requires only 10 h of CPU time. The model predicts future land use/cover change trends for Tampa Bay from 1992 to 2025. Urban extent is predicted to double in the Tampa Bay watershed between 1992 and 2025. Results show an upward trend of urbanization at the expense of a decline of 58% and 80% in agricultural and forested lands, respectively.
NASA Astrophysics Data System (ADS)
Fasnacht, Z.; Qin, W.; Haffner, D. P.; Loyola, D. G.; Joiner, J.; Krotkov, N. A.; Vasilkov, A. P.; Spurr, R. J. D.
2017-12-01
In order to estimate surface reflectance used in trace gas retrieval algorithms, radiative transfer models (RTM) such as the Vector Linearized Discrete Ordinate Radiative Transfer Model (VLIDORT) can be used to simulate the top of the atmosphere (TOA) radiances with advanced models of surface properties. With large volumes of satellite data, these model simulations can become computationally expensive. Look up table interpolation can improve the computational cost of the calculations, but the non-linear nature of the radiances requires a dense node structure if interpolation errors are to be minimized. In order to reduce our computational effort and improve the performance of look-up tables, neural networks can be trained to predict these radiances. We investigate the impact of using look-up table interpolation versus a neural network trained using the smart sampling technique, and show that neural networks can speed up calculations and reduce errors while using significantly less memory and RTM calls. In future work we will implement a neural network in operational processing to meet growing demands for reflectance modeling in support of high spatial resolution satellite missions.
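A minimal sketch of the neural-network surrogate idea follows, using a synthetic stand-in data set; the operational training set would come from VLIDORT runs sampled over viewing geometry, surface reflectance, and atmospheric state, and the input list and network size here are assumptions.

```python
# Sketch only: small neural-network surrogate for RTM top-of-atmosphere radiances.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 5000
# Assumed inputs: solar zenith, viewing zenith, relative azimuth (deg), surface reflectance
X = np.column_stack([
    rng.uniform(0, 70, n),
    rng.uniform(0, 60, n),
    rng.uniform(0, 180, n),
    rng.uniform(0, 1, n),
])
# Synthetic stand-in for an RTM radiance: smooth and nonlinear in the inputs
y = (np.cos(np.radians(X[:, 0])) * (0.05 + 0.4 * X[:, 3])
     + 0.02 * np.cos(np.radians(X[:, 2])))

surrogate = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=1000, random_state=0),
)
surrogate.fit(X[:4000], y[:4000])                       # train on 4000 "RTM runs"
residual = surrogate.predict(X[4000:]) - y[4000:]       # evaluate on held-out points
print(f"held-out RMS error: {np.sqrt(np.mean(residual ** 2)):.4f}")
```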
2017-01-01
This study numerically investigates the vortex-induced vibration (VIV) of an elastically mounted rigid cylinder by using Reynolds-averaged Navier–Stokes (RANS) equations with computational fluid dynamics (CFD) tools. CFD analysis is performed for a fixed-cylinder case with Reynolds number (Re) = 10^4 and for a cylinder that is free to oscillate in the transverse direction, possesses a low mass-damping ratio, and has Re = 10^4. Previously, similar studies have been performed with three-dimensional and comparatively expensive turbulence models. In the current study, the capability and accuracy of the RANS model are validated, and the results of this model are compared with those of detached eddy simulation, direct numerical simulation, and large eddy simulation models. All three response branches and the maximum amplitude are well captured. The two-dimensional case with the RANS shear-stress transport k-ω model, which involves minimal computational cost, is reliable and appropriate for analyzing the characteristics of VIV.
Crystallographic Lattice Boltzmann Method
Namburi, Manjusha; Krithivasan, Siddharth; Ansumali, Santosh
2016-01-01
Current approaches to Direct Numerical Simulation (DNS) are computationally quite expensive for most realistic scientific and engineering applications of fluid dynamics, such as automobiles or atmospheric flows. The Lattice Boltzmann Method (LBM), with its simplified kinetic descriptions, has emerged as an important tool for simulating hydrodynamics. In a heterogeneous computing environment, it is often preferred due to its flexibility and better parallel scaling. However, direct simulation of realistic applications, without the use of turbulence models, remains a distant dream even with highly efficient methods such as LBM. In LBM, a fictitious lattice with suitable isotropy in velocity space is considered to recover Navier-Stokes hydrodynamics in the macroscopic limit. The same lattice is mapped onto a Cartesian grid for spatial discretization of the kinetic equation. In this paper, we present an inverted argument of the LBM, by making spatial discretization the central theme. We argue that the optimal spatial discretization for LBM is a Body Centered Cubic (BCC) arrangement of grid points. We illustrate an order-of-magnitude gain in efficiency for LBM and thus significant progress towards the feasibility of DNS for realistic flows.
Zhang, Honghu
2006-04-01
The acoustical radiosity method is a computationally expensive acoustical simulation algorithm that assumes an enclosure with ideal diffuse reflecting boundaries. Miles observed that for such an enclosure, the sound energy decay at every point on the boundaries will gradually converge to an exponential decay with a uniform decay rate. Therefore, the ratio of radiosity between every pair of points on the boundaries will converge to a constant, and the radiosity across the boundaries will approach a fixed distribution during the sound decay process, where radiosity is defined as the acoustic power per unit area leaving (or being received by) a point on a boundary. We call this phenomenon the "relaxation" of the sound field. In this paper, we study relaxation in rooms of different shapes with different boundary absorptions. Criteria based on the relaxation of the sound field are proposed to terminate the costly and unnecessary radiosity computation in its later phase, which can then be replaced by a fast regression step to speed up the acoustical radiosity simulation.
Statistical models of global Langmuir mixing
NASA Astrophysics Data System (ADS)
Li, Qing; Fox-Kemper, Baylor; Breivik, Øyvind; Webb, Adrean
2017-05-01
The effects of Langmuir mixing on surface ocean mixing may be parameterized by applying an enhancement factor, which depends on wave, wind, and ocean state, to the turbulent velocity scale in the K-Profile Parameterization. Diagnosing the appropriate enhancement factor online in global climate simulations is readily achieved by coupling with a prognostic wave model, but this comes with significant computational and code-development expenses. In this paper, two alternatives that do not require a prognostic wave model, (i) a monthly mean enhancement factor climatology, and (ii) an approximation to the enhancement factor based on empirical wave spectra, are explored and tested in a global climate model. Both appear to reproduce the Langmuir mixing effects as estimated using a prognostic wave model, with nearly identical and substantial improvements in the simulated mixed layer depth and intermediate water ventilation over control simulations, but at significantly less computational cost. Simpler approaches, such as ignoring Langmuir mixing altogether or setting a globally constant Langmuir number, are found to be deficient. Thus, the consequences of Stokes depth and misaligned wind and waves are important.
Unfolding of Proteins: Thermal and Mechanical Unfolding
NASA Technical Reports Server (NTRS)
Hur, Joe S.; Darve, Eric
2004-01-01
We have employed a Hamiltonian model based on a self-consistent Gaussian approximation to examine the unfolding process of proteins in external force fields, both mechanical and thermal. The motivation was to investigate the unfolding pathways of proteins by including only the essence of the important interactions of the native-state topology. Furthermore, if such a model can indeed correctly predict the physics of protein unfolding, it can complement more computationally expensive simulations and theoretical work. The self-consistent Gaussian approximation by Micheletti et al. has been incorporated in our model to make the model mathematically tractable by significantly reducing the computational cost. All thermodynamic properties and pair contact probabilities are calculated by simply evaluating the values of a series of Incomplete Gamma functions in an iterative manner. We have compared our results to previous molecular dynamics simulation and experimental data for the mechanical unfolding of the giant muscle protein Titin (1TIT). Our model, especially in light of its simplicity and excellent agreement with experiment and simulation, demonstrates the basic physical elements necessary to capture the mechanism of protein unfolding in an external force field.
Learning a force field for the martensitic phase transformation in Zr
NASA Astrophysics Data System (ADS)
Zong, Hongxiang; Pilania, Ghanshyam; Ramprasad, Rampi; Lookman, Turab
Atomic simulations provide an effective means to understand the underlying physics of martensitic transformations under extreme conditions. However, this is still a challenge for certain phase-transforming metals due to the lack of an accurate classical force field. Quantum molecular dynamics (QMD) simulations are accurate but expensive. During the course of QMD simulations, similar configurations are constantly visited and revisited. Machine learning can effectively learn from past visits and, therefore, eliminate such redundancies. In this talk, we will discuss the development of a hybrid ML-QMD method in which on-demand, on-the-fly quantum mechanical (QM) calculations are performed to accelerate calculations of interatomic forces at much lower computational cost. Using zirconium as a model system, for which accurate atomistic potentials are currently unavailable, we will demonstrate the feasibility and effectiveness of our approach. Specifically, the computed structural phase transformation behavior within the ML-QMD approach will be compared with available experimental results. Furthermore, results on phonons, stacking fault energies, and activation barriers for the homogeneous martensitic transformation in Zr will be presented.
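The on-demand, on-the-fly idea can be sketched generically as follows: an expensive quantum-mechanical force call is made only when a configuration is judged novel, and a cheap regression is reused otherwise. The descriptor, novelty test, and kernel regression below are illustrative placeholders, not the authors' ML-QMD implementation.

```python
# Hedged sketch of on-the-fly learning of forces: pay the QM cost only for
# configurations far from anything already seen; reuse a cheap model otherwise.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

class OnTheFlyForces:
    def __init__(self, qm_forces, distance_threshold=0.1):
        self.qm_forces = qm_forces          # expensive reference calculator (assumed callable)
        self.threshold = distance_threshold # "novelty" cutoff in descriptor space
        self.X, self.y = [], []
        self.model = KernelRidge(kernel="rbf", alpha=1e-6, gamma=10.0)

    def forces(self, descriptor):
        if self.X:
            d = np.min(np.linalg.norm(np.array(self.X) - descriptor, axis=1))
            if d < self.threshold:          # seen something similar: reuse the ML model
                return self.model.predict(descriptor[None])[0]
        f = self.qm_forces(descriptor)      # novel configuration: pay the QM cost
        self.X.append(descriptor)
        self.y.append(f)
        self.model.fit(np.array(self.X), np.array(self.y))
        return f
```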
Simplified Models for Accelerated Structural Prediction of Conjugated Semiconducting Polymers
Henry, Michael M.; Jones, Matthew L.; Oosterhout, Stefan D.; ...
2017-11-08
We perform molecular dynamics simulations of poly(benzodithiophene-thienopyrrolodione) (BDT-TPD) oligomers in order to evaluate the accuracy with which unoptimized molecular models can predict experimentally characterized morphologies. The predicted morphologies are characterized using simulated grazing-incidence X-ray scattering (GIXS) and compared to the experimental scattering patterns. We find that approximating the aromatic rings in BDT-TPD with rigid bodies, rather than combinations of bond, angle, and dihedral constraints, results in 14% lower computational cost and provides nearly equivalent structural predictions compared to the flexible model case. The predicted glass transition temperature of BDT-TPD (410 +/- 32 K) is found to be in agreement with experiments. Predicted morphologies demonstrate short-range structural order due to stacking of the chain backbones (π-π stacking around 3.9 Å), and long-range spatial correlations due to the self-organization of backbone stacks into 'ribbons' (lamellar ordering around 20.9 Å), representing the best-to-date computational predictions of the structure of complex conjugated oligomers. We find that expensive simulated annealing schedules are not needed to predict experimental structures here, with instantaneous quenches providing nearly equivalent predictions at a fraction of the computational cost of annealing. We therefore suggest utilizing rigid bodies and fast cooling schedules for high-throughput screening studies of semiflexible polymers and oligomers to utilize their significant computational benefits where appropriate.
Real-time computing platform for spiking neurons (RT-spike).
Ros, Eduardo; Ortigosa, Eva M; Agís, Rodrigo; Carrillo, Richard; Arnold, Michael
2006-07-01
A computing platform is described for simulating arbitrary networks of spiking neurons in real time. A hybrid computing scheme is adopted that uses both software and hardware components to manage the tradeoff between flexibility and computational power; the neuron model is implemented in hardware and the network model and the learning are implemented in software. The incremental transition of the software components into hardware is supported. We focus on a spike response model (SRM) for a neuron where the synapses are modeled as input-driven conductances. The temporal dynamics of the synaptic integration process are modeled with a synaptic time constant that results in a gradual injection of charge. This type of model is computationally expensive and is not easily amenable to existing software-based event-driven approaches. As an alternative we have designed an efficient time-based computing architecture in hardware, where the different stages of the neuron model are processed in parallel. Further improvements occur by computing multiple neurons in parallel using multiple processing units. This design is tested using reconfigurable hardware and its scalability and performance evaluated. Our overall goal is to investigate biologically realistic models for the real-time control of robots operating within closed action-perception loops, and so we evaluate the performance of the system on simulating a model of the cerebellum where the emulation of the temporal dynamics of the synaptic integration process is important.
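The synaptic integration described above, a conductance jump per spike followed by exponential decay that gives a gradual injection of charge, can be sketched in a few lines; all parameters below are illustrative rather than those of the RT-Spike platform.

```python
# Hedged sketch: input-driven synaptic conductance with a time constant, giving
# gradual charge injection into a leaky membrane. Parameters are illustrative.
import numpy as np

dt, tau_syn, E_syn = 0.1e-3, 5e-3, 0.0          # step [s], synaptic tau [s], reversal [V]
tau_m, C_m, v_rest = 20e-3, 200e-12, -70e-3     # membrane tau, capacitance, rest potential
decay = np.exp(-dt / tau_syn)

g, v = 0.0, v_rest
trace = []
for step in range(int(0.1 / dt)):
    if step % 200 == 0:                  # presynaptic spike every 20 ms
        g += 2e-9                        # conductance jump per spike [S]
    g *= decay                           # exponential decay of the conductance
    i_syn = g * (E_syn - v)              # input-driven synaptic current
    v += dt * ((v_rest - v) / tau_m + i_syn / C_m)
    trace.append(v)
print(f"peak depolarisation: {max(trace) * 1e3:.1f} mV")
```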
The Influence of Inlet Asymmetry on Steam Turbine Exhaust Hood Flows.
Burton, Zoe; Hogg, Simon; Ingram, Grant L
2014-04-01
It has been widely recognized for some decades that it is essential to accurately represent the strong coupling between the last stage blades (LSB) and the diffuser inlet in order to correctly capture the flow through the exhaust hoods of steam turbine low pressure cylinders. This applies to any form of simulation of the flow, i.e., numerical or experimental. The exhaust hood flow structure is highly three-dimensional, and appropriate coupling will enable the important influence of this asymmetry to be transferred to the rotor. This, however, presents challenges, as the calculation size grows rapidly when the full annulus is modelled and the computational demands quickly become excessive. Researchers are therefore constantly searching for methods to reduce the computational effort without compromising solution accuracy. Unsteady full-annulus CFD calculation will remain infeasible for routine design calculations for the foreseeable future. More computationally efficient methods for coupling the unsteady rotor flow to the hood flow are required that bring computational expense within realizable limits while still maintaining sufficient accuracy for meaningful design calculations. Research activity in this area is focused on developing new methods and techniques to improve accuracy and reduce computational expense. A novel approach for coupling the turbine last stage to the exhaust hood employing the nonlinear harmonic (NLH) method is presented in this paper. The generic, IP-free, exhaust hood and last stage blade geometries from Burton et al. (2012, "A Generic Low Pressure Exhaust Diffuser for Steam Turbine Research," Proceedings of the ASME Turbo Expo, Copenhagen, Denmark, Paper No. GT2012-68485), which are representative of modern designs, are used to demonstrate the effectiveness of the method. This is achieved by comparing results obtained with the NLH to those obtained with a more conventional mixing-plane approach. The results show that the circumferential asymmetry can be successfully transferred in both directions between the exhaust hood flow and that through the LSB by using the NLH. This paper also suggests that for exhaust hoods of generous axial length, little change in Cp is observed when the circumferential asymmetry is captured. However, the predicted flow structure is significantly different, which will influence the design and placement of the exhaust hood internal "furniture."
Rupp, K; Jungemann, C; Hong, S-M; Bina, M; Grasser, T; Jüngel, A
The Boltzmann transport equation is commonly considered to be the best semi-classical description of carrier transport in semiconductors, providing precise information about the distribution of carriers with respect to time (one dimension), location (three dimensions), and momentum (three dimensions). However, numerical solutions for the seven-dimensional carrier distribution functions are very demanding. The most common solution approach is the stochastic Monte Carlo method, because the gigabytes of memory required by deterministic direct solution approaches have not been available until recently. As a remedy, the higher accuracy provided by solutions of the Boltzmann transport equation is often exchanged for lower computational expense by using simpler models based on macroscopic quantities such as carrier density and mean carrier velocity. Recent developments for the deterministic spherical harmonics expansion method have reduced the computational cost of solving the Boltzmann transport equation, enabling the computation of carrier distribution functions even for spatially three-dimensional device simulations within minutes to hours. We summarize recent progress for the spherical harmonics expansion method and show that small currents, reasonable execution times, and rare events such as low-frequency noise, which are all hard or even impossible to simulate with the established Monte Carlo method, can be handled in a straightforward manner. The applicability of the method to important practical applications is demonstrated for noise simulation, small-signal analysis, hot-carrier degradation, and avalanche breakdown.
Machine learning from computer simulations with applications in rail vehicle dynamics
NASA Astrophysics Data System (ADS)
Taheri, Mehdi; Ahmadian, Mehdi
2016-05-01
The application of stochastic modelling for learning the behaviour of multibody dynamics (MBD) models is investigated. Post-processed data from a simulation run are used to train the stochastic model that estimates the relationship between model inputs (suspension relative displacement and velocity) and the output (sum of suspension forces). The stochastic model can be used to reduce the computational burden of the MBD model by replacing a computationally expensive subsystem in the model (the suspension subsystem). With minor changes, the stochastic modelling technique is able to learn the behaviour of a physical system and integrate its behaviour within MBD models. The technique is highly advantageous for MBD models where real-time simulations are necessary, or for models that have a large number of repeated substructures, e.g. modelling a train with a large number of railcars. The fact that the training data are acquired prior to the development of the stochastic model precludes conventional sampling plan strategies such as Latin Hypercube sampling, where simulations are performed using the inputs dictated by the sampling plan. Since the sampling plan greatly influences the overall accuracy and efficiency of the stochastic predictions, a sampling plan suitable for the process is developed, in which the most space-filling subset of the acquired data, with a prescribed number of sample points, that best describes the dynamic behaviour of the system under study is selected as the training data.
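One common way to pick the most space-filling subset of pre-existing simulation data is a greedy maximin (farthest-point) rule; the sketch below illustrates that generic idea on hypothetical data and is not the authors' selection procedure.

```python
# Hedged sketch: greedy maximin selection of a space-filling training subset
# from already-logged simulation outputs.
import numpy as np

def maximin_subset(candidates, n_train):
    """Greedily pick n_train rows of `candidates` that maximise the minimum
    distance to the points already selected."""
    selected = [0]                                    # start from an arbitrary point
    d = np.linalg.norm(candidates - candidates[0], axis=1)
    while len(selected) < n_train:
        nxt = int(np.argmax(d))                       # farthest from the current set
        selected.append(nxt)
        d = np.minimum(d, np.linalg.norm(candidates - candidates[nxt], axis=1))
    return candidates[selected]

rng = np.random.default_rng(1)
logged = rng.normal(size=(5000, 2))                   # e.g. (displacement, velocity) samples
train = maximin_subset(logged, 50)
print(train.shape)
```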
Kilinc, Deniz; Demir, Alper
2017-08-01
The brain is extremely energy efficient and remarkably robust in what it does despite the considerable variability and noise caused by the stochastic mechanisms in neurons and synapses. Computational modeling is a powerful tool that can help us gain insight into this important aspect of brain mechanism. A deep understanding and computational design tools can help develop robust neuromorphic electronic circuits and hybrid neuroelectronic systems. In this paper, we present a general modeling framework for biological neuronal circuits that systematically captures the nonstationary stochastic behavior of ion channels and synaptic processes. In this framework, fine-grained, discrete-state, continuous-time Markov chain models of both ion channels and synaptic processes are treated in a unified manner. Our modeling framework features a mechanism for the automatic generation of the corresponding coarse-grained, continuous-state, continuous-time stochastic differential equation models for neuronal variability and noise. Furthermore, we repurpose non-Monte Carlo noise analysis techniques, which were previously developed for analog electronic circuits, for the stochastic characterization of neuronal circuits both in time and frequency domain. We verify that the fast non-Monte Carlo analysis methods produce results with the same accuracy as computationally expensive Monte Carlo simulations. We have implemented the proposed techniques in a prototype simulator, where both biological neuronal and analog electronic circuits can be simulated together in a coupled manner.
Adaptive resolution simulation of oligonucleotides
NASA Astrophysics Data System (ADS)
Netz, Paulo A.; Potestio, Raffaello; Kremer, Kurt
2016-12-01
Nucleic acids are characterized by a complex hierarchical structure and a variety of interaction mechanisms with other molecules. These features suggest the need of multiscale simulation methods in order to grasp the relevant physical properties of deoxyribonucleic acid (DNA) and RNA using in silico experiments. Here we report an implementation of a dual-resolution modeling of a DNA oligonucleotide in physiological conditions; in the presented setup only the nucleotide molecule and the solvent and ions in its proximity are described at the atomistic level; in contrast, the water molecules and ions far from the DNA are represented as computationally less expensive coarse-grained particles. Through the analysis of several structural and dynamical parameters, we show that this setup reliably reproduces the physical properties of the DNA molecule as observed in reference atomistic simulations. These results represent a first step towards a realistic multiscale modeling of nucleic acids and provide a quantitatively solid ground for their simulation using dual-resolution methods.
Telehealth Innovations in Health Education and Training
De, Suvranu; Hall, Richard W.; Johansen, Edward; Meglan, Dwight; Peng, Grace C.Y.
2010-01-01
Telehealth applications are increasingly important in many areas of health education and training. In addition, they will play a vital role in biomedical research and research training by facilitating remote collaborations and providing access to expensive/remote instrumentation. In order to fulfill their true potential to leverage education, training, and research activities, innovations in telehealth applications should be fostered across a range of technology fronts, including online, on-demand computational models for simulation; simplified interfaces for software and hardware; software frameworks for simulations; portable telepresence systems; artificial intelligence applications to be applied when simulated human patients are not options; and the development of more simulator applications. This article presents the results of discussion on potential areas of future development, barriers to overcome, and suggestions to translate the promise of telehealth applications into a transformed environment of training, education, and research in the health sciences. PMID:20155874
GROMACS 4: Algorithms for Highly Efficient, Load-Balanced, and Scalable Molecular Simulation.
Hess, Berk; Kutzner, Carsten; van der Spoel, David; Lindahl, Erik
2008-03-01
Molecular simulation is an extremely useful, but computationally very expensive tool for studies of chemical and biomolecular systems. Here, we present a new implementation of our molecular simulation toolkit GROMACS which now both achieves extremely high performance on single processors from algorithmic optimizations and hand-coded routines and simultaneously scales very well on parallel machines. The code encompasses a minimal-communication domain decomposition algorithm, full dynamic load balancing, a state-of-the-art parallel constraint solver, and efficient virtual site algorithms that allow removal of hydrogen atom degrees of freedom to enable integration time steps up to 5 fs for atomistic simulations also in parallel. To improve the scaling properties of the common particle mesh Ewald electrostatics algorithms, we have in addition used a Multiple-Program, Multiple-Data approach, with separate node domains responsible for direct and reciprocal space interactions. Not only does this combination of algorithms enable extremely long simulations of large systems but also it provides that simulation performance on quite modest numbers of standard cluster nodes.
Onyx-Advanced Aeropropulsion Simulation Framework Created
NASA Technical Reports Server (NTRS)
Reed, John A.
2001-01-01
The Numerical Propulsion System Simulation (NPSS) project at the NASA Glenn Research Center is developing a new software environment for analyzing and designing aircraft engines and, eventually, space transportation systems. Its purpose is to dramatically reduce the time, effort, and expense necessary to design and test jet engines by creating sophisticated computer simulations of an aerospace object or system (refs. 1 and 2). Through a university grant as part of that effort, researchers at the University of Toledo have developed Onyx, an extensible Java-based (Sun Microsystems, Inc.) object-oriented simulation framework, to investigate how advanced software design techniques can be successfully applied to aeropropulsion system simulation (refs. 3 and 4). The design of Onyx's architecture enables users to customize and extend the framework to add new functionality or adapt simulation behavior as required. It exploits object-oriented technologies, such as design patterns, domain frameworks, and software components, to develop a modular system in which users can dynamically replace components with others having different functionality.
Modeling of Tool-Tissue Interactions for Computer-Based Surgical Simulation: A Literature Review
Misra, Sarthak; Ramesh, K. T.; Okamura, Allison M.
2009-01-01
Surgical simulators present a safe and potentially effective method for surgical training, and can also be used in robot-assisted surgery for pre- and intra-operative planning. Accurate modeling of the interaction between surgical instruments and organs has been recognized as a key requirement in the development of high-fidelity surgical simulators. Researchers have attempted to model tool-tissue interactions in a wide variety of ways, which can be broadly classified as (1) linear elasticity-based methods, (2) nonlinear (hyperelastic) elasticity-based finite element (FE) methods, and (3) other techniques that are not based on FE methods or continuum mechanics. Realistic modeling of organ deformation requires populating the model with real tissue data (which are difficult to acquire in vivo) and simulating organ response in real time (which is computationally expensive). Further, it is challenging to account for connective tissue supporting the organ, friction, and topological changes resulting from tool-tissue interactions during invasive surgical procedures. Overcoming such obstacles will not only help us to model tool-tissue interactions in real time, but also enable realistic force feedback to the user during surgical simulation. This review paper classifies the existing research on tool-tissue interactions for surgical simulators specifically based on the modeling techniques employed and the kind of surgical operation being simulated, in order to inform and motivate future research on improved tool-tissue interaction models. PMID:20119508
NASA Astrophysics Data System (ADS)
Zhang, Ling; Nan, Zhuotong; Liang, Xu; Xu, Yi; Hernández, Felipe; Li, Lianxia
2018-03-01
Although process-based distributed hydrological models (PDHMs) have evolved rapidly over the last few decades, their extensive application is still challenged by the computational expense. This study attempted, for the first time, to apply the numerically efficient MacCormack algorithm to overland flow routing in a representative high-spatial-resolution PDHM, the distributed hydrology-soil-vegetation model (DHSVM), in order to improve its computational efficiency. The analytical verification indicates that both the semi and full versions of the MacCormack scheme exhibit robust numerical stability and are more computationally efficient than the conventional explicit linear scheme. The full version outperforms the semi version in terms of simulation accuracy when the same time step is adopted. The semi-MacCormack scheme was implemented into DHSVM (version 3.1.2) to solve the kinematic wave equations for overland flow routing. The performance and practicality of the enhanced DHSVM-MacCormack model were assessed by performing two groups of modeling experiments in the Mercer Creek watershed, a small urban catchment near Bellevue, Washington. The experiments show that DHSVM-MacCormack can considerably improve the computational efficiency without compromising the simulation accuracy of the original DHSVM model. More specifically, with the same computational environment and model settings, the computational time required by DHSVM-MacCormack can be reduced to several dozen minutes for a simulation period of three months (in contrast with one and a half days for the original DHSVM model) without noticeable sacrifice of accuracy. The MacCormack scheme proves to be applicable to overland flow routing in DHSVM, which implies that it can be coupled into other PDHMs to either significantly improve their computational efficiency or make kinematic wave routing computationally feasible for high-resolution modeling.
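For readers unfamiliar with the scheme, the following is a minimal sketch of the classical two-step MacCormack predictor-corrector applied to a 1-D kinematic wave for overland flow, q = alpha*h^m. It is a generic illustration only, not the DHSVM-MacCormack implementation, and all parameters are arbitrary.

```python
# Hedged sketch: MacCormack predictor-corrector for dh/dt + dq/dx = rain,
# with a Manning-type rating q = alpha * h**m.
import numpy as np

nx, dx, dt, nt = 200, 1.0, 0.05, 2000
alpha, m, rain = 1.0, 5.0 / 3.0, 1e-5        # rating coefficients; rainfall [m/s]
h = np.zeros(nx)

def q(depth):
    return alpha * depth**m

for _ in range(nt):
    # predictor: forward difference in space
    hp = h.copy()
    hp[:-1] = h[:-1] - dt / dx * (q(h[1:]) - q(h[:-1])) + dt * rain
    # corrector: backward difference on the predicted state, then average
    hc = hp.copy()
    hc[1:] = hp[1:] - dt / dx * (q(hp[1:]) - q(hp[:-1])) + dt * rain
    h = 0.5 * (h + hc)
    h[0] = 0.0                               # upstream boundary: no inflow
print(f"outlet depth after {nt * dt:.0f} s: {h[-1]:.4e} m")
```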
Turbulence modeling for Francis turbine water passages simulation
NASA Astrophysics Data System (ADS)
Maruzewski, P.; Hayashi, H.; Munch, C.; Yamaishi, K.; Hashii, T.; Mombelli, H. P.; Sugow, Y.; Avellan, F.
2010-08-01
The application of Computational Fluid Dynamics, CFD, to hydraulic machines requires the ability to handle turbulent flows and to take into account the effects of turbulence on the mean flow. Nowadays, Direct Numerical Simulation, DNS, is still not a good candidate for hydraulic machine simulations because of its prohibitive computational cost. Large Eddy Simulation, LES, even though it is in the same category as DNS, could be an alternative, whereby only the small-scale turbulent fluctuations are modeled and the larger-scale fluctuations are computed directly. Nevertheless, Reynolds-Averaged Navier-Stokes, RANS, models have become the widespread standard basis for numerous hydraulic machine design procedures. However, for many applications involving wall-bounded flows and attached boundary layers, various hybrid combinations of LES and RANS are being considered, such as Detached Eddy Simulation, DES, whereby the RANS approximation is kept in the regions where the boundary layers are attached to the solid walls. Furthermore, the accuracy of CFD simulations is highly dependent on the grid quality, in terms of grid uniformity in complex configurations. Moreover, any successful structured or unstructured CFD code has to offer a wide range of models, from classic RANS models to complex hybrid models. The aim of this study is to compare the behavior of turbulent simulations for both structured and unstructured grid topologies with two different CFD codes applied to the same Francis turbine. Hence, the study is intended to outline the discrepancies encountered in predicting the wake of the turbine blades when using either the standard k-epsilon model or the SST shear stress transport model in a steady CFD simulation. Finally, comparisons are made with experimental data from reduced scale model measurements at the EPFL Laboratory for Hydraulic Machines.
An algorithm for fast elastic wave simulation using a vectorized finite difference operator
NASA Astrophysics Data System (ADS)
Malkoti, Ajay; Vedanti, Nimisha; Tiwari, Ram Krishna
2018-07-01
Modern geophysical imaging techniques exploit the full wavefield information, which can be simulated numerically. These numerical simulations are computationally expensive due to several factors, such as a large number of time steps and nodes, a large derivative stencil, and a huge model size. Besides these constraints, it is also important to reformulate the numerical derivative operator for improved efficiency. In this paper, we introduce a vectorized derivative operator over the staggered grid with shifted coordinate systems. The operator increases the efficiency of simulation by exploiting the fact that each variable can be represented in the form of a matrix. This operator allows updating all nodes of a variable defined on the staggered grid in a manner similar to the collocated grid scheme, thereby reducing the computational run-time considerably. Here we demonstrate an application of this operator to simulate seismic wave propagation in elastic media (Marmousi model), by discretizing the equations on a staggered grid. We have compared the performance of this operator in three programming languages, which reveals that it can increase the execution speed by a factor of at least 2-3 for FORTRAN and MATLAB, and by nearly 100 for Python. We have further carried out various tests in MATLAB to analyze the effect of model size and the number of time steps on total simulation run-time. We find that there is an additional, though small, computational overhead for each step and that it depends on the total number of time steps used in the simulation. A MATLAB code package, 'FDwave', for the proposed simulation scheme is available upon request.
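The vectorisation idea can be illustrated with a standard 4th-order staggered-grid derivative applied to a whole 2-D field through array slicing rather than nodewise loops. The coefficients are the usual staggered-grid weights; the shifted-coordinate bookkeeping of the paper's operator is not reproduced here.

```python
# Hedged sketch: vectorised 4th-order staggered-grid first derivative over a 2-D
# field, updating every interior node in a single array expression.
import numpy as np

C1, C2 = 9.0 / 8.0, -1.0 / 24.0        # standard 4th-order staggered-grid coefficients

def dx_staggered(f, dx):
    """Derivative along axis 0, evaluated at points shifted by half a cell."""
    d = np.zeros_like(f)
    d[1:-2, :] = (C1 * (f[2:-1, :] - f[1:-2, :]) +
                  C2 * (f[3:, :] - f[:-3, :])) / dx
    return d

vx = np.random.rand(301, 301)           # e.g. a velocity component on the grid
dvx_dx = dx_staggered(vx, dx=10.0)      # one call updates all interior nodes
print(dvx_dx.shape)
```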
Fast simulation of yttrium-90 bremsstrahlung photons with GATE.
Rault, Erwann; Staelens, Steven; Van Holen, Roel; De Beenhouwer, Jan; Vandenberghe, Stefaan
2010-06-01
Multiple investigators have recently reported the use of yttrium-90 (90Y) bremsstrahlung single photon emission computed tomography (SPECT) imaging for the dosimetry of targeted radionuclide therapies. Because Monte Carlo (MC) simulations are useful for studying SPECT imaging, this study investigates the MC simulation of 90Y bremsstrahlung photons in SPECT. To overcome the computationally expensive simulation of electrons, the authors propose a fast way to simulate the emission of 90Y bremsstrahlung photons based on prerecorded bremsstrahlung photon probability density functions (PDFs). The accuracy of bremsstrahlung photon simulation is evaluated in two steps. First, the validity of the fast bremsstrahlung photon generator is checked. To that end, fast and analog simulations of photons emitted from a 90Y point source in a water phantom are compared. The same setup is then used to verify the accuracy of the bremsstrahlung photon simulations, comparing the results obtained with PDFs generated from both simulated and measured data to measurements. In both cases, the energy spectra and point spread functions of the photons detected in a scintillation camera are used. Results show that the fast simulation method is responsible for a 5% overestimation of the low-energy fluence (below 75 keV) of the bremsstrahlung photons detected using a scintillation camera. The spatial distribution of the detected photons is, however, accurately reproduced with the fast method and a computational acceleration of approximately 17-fold is achieved. When measured PDFs are used in the simulations, the simulated energy spectrum of photons emitted from a point source of 90Y in a water phantom and detected in a scintillation camera closely approximates the measured spectrum. The PSF of the photons imaged in the 50-300 keV energy window is also accurately estimated with a 12.4% underestimation of the full width at half maximum and 4.5% underestimation of the full width at tenth maximum. Despite its limited accuracy, the fast bremsstrahlung photon generator is well suited for the simulation of bremsstrahlung photons emitted in large homogeneous organs, such as the liver, and detected in a scintillation camera. The computational acceleration makes it very useful for future investigations of 90Y bremsstrahlung SPECT imaging.
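The prerecorded-PDF approach amounts to inverse-transform sampling of photon energies from a tabulated distribution instead of transporting electrons. Below is a sketch using a made-up placeholder spectrum, not the actual 90Y bremsstrahlung PDF.

```python
# Hedged sketch: sample photon energies from a tabulated bremsstrahlung PDF by
# inverse-transform sampling. The tabulated shape below is a placeholder only.
import numpy as np

energy_keV = np.linspace(0.0, 2280.0, 229)              # 90Y beta endpoint ~2.28 MeV
pdf = np.exp(-energy_keV / 300.0)                        # placeholder spectral shape
pdf /= pdf.sum()

cdf = np.cumsum(pdf)
cdf /= cdf[-1]

def sample_energies(n, rng=np.random.default_rng(0)):
    """Inverse-transform sampling of photon energies from the tabulated CDF."""
    u = rng.random(n)
    return np.interp(u, cdf, energy_keV)

print(sample_energies(5))
```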
Drawert, Brian; Engblom, Stefan; Hellander, Andreas
2012-06-22
Experiments in silico using stochastic reaction-diffusion models have emerged as an important tool in molecular systems biology. Designing computational software for such applications poses several challenges. Firstly, realistic lattice-based modeling for biological applications requires a consistent way of handling complex geometries, including curved inner and outer boundaries. Secondly, spatiotemporal stochastic simulations are computationally expensive due to the fast time scales of individual reaction and diffusion events when compared to the biological phenomena of actual interest. We therefore argue that simulation software needs to be both computationally efficient, employing sophisticated algorithms, yet at the same time flexible in order to meet present and future needs of increasingly complex biological modeling. We have developed URDME, a flexible software framework for general stochastic reaction-transport modeling and simulation. URDME uses Unstructured triangular and tetrahedral meshes to resolve general geometries, and relies on the Reaction-Diffusion Master Equation formalism to model the processes under study. An interface to mature external geometry and mesh handling software (Comsol Multiphysics) provides a stable and interactive environment for model construction. The core simulation routines are logically separated from the model building interface and written in a low-level language for computational efficiency. The connection to the geometry handling software is realized via a Matlab interface, which facilitates script computing, data management, and post-processing. For practitioners, the software therefore behaves much like an interactive Matlab toolbox. At the same time, it is possible to modify and extend URDME with newly developed simulation routines. Since the overall design effectively hides the complexity of managing the geometry and meshes, newly developed methods may be tested in a realistic setting already at an early stage of development. In this paper we demonstrate, in a series of examples with high relevance to the molecular systems biology community, that the proposed software framework is a useful tool for both practitioners and developers of spatial stochastic simulation algorithms. Through the combined efforts of algorithm development and improved modeling accuracy, increasingly complex biological models become feasible to study through computational methods. URDME is freely available at http://www.urdme.org.
Pham, Tuan Anh; Ogitsu, Tadashi; Lau, Edmond Y; Schwegler, Eric
2016-10-21
Establishing an accurate and predictive computational framework for the description of complex aqueous solutions is an ongoing challenge for density functional theory based first-principles molecular dynamics (FPMD) simulations. In this context, important advances have been made in recent years, including the development of sophisticated exchange-correlation functionals. On the other hand, simulations based on simple generalized gradient approximation (GGA) functionals remain an active field, particularly in the study of complex aqueous solutions due to a good balance between the accuracy, computational expense, and the applicability to a wide range of systems. Such simulations are often performed at elevated temperatures to artificially "correct" for GGA inaccuracies in the description of liquid water; however, a detailed understanding of how the choice of temperature affects the structure and dynamics of other components, such as solvated ions, is largely unknown. To address this question, we carried out a series of FPMD simulations at temperatures ranging from 300 to 460 K for liquid water and three representative aqueous solutions containing solvated Na + , K + , and Cl - ions. We show that simulations at 390-400 K with the Perdew-Burke-Ernzerhof (PBE) exchange-correlation functional yield water structure and dynamics in good agreement with experiments at ambient conditions. Simultaneously, this computational setup provides ion solvation structures and ion effects on water dynamics consistent with experiments. Our results suggest that an elevated temperature around 390-400 K with the PBE functional can be used for the description of structural and dynamical properties of liquid water and complex solutions with solvated ions at ambient conditions.
Epstein, R H; Dexter, F
2000-08-01
Operating room (OR) scheduling information systems can decrease perioperative labor costs. Material management information systems can decrease perioperative inventory costs. We used computer simulation to investigate whether using the OR schedule to trigger purchasing of perioperative supplies is likely to further decrease perioperative inventory costs, as compared with using sophisticated, stand-alone material management inventory control. Although we designed the simulations to favor financially linking the information systems, we found that this strategy would be expected to decrease inventory costs substantively only for items of high price ($1000 each) and volume (>1000 used each year). Because expensive items typically have different models and sizes, each of which is used by a hospital less often than this, for almost all items there will be no benefit to making daily adjustments to the order volume based on booked cases. We conclude that, in a hospital with a sophisticated material management information system, OR managers will probably achieve greater cost reductions from focusing on negotiating less expensive purchase prices for items than on trying to link the OR information system with the hospital's material management information system to achieve just-in-time inventory control.
Decoupled CFD-based optimization of efficiency and cavitation performance of a double-suction pump
NASA Astrophysics Data System (ADS)
Škerlavaj, A.; Morgut, M.; Jošt, D.; Nobile, E.
2017-04-01
In this study, the impeller geometry of a double-suction pump ensuring the best performance in terms of hydraulic efficiency and resistance to cavitation is determined using an optimization strategy driven by the modeFRONTIER optimization platform. The different impeller shapes (designs) are modified according to the optimization parameters and tested with computational fluid dynamics (CFD) software, namely ANSYS CFX. The simulations are performed using a decoupled approach, where only the impeller domain region is numerically investigated for computational convenience. The flow losses in the volute are estimated on the basis of the velocity distribution at the impeller outlet. The best designs are then validated with the computationally more expensive full-geometry CFD model. The overall results show that the proposed approach is suitable for quick impeller shape optimization.
CASL VMA Milestone Report FY16 (L3:VMA.VUQ.P13.08): Westinghouse Mixing with STAR-CCM+
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gilkey, Lindsay Noelle
2016-09-30
STAR-CCM+ (STAR) is a high-resolution computational fluid dynamics (CFD) code developed by CD-adapco. STAR includes validated physics models and a full suite of turbulence models, including ones from the k-ε and k-ω families. STAR is currently being developed to handle two-phase flows, but the current focus of the software is single-phase flow. STAR can use imported meshes or its built-in meshing software to create computational domains for CFD. Since the solvers generally require a fine mesh for good computational results, the meshes used with STAR tend to number in the millions of cells, with that number growing with simulation and geometry complexity. The time required to model the flow of a full 5x5 Mixing Vane Grid Assembly (5x5MVG) in the current STAR configuration is on the order of hours and can be very computationally expensive. COBRA-TF (CTF) is a low-resolution subchannel code that can be trained using high-fidelity data from STAR. CTF does not have turbulence models and instead uses a turbulent mixing coefficient β. With a properly calibrated β, CTF can be used as a low-computational-cost alternative to expensive full CFD calculations performed with STAR. During the Hi2Lo work with CTF and STAR, STAR-CCM+ will be used to calibrate β and to provide high-resolution results that can be used in place of, and in addition to, experimental results to reduce the uncertainty in the CTF results.
48 CFR 227.7103-6 - Contract clauses.
Code of Federal Regulations, 2013 CFR
2013-10-01
... private expense). Do not use the clause when the only deliverable items are computer software or computer software documentation (see 227.72), commercial items developed exclusively at private expense (see 227... the clause in architect-engineer and construction contracts. (b)(1) Use the clause at 252.227-7013...
48 CFR 227.7103-6 - Contract clauses.
Code of Federal Regulations, 2014 CFR
2014-10-01
... private expense). Do not use the clause when the only deliverable items are computer software or computer software documentation (see 227.72), commercial items developed exclusively at private expense (see 227... the clause in architect-engineer and construction contracts. (b)(1) Use the clause at 252.227-7013...
A universal preconditioner for simulating condensed phase materials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Packwood, David; Ortner, Christoph, E-mail: c.ortner@warwick.ac.uk; Kermode, James, E-mail: j.r.kermode@warwick.ac.uk
2016-04-28
We introduce a universal sparse preconditioner that accelerates geometry optimisation and saddle point search tasks that are common in the atomic scale simulation of materials. Our preconditioner is based on the neighbourhood structure and we demonstrate the gain in computational efficiency in a wide range of materials that include metals, insulators, and molecular solids. The simple structure of the preconditioner means that the gains can be realised in practice not only when using expensive electronic structure models but also for fast empirical potentials. Even for relatively small systems of a few hundred atoms, we observe speedups of a factor of two or more, and the gain grows with system size. An open source Python implementation within the Atomic Simulation Environment is available, offering interfaces to a wide range of atomistic codes.
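A usage sketch of the open-source implementation mentioned above, assuming the preconditioned-optimizer interface of recent ASE releases (module and class names may differ between versions) and a cheap EMT calculator purely for illustration:

```python
from ase.build import bulk
from ase.calculators.emt import EMT
from ase.optimize.precon import Exp, PreconLBFGS

# Perturbed copper supercell relaxed with an exponential (neighbourhood-based)
# preconditioner; the same interface is intended to work with expensive
# electronic structure calculators in place of EMT.
atoms = bulk("Cu", cubic=True) * (4, 4, 4)
atoms.rattle(stdev=0.05, seed=1)
atoms.calc = EMT()

opt = PreconLBFGS(atoms, precon=Exp(A=3.0))
opt.run(fmax=1e-3)
```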
An efficient hybrid method for stochastic reaction-diffusion biochemical systems with delay
NASA Astrophysics Data System (ADS)
Sayyidmousavi, Alireza; Ilie, Silvana
2017-12-01
Many chemical reactions, such as gene transcription and translation in living cells, need a certain time to finish once they are initiated. Simulating stochastic models of reaction-diffusion systems with delay can be computationally expensive. In the present paper, a novel hybrid algorithm is proposed to accelerate the stochastic simulation of delayed reaction-diffusion systems. The delayed reactions may be of consuming or non-consuming delay type. The algorithm is designed for moderately stiff systems in which the events can be partitioned into slow and fast subsets according to their propensities. The proposed algorithm is applied to three benchmark problems and the results are compared with those of the delayed Inhomogeneous Stochastic Simulation Algorithm. The numerical results show that the new hybrid algorithm achieves considerable speed-up in the run time and very good accuracy.
Computer simulations of space-borne meteorological systems on the CYBER 205
NASA Technical Reports Server (NTRS)
Halem, M.
1984-01-01
Because of the extreme expense involved in developing and flight testing meteorological instruments, an extensive series of numerical modeling experiments to simulate the performance of meteorological observing systems was performed on the CYBER 205. The studies compare the relative importance of different global measurements, from individual and composite systems, of the meteorological variables needed to determine the state of the atmosphere. The assessments are made in terms of each system's ability to improve 12-hour global forecasts. Each experiment involves the daily assimilation of simulated data obtained from a data set called nature. These data are obtained from two sources: first, a long two-month general circulation integration with the GLAS 4th Order Forecast Model, and second, global analyses prepared twice daily by the National Meteorological Center, NOAA, from the current observing systems.
Ohto, Tatsuhiko; Usui, Kota; Hasegawa, Taisuke; Bonn, Mischa; Nagata, Yuki
2015-09-28
Interfacial water structures have been studied intensively by probing the O-H stretch mode of water molecules using sum-frequency generation (SFG) spectroscopy. This surface-specific technique is finding increasingly widespread use, and accordingly, computational approaches to calculate SFG spectra from molecular dynamics (MD) trajectories of interfacial water molecules have been developed and employed to correlate specific spectral signatures with distinct interfacial water structures. Such simulations typically require relatively long (several nanoseconds) MD trajectories to allow reliable calculation of the SFG response functions through the dipole moment-polarizability time correlation function. These long trajectories limit the use of computationally expensive MD techniques such as ab initio MD and centroid MD simulations. Here, we present an efficient algorithm that determines the SFG response from the surface-specific velocity-velocity correlation function (ssVVCF). This ssVVCF formalism allows us to calculate SFG spectra using an MD trajectory of only ∼100 ps, resulting in a substantial reduction of the computational costs by almost an order of magnitude. We demonstrate that the O-H stretch SFG spectra at the water-air interface calculated using the ssVVCF formalism well reproduce those calculated using the dipole moment-polarizability time correlation function. Furthermore, we applied this ssVVCF technique to compute SFG spectra from ab initio MD trajectories with various density functionals. We report that the SFG responses computed from both ab initio MD simulations and MD simulations with an ab initio based force field model do not show a positive feature in their imaginary component at 3100 cm⁻¹.
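A generic building block of such a formalism is a velocity time-correlation function computed efficiently with FFTs; the sketch below uses a synthetic trajectory and omits the surface-specific weighting and the mapping to the SFG response, so it only illustrates the correlation-function machinery, not the ssVVCF itself.

```python
# Hedged sketch: FFT-based velocity autocorrelation from a short trajectory,
# followed by a Fourier transform to a spectral lineshape. Data are synthetic.
import numpy as np

def autocorrelation(v):
    """Average single-sided autocorrelation of velocities, shape (n_frames, n_atoms)."""
    n = v.shape[0]
    f = np.fft.rfft(v, n=2 * n, axis=0)          # zero-pad to avoid circular wrap-around
    acf = np.fft.irfft(f * np.conj(f), axis=0)[:n]
    acf /= np.arange(n, 0, -1)[:, None]          # unbiased normalisation per lag
    return acf.mean(axis=1)

rng = np.random.default_rng(0)
velocities = rng.normal(size=(10000, 64))        # synthetic ~100 ps trajectory, 64 bonds
vvcf = autocorrelation(velocities)
spectrum = np.abs(np.fft.rfft(vvcf))             # correlation function -> lineshape
print(vvcf[:3], spectrum.argmax())
```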
Ocean-Atmosphere Coupled Model Simulations of Precipitation in the Central Andes
NASA Technical Reports Server (NTRS)
Nicholls, Stephen D.; Mohr, Karen I.
2015-01-01
The meridional extent and complex orography of the South American continent contribute to a wide diversity of climate regimes ranging from hyper-arid deserts to tropical rainforests to sub-polar highland regions. In addition, South American meteorology and climate are made further complicated by ENSO, a powerful coupled ocean-atmosphere phenomenon. Modelling studies in this region have typically resorted to either atmospheric mesoscale models or atmosphere-ocean coupled global climate models. The former offers full physics and high spatial resolution, but it is computationally inefficient and typically lacks an interactive ocean, whereas the latter offers high computational efficiency and ocean-atmosphere coupling, but lacks adequate spatial and temporal resolution to adequately resolve the complex orography and explicitly simulate precipitation. Explicit simulation of precipitation is vital in the Central Andes, where rainfall rates are light (0.5-5 mm hr⁻¹), there is strong seasonality, and most precipitation is associated with weak mesoscale-organized convection. Recent increases in both computational power and model development have led to the advent of coupled ocean-atmosphere mesoscale models for both weather and climate study applications. These modelling systems, while computationally expensive, include two-way ocean-atmosphere coupling, high resolution, and explicit simulation of precipitation. In this study, we use the Coupled Ocean-Atmosphere-Wave-Sediment Transport (COAWST) model, a fully coupled mesoscale atmosphere-ocean modeling system. Previous work has shown COAWST to reasonably simulate the entire 2003-2004 wet season (Dec-Feb), as validated against both satellite and model analysis data, when ECMWF interim analysis data were used for boundary conditions on a 27-9-km grid configuration (outer grid extent: 60.4S to 17.7N and 118.6W to 17.4W).
Coded-aperture Compton camera for gamma-ray imaging
NASA Astrophysics Data System (ADS)
Farber, Aaron M.
This dissertation describes the development of a novel gamma-ray imaging system concept and presents results from Monte Carlo simulations of the new design. Current designs for large field-of-view gamma cameras suitable for homeland security applications implement either a coded aperture or a Compton scattering geometry to image a gamma-ray source. Both of these systems require large, expensive position-sensitive detectors in order to work effectively. By combining characteristics of both of these systems, a new design can be implemented that does not require such expensive detectors and that can be scaled down to a portable size. This new system has significant promise in homeland security, astronomy, botany and other fields, while future iterations may prove useful in medical imaging, other biological sciences and other areas, such as non-destructive testing. A proof-of-principle study of the new gamma-ray imaging system has been performed by Monte Carlo simulation. Various reconstruction methods have been explored and compared. General-Purpose Graphics-Processor-Unit (GPGPU) computation has also been incorporated. The resulting code is a primary design tool for exploring variables such as detector spacing, material selection and thickness and pixel geometry. The advancement of the system from a simple 1-dimensional simulation to a full 3-dimensional model is described. Methods of image reconstruction are discussed and results of simulations consisting of both a 4 x 4 and a 16 x 16 object space mesh have been presented. A discussion of the limitations and potential areas of further study is also presented.
Petascale supercomputing to accelerate the design of high-temperature alloys
Shin, Dongwon; Lee, Sangkeun; Shyam, Amit; ...
2017-10-25
Recent progress in high-performance computing and data informatics has opened up numerous opportunities to aid the design of advanced materials. Herein, we demonstrate a computational workflow that includes rapid population of high-fidelity materials datasets via petascale computing and subsequent analyses with modern data science techniques. We use a first-principles approach based on density functional theory to derive the segregation energies of 34 microalloying elements at the coherent and semi-coherent interfaces between the aluminium matrix and the θ'-Al2Cu precipitate, which requires several hundred supercell calculations. We also perform extensive correlation analyses to identify materials descriptors that affect the segregation behaviour of solutes at the interfaces. Finally, we show an example of leveraging machine learning techniques to predict segregation energies without performing computationally expensive physics-based simulations. As a result, the approach demonstrated in the present work can be applied to any high-temperature alloy system for which key materials data can be obtained using high-performance computing.
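The final, machine-learning step described above can be sketched generically as regressing DFT-derived segregation energies on simple elemental descriptors so that further solutes can be screened without new supercell calculations. The descriptors, synthetic data, and model choice below are placeholders, not the published workflow.

```python
# Hedged sketch: regress segregation energies on elemental descriptors so new
# solutes can be screened without additional DFT supercell calculations.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Placeholder descriptor columns: atomic radius, electronegativity, cohesive
# energy, valence electron count (values are synthetic).
X = rng.normal(size=(34, 4))                      # 34 microalloying elements
E_seg = X @ np.array([0.8, -0.5, 0.3, 0.1]) + 0.1 * rng.normal(size=34)

model = RandomForestRegressor(n_estimators=200, random_state=0)
scores = cross_val_score(model, X, E_seg, cv=5, scoring="r2")
print(f"cross-validated R^2: {scores.mean():.2f}")
```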
Analysis of a Multi-Fidelity Surrogate for Handling Real Gas Equations of State
NASA Astrophysics Data System (ADS)
Ouellet, Frederick; Park, Chanyoung; Rollin, Bertrand; Balachandar, S.
2017-06-01
The explosive dispersal of particles is a complex multiphase and multi-species fluid flow problem. In these flows, the detonation products of the explosive must be treated as real gas while the ideal gas equation of state is used for the surrounding air. As the products expand outward from the detonation point, they mix with ambient air and create a mixing region where both state equations must be satisfied. One of the most accurate, yet computationally expensive, methods to handle this problem is an algorithm that iterates between both equations of state until pressure and thermal equilibrium are achieved inside of each computational cell. This work aims to use a multi-fidelity surrogate model to replace this process. A Kriging model is used to produce a curve fit which interpolates selected data from the iterative algorithm using Bayesian statistics. We study the model performance with respect to the iterative method in simulations using a finite volume code. The model's (i) computational speed, (ii) memory requirements and (iii) computational accuracy are analyzed to show the benefits of this novel approach. Also, optimizing the combination of model accuracy and computational speed through the choice of sampling points is explained. This work was supported by the U.S. Department of Energy, National Nuclear Security Administration, Advanced Simulation and Computing Program as a Cooperative Agreement under the Predictive Science Academic Alliance Program under Contract No. DE-NA0002378.
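As an illustration of the surrogate idea, a Kriging (Gaussian-process) model can be fit to samples of the iterative two-equation-of-state equilibrium solver and then queried in its place inside each cell. The equilibrium routine, its inputs, and the kernel below are placeholders, not the actual real-gas iteration or sampling plan used in this work.

```python
# Hedged sketch: Kriging surrogate of an expensive per-cell equilibrium solve.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def equilibrium_pressure(rho, e, y_products):
    """Stand-in for the iterative two-EOS equilibrium (analytic placeholder)."""
    return rho * e * (0.4 + 0.1 * y_products)

rng = np.random.default_rng(0)
X = np.column_stack([rng.uniform(0.1, 10.0, 300),     # mixture density
                     rng.uniform(1e5, 1e7, 300),      # internal energy
                     rng.uniform(0.0, 1.0, 300)])     # detonation-product mass fraction
y = equilibrium_pressure(*X.T)

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=[1.0, 1e6, 0.3]),
                              normalize_y=True)
gp.fit(X, y)

p_pred, p_std = gp.predict(X[:5], return_std=True)   # surrogate call inside the CFD loop
print(p_pred, p_std)
```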
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gur, Sourav; Frantziskonis, George N.; Univ. of Arizona, Tucson, AZ
Here, we report results from a numerical study of multi-time-scale bistable dynamics for CO oxidation on a catalytic surface in a flowing, well-mixed gas stream. The problem is posed in terms of surface and gas-phase submodels that dynamically interact in the presence of stochastic perturbations, reflecting the impact of molecular-scale fluctuations on the surface and turbulence in the gas. Wavelet-based methods are used to encode and characterize the temporal dynamics produced by each submodel and detect the onset of sudden state shifts (bifurcations) caused by nonlinear kinetics. When impending state shifts are detected, a more accurate but computationally expensive integration scheme can be used. This appears to make it possible, at least in some cases, to decrease the net computational burden associated with simulating multi-time-scale, nonlinear reacting systems by limiting the amount of time in which the more expensive integration schemes are required. Critical to achieving this is being able to detect unstable temporal transitions such as the bistable shifts in the example problem considered here. Lastly, our results indicate that a unique wavelet-based algorithm based on the Lipschitz exponent is capable of making such detections, even under noisy conditions, and may find applications in critical transition detection problems beyond catalysis.
NASA Technical Reports Server (NTRS)
1994-01-01
A NASA contract led to the development of faster and more energy efficient semiconductor materials for digital integrated circuits. Gallium arsenide (GaAs) conducts electrons 4-6 times faster than silicon and uses less power at frequencies above 100-150 megahertz. However, the material is expensive, brittle, and fragile, and had lacked the computer-automated engineering tools needed to address these drawbacks. Systems & Processes Engineering Corporation (SPEC) developed a series of GaAs cell libraries for cell layout, design rule checking, logic synthesis, placement and routing, simulation and chip assembly. The system is marketed by Compare Design Automation.
Interactive physically-based sound simulation
NASA Astrophysics Data System (ADS)
Raghuvanshi, Nikunj
The realization of interactive, immersive virtual worlds requires the ability to present a realistic audio experience that convincingly complements their visual rendering. Physical simulation is a natural way to achieve such realism, enabling deeply immersive virtual worlds. However, physically-based sound simulation is very computationally expensive owing to the high-frequency, transient oscillations underlying audible sounds. The increasing computational power of desktop computers has served to reduce the gap between required and available computation, and it has become possible to bridge this gap further by using a combination of algorithmic improvements that exploit the physical as well as perceptual properties of audible sounds. My thesis is a step in this direction. My dissertation concentrates on developing real-time techniques for both sub-problems of sound simulation: synthesis and propagation. Sound synthesis is concerned with generating the sounds produced by objects due to elastic surface vibrations upon interaction with the environment, such as collisions. I present novel techniques that exploit human auditory perception to simulate scenes with hundreds of sounding objects undergoing impact and rolling in real time. Sound propagation is the complementary problem of modeling the high-order scattering and diffraction of sound in an environment as it travels from source to listener. I discuss my work on a novel numerical acoustic simulator (ARD) that is a hundred times faster and consumes ten times less memory than a high-accuracy finite-difference technique, allowing acoustic simulations on previously intractable spaces, such as a cathedral, on a desktop computer. Lastly, I present my work on interactive sound propagation that leverages my ARD simulator to render the acoustics of arbitrary static scenes for multiple moving sources and a moving listener in real time, while accounting for scene-dependent effects such as low-pass filtering and smooth attenuation behind obstructions, reverberation, scattering from complex geometry and sound focusing. This is enabled by a novel compact representation that takes a thousand times less memory than a direct scheme, thus reducing memory footprints to fit within available main memory. To the best of my knowledge, this is the only technique and system in existence to demonstrate auralization of physical wave-based effects in real-time on large, complex 3D scenes.
NASA Astrophysics Data System (ADS)
Bonnet, M.; Collino, F.; Demaldent, E.; Imperiale, A.; Pesudo, L.
2018-05-01
Ultrasonic Non-Destructive Testing (US NDT) has become widely used in many fields of application to probe media. By exploiting surface measurements of the echoes of ultrasonic incident waves after their propagation through the medium, it enables the detection of potential defects (cracks and inhomogeneities) and characterization of the medium. The understanding and interpretation of these experimental measurements is supported by numerical modeling and simulation. However, classical numerical methods can become computationally very expensive for the simulation of wave propagation in the high-frequency regime. On the other hand, asymptotic techniques are better suited to modeling high-frequency scattering over large distances but do not allow accurate simulation of complex diffraction phenomena. Thus, neither numerical nor asymptotic methods can individually solve high-frequency diffraction problems in large media, such as those involved in US NDT inspections, both quickly and accurately, but their advantages and limitations are complementary. Here we propose a hybrid strategy coupling the surface integral equation method and the ray tracing method to simulate high-frequency diffraction under speed and accuracy constraints. This strategy is general and applicable to simulating diffraction phenomena in acoustic or elastodynamic media. We describe its implementation and investigate its performance for the 2D acoustic diffraction problem. The main features of this hybrid method are described and results of 2D computational experiments are discussed.
Next Generation Extended Lagrangian Quantum-based Molecular Dynamics
NASA Astrophysics Data System (ADS)
Negre, Christian
2017-06-01
A new framework for extended Lagrangian first-principles molecular dynamics simulations is presented, which overcomes shortcomings of regular, direct Born-Oppenheimer molecular dynamics while maintaining important advantages of the unified extended Lagrangian formulation of density functional theory pioneered by Car and Parrinello three decades ago. The new framework allows, for the first time, energy-conserving, linear-scaling Born-Oppenheimer molecular dynamics simulations, which are necessary to study larger and more realistic systems over longer simulation times than previously possible. Expensive self-consistent-field optimizations are avoided, and the normal integration time steps of regular, direct Born-Oppenheimer molecular dynamics can be used. Linear-scaling electronic structure theory is presented using a graph-based approach that is ideal for parallel calculations on hybrid computer platforms. For the first time, quantum-based Born-Oppenheimer molecular dynamics simulation is becoming a practically feasible approach for simulations of more than 100,000 atoms, representing a competitive alternative to classical polarizable force field methods. In collaboration with: Anders Niklasson, Los Alamos National Laboratory.
24 CFR 990.165 - Computation of project expense level (PEL).
Code of Federal Regulations, 2010 CFR
2010-04-01
... 24 Housing and Urban Development 4 2010-04-01 2010-04-01 false Computation of project expense level (PEL). 990.165 Section 990.165 Housing and Urban Development Regulations Relating to Housing and Urban Development (Continued) OFFICE OF ASSISTANT SECRETARY FOR PUBLIC AND INDIAN HOUSING, DEPARTMENT OF...
Solvers for the Cardiac Bidomain Equations
Vigmond, E.J.; Weber dos Santos, R.; Prassl, A.J.; Deo, M.; Plank, G.
2010-01-01
The bidomain equations are widely used for the simulation of electrical activity in cardiac tissue. They are especially important for accurately modelling extracellular stimulation, as evidenced by their prediction of virtual electrode polarization before experimental verification. However, solution of the equations is computationally expensive due to the fine spatial and temporal discretization needed. This limits the size and duration of the problem which can be modeled. Regardless of the specific form into which they are cast, the computational bottleneck becomes the repeated solution of a large, linear system. The purpose of this review is to give an overview of the equations, and the methods by which they have been solved. Of particular note are recent developments in multigrid methods, which have proven to be the most efficient. PMID:17900668
Efficient approach to obtain free energy gradient using QM/MM MD simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Asada, Toshio; Koseki, Shiro; The Research Institute for Molecular Electronic Devices
2015-12-31
An efficient computational approach, denoted the charge and atom dipole response kernel (CDRK) model, is described for treating polarization effects of the quantum mechanical (QM) region using the charge response and atom dipole response kernels in free energy gradient (FEG) calculations within the quantum mechanical/molecular mechanical (QM/MM) method. The CDRK model can reasonably reproduce the energies and energy gradients of QM and MM atoms obtained by expensive QM/MM calculations in a drastically reduced computational time. The model is applied to the acylation reaction in a hydrated trypsin-BPTI complex to optimize the reaction path on the free energy surface by means of FEG and the nudged elastic band (NEB) method.
Fluid Simulation in the Movies: Navier and Stokes Must Be Circulating in Their Graves
NASA Astrophysics Data System (ADS)
Tessendorf, Jerry
2010-11-01
Fluid simulations based on the Incompressible Navier-Stokes equations are commonplace computer graphics tools in the visual effects industry. These simulations mostly come from custom C++ code written by the visual effects companies. Their significant impact in films was recognized in 2008 with Academy Awards to four visual effects companies for their technical achievement. However artists are not fluid dynamicists, and fluid dynamics simulations are expensive to use in a deadline-driven production environment. As a result, the simulation algorithms are modified to limit the computational resources, adapt them to production workflow, and to respect the client's vision of the film plot. Eulerian solvers on fixed rectangular grids use a mix of momentum solvers, including Semi-Lagrangian, FLIP, and QUICK. Incompressibility is enforced with FFT, Conjugate Gradient, and Multigrid methods. For liquids, a levelset field tracks the free surface. Smooth Particle Hydrodynamics is also used, and is part of a hybrid Eulerian-SPH liquid simulator. Artists use all of them in a mix and match fashion to control the appearance of the simulation. Specially designed forces and boundary conditions control the flow. The simulation can be an input to artistically driven procedural particle simulations that enhance the flow with more detail and drama. Post-simulation processing increases the visual detail beyond the grid resolution. Ultimately, iterative simulation methods that fit naturally in the production workflow are extremely desirable but not yet successful. Results from some efforts for iterative methods are shown, and other approaches motivated by the history of production are proposed.
A Percolation Model for Fracking
NASA Astrophysics Data System (ADS)
Norris, J. Q.; Turcotte, D. L.; Rundle, J. B.
2014-12-01
Developments in fracking technology have enabled the recovery of vast reserves of oil and gas; yet, there is very little publicly available scientific research on fracking. Traditional reservoir simulator models for fracking are computationally expensive and require many hours on a supercomputer to simulate a single fracking treatment. We have developed a computationally inexpensive percolation model for fracking that can be used to understand the processes and risks associated with fracking. In our model, a fluid is injected from a single site and a network of fractures grows from that site. The fracture network grows in bursts: the failure of a relatively strong bond followed by the failure of a series of relatively weak bonds. These bursts display similarities to microseismic events observed during a fracking treatment. The bursts follow a power-law (Gutenberg-Richter) frequency-size distribution and have growth rates similar to observed earthquake moment rates. These are quantifiable features that can be compared to observed microseismicity to help understand the relationship between observed microseismicity and the underlying fracture network.
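A toy invasion-percolation sketch in Python, written here only to illustrate the kind of growth rule the abstract describes (the weakest available bond fails first, starting from a single injection site); the lattice size, strength distribution, and burst bookkeeping are assumptions, not the authors' model.

# Toy invasion-percolation growth from a single injection site (illustrative).
import heapq
import random

N = 50                      # lattice size (N x N sites)
random.seed(0)
seed = (N // 2, N // 2)     # injection site
invaded = {seed}
frontier = []               # heap of (bond_strength, site)

def push_neighbors(site):
    i, j = site
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nb = (i + di, j + dj)
        if 0 <= nb[0] < N and 0 <= nb[1] < N and nb not in invaded:
            heapq.heappush(frontier, (random.random(), nb))

push_neighbors(seed)
burst_sizes, burst, trigger = [], 0, None
for _ in range(400):        # attempt up to 400 invasions
    strength, site = heapq.heappop(frontier)
    if site in invaded:
        continue
    invaded.add(site)
    push_neighbors(site)
    # A burst starts when a bond stronger than the current trigger fails,
    # and continues while subsequent failing bonds are weaker than it.
    if trigger is None or strength > trigger:
        if burst:
            burst_sizes.append(burst)
        trigger, burst = strength, 1
    else:
        burst += 1
print(len(invaded), burst_sizes[:10])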
An adaptive tau-leaping method for stochastic simulations of reaction-diffusion systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Padgett, Jill M. A.; Ilie, Silvana, E-mail: silvana@ryerson.ca
2016-03-15
Stochastic modelling is critical for studying many biochemical processes in a cell, in particular when some reacting species have low population numbers. For many such cellular processes the spatial distribution of the molecular species plays a key role. The evolution of spatially heterogeneous biochemical systems with some species in low amounts is accurately described by the mesoscopic model of the Reaction-Diffusion Master Equation. The Inhomogeneous Stochastic Simulation Algorithm provides an exact strategy to numerically solve this model, but it is computationally very expensive on realistic applications. We propose a novel adaptive time-stepping scheme for the tau-leaping method for approximating the solution of the Reaction-Diffusion Master Equation. This technique combines effective strategies for variable time-stepping with path preservation to reduce the computational cost, while maintaining the desired accuracy. The numerical tests on various examples arising in applications show the improved efficiency achieved by the new adaptive method.
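The core tau-leaping update that the adaptive scheme above builds on can be sketched in a few lines of Python; the reaction network (a reversible dimerization) and the fixed leap size are illustrative assumptions, and none of this reproduces the authors' adaptive, spatially resolved algorithm.

# Basic (non-adaptive) tau-leaping step for a toy reaction network (illustrative).
import numpy as np

rng = np.random.default_rng(1)
# Species: [monomer, dimer]; reactions: 2M -> D (rate c1), D -> 2M (rate c2)
x = np.array([1000, 0])
stoich = np.array([[-2, +1],    # state change of reaction 1
                   [+2, -1]])   # state change of reaction 2
c1, c2, tau = 0.001, 0.1, 0.01

def propensities(x):
    return np.array([c1 * x[0] * (x[0] - 1) / 2.0, c2 * x[1]])

for step in range(100):
    a = propensities(x)
    k = rng.poisson(a * tau)            # number of firings of each reaction
    x = np.maximum(x + stoich.T @ k, 0) # update state, clamp at zero
print(x)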
Efficient Redundancy Techniques in Cloud and Desktop Grid Systems using MAP/G/c-type Queues
NASA Astrophysics Data System (ADS)
Chakravarthy, Srinivas R.; Rumyantsev, Alexander
2018-03-01
Cloud computing is continuing to prove its flexibility and versatility in helping industries and businesses as well as academia as a way of providing needed computing capacity. As an important alternative to cloud computing, desktop grids allow the idle computer resources of an enterprise or community to be utilized by means of a distributed computing system, providing a more secure and controllable environment with lower operational expenses. Further, both cloud computing and desktop grids are meant to optimize limited resources and at the same time to decrease the expected latency for users. The crucial parameter for optimization, both in cloud computing and in desktop grids, is the level of redundancy (replication) for service requests/workunits. In this paper we study optimal replication policies by considering three variations of Fork-Join systems in the context of a multi-server queueing system with a versatile point process for the arrivals. For the services we consider phase-type distributions as well as shifted exponential and Weibull distributions. We use both analytical and simulation approaches in our analysis and report some interesting qualitative results.
NASA Astrophysics Data System (ADS)
Lundquist, K. A.; Jensen, D. D.; Lucas, D. D.
2017-12-01
Atmospheric source reconstruction allows for the probabilistic estimation of the source characteristics of an atmospheric release using observations of the release. Performance of the inversion depends partially on the temporal frequency and spatial scale of the observations. The objective of this study is to quantify the sensitivity of the source reconstruction method to sparse spatial and temporal observations. To this end, simulations of atmospheric transport of noble gases are created for the 2006 nuclear test at the Punggye-ri nuclear test site. Synthetic observations are collected from the simulation and are taken as "ground truth". Data denial techniques are used to progressively coarsen the temporal and spatial resolution of the synthetic observations, while the source reconstruction model seeks to recover the true input parameters from the synthetic observations. Reconstructed parameters considered here are source location, source timing and source quantity. Reconstruction is achieved by running an ensemble of thousands of dispersion model runs that sample from a uniform distribution of the input parameters. Machine learning is used to train a computationally efficient surrogate model from the ensemble simulations. Monte Carlo sampling and Bayesian inversion are then used in conjunction with the surrogate model to quantify the posterior probability density functions of the source input parameters. This research seeks to inform decision makers of the tradeoffs between more expensive, high-frequency observations and less expensive, low-frequency observations.
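A compact Python sketch of the general surrogate-plus-Bayesian-inversion workflow the abstract outlines; the synthetic forward model, the random-forest surrogate, the Gaussian likelihood, and all parameter ranges are assumptions made for illustration, not the study's configuration.

# Surrogate-assisted Bayesian source inversion, heavily simplified (illustrative).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

def dispersion_model(theta):
    # Placeholder forward model: maps (source x, y, strength) to 5 "sensor" readings.
    x, y, q = theta
    sensors = np.linspace(0.0, 1.0, 5)
    return q * np.exp(-((sensors - x) ** 2 + y ** 2) / 0.1)

# 1) Build an ensemble of forward runs sampled uniformly over the prior box.
prior_lo, prior_hi = np.array([0.0, 0.0, 0.5]), np.array([1.0, 1.0, 2.0])
thetas = rng.uniform(prior_lo, prior_hi, size=(2000, 3))
obs_ensemble = np.array([dispersion_model(t) for t in thetas])

# 2) Train a cheap surrogate that replaces the dispersion model.
surrogate = RandomForestRegressor(n_estimators=100, random_state=0)
surrogate.fit(thetas, obs_ensemble)

# 3) Metropolis sampling of the posterior using the surrogate as the forward model.
truth = np.array([0.6, 0.3, 1.2])
data = dispersion_model(truth) + rng.normal(0, 0.02, 5)   # synthetic observations

def log_post(theta):
    if np.any(theta < prior_lo) or np.any(theta > prior_hi):
        return -np.inf
    pred = surrogate.predict(theta.reshape(1, -1))[0]
    return -0.5 * np.sum((pred - data) ** 2) / 0.02 ** 2

theta = prior_lo + 0.5 * (prior_hi - prior_lo)
lp, samples = log_post(theta), []
for _ in range(5000):
    prop = theta + rng.normal(0, 0.05, 3)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta.copy())
print(np.mean(samples[2000:], axis=0))   # posterior mean vs. truth [0.6, 0.3, 1.2]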
Integrating TITAN2D Geophysical Mass Flow Model with GIS
NASA Astrophysics Data System (ADS)
Namikawa, L. M.; Renschler, C.
2005-12-01
TITAN2D simulates geophysical mass flows over natural terrain using depth-averaged granular flow models and requires spatially distributed parameter values to solve its differential equations. Since a Geographical Information System's (GIS) main task is the integration and manipulation of data covering a geographic region, using a GIS to implement simulations of complex, physically-based models such as TITAN2D seems a natural choice. However, simulation of geophysical flows requires computationally intensive operations that need specialized optimizations, such as adaptive grids and parallel processing. Thus a GIS developed for general use cannot provide an effective environment for complex simulations, and the solution is to develop a linkage between the GIS and the simulation model. The present work presents the solution used for TITAN2D, where the data structure of a GIS is accessed by the simulation code through an Application Program Interface (API). GRASS is an open-source GIS with published data formats, so the GRASS data structure was selected. TITAN2D requires elevation, slope, curvature, and base material information to be computed at every cell. Simulation results are visualized by a system developed to handle the large amount of output data and to support a realistic, dynamic 3-D display of flow dynamics, which requires elevation and texture, usually from a remote sensing image. Data required by the simulation are in raster format on regular rectangular grids. The GRASS format for regular grids is based on a data file (a binary file storing data either uncompressed or compressed by grid row), a header file (a text file with information about georeferencing, data extents, and grid cell resolution), and support files (text files with information about the color table and category names). The implemented API provides access to the original data (elevation, base material, and texture from imagery) and to slope and curvature derived from the elevation data. Of several existing methods to estimate slope and curvature from elevation, the selected one is a third-order finite difference method, which has been shown to perform better than, or with minimal difference from, more computationally expensive methods. Derivatives are estimated using a weighted sum of the 8 grid-neighbor values. The method was implemented, and simulation results were compared to derivatives estimated by a simplified version of the method (using only 4 neighbor cells); the full method proved to perform better. TITAN2D uses an adaptive mesh, where resolution (grid cell size) is not constant, and the visualization tools also use textures with varying resolutions for efficient display. The API supports different resolutions by applying bilinear interpolation when elevation, slope and curvature are required at a resolution higher (smaller cell size) than the original, and by using a nearest-cell approach for elevations at a lower (larger cell size) resolution than the original. For material information the nearest-neighbor method is used, since interpolation of categorical data has no meaning. The low-fidelity character of visualization allows use of the nearest-neighbor method for texture. Bilinear interpolation estimates the value at a point as the distance-weighted average of the values at the closest four cell centers, and its interpolation performance is only slightly inferior to more computationally expensive methods such as bicubic interpolation and kriging.
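To make the two numerical ingredients named above concrete, here is a small Python sketch of (a) a weighted 8-neighbor slope estimate of the third-order finite difference type and (b) bilinear interpolation as a distance-weighted average of the four surrounding cell centers. The weights and grid layout are the standard textbook forms and are only assumed to match what the TITAN2D API uses; the synthetic plane and cell size are invented for illustration.

# Slope and bilinear interpolation on a regular elevation grid (illustrative sketch).
import numpy as np

def slope_8neighbor(z, i, j, cell):
    # Weighted-sum (third-order finite difference) estimate using the 8 neighbors.
    dzdx = ((z[i-1, j+1] + 2*z[i, j+1] + z[i+1, j+1])
            - (z[i-1, j-1] + 2*z[i, j-1] + z[i+1, j-1])) / (8.0 * cell)
    dzdy = ((z[i+1, j-1] + 2*z[i+1, j] + z[i+1, j+1])
            - (z[i-1, j-1] + 2*z[i-1, j] + z[i-1, j+1])) / (8.0 * cell)
    return dzdx, dzdy

def bilinear(z, x, y, cell):
    # Value at (x, y) as the distance-weighted average of the 4 nearest cell centers.
    col, row = x / cell, y / cell
    j0, i0 = int(np.floor(col)), int(np.floor(row))
    tx, ty = col - j0, row - i0
    return ((1-tx)*(1-ty)*z[i0, j0]   + tx*(1-ty)*z[i0, j0+1]
            + (1-tx)*ty*z[i0+1, j0]   + tx*ty*z[i0+1, j0+1])

# Synthetic tilted plane: z = 100 + 0.5*row - 0.2*col, on a 30 m grid.
z = np.fromfunction(lambda i, j: 100.0 + 0.5*i - 0.2*j, (10, 10))
print(slope_8neighbor(z, 5, 5, cell=30.0))  # expect about (-0.2/30, 0.5/30)
print(bilinear(z, 45.0, 75.0, cell=30.0))   # expect 100.95 on this plane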
48 CFR 9905.506-60 - Illustrations.
Code of Federal Regulations, 2013 CFR
2013-10-01
..., installs a computer service center to begin operations on May 1. The operating expense related to the new... operating expenses of the computer service center for the 8-month part of the cost accounting period may be... 48 Federal Acquisition Regulations System 7 2013-10-01 2012-10-01 true Illustrations. 9905.506-60...
A microeconomic scheduler for parallel computers
NASA Technical Reports Server (NTRS)
Stoica, Ion; Abdel-Wahab, Hussein; Pothen, Alex
1995-01-01
We describe a scheduler based on the microeconomic paradigm for scheduling on-line a set of parallel jobs in a multiprocessor system. In addition to the classical objectives of increasing the system throughput and reducing the response time, we consider fairness in allocating system resources among the users, and providing the user with control over the relative performances of his jobs. We associate with every user a savings account in which he receives money at a constant rate. When a user wants to run a job, he creates an expense account for that job to which he transfers money from his savings account. The job uses the funds in its expense account to obtain the system resources it needs for execution. The share of the system resources allocated to the user is directly related to the rate at which the user receives money; the rate at which the user transfers money into a job expense account controls the job's performance. We prove that starvation is not possible in our model. Simulation results show that our scheduler improves both system and user performances in comparison with two different variable partitioning policies. It is also shown to be effective in guaranteeing fairness and providing control over the performance of jobs.
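A very small Python sketch of the funding mechanics described above (savings accounts filling at a constant rate, jobs drawing on expense accounts, and resource shares proportional to spending); the rates, time step, and two-user setup are invented for illustration and do not reproduce the paper's scheduler.

# Toy microeconomic scheduling loop (illustrative; not the paper's scheduler).
users = {
    "alice": {"savings": 0.0, "income_rate": 2.0},   # money units per tick
    "bob":   {"savings": 0.0, "income_rate": 1.0},
}
jobs = [
    {"owner": "alice", "expense": 0.0, "spend_rate": 2.0, "work_left": 50.0},
    {"owner": "bob",   "expense": 0.0, "spend_rate": 1.0, "work_left": 50.0},
]
TOTAL_CPU = 10.0

for tick in range(200):
    # 1) Users receive income; each job is funded from its owner's savings.
    for u in users.values():
        u["savings"] += u["income_rate"]
    for job in jobs:
        owner = users[job["owner"]]
        transfer = min(job["spend_rate"], owner["savings"])
        owner["savings"] -= transfer
        job["expense"] += transfer
    # 2) CPU share is proportional to what each active job spends this tick.
    active = [j for j in jobs if j["work_left"] > 0 and j["expense"] > 0]
    total_spend = sum(j["expense"] for j in active)
    for job in active:
        share = TOTAL_CPU * job["expense"] / total_spend
        job["work_left"] -= share
        job["expense"] = 0.0          # funds consumed buying this tick's CPU
    if all(j["work_left"] <= 0 for j in jobs):
        print("all jobs done at tick", tick)
        break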
Faller, Christina E; Raman, E Prabhu; MacKerell, Alexander D; Guvench, Olgun
2015-01-01
Fragment-based drug design (FBDD) involves screening low molecular weight molecules ("fragments") that correspond to functional groups found in larger drug-like molecules to determine their binding to target proteins or nucleic acids. Based on the principle of thermodynamic additivity, two fragments that bind nonoverlapping nearby sites on the target can be combined to yield a new molecule whose binding free energy is the sum of those of the fragments. Experimental FBDD approaches, like NMR and X-ray crystallography, have proven very useful but can be expensive in terms of time, materials, and labor. Accordingly, a variety of computational FBDD approaches have been developed that provide different levels of detail and accuracy.The Site Identification by Ligand Competitive Saturation (SILCS) method of computational FBDD uses all-atom explicit-solvent molecular dynamics (MD) simulations to identify fragment binding. The target is "soaked" in an aqueous solution with multiple fragments having different identities. The resulting computational competition assay reveals what small molecule types are most likely to bind which regions of the target. From SILCS simulations, 3D probability maps of fragment binding called "FragMaps" can be produced. Based on the probabilities relative to bulk, SILCS FragMaps can be used to determine "Grid Free Energies (GFEs)," which provide per-atom contributions to fragment binding affinities. For essentially no additional computational overhead relative to the production of the FragMaps, GFEs can be used to compute Ligand Grid Free Energies (LGFEs) for arbitrarily complex molecules, and these LGFEs can be used to rank-order the molecules in accordance with binding affinities.
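To illustrate the Boltzmann-style conversion from FragMap occupancy probabilities to grid free energies and their summation into an LGFE score, here is a short Python sketch; the grid values, the bulk probability, and the per-atom voxel lookup are fabricated for illustration, and the relation GFE = -kT ln(P/P_bulk) is the standard Boltzmann inversion assumed to underlie the description above.

# FragMap -> GFE -> LGFE toy calculation (illustrative numbers only).
import numpy as np

kT = 0.593  # kcal/mol at ~298 K

# Fabricated 3D occupancy probabilities for one fragment type on a small grid.
rng = np.random.default_rng(3)
frag_prob = rng.uniform(0.2, 3.0, size=(8, 8, 8))   # relative to bulk
bulk_prob = 1.0

# Grid free energy: favorable (negative) where the fragment is enriched over bulk.
gfe = -kT * np.log(frag_prob / bulk_prob)

def lgfe(atom_voxels):
    # Ligand grid free energy: sum of GFE values at the voxels occupied by
    # the ligand's atoms (each atom assigned to the FragMap of its type).
    return sum(gfe[i, j, k] for (i, j, k) in atom_voxels)

ligand_atoms = [(1, 2, 3), (1, 3, 3), (2, 3, 4)]     # voxel indices of 3 atoms
print("LGFE =", lgfe(ligand_atoms), "kcal/mol")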
Sorensen, Mads Solvsten; Mosegaard, Jesper; Trier, Peter
2009-06-01
Existing virtual simulators for middle ear surgery are based on 3-dimensional (3D) models from computed tomographic or magnetic resonance imaging data in which image quality is limited by the lack of detail (maximum, approximately 50 voxels/mm3), natural color, and texture of the source material.Virtual training often requires the purchase of a program, a customized computer, and expensive peripherals dedicated exclusively to this purpose. The Visible Ear freeware library of digital images from a fresh-frozen human temporal bone was segmented, and real-time volume rendered as a 3D model of high-fidelity, true color, and great anatomic detail and realism of the surgically relevant structures. A haptic drilling model was developed for surgical interaction with the 3D model. Realistic visualization in high-fidelity (approximately 125 voxels/mm3) and true color, 2D, or optional anaglyph stereoscopic 3D was achieved on a standard Core 2 Duo personal computer with a GeForce 8,800 GTX graphics card, and surgical interaction was provided through a relatively inexpensive (approximately $2,500) Phantom Omni haptic 3D pointing device. This prototype is published for download (approximately 120 MB) as freeware at http://www.alexandra.dk/ves/index.htm.With increasing personal computer performance, future versions may include enhanced resolution (up to 8,000 voxels/mm3) and realistic interaction with deformable soft tissue components such as skin, tympanic membrane, dura, and cholesteatomas-features some of which are not possible with computed tomographic-/magnetic resonance imaging-based systems.
NASA Astrophysics Data System (ADS)
Escobar Gómez, J. D.; Torres-Verdín, C.
2018-03-01
Single-well pressure-diffusion simulators enable improved quantitative understanding of hydraulic-testing measurements in the presence of arbitrary spatial variations of rock properties. Simulators of this type implement robust numerical algorithms which are often computationally expensive, thereby making the solution of the forward modeling problem onerous and inefficient. We introduce a time-domain perturbation theory for anisotropic permeable media to efficiently and accurately approximate the transient pressure response of spatially complex aquifers. Although theoretically valid for any spatially dependent rock/fluid property, our single-phase flow study emphasizes arbitrary spatial variations of permeability and anisotropy, which constitute key objectives of hydraulic-testing operations. Contrary to time-honored techniques, the perturbation method invokes pressure-flow deconvolution to compute the background medium's permeability sensitivity function (PSF) with a single numerical simulation run. Subsequently, the first-order term of the perturbed solution is obtained by solving an integral equation that weighs the spatial variations of permeability with the spatial-dependent and time-dependent PSF. Finally, discrete convolution transforms the constant-flow approximation to arbitrary multirate conditions. Multidimensional numerical simulation studies for a wide range of single-well field conditions indicate that perturbed solutions can be computed in less than a few CPU seconds with relative errors in pressure of <5%, corresponding to perturbations in background permeability of up to two orders of magnitude. Our work confirms that the proposed joint perturbation-convolution (JPC) method is an efficient alternative to analytical and numerical solutions for accurate modeling of pressure-diffusion phenomena induced by Neumann or Dirichlet boundary conditions.
Chen, Po-Chia; Hologne, Maggy; Walker, Olivier
2017-03-02
Rotational diffusion (D_rot) is a fundamental property of biomolecules that contains information about molecular dimensions and solute-solvent interactions. While ab initio D_rot prediction can be achieved by explicit all-atom molecular dynamics simulations, this is hindered by both computational expense and limitations in water models. We propose coarse-grained force fields as a complementary solution, and show that the MARTINI force field with elastic networks is sufficient to compute D_rot in >10 proteins spanning 5-157 kDa. We also adopt a quaternion-based approach that computes the D_rot orientation directly from autocorrelations of best-fit rotations as used in, e.g., RMSD algorithms. Over 2 μs trajectories, isotropic MARTINI+EN tumbling replicates experimental values to within 10-20%, with convergence analyses suggesting a minimum sampling of >50 × τ_theor to achieve sufficient precision. Transient fluctuations in anisotropic tumbling cause decreased precision in predictions of axisymmetric anisotropy and rhombicity, the latter of which cannot be precisely evaluated within 2000 × τ_theor for GB3. Thus, we encourage reporting of the axial decompositions D_x, D_y, D_z to ease comparability between experiment and simulation. Where protein disorder is absent, we observe close replication of MARTINI+EN D_rot orientations versus CHARMM22*/TIP3P and experimental data. This work anticipates the ab initio prediction of NMR relaxation by combining coarse-grained global motions with all-atom local motions.
26 CFR 1.50B-1 - Definitions of WIN expenses and WIN employees.
Code of Federal Regulations, 2010 CFR
2010-04-01
... employee. (c) Trade or business expenses. The term “WIN expenses” includes only salaries and wages which... 26 Internal Revenue 1 2010-04-01 2010-04-01 true Definitions of WIN expenses and WIN employees. 1... INCOME TAXES Rules for Computing Credit for Expenses of Work Incentive Programs § 1.50B-1 Definitions of...
47 CFR 32.6112 - Motor vehicle expense.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 47 Telecommunication 2 2010-10-01 2010-10-01 false Motor vehicle expense. 32.6112 Section 32.6112 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES UNIFORM SYSTEM OF ACCOUNTS.../or to other Plant Specific Operations Expense accounts. These amounts shall be computed on the basis...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garaud, Pascale; Brummell, Nicholas
2015-12-10
Fingering convection (otherwise known as thermohaline convection) is an instability that occurs in stellar radiative interiors in the presence of unstable compositional gradients. Numerical simulations have been used in order to estimate the efficiency of mixing induced by this instability. However, fully three-dimensional (3D) computations in the parameter regime appropriate for stellar astrophysics (i.e., low Prandtl number) are prohibitively expensive. This raises the question of whether two-dimensional (2D) simulations could be used instead to achieve the same goals. In this work, we address this issue by comparing the outcome of 2D and 3D simulations of fingering convection at low Prandtl number. We find that 2D simulations are never appropriate. However, we also find that the required 3D computational domain does not have to be very wide: the third dimension only needs to contain a minimum of two wavelengths of the fastest-growing linearly unstable mode to capture the essentially 3D dynamics of small-scale fingering. Narrow domains, however, should still be used with caution since they could limit the subsequent development of any large-scale dynamics typically associated with fingering convection.
Challenges in the development of very high resolution Earth System Models for climate science
NASA Astrophysics Data System (ADS)
Rasch, Philip J.; Xie, Shaocheng; Ma, Po-Lun; Lin, Wuyin; Wan, Hui; Qian, Yun
2017-04-01
The authors represent the 20+ members of the ACME atmosphere development team. The US Department of Energy (DOE) has, like many other organizations around the world, identified the need for an Earth System Model capable of rapid completion of decade- to century-length simulations at very high (vertical and horizontal) resolution with good climate fidelity. Two years ago DOE initiated a multi-institution effort called ACME (Accelerated Climate Modeling for Energy) to meet this extraordinary challenge, targeting a model eventually capable of running at 10-25 km horizontal and 20-400 m vertical resolution through the troposphere on exascale computational platforms at speeds sufficient to complete 5+ simulated years per day. I will outline the challenges our team has encountered in the development of the atmosphere component of this model, and the strategies we have been using for tuning and debugging a model that we can barely afford to run on today's computational platforms. These strategies include: 1) evaluation at lower resolutions; 2) ensembles of short simulations to explore parameter space and perform rough tuning and evaluation; 3) use of regionally refined versions of the model for probing high-resolution model behavior at less expense; 4) use of "auto-tuning" methodologies for model tuning; and 5) brute-force long climate simulations.
Molecular Dynamics based on a Generalized Born solvation model: application to protein folding
NASA Astrophysics Data System (ADS)
Onufriev, Alexey
2004-03-01
An accurate description of the aqueous environment is essential for realistic biomolecular simulations, but may become very expensive computationally. We have developed a version of the Generalized Born model suitable for describing large conformational changes in macromolecules. The model represents the solvent implicitly as a continuum with the dielectric properties of water, and includes the charge-screening effects of salt. The computational cost associated with the use of this model in Molecular Dynamics simulations is generally considerably smaller than the cost of representing water explicitly. Also, compared to traditional Molecular Dynamics simulations based on explicit water representation, conformational changes occur much faster in an implicit solvation environment due to the absence of viscosity. The combined speed-up allows one to probe conformational changes that occur on much longer effective time-scales. We apply the model to the folding of a 46-residue three-helix bundle protein (residues 10-55 of protein A, PDB ID 1BDD). Starting from an unfolded structure at 450 K, the protein folds to the lowest energy state in 6 ns of simulation time, which takes about a day on a 16-processor SGI machine. The predicted structure differs from the native one by 2.4 Å (backbone RMSD). Analysis of the structures seen on the folding pathway reveals details of the folding process unavailable from experiment.
NASA Astrophysics Data System (ADS)
Ramotar, Lokendra; Rohrauer, Greg L.; Filion, Ryan; MacDonald, Kathryn
2017-03-01
The development of a dynamic thermal battery model for hybrid and electric vehicles is realized. A thermal equivalent circuit model is created which aims to capture and understand the heat propagation from the cells through the entire pack and to the environment using a production vehicle battery pack for model validation. The inclusion of production hardware and the liquid battery thermal management system components into the model considers physical and geometric properties to calculate thermal resistances of components (conduction, convection and radiation) along with their associated heat capacity. Various heat sources/sinks comprise the remaining model elements. Analog equivalent circuit simulations using PSpice are compared to experimental results to validate internal temperature nodes and heat rates measured through various elements, which are then employed to refine the model further. Agreement with experimental results indicates the proposed method allows for a comprehensive real-time battery pack analysis at little computational expense when compared to other types of computer based simulations. Elevated road and ambient conditions in Mesa, Arizona are simulated on a parked vehicle with varying quiescent cooling rates to examine the effect on the diurnal battery temperature for longer term static exposure. A typical daily driving schedule is also simulated and examined.
Evaluation of a low-cost 3D sound system for immersive virtual reality training systems.
Doerr, Kai-Uwe; Rademacher, Holger; Huesgen, Silke; Kubbat, Wolfgang
2007-01-01
Since Head Mounted Displays (HMD), datagloves, tracking systems, and powerful computer graphics resources are nowadays in an affordable price range, the usage of PC-based "Virtual Training Systems" becomes very attractive. However, due to the limited field of view of HMD devices, additional modalities have to be provided to benefit from 3D environments. A 3D sound simulation can improve the capabilities of VR systems dramatically. Unfortunately, realistic 3D sound simulations are expensive and demand a tremendous amount of computational power to calculate reverberation, occlusion, and obstruction effects. To use 3D sound in a PC-based training system as a way to direct and guide trainees to observe specific events in 3D space, a cheaper alternative has to be provided, so that a broader range of applications can take advantage of this modality. To address this issue, we focus in this paper on the evaluation of a low-cost 3D sound simulation that is capable of providing traceable 3D sound events. We describe our experimental system setup using conventional stereo headsets in combination with a tracked HMD device and present our results with regard to precision, speed, and used signal types for localizing simulated sound events in a virtual training environment.
GPU-powered model analysis with PySB/cupSODA.
Harris, Leonard A; Nobile, Marco S; Pino, James C; Lubbock, Alexander L R; Besozzi, Daniela; Mauri, Giancarlo; Cazzaniga, Paolo; Lopez, Carlos F
2017-11-01
A major barrier to the practical utilization of large, complex models of biochemical systems is the lack of open-source computational tools to evaluate model behaviors over high-dimensional parameter spaces. This is due to the high computational expense of performing thousands to millions of model simulations required for statistical analysis. To address this need, we have implemented a user-friendly interface between cupSODA, a GPU-powered kinetic simulator, and PySB, a Python-based modeling and simulation framework. For three example models of varying size, we show that for large numbers of simulations PySB/cupSODA achieves order-of-magnitude speedups relative to a CPU-based ordinary differential equation integrator. The PySB/cupSODA interface has been integrated into the PySB modeling framework (version 1.4.0), which can be installed from the Python Package Index (PyPI) using a Python package manager such as pip. cupSODA source code and precompiled binaries (Linux, Mac OS/X, Windows) are available at github.com/aresio/cupSODA (requires an Nvidia GPU; developer.nvidia.com/cuda-gpus). Additional information about PySB is available at pysb.org. paolo.cazzaniga@unibg.it or c.lopez@vanderbilt.edu. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.
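The snippet below sketches how such a PySB workflow is typically driven from Python; it uses the bundled Robertson example model and the CPU-based ScipyOdeSimulator, with the GPU-backed CupSodaSimulator shown behind a guard since its availability depends on having cupSODA and a CUDA GPU installed. Class names and arguments are written from memory of the PySB API and should be checked against the PySB documentation.

# Hedged usage sketch for PySB (check against the PySB docs; written from memory).
import numpy as np
from pysb.examples.robertson import model        # small bundled example model
from pysb.simulator import ScipyOdeSimulator

tspan = np.linspace(0, 100, 101)

# CPU reference: ordinary-differential-equation integration via SciPy.
cpu_sim = ScipyOdeSimulator(model, tspan=tspan)
cpu_result = cpu_sim.run()
print(cpu_result.observables[-1])                # observables at the final time

# GPU path (requires cupSODA and an NVIDIA GPU; API assumed, guard accordingly).
try:
    from pysb.simulator import CupSodaSimulator
    # Many simulations at once: perturb each parameter set, one row per run.
    n_runs = 1000
    params = np.repeat([[p.value for p in model.parameters]], n_runs, axis=0)
    params *= np.random.lognormal(0.0, 0.1, size=params.shape)
    gpu_result = CupSodaSimulator(model, tspan=tspan).run(param_values=params)
except ImportError:
    print("CupSodaSimulator not available; falling back to the CPU simulator.")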
Multiscale Macromolecular Simulation: Role of Evolving Ensembles
Singharoy, A.; Joshi, H.; Ortoleva, P.J.
2013-01-01
Multiscale analysis provides an algorithm for the efficient simulation of macromolecular assemblies. This algorithm involves the coevolution of a quasiequilibrium probability density of atomic configurations and the Langevin dynamics of spatial coarse-grained variables denoted order parameters (OPs), characterizing nanoscale system features. In practice, implementation of the probability density involves the generation of constant-OP ensembles of atomic configurations. Such ensembles are used to construct thermal forces and diffusion factors that mediate the stochastic OP dynamics. Generation of all-atom ensembles at every Langevin timestep is computationally expensive. Here, multiscale computation for macromolecular systems is made more efficient by a method that self-consistently folds in ensembles of all-atom configurations constructed in an earlier step, the history, of the Langevin evolution. This procedure accounts for the temporal evolution of these ensembles, accurately providing thermal forces and diffusion factors. It is shown that the efficiency and accuracy of the OP-based simulations are increased via the integration of this historical information. Accuracy improves with the square root of the number of historical timesteps included in the calculation. As a result, CPU usage can be decreased by a factor of 3-8 without loss of accuracy. The algorithm is implemented into our existing force-field-based multiscale simulation platform and demonstrated via the structural dynamics of viral capsomers. PMID:22978601
Simulation of Laboratory Tests of Steel Arch Support
NASA Astrophysics Data System (ADS)
Horyl, Petr; Šňupárek, Richard; Maršálek, Pavel; Pacześniowski, Krzysztof
2017-03-01
The total load-bearing capacity of yielding steel arch roadway supports is among their most important characteristics. These values can be obtained in two ways: experimental measurements in a specialized laboratory or computer modelling by FEM. Experimental measurements are significantly more expensive and more time-consuming. However, a properly tuned computer model is very valuable, and experiments can provide the necessary verification. In the cooperating workplaces of GIG Katowice, VSB-Technical University of Ostrava and the Institute of Geonics ASCR this verification was successful. The present article discusses the conditions and results of this verification for static problems. The output is a tuned computer model, which may be used for further calculations to obtain the load-bearing capacity of other types of steel arch supports. The effects of changes in other parameters, such as the material properties of the steel, torque values, friction coefficient values, etc., can be determined relatively quickly by changing the properties of the modelled steel arch supports.
NASA Technical Reports Server (NTRS)
Kanevsky, Alex
2004-01-01
My goal is to develop and implement efficient, accurate, and robust Implicit-Explicit Runge-Kutta (IMEX RK) methods [9] for overcoming geometry-induced stiffness with applications to computational electromagnetics (CEM), computational fluid dynamics (CFD) and computational aeroacoustics (CAA). IMEX algorithms solve the non-stiff portions of the domain using explicit methods, and isolate and solve the more expensive stiff portions using implicit methods. Current algorithms in CEM can only simulate purely harmonic (up to 10 GHz plane wave) EM scattering by fighter aircraft, which are assumed to be pure metallic shells, and cannot handle the inclusion of coatings, penetration into, and radiation out of the aircraft. Efficient IMEX RK methods could potentially increase current CEM capabilities by 1-2 orders of magnitude, allowing scientists and engineers to attack more challenging and realistic problems.
NASA Astrophysics Data System (ADS)
Niedermeier, Dennis; Ervens, Barbara; Clauss, Tina; Voigtländer, Jens; Wex, Heike; Hartmann, Susan; Stratmann, Frank
2014-01-01
In a recent study, the Soccer ball model (SBM) was introduced for modeling and/or parameterizing heterogeneous ice nucleation processes. The model applies classical nucleation theory. It allows for a consistent description of both apparently singular and stochastic ice nucleation behavior, by distributing contact angles over the nucleation sites of a particle population assuming a Gaussian probability density function. The original SBM utilizes the Monte Carlo technique, which hampers its usage in atmospheric models, as fairly time-consuming calculations must be performed to obtain statistically significant results. Thus, we have developed a simplified and computationally more efficient version of the SBM. We successfully used the new SBM to parameterize experimental nucleation data of, e.g., bacterial ice nucleation. Both SBMs give identical results; however, the new model is computationally less expensive as confirmed by cloud parcel simulations. Therefore, it is a suitable tool for describing heterogeneous ice nucleation processes in atmospheric models.
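A minimal Monte Carlo rendering of the soccer-ball idea (Gaussian contact angles over a particle's nucleation sites, each site nucleating with a rate that grows as the contact angle shrinks); the rate expression site_rate is a stand-in placeholder, not the classical-nucleation-theory rate used by the authors, and all numbers are invented for illustration.

# Toy Monte Carlo Soccer-ball-model sketch (illustrative; placeholder rate law).
import numpy as np

rng = np.random.default_rng(7)
n_particles, n_sites = 5000, 10
mu_theta, sigma_theta = np.deg2rad(70.0), np.deg2rad(10.0)  # contact-angle distribution
dt = 1.0                                                     # time step (s)

def site_rate(theta):
    # Placeholder heterogeneous nucleation rate that increases for smaller
    # contact angles; a real SBM would use classical nucleation theory here.
    return 1e-3 * np.exp(4.0 * (np.cos(theta) + 1.0))

# Draw contact angles for every site of every particle (the "soccer ball" patches).
theta = rng.normal(mu_theta, sigma_theta, size=(n_particles, n_sites))
theta = np.clip(theta, 0.0, np.pi)

# Probability that a particle stays unfrozen for one time step:
# product over its sites of exp(-J_i * dt).
p_unfrozen = np.exp(-site_rate(theta).sum(axis=1) * dt)
frozen_fraction = np.mean(rng.uniform(size=n_particles) > p_unfrozen)
print("frozen fraction after one step:", frozen_fraction)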
NASA Astrophysics Data System (ADS)
Wichert, Viktoria; Arkenberg, Mario; Hauschildt, Peter H.
2016-10-01
Highly resolved state-of-the-art 3D atmosphere simulations will remain computationally extremely expensive for years to come. In addition to the need for more computing power, rethinking coding practices is necessary. We take a dual approach by introducing especially adapted, parallel numerical methods and correspondingly parallelizing critical code passages. In the following, we present our respective work on PHOENIX/3D. With new parallel numerical algorithms, there is a big opportunity for improvement when iteratively solving the system of equations emerging from the operator splitting of the radiative transfer equation J = ΛS. The narrow-banded approximate Λ-operator Λ* , which is used in PHOENIX/3D, occurs in each iteration step. By implementing a numerical algorithm which takes advantage of its characteristic traits, the parallel code's efficiency is further increased and a speed-up in computational time can be achieved.
NASA Astrophysics Data System (ADS)
Konduri, Aditya
Many natural and engineering systems are governed by nonlinear partial differential equations (PDEs) which result in multiscale phenomena, e.g. turbulent flows. Numerical simulations of these problems are computationally very expensive and demand extreme levels of parallelism. At realistic conditions, simulations are being carried out on massively parallel computers with hundreds of thousands of processing elements (PEs). It has been observed that communication between PEs as well as their synchronization at these extreme scales take up a significant portion of the total simulation time and result in poor scalability of codes. This issue is likely to pose a bottleneck in the scalability of codes on future Exascale systems. In this work, we propose an asynchronous computing algorithm based on widely used finite difference methods to solve PDEs in which synchronization between PEs due to communication is relaxed at a mathematical level. We show that while stability is conserved when schemes are used asynchronously, accuracy is greatly degraded. Since message arrivals at PEs are random processes, so is the behavior of the error. We propose a new statistical framework in which we show that average errors always drop to first order regardless of the original scheme. We propose new asynchrony-tolerant schemes that maintain accuracy when synchronization is relaxed. The quality of the solution is shown to depend not only on the physical phenomena and numerical schemes, but also on the characteristics of the computing machine. A novel algorithm using remote memory access communications has been developed to demonstrate excellent scalability of the method for large-scale computing. Finally, we present a path to extend this method to solving complex multi-scale problems on Exascale machines.
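A tiny Python experiment illustrating the kind of relaxed-synchronization finite-difference update discussed above: the 1D heat equation is split across two simulated "processing elements", and each PE reads its neighbor's boundary value from a randomly delayed (stale) copy instead of the current one. The delay model and problem setup are invented for illustration; the asynchrony-tolerant schemes themselves are not reproduced here.

# 1D heat equation with artificially stale ghost values (illustrative experiment).
import numpy as np

rng = np.random.default_rng(2)
nx, nt, alpha = 64, 400, 1.0
dx = 1.0 / nx
dt = 0.5 * dx ** 2                         # stable step for the synchronous scheme
u = np.sin(2 * np.pi * np.linspace(0, 1, nx, endpoint=False))   # periodic IC
half = nx // 2
max_delay = 3
history = [u.copy()]                       # past states available to each "PE"

for n in range(nt):
    new = u.copy()
    for lo, hi in ((0, half), (half, nx)):         # two simulated PEs
        # Ghost values come from a randomly delayed past step (stale data).
        stale = history[-1 - rng.integers(0, min(max_delay, len(history)))]
        left_ghost = stale[(lo - 1) % nx]
        right_ghost = stale[hi % nx]
        block = np.concatenate(([left_ghost], u[lo:hi], [right_ghost]))
        new[lo:hi] = block[1:-1] + alpha * dt / dx**2 * (
            block[2:] - 2 * block[1:-1] + block[:-2])
    u = new
    history.append(u.copy())
    history = history[-max_delay:]
print("max |u| after", nt, "steps:", np.abs(u).max())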
Optimized blind gamma-ray pulsar searches at fixed computing budget
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pletsch, Holger J.; Clark, Colin J., E-mail: holger.pletsch@aei.mpg.de
The sensitivity of blind gamma-ray pulsar searches in multiple years worth of photon data, as from the Fermi LAT, is primarily limited by the finite computational resources available. Addressing this 'needle in a haystack' problem, here we present methods for optimizing blind searches to achieve the highest sensitivity at fixed computing cost. For both coherent and semicoherent methods, we consider their statistical properties and study their search sensitivity under computational constraints. The results validate a multistage strategy, where the first stage scans the entire parameter space using an efficient semicoherent method and promising candidates are then refined through a fully coherent analysis. We also find that for the first stage of a blind search incoherent harmonic summing of powers is not worthwhile at fixed computing cost for typical gamma-ray pulsars. Further enhancing sensitivity, we present efficiency-improved interpolation techniques for the semicoherent search stage. Via realistic simulations we demonstrate that overall these optimizations can significantly lower the minimum detectable pulsed fraction by almost 50% at the same computational expense.
NASA Astrophysics Data System (ADS)
Saxena, Nishank; Hows, Amie; Hofmann, Ronny; Alpak, Faruk O.; Freeman, Justin; Hunter, Sander; Appel, Matthias
2018-06-01
This study defines the optimal operating envelope of the Digital Rock technology from the perspective of imaging and numerical simulations of transport properties. Imaging larger volumes of rocks for Digital Rock Physics (DRP) analysis improves the chances of achieving a Representative Elementary Volume (REV) at which flow-based simulations (1) do not vary with change in rock volume, and (2) is insensitive to the choice of boundary conditions. However, this often comes at the expense of image resolution. This trade-off exists due to the finiteness of current state-of-the-art imaging detectors. Imaging and analyzing digital rocks that sample the REV and still sufficiently resolve pore throats is critical to ensure simulation quality and robustness of rock property trends for further analysis. We find that at least 10 voxels are needed to sufficiently resolve pore throats for single phase fluid flow simulations. If this condition is not met, additional analyses and corrections may allow for meaningful comparisons between simulation results and laboratory measurements of permeability, but some cases may fall outside the current technical feasibility of DRP. On the other hand, we find that the ratio of field of view and effective grain size provides a reliable measure of the REV for siliciclastic rocks. If this ratio is greater than 5, the coefficient of variation for single-phase permeability simulations drops below 15%. These imaging considerations are crucial when comparing digitally computed rock flow properties with those measured in the laboratory. We find that the current imaging methods are sufficient to achieve both REV (with respect to numerical boundary conditions) and required image resolution to perform digital core analysis for coarse to fine-grained sandstones.
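The two quantitative screening rules stated above (at least 10 voxels across a typical pore throat, and a field of view more than 5 effective grain diameters wide) translate directly into a small pre-simulation check; the function below is only a restatement of those thresholds, with argument names chosen for illustration.

# Pre-simulation screening of a digital rock image against the stated thresholds.
def screen_image(voxel_size_um, pore_throat_um, field_of_view_um, grain_size_um):
    voxels_per_throat = pore_throat_um / voxel_size_um
    fov_to_grain = field_of_view_um / grain_size_um
    return {
        "voxels_per_throat": voxels_per_throat,
        "resolution_ok": voxels_per_throat >= 10,      # >= 10 voxels per throat
        "fov_to_grain": fov_to_grain,
        "rev_ok": fov_to_grain > 5,                    # FOV > 5 x grain size
    }

# Example: 2 um voxels, 25 um throats, 2000 um field of view, 250 um grains.
print(screen_image(2.0, 25.0, 2000.0, 250.0))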
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pham, Tuan Anh; Ogitsu, Tadashi; Lau, Edmond Y.
Establishing an accurate and predictive computational framework for the description of complex aqueous solutions is an ongoing challenge for density functional theory based first-principles molecular dynamics (FPMD) simulations. In this context, important advances have been made in recent years, including the development of sophisticated exchange-correlation functionals. On the other hand, simulations based on simple generalized gradient approximation (GGA) functionals remain an active field, particularly in the study of complex aqueous solutions, due to a good balance between accuracy, computational expense, and applicability to a wide range of systems. Such simulations are often performed at elevated temperatures to artificially “correct” for GGA inaccuracies in the description of liquid water; however, a detailed understanding of how the choice of temperature affects the structure and dynamics of other components, such as solvated ions, is largely unknown. In order to address this question, we carried out a series of FPMD simulations at temperatures ranging from 300 to 460 K for liquid water and three representative aqueous solutions containing solvated Na+, K+, and Cl- ions. We show that simulations at 390–400 K with the Perdew-Burke-Ernzerhof (PBE) exchange-correlation functional yield water structure and dynamics in good agreement with experiments at ambient conditions. Simultaneously, this computational setup provides ion solvation structures and ion effects on water dynamics consistent with experiments. These results suggest that an elevated temperature around 390–400 K with the PBE functional can be used for the description of structural and dynamical properties of liquid water and complex solutions with solvated ions at ambient conditions.
Combining configurational energies and forces for molecular force field optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vlcek, Lukas; Sun, Weiwei; Kent, Paul R. C.
While quantum chemical simulations have been increasingly used as an invaluable source of information for atomistic model development, the high computational expenses typically associated with these techniques often limit thorough sampling of the systems of interest. It is therefore of great practical importance to use all available information as efficiently as possible, and in a way that allows for consistent addition of constraints that may be provided by macroscopic experiments. We propose a simple approach that combines information from configurational energies and forces generated in a molecular dynamics simulation to increase the effective number of samples. Subsequently, this information is used to optimize a molecular force field by minimizing the statistical distance similarity metric. We also illustrate the methodology on an example of a trajectory of configurations generated in equilibrium molecular dynamics simulations of argon and water and compare the results with those based on the force matching method.
A coupled ALE-AMR method for shock hydrodynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Waltz, J.; Bakosi, J.
2018-03-05
We present a numerical method combining adaptive mesh refinement (AMR) with arbitrary Lagrangian-Eulerian (ALE) mesh motion for the simulation of shock hydrodynamics on unstructured grids. The primary goal of the coupled method is to use AMR to reduce numerical error in ALE simulations at reduced computational expense relative to uniform fine mesh calculations, in the same manner that AMR has been used in Eulerian simulations. We also identify deficiencies with ALE methods that AMR is able to mitigate, and discuss the unique coupling challenges. The coupled method is demonstrated using three-dimensional unstructured meshes of up to O(10^7) tetrahedral cells. Convergence of ALE-AMR solutions towards both uniform fine mesh ALE results and analytic solutions is demonstrated. Speed-ups of 5-10× for a given level of error are observed relative to uniform fine mesh calculations.
Modeling molecular mixing in a spatially inhomogeneous turbulent flow
NASA Astrophysics Data System (ADS)
Meyer, Daniel W.; Deb, Rajdeep
2012-02-01
Simulations of spatially inhomogeneous turbulent mixing in decaying grid turbulence with a joint velocity-concentration probability density function (PDF) method were conducted. The inert mixing scenario involves three streams with different compositions. The mixing model of Meyer ["A new particle interaction mixing model for turbulent dispersion and turbulent reactive flows," Phys. Fluids 22(3), 035103 (2010)], the interaction by exchange with the mean (IEM) model and its velocity-conditional variant, i.e., the IECM model, were applied. For reference, the direct numerical simulation data provided by Sawford and de Bruyn Kops ["Direct numerical simulation and lagrangian modeling of joint scalar statistics in ternary mixing," Phys. Fluids 20(9), 095106 (2008)] was used. It was found that velocity conditioning is essential to obtain accurate concentration PDF predictions. Moreover, the model of Meyer provides significantly better results compared to the IECM model at comparable computational expense.
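A minimal sketch of the interaction-by-exchange-with-the-mean (IEM) idea referenced above is given below, assuming a three-stream scalar and illustrative values for the model constant and turbulence frequency; the velocity-conditioned IECM variant and the particle-interaction model of Meyer are not reproduced.

    # Hedged sketch of the IEM mixing model: each notional particle's scalar
    # relaxes toward the ensemble mean at a rate set by the turbulence frequency
    # omega and a constant C_phi (commonly taken near 2.0). The IECM variant
    # would instead relax toward a mean conditioned on the particle velocity;
    # that conditioning is omitted here.
    import numpy as np

    rng = np.random.default_rng(1)
    n_particles = 10_000
    phi = rng.choice([0.0, 0.5, 1.0], size=n_particles)  # three-stream initial scalar

    c_phi = 2.0      # model constant (assumed value)
    omega = 5.0      # turbulence frequency, 1/s (assumed value)
    dt = 1e-3        # time step, s
    n_steps = 500

    for _ in range(n_steps):
        mean_phi = phi.mean()
        # Exact exponential update of d(phi)/dt = -0.5 * c_phi * omega * (phi - <phi>)
        decay = np.exp(-0.5 * c_phi * omega * dt)
        phi = mean_phi + (phi - mean_phi) * decay

    print("scalar variance after mixing:", phi.var())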
Management of queues in out-patient departments: the use of computer simulation.
Aharonson-Daniel, L; Paul, R J; Hedley, A J
1996-01-01
Notes that patients attending public outpatient departments in Hong Kong spend a long time waiting for a short consultation, that clinics are congested and that both staff and patients are dissatisfied. Points out that experimentation with management changes in a busy clinical environment can be both expensive and difficult. Demonstrates computerized simulation modelling as a potential tool for clarifying processes occurring within such systems, improving clinic operation by suggesting possible answers to identified problems and evaluating the solutions, without interfering with the clinic routine. Adds that solutions can be implemented after they have proved successful on the model. Demonstrates some ways in which managers in health care facilities can benefit from the use of computerized simulation modelling. Specifically, shows the effect of changing the duration of consultation and the effect of applying an appointment system on patients' waiting time.
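The sketch below gives a minimal discrete-event picture of such a clinic queue, assuming illustrative appointment, staffing, and consultation parameters rather than values from the study.

    # Hedged sketch of a clinic queue: patients arrive by appointment slot with
    # some lateness, queue for a fixed number of doctors served first-come
    # first-served, and waiting times are recorded. All parameters are
    # illustrative assumptions, not values from the study.
    import heapq
    import random

    random.seed(42)

    def simulate_clinic(n_patients=200, n_doctors=4, slot_min=5.0, consult_mean=8.0):
        arrivals = sorted(i * slot_min + random.uniform(-2, 10) for i in range(n_patients))
        doctor_free = [0.0] * n_doctors      # min-heap of times each doctor becomes free
        heapq.heapify(doctor_free)
        waits = []
        for t_arrive in arrivals:
            t_free = heapq.heappop(doctor_free)
            start = max(t_arrive, t_free)
            waits.append(start - t_arrive)
            service = random.expovariate(1.0 / consult_mean)  # mean consultation time
            heapq.heappush(doctor_free, start + service)
        return waits

    waits = simulate_clinic()
    print("mean wait (min):", sum(waits) / len(waits))
    print("max wait (min):", max(waits))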
Study of ceramic products and processing techniques in space. [using computerized simulation
NASA Technical Reports Server (NTRS)
Markworth, A. J.; Oldfield, W.
1974-01-01
An analysis of the solidification kinetics of beta alumina in a zero-gravity environment was carried out, using computer-simulation techniques, in order to assess the feasibility of producing high-quality single crystals of this material in space. The two coupled transport processes included were movement of the solid-liquid interface and diffusion of sodium atoms in the melt. Results of the simulation indicate that appreciable crystal-growth rates can be attained in space. Considerations were also made of the advantages offered by high-quality single crystals of beta alumina for use as a solid electrolyte; these clearly indicate that space-grown materials are superior in many respects to analogous terrestrially-grown crystals. Likewise, economic considerations, based on the rapidly expanding technological applications for beta alumina and related fast ionic conductors, reveal that the many superior qualities of space-grown material justify the added expense and experimental detail associated with space processing.
Simulating Halos with the Caterpillar Project
NASA Astrophysics Data System (ADS)
Kohler, Susanna
2016-04-01
The Caterpillar Project is a beautiful series of high-resolution cosmological simulations. The goal of this project is to examine the evolution of dark-matter halos like the Milky Way's, to learn about how galaxies like ours formed. This immense computational project is still in progress, but the Caterpillar team is already providing a look at some of its first results. Lessons from Dark-Matter Halos: Why simulate the dark-matter halos of galaxies? Observationally, the formation history of our galaxy is encoded in galactic fossil-record clues, like the tidal debris from disrupted satellite galaxies in the outer reaches of our galaxy, or chemical abundance patterns throughout our galactic disk and stellar halo. But to interpret this information in a way that lets us learn about our galaxy's history, we need to first test galaxy formation and evolution scenarios via cosmological simulations. Then we can compare the end result of these simulations to what we observe today. [Figure: the difference that mass resolution makes; in the left panel the mass resolution is 1.5*10^7 solar masses per particle, in the right panel it is 3*10^4 solar masses per particle (Griffen et al. 2016).] A Computational Challenge: Because such simulations are computationally expensive, previous N-body simulations of the growth of Milky-Way-like halos have consisted of only one or a few halos each. But in order to establish a statistical understanding of how galaxy halos form (and find out whether the Milky Way's halo is typical or unusual!), it is necessary to simulate a larger number of halos. In addition, in order to accurately follow the formation and evolution of substructure within the dark-matter halos, these simulations must be able to resolve the smallest dwarf galaxies, which are around a million solar masses. This requires an extremely high mass resolution, which adds to the computational expense of the simulation. First Outcomes: These are the challenges faced by the Caterpillar Project, detailed in a recently published paper led by Brendan Griffen (Massachusetts Institute of Technology). The Caterpillar Project was designed to simulate 70 Milky-Way-size halos (quadrupling the total number of halos that have been simulated in the past!) at a high mass resolution (10,000 solar masses per particle) and time resolution (5 Myr per snapshot). The project is extremely computationally intense, requiring 14 million CPU hours and 700 TB of data storage! [Figure: mass evolution of the first 24 Caterpillar halos, selected to be Milky-Way-size at z=0; the inset panel shows the mass evolution normalized by the halo mass at z=0, demonstrating the highly varied evolution these different halos undergo (Griffen et al. 2016).] In this first study, Griffen and collaborators show the end states for the first 24 halos of the project, evolved from a large redshift to today (z=0). They use these initial results to demonstrate the integrity of their data and the utility of their methods, which include new halo-finding techniques that recover more substructure within each halo. The first results from the Caterpillar Project are already enough to show clear general trends, such as the highly variable paths the different halos take as they merge, accrete, and evolve, as well as how different their end states can be.
Statistically examining the evolution of these halos is an important next step in providing insight into the origin and evolution of the Milky Way, and helping us to understand how our galaxy differs from other galaxies of similar mass. Keep an eye out for future results from this project! Bonus: Check out this video (make sure to watch in HD!) of how the first 24 Milky-Way-like halos from the Caterpillar simulations form. Seeing these halos evolve simultaneously is an awesome way to identify the similarities and differences between them. Citation: Brendan F. Griffen et al 2016 ApJ 818 10. doi:10.3847/0004-637X/818/1/10
Huang, Weidong; Li, Kun; Wang, Gan; Wang, Yingzhe
2013-11-01
In this article, we present a newly designed inverse umbrella surface aerator and test its performance in driving the flow of an oxidation ditch. Results show that it drives the oxidation ditch better than the original design, with a higher average velocity and a more uniform flow field. We also present a computational fluid dynamics model for predicting the flow field in an oxidation ditch driven by a surface aerator. The improved momentum source term approach to simulate the flow field of the oxidation ditch driven by an inverse umbrella surface aerator was developed and validated through experiments. Four turbulence models were investigated with the approach, including the standard k-ɛ model, the RNG k-ɛ model, the realizable k-ɛ model, and the Reynolds stress model, and the predicted data were compared with those calculated with the multiple rotating reference frame (MRF) approach and the sliding mesh (SM) approach. Results of the momentum source term approach are in good agreement with the experimental data, and its prediction accuracy is better than MRF and close to SM. It is also found that the momentum source term approach has lower computational expense, is simpler to preprocess, and is easier to use.
A Test of the Validity of Inviscid Wall-Modeled LES
NASA Astrophysics Data System (ADS)
Redman, Andrew; Craft, Kyle; Aikens, Kurt
2015-11-01
Computational expense is one of the main deterrents to more widespread use of large eddy simulations (LES). As such, it is important to reduce computational costs whenever possible. In this vein, it may be reasonable to assume that high Reynolds number flows with turbulent boundary layers are inviscid when using a wall model. This assumption relies on the grid being too coarse to resolve either the viscous length scales in the outer flow or those near walls. We are not aware of other studies that have suggested or examined the validity of this approach. The inviscid wall-modeled LES assumption is tested here for supersonic flow over a flat plate on three different grids. Inviscid and viscous results are compared to those of another wall-modeled LES as well as experimental data - the results appear promising. Furthermore, the inviscid assumption reduces simulation costs by about 25% and 39% for supersonic and subsonic flows, respectively, with the current LES application. Recommendations are presented as are future areas of research. This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number ACI-1053575. Computational resources on TACC Stampede were provided under XSEDE allocation ENG150001.
Kadam, Shantanu; Vanka, Kumar
2013-02-15
Methods based on the stochastic formulation of chemical kinetics have the potential to accurately reproduce the dynamical behavior of various biochemical systems of interest. However, the computational expense makes them impractical for the study of real systems. Attempts to render these methods practical have led to the development of accelerated methods, where the reaction numbers are modeled by Poisson random numbers. However, for certain systems, such methods give rise to physically unrealistic negative numbers for species populations. The methods which make use of binomial variables, in place of Poisson random numbers, have since become popular, and have been partially successful in addressing this problem. In this manuscript, the development of two new computational methods, based on the representative reaction approach (RRA), has been discussed. The new methods endeavor to solve the problem of negative numbers, by making use of tools like the stochastic simulation algorithm and the binomial method, in conjunction with the RRA. It is found that these newly developed methods perform better than other binomial methods used for stochastic simulations, in resolving the problem of negative populations. Copyright © 2012 Wiley Periodicals, Inc.
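To illustrate why binomial leaping avoids the negative populations mentioned above, the following minimal sketch compares Poisson and binomial tau-leap updates for a single decay reaction; it is not the representative reaction approach developed in the manuscript, and the rate, leap size, and population are arbitrary.

    # Hedged sketch contrasting Poisson and binomial tau-leap updates for a
    # single decay reaction A -> B with propensity k*A. The Poisson draw can
    # exceed the number of available A molecules (negative populations); the
    # binomial draw, capped by the current population, cannot.
    import numpy as np

    rng = np.random.default_rng(7)
    k, tau = 2.0, 0.4          # rate constant and (deliberately large) leap size
    n_a = 5                    # small population to make the problem visible

    poisson_negatives = 0
    for _ in range(10_000):
        fired = rng.poisson(k * n_a * tau)
        if n_a - fired < 0:
            poisson_negatives += 1

    # Binomial leap: each A molecule independently reacts in [0, tau) with
    # probability 1 - exp(-k*tau), so at most n_a reactions can fire.
    p_react = 1.0 - np.exp(-k * tau)
    binomial_negatives = 0
    for _ in range(10_000):
        fired = rng.binomial(n_a, p_react)
        if n_a - fired < 0:
            binomial_negatives += 1

    print("Poisson leaps going negative :", poisson_negatives)
    print("Binomial leaps going negative:", binomial_negatives)  # always 0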
DOE Office of Scientific and Technical Information (OSTI.GOV)
Konomi, Bledar A.; Karagiannis, Georgios; Sarkar, Avik
2014-05-16
Computer experiments (numerical simulations) are widely used in scientific research to study and predict the behavior of complex systems, which usually have responses consisting of a set of distinct outputs. Simulations at high resolution are often computationally expensive, which makes them impractical for parametric studies at different input values. To overcome these difficulties we develop a Bayesian treed multivariate Gaussian process (BTMGP) as an extension of the Bayesian treed Gaussian process (BTGP) in order to model and evaluate a multivariate process. A suitable choice of covariance function and the prior distributions facilitates the different Markov chain Monte Carlo (MCMC) movements. We utilize this model to sequentially sample the input space for the most informative values, taking into account model uncertainty and expertise gained. A simulation study demonstrates the use of the proposed method and compares it with alternative approaches. We apply the sequential sampling technique and BTMGP to model the multiphase flow in a full-scale regenerator of a carbon capture unit. The application presented in this paper is an important tool for research into carbon dioxide emissions from thermal power plants.
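A minimal sketch of variance-driven sequential sampling with a surrogate is given below; it substitutes a plain Gaussian process from scikit-learn for the Bayesian treed multivariate GP of the paper, and a cheap analytic function for the expensive simulator.

    # Hedged sketch of sequential design with a surrogate: fit a GP to the data
    # collected so far, then run the expensive code at the candidate input with
    # the largest predictive standard deviation. A plain GP stands in for the
    # paper's Bayesian treed multivariate GP; the simulator is a placeholder.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    def expensive_simulator(x):
        return np.sin(3.0 * x) + 0.5 * x          # placeholder for the real code

    rng = np.random.default_rng(3)
    x_train = rng.uniform(0.0, 2.0, size=(5, 1))  # small initial design
    y_train = expensive_simulator(x_train).ravel()
    candidates = np.linspace(0.0, 2.0, 200).reshape(-1, 1)

    for _ in range(10):                            # sequential design loop
        gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3), alpha=1e-6)
        gp.fit(x_train, y_train)
        _, std = gp.predict(candidates, return_std=True)
        x_new = candidates[np.argmax(std)]         # most uncertain candidate
        x_train = np.vstack([x_train, x_new])
        y_train = np.append(y_train, expensive_simulator(x_new[0]))

    print("sampled inputs:", np.round(x_train.ravel(), 3))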
Impact of pharmacy automation on patient waiting time: an application of computer simulation.
Tan, Woan Shin; Chua, Siang Li; Yong, Keng Woh; Wu, Tuck Seng
2009-06-01
This paper aims to illustrate the use of computer simulation in evaluating the impact of a prototype automated dispensing system on waiting time in an outpatient pharmacy and its potential as a routine tool in pharmacy management. A discrete event simulation model was developed to investigate the impact of a prototype automated dispensing system on operational efficiency and service standards in an outpatient pharmacy. The simulation results suggest that automating the prescription-filling function using a prototype that picks and packs at 20 seconds per item will not assist the pharmacy in achieving the waiting time target of 30 minutes for all patients. Regardless of the state of automation, to meet the waiting time target, 2 additional pharmacists are needed to overcome the process bottleneck at the point of medication dispensing. However, if automated dispensing is the preferred option, the speed of the system needs to be twice as fast as the current configuration to facilitate the reduction of the 95th percentile patient waiting time to below 30 minutes. The faster processing speed will concomitantly allow the pharmacy to reduce the number of pharmacy technicians from 11 to 8. Simulation was found to be a useful and low-cost method that allows an otherwise expensive and resource-intensive evaluation of new work processes and technology to be completed within a short time.
Semi-physical simulation test for micro CMOS star sensor
NASA Astrophysics Data System (ADS)
Yang, Jian; Zhang, Guang-jun; Jiang, Jie; Fan, Qiao-yun
2008-03-01
A star sensor must be extensively tested before launch. Testing a star sensor is a complicated process that requires considerable time and resources. Even observing the sky from the ground is a challenging and time-consuming job, requiring complicated and expensive equipment and a suitable time and location, and it is prone to interference from weather. Moreover, not all stars distributed across the sky can be observed by this testing method. Semi-physical simulation in the laboratory reduces the testing cost and helps to debug, analyze, and evaluate the star sensor system while developing the model. The test system is composed of an optical platform, a star field simulator, a star field simulator computer, the star sensor, and a central data processing computer. The test system simulates starlight with high accuracy and good parallelism, and creates static or dynamic images in the FOV (Field of View). The conditions of the test are close to observing the real sky. With this system, the test of a micro star tracker designed by Beijing University of Aeronautics and Astronautics has been performed successfully. Indices including full-sky autonomous star identification time, attitude update frequency, and attitude precision meet the design requirements of the star sensor. Error sources of the testing system are also analyzed. It is concluded that the testing system is cost-saving and efficient, and contributes to optimizing the embedded algorithms, shortening the development cycle, and improving the engineering design process.
Load Balancing Scientific Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pearce, Olga Tkachyshyn
2014-12-01
The largest supercomputers have millions of independent processors, and concurrency levels are rapidly increasing. For ideal efficiency, developers of the simulations that run on these machines must ensure that computational work is evenly balanced among processors. Assigning work evenly is challenging because many large modern parallel codes simulate behavior of physical systems that evolve over time, and their workloads change over time. Furthermore, the cost of imbalanced load increases with scale because most large-scale scientific simulations today use a Single Program Multiple Data (SPMD) parallel programming model, and an increasing number of processors will wait for the slowest one at the synchronization points. To address load imbalance, many large-scale parallel applications use dynamic load balance algorithms to redistribute work evenly. The research objective of this dissertation is to develop methods to decide when and how to load balance the application, and to balance it effectively and affordably. We measure and evaluate the computational load of the application, and develop strategies to decide when and how to correct the imbalance. Depending on the simulation, a fast, local load balance algorithm may be suitable, or a more sophisticated and expensive algorithm may be required. We developed a model for comparison of load balance algorithms for a specific state of the simulation that enables the selection of a balancing algorithm that will minimize overall runtime.
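As one simple point of reference for the kind of load-balance decision described above, the sketch below applies a greedy longest-processing-time-first assignment of measured task costs to processors; it is only an illustration and not the model-based algorithm selection developed in the dissertation.

    # Hedged sketch of one inexpensive load-balance strategy: greedy
    # longest-processing-time-first assignment of measured task costs to the
    # currently least-loaded processor.
    import heapq
    import random

    random.seed(0)
    task_costs = [random.lognormvariate(0.0, 0.8) for _ in range(1000)]  # measured loads
    n_procs = 16

    # Min-heap of (accumulated_cost, processor_id); heaviest tasks placed first.
    procs = [(0.0, p) for p in range(n_procs)]
    heapq.heapify(procs)
    for cost in sorted(task_costs, reverse=True):
        load, pid = heapq.heappop(procs)
        heapq.heappush(procs, (load + cost, pid))

    loads = sorted(load for load, _ in procs)
    print("max/mean load imbalance:", loads[-1] / (sum(loads) / n_procs))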
Optimal design of wind barriers using 3D computational fluid dynamics simulations
NASA Astrophysics Data System (ADS)
Fang, H.; Wu, X.; Yang, X.
2017-12-01
Desertification is a significant global environmental and ecological problem that requires human-regulated control and management. Wind barriers are commonly used to reduce wind velocity or trap drifting sand in arid or semi-arid areas. Therefore, optimal design of wind barriers becomes critical in Aeolian engineering. In the current study, we perform 3D computational fluid dynamics (CFD) simulations for flow passing through wind barriers with different structural parameters. To validate the simulation results, we first inter-compare the simulated flow field results with those from both wind-tunnel experiments and field measurements. Quantitative analyses of the shelter effect are then conducted based on a series of simulations with different structural parameters (such as wind barrier porosity, row numbers, inter-row spacing and belt schemes). The results show that wind barriers with a porosity of 0.35 provide the longest shelter distance (i.e., the distance over which the wind velocity reduction exceeds 50%) and are thus recommended in engineering designs. To determine the optimal row number and belt scheme, we introduce a cost function that takes both the wind-velocity reduction effect and the economic expense into account. The calculated cost function shows that a 3-row-belt scheme with inter-row spacing of 6h (h being the height of the wind barriers) and inter-belt spacing of 12h is the most effective.
47 CFR 32.6121 - Land and building expense.
Code of Federal Regulations, 2013 CFR
2013-10-01
... operate the telecommunications network shall be charged to Account 6531, Power Expense, and the cost of separately metered electricity used for operating specific types of equipment, such as computers, shall be... SYSTEM OF ACCOUNTS FOR TELECOMMUNICATIONS COMPANIES Instructions for Expense Accounts § 32.6121 Land and...
47 CFR 32.6121 - Land and building expense.
Code of Federal Regulations, 2012 CFR
2012-10-01
... operate the telecommunications network shall be charged to Account 6531, Power Expense, and the cost of separately metered electricity used for operating specific types of equipment, such as computers, shall be... SYSTEM OF ACCOUNTS FOR TELECOMMUNICATIONS COMPANIES Instructions for Expense Accounts § 32.6121 Land and...
47 CFR 32.6121 - Land and building expense.
Code of Federal Regulations, 2011 CFR
2011-10-01
... operate the telecommunications network shall be charged to Account 6531, Power Expense, and the cost of separately metered electricity used for operating specific types of equipment, such as computers, shall be... SYSTEM OF ACCOUNTS FOR TELECOMMUNICATIONS COMPANIES Instructions for Expense Accounts § 32.6121 Land and...
47 CFR 32.6121 - Land and building expense.
Code of Federal Regulations, 2014 CFR
2014-10-01
... operate the telecommunications network shall be charged to Account 6531, Power Expense, and the cost of separately metered electricity used for operating specific types of equipment, such as computers, shall be... SYSTEM OF ACCOUNTS FOR TELECOMMUNICATIONS COMPANIES Instructions for Expense Accounts § 32.6121 Land and...
47 CFR 32.6121 - Land and building expense.
Code of Federal Regulations, 2010 CFR
2010-10-01
... operate the telecommunications network shall be charged to Account 6531, Power Expense, and the cost of separately metered electricity used for operating specific types of equipment, such as computers, shall be... SYSTEM OF ACCOUNTS FOR TELECOMMUNICATIONS COMPANIES Instructions for Expense Accounts § 32.6121 Land and...
An adaptive multi-level simulation algorithm for stochastic biological systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lester, C., E-mail: lesterc@maths.ox.ac.uk; Giles, M. B.; Baker, R. E.
2015-01-14
Discrete-state, continuous-time Markov models are widely used in the modeling of biochemical reaction networks. Their complexity often precludes analytic solution, and we rely on stochastic simulation algorithms (SSA) to estimate system statistics. The Gillespie algorithm is exact, but computationally costly as it simulates every single reaction. As such, approximate stochastic simulation algorithms such as the tau-leap algorithm are often used. Potentially computationally more efficient, the system statistics generated suffer from significant bias unless tau is relatively small, in which case the computational time can be comparable to that of the Gillespie algorithm. The multi-level method [Anderson and Higham, “Multi-level Monte Carlo for continuous time Markov chains, with applications in biochemical kinetics,” SIAM Multiscale Model. Simul. 10(1), 146–179 (2012)] tackles this problem. A base estimator is computed using many (cheap) sample paths at low accuracy. The bias inherent in this estimator is then reduced using a number of corrections. Each correction term is estimated using a collection of paired sample paths where one path of each pair is generated at a higher accuracy compared to the other (and so more expensive). By sharing random variables between these paired paths, the variance of each correction estimator can be reduced. This renders the multi-level method very efficient as only a relatively small number of paired paths are required to calculate each correction term. In the original multi-level method, each sample path is simulated using the tau-leap algorithm with a fixed value of τ. This approach can result in poor performance when the reaction activity of a system changes substantially over the timescale of interest. By introducing a novel adaptive time-stepping approach where τ is chosen according to the stochastic behaviour of each sample path, we extend the applicability of the multi-level method to such cases. We demonstrate the efficiency of our method using a number of examples.
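A minimal skeleton of the multi-level estimator structure described above is sketched below for a pure-death process, assuming a crude common-seed pairing of coarse and fine tau-leap paths; the paper's more careful coupling of the underlying Poisson variables and its adaptive choice of τ are omitted.

    # Hedged skeleton of a multi-level Monte Carlo estimator: a cheap, biased
    # base estimator at the coarsest time step plus correction terms from
    # paired coarse/fine paths. Here the pairing simply reuses one random seed
    # per pair (common random numbers), which is a much weaker coupling than
    # the tau-leap coupling in the paper.
    import numpy as np

    K, X0, T = 0.5, 1000, 1.0          # decay rate, initial population, horizon

    def tau_leap_final(tau, seed):
        """Final population of a pure-death process X -> X-1 via tau-leaping."""
        rng = np.random.default_rng(seed)
        x, t = X0, 0.0
        while t < T and x > 0:
            fired = min(x, rng.poisson(K * x * tau))
            x -= fired
            t += tau
        return x

    levels = [0.2, 0.1, 0.05, 0.025]   # successively finer tau values
    n_paths = [4000, 1000, 400, 200]   # fewer paths on the expensive fine levels

    estimate = np.mean([tau_leap_final(levels[0], s) for s in range(n_paths[0])])
    for lvl in range(1, len(levels)):
        diffs = [tau_leap_final(levels[lvl], 10_000 * lvl + s)
                 - tau_leap_final(levels[lvl - 1], 10_000 * lvl + s)
                 for s in range(n_paths[lvl])]
        estimate += np.mean(diffs)      # correction toward the finer level

    print("MLMC estimate of E[X(T)]:", estimate)
    print("exact mean:", X0 * np.exp(-K * T))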
7 CFR 1484.53 - What are the requirements for documenting and reporting contributions?
Code of Federal Regulations, 2010 CFR
2010-01-01
... contribution must be documented by the Cooperator, showing the method of computing non-cash contributions, salaries, and travel expenses. (b) Each Cooperator must keep records of the methods used to compute the value of non-cash contributions, and (1) Copies of invoices or receipts for expenses paid by the U.S...
Modeling Physiological Systems in the Human Body as Networks of Quasi-1D Fluid Flows
NASA Astrophysics Data System (ADS)
Staples, Anne
2008-11-01
Extensive research has been done on modeling human physiology. Most of this work has been aimed at developing detailed, three-dimensional models of specific components of physiological systems, such as a cell, a vein, a molecule, or a heart valve. While efforts such as these are invaluable to our understanding of human biology, if we were to construct a global model of human physiology with this level of detail, computing even a nanosecond in this computational being's life would certainly be prohibitively expensive. With this in mind, we derive the Pulsed Flow Equations, a set of coupled one-dimensional partial differential equations, specifically designed to capture two-dimensional viscous, transport, and other effects, and aimed at providing accurate and fast-to-compute global models for physiological systems represented as networks of quasi one-dimensional fluid flows. Our goal is to be able to perform faster-than-real time simulations of global processes in the human body on desktop computers.
Analysis of Gas-Particle Flows through Multi-Scale Simulations
NASA Astrophysics Data System (ADS)
Gu, Yile
Multi-scale structures are inherent in gas-solid flows, which render the modeling efforts challenging. On one hand, detailed simulations where the fine structures are resolved and particle properties can be directly specified can account for complex flow behaviors, but they are too computationally expensive to apply to larger systems. On the other hand, coarse-grained simulations demand much less computation, but they necessitate constitutive models which are often not readily available for given particle properties. The present study focuses on addressing this issue, as it seeks to provide a general framework through which one can obtain the required constitutive models from detailed simulations. To demonstrate the viability of this general framework, in which closures can be proposed for different particle properties, we focus on the van der Waals force of interaction between particles. We start with Computational Fluid Dynamics (CFD) - Discrete Element Method (DEM) simulations where the fine structures are resolved and the van der Waals force between particles can be directly specified, and obtain closures for stress and drag that are required for coarse-grained simulations. Specifically, we develop a new cohesion model that appropriately accounts for the van der Waals force between particles to be used in CFD-DEM simulations. We then validate this cohesion model and the CFD-DEM approach by showing that it can qualitatively capture experimental results where the addition of small particles to gas fluidization reduces bubble sizes. Based on the DEM and CFD-DEM simulation results, we propose stress models that account for the van der Waals force between particles. Finally, we apply machine learning, specifically neural networks, to obtain a drag model that captures the effects of fine structures and inter-particle cohesion. We show that this novel approach using neural networks, which can be readily applied to closures other than drag, can take advantage of the large amount of data generated from simulations, and therefore offers superior modeling performance over traditional approaches.
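The closure-fitting step can be pictured with the following minimal sketch, in which a small neural network regresses a drag correction against assumed coarse-grained features (a filtered solid fraction and a slip Reynolds number) using synthetic data; none of the data or feature choices come from the study.

    # Hedged sketch of fitting a data-driven drag closure: a small neural
    # network maps coarse-grained features to a drag correction factor. The
    # features and the synthetic training data are placeholder assumptions
    # standing in for quantities filtered from CFD-DEM results.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 5000
    solid_frac = rng.uniform(0.0, 0.5, n)
    reynolds = rng.uniform(1.0, 200.0, n)
    # Synthetic "fine-grid" drag correction with noise, standing in for CFD-DEM data.
    h_drag = (1.0 - solid_frac) ** 2 / (1.0 + 0.01 * reynolds) + rng.normal(0, 0.02, n)

    X = np.column_stack([solid_frac, reynolds])
    X_tr, X_te, y_tr, y_te = train_test_split(X, h_drag, random_state=0)

    model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
    model.fit(X_tr, y_tr)
    print("held-out R^2 of drag closure fit:", round(model.score(X_te, y_te), 3))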
Applying "intelligent" materials for materials education: The Labless Lab™
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andrade, J.D.; Scheer, R.
1994-12-31
A very large number of science and engineering courses taught in colleges and universities today do not involve laboratories. Although good instructors incorporate class demonstrations, hands on homework, and various teaching aids, including computer simulations, the fact is that students in such courses often accept key concepts and experimental results without discovering them for themselves. The only partial solution to this problem has been increasing use of class demonstrations and computer simulations. The authors feel strongly that many complex concepts can be observed and assimilated through experimentation with properly designed materials. They propose the development of materials and specimens designed specifically for education purposes. Intelligent and communicative materials are ideal for this purpose. Specimens which respond in an observable fashion to new environments and situations provided by the students/experimenter provide a far more effective materials science and engineering experience than readouts and data generated by complex and expensive machines, particularly in an introductory course. Modern materials can be designed to literally communicate with the observer. The authors embarked on a project to develop a series of Labless Labs™ utilizing various degrees and levels of intelligence in materials. It is expected that such Labless Labs™ would be complementary to textbooks and computer simulations and to be used to provide a reality for students in courses and other learning situations where access to a laboratory is non-existent or limited.
Efficient generation of low-energy folded states of a model protein
NASA Astrophysics Data System (ADS)
Gordon, Heather L.; Kwan, Wai Kei; Gong, Chunhang; Larrass, Stefan; Rothstein, Stuart M.
2003-01-01
A number of short simulated annealing runs are performed on a highly-frustrated 46-"residue" off-lattice model protein. We perform, in an iterative fashion, a principal component analysis of the 946 nonbonded interbead distances, followed by two varieties of cluster analyses: hierarchical and k-means clustering. We identify several distinct sets of conformations with reasonably consistent cluster membership. Nonbonded distance constraints are derived for each cluster and are employed within a distance geometry approach to generate many new conformations, previously unidentified by the simulated annealing experiments. Subsequent analyses suggest that these new conformations are members of the parent clusters from which they were generated. Furthermore, several novel, previously unobserved structures with low energy were uncovered, augmenting the ensemble of simulated annealing results, and providing a complete distribution of low-energy states. The computational cost of this approach to generating low-energy conformations is small when compared to the expense of further Monte Carlo simulated annealing runs.
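A minimal sketch of the analysis pipeline (principal component analysis of the nonbonded interbead distances followed by k-means clustering) is given below on random placeholder data; the distance-geometry generation of new conformations is not reproduced.

    # Hedged sketch of the conformation-analysis pipeline: PCA of the nonbonded
    # interbead distances followed by k-means clustering of the conformations.
    # The distance matrix here is random placeholder data, not simulation output.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(5)
    n_conformations, n_distances = 300, 946     # 946 nonbonded distances, as in the paper
    distances = rng.normal(loc=2.0, scale=0.3, size=(n_conformations, n_distances))

    pca = PCA(n_components=10)
    scores = pca.fit_transform(distances)        # low-dimensional conformation coordinates

    kmeans = KMeans(n_clusters=6, n_init=10, random_state=0)
    labels = kmeans.fit_predict(scores)

    print("explained variance (first 3 PCs):", np.round(pca.explained_variance_ratio_[:3], 3))
    print("cluster sizes:", np.bincount(labels))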
Sensitivity of electrospray molecular dynamics simulations to long-range Coulomb interaction models
NASA Astrophysics Data System (ADS)
Mehta, Neil A.; Levin, Deborah A.
2018-03-01
Molecular dynamics (MD) electrospray simulations of 1-ethyl-3-methylimidazolium tetrafluoroborate (EMIM-BF4) ion liquid were performed with the goal of evaluating the influence of long-range Coulomb models on ion emission characteristics. The direct Coulomb (DC), shifted force Coulomb sum (SFCS), and particle-particle particle-mesh (PPPM) long-range Coulomb models were considered in this work. The DC method with a sufficiently large cutoff radius was found to be the most accurate approach for modeling electrosprays, but it is computationally expensive. The Coulomb potential energy modeled by the DC method in combination with the radial electric fields were found to be necessary to generate the Taylor cone. The differences observed between the SFCS and the DC in terms of predicting the total ion emission suggest that the former should not be used in MD electrospray simulations. Furthermore, the common assumption of domain periodicity was observed to be detrimental to the accuracy of the capillary-based electrospray simulations.
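For orientation, the sketch below writes out the standard undamped shifted-force Coulomb pair energy alongside the bare (direct) Coulomb form; the exact SFCS variant and constants used in the paper may differ, and charges are taken in units where the Coulomb constant is one.

    # Hedged sketch of a shifted-force Coulomb pair potential of the kind used
    # in SFCS-style truncation schemes, compared with the bare (direct) Coulomb
    # form. This is the standard undamped shifted-force expression; the paper's
    # exact variant may differ. Units chosen so that the Coulomb constant is 1.
    import numpy as np

    def coulomb_direct(r, qi, qj):
        return qi * qj / r

    def coulomb_shifted_force(r, qi, qj, r_cut):
        """Energy and force both go smoothly to zero at the cutoff r_cut."""
        v = qi * qj * (1.0 / r - 1.0 / r_cut + (r - r_cut) / r_cut ** 2)
        return np.where(r <= r_cut, v, 0.0)

    r = np.linspace(0.5, 15.0, 6)
    print("direct :", np.round(coulomb_direct(r, 1.0, -1.0), 4))
    print("shifted:", np.round(coulomb_shifted_force(r, 1.0, -1.0, r_cut=12.0), 4))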
Best Practices for Crash Modeling and Simulation
NASA Technical Reports Server (NTRS)
Fasanella, Edwin L.; Jackson, Karen E.
2002-01-01
Aviation safety can be greatly enhanced by the expeditious use of computer simulations of crash impact. Unlike automotive impact testing, which is now routine, experimental crash tests of even small aircraft are expensive and complex due to the high cost of the aircraft and the myriad of crash impact conditions that must be considered. Ultimately, the goal is to utilize full-scale crash simulations of aircraft for design evaluation and certification. The objective of this publication is to describe "best practices" for modeling aircraft impact using explicit nonlinear dynamic finite element codes such as LS-DYNA, DYNA3D, and MSC.Dytran. Although "best practices" is somewhat relative, it is hoped that the authors' experience will help others to avoid some of the common pitfalls in modeling that are not documented in one single publication. In addition, a discussion of experimental data analysis, digital filtering, and test-analysis correlation is provided. Finally, some examples of aircraft crash simulations are described in several appendices following the main report.
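The digital filtering mentioned above can be pictured with the following minimal sketch, which low-pass filters a synthetic impact acceleration trace with a zero-phase Butterworth filter from scipy; the sample rate, cutoff, and signal are illustrative assumptions rather than values from the report.

    # Hedged sketch of the data-filtering step: impact acceleration histories
    # are commonly low-pass filtered before test-analysis correlation. Here a
    # zero-phase Butterworth filter is applied to a synthetic, noisy pulse.
    import numpy as np
    from scipy.signal import butter, filtfilt

    fs = 10_000.0                     # sample rate, Hz (assumed)
    t = np.arange(0.0, 0.2, 1.0 / fs)
    pulse = 50.0 * np.exp(-((t - 0.05) / 0.01) ** 2)                   # idealized crash pulse, g
    raw = pulse + 5.0 * np.random.default_rng(2).normal(size=t.size)   # add sensor noise

    b, a = butter(4, 300.0, btype="low", fs=fs)   # 4th-order, 300 Hz cutoff (assumed)
    filtered = filtfilt(b, a, raw)                # zero-phase filtering

    print("peak raw acceleration     :", round(raw.max(), 1), "g")
    print("peak filtered acceleration:", round(filtered.max(), 1), "g")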
Need for speed: An optimized gridding approach for spatially explicit disease simulations.
Sellman, Stefan; Tsao, Kimberly; Tildesley, Michael J; Brommesson, Peter; Webb, Colleen T; Wennergren, Uno; Keeling, Matt J; Lindström, Tom
2018-04-01
Numerical models for simulating outbreaks of infectious diseases are powerful tools for informing surveillance and control strategy decisions. However, large-scale spatially explicit models can be limited by the amount of computational resources they require, which poses a problem when multiple scenarios need to be explored to provide policy recommendations. We introduce an easily implemented method that can reduce computation time in a standard Susceptible-Exposed-Infectious-Removed (SEIR) model without introducing any further approximations or truncations. It is based on a hierarchical infection process that operates on entire groups of spatially related nodes (cells in a grid) in order to efficiently filter out large volumes of susceptible nodes that would otherwise have required expensive calculations. After the filtering of the cells, only a subset of the nodes that were originally at risk are then evaluated for actual infection. The increase in efficiency is sensitive to the exact configuration of the grid, and we describe a simple method to find an estimate of the optimal configuration of a given landscape as well as a method to partition the landscape into a grid configuration. To investigate its efficiency, we compare the introduced methods to other algorithms and evaluate computation time, focusing on simulated outbreaks of foot-and-mouth disease (FMD) on the farm population of the USA, the UK and Sweden, as well as on three randomly generated populations with varying degree of clustering. The introduced method provided up to 500 times faster calculations than pairwise computation, and consistently performed as well or better than other available methods. This enables large scale, spatially explicit simulations such as for the entire continental USA without sacrificing realism or predictive power.
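The cell-level filtering idea can be sketched as a thinning scheme, shown below under assumed kernel and landscape parameters: each cell gets an overestimated per-node infection probability, a binomial draw decides how many nodes to inspect, and each inspected node is accepted with the ratio of its exact probability to the overestimate. The paper's optimal grid sizing and full SEIR bookkeeping are not reproduced.

    # Hedged sketch of cell-level filtering by thinning: marginally every node
    # is infected with its exact probability, but most cells are skipped
    # without touching their nodes. Kernel form and landscape are assumptions.
    import numpy as np

    rng = np.random.default_rng(11)

    def kernel(d, beta=0.02, alpha=2.0):
        """Distance-dependent infection probability (illustrative form)."""
        return 1.0 - np.exp(-beta / (1.0 + d) ** alpha)

    source = np.array([0.0, 0.0])
    cells = {}                                     # cell id -> array of node positions
    for cid in range(100):
        center = rng.uniform(-50.0, 50.0, size=2)
        cells[cid] = center + rng.uniform(-2.5, 2.5, size=(50, 2))

    newly_infected, nodes_evaluated = [], 0
    for cid, nodes in cells.items():
        d_min = max(np.linalg.norm(nodes.mean(axis=0) - source) - 5.0, 0.0)  # distance lower bound
        p_over = kernel(d_min)                      # overestimate for every node in the cell
        k = rng.binomial(len(nodes), p_over)
        if k == 0:
            continue                                # whole cell filtered out cheaply
        for idx in rng.choice(len(nodes), size=k, replace=False):
            nodes_evaluated += 1
            p_true = kernel(np.linalg.norm(nodes[idx] - source))
            if rng.random() < p_true / p_over:      # thinning: exact marginal probability
                newly_infected.append((cid, idx))

    print("nodes evaluated:", nodes_evaluated, "of", 100 * 50)
    print("new infections :", len(newly_infected))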
26 CFR 1.213-1 - Medical, dental, etc., expenses.
Code of Federal Regulations, 2010 CFR
2010-04-01
... medical care includes the diagnosis, cure, mitigation, treatment, or prevention of disease. Expenses paid... taxable year for insurance that constitute expenses paid for medical care shall, for purposes of computing... care of the taxpayer, his spouse, or a dependent of the taxpayer and not be compensated for by...
26 CFR 1.556-2 - Adjustments to taxable income.
Code of Federal Regulations, 2010 CFR
2010-04-01
... of deductions for trade or business expenses and depreciation which are allocable to the operation... computed without the deduction of the amount disallowed under section 556(b)(5), relating to expenses and... disallowed under section 556(b)(5), relating to expenses and depreciation applicable to property of the...
Numerical simulation of ice accretion phenomena on rotor blade of axial blower
NASA Astrophysics Data System (ADS)
Matsuura, Taiki; Suzuki, Masaya; Yamamoto, Makoto; Shishido, Shinichiro; Murooka, Takeshi; Miyagawa, Hiroshi
2012-08-01
Ice accretion is the phenomenon that super-cooled water droplets impinge and accrete on a body. It is well known that ice accretion on blades and airfoils leads to performance degradation and severe accidents. For this reason, experimental investigations have been carried out using flight tests or icing tunnels. However, it is too expensive, dangerous, and difficult to set actual icing conditions. Hence, computational fluid dynamics is useful to predict ice accretion. A rotor blade is one of jet engine components where ice accretes. Therefore, the authors focus on the ice accretion on a rotor blade in this study. Three-dimensional icing phenomena on the rotor blade of a commercial axial blower are computed here, and ice accretion on the rotor blade is numerically investigated.
Overlapped Fourier coding for optical aberration removal
Horstmeyer, Roarke; Ou, Xiaoze; Chung, Jaebum; Zheng, Guoan; Yang, Changhuei
2014-01-01
We present an imaging procedure that simultaneously optimizes a camera’s resolution and retrieves a sample’s phase over a sequence of snapshots. The technique, termed overlapped Fourier coding (OFC), first digitally pans a small aperture across a camera’s pupil plane with a spatial light modulator. At each aperture location, a unique image is acquired. The OFC algorithm then fuses these low-resolution images into a full-resolution estimate of the complex optical field incident upon the detector. Simultaneously, the algorithm utilizes redundancies within the acquired dataset to computationally estimate and remove unknown optical aberrations and system misalignments via simulated annealing. The result is an imaging system that can computationally overcome its optical imperfections to offer enhanced resolution, at the expense of taking multiple snapshots over time. PMID:25321982
A strategy for improved computational efficiency of the method of anchored distributions
NASA Astrophysics Data System (ADS)
Over, Matthew William; Yang, Yarong; Chen, Xingyuan; Rubin, Yoram
2013-06-01
This paper proposes a strategy for improving the computational efficiency of model inversion using the method of anchored distributions (MAD) by "bundling" similar model parametrizations in the likelihood function. Inferring the likelihood function typically requires a large number of forward model (FM) simulations for each possible model parametrization; as a result, the process is quite expensive. To ease this prohibitive cost, we present an approximation for the likelihood function called bundling that relaxes the requirement for high quantities of FM simulations. This approximation redefines the conditional statement of the likelihood function as the probability of a set of similar model parametrizations "bundle" replicating field measurements, which we show is neither a model reduction nor a sampling approach to improving the computational efficiency of model inversion. To evaluate the effectiveness of these modifications, we compare the quality of predictions and computational cost of bundling relative to a baseline MAD inversion of 3-D flow and transport model parameters. Additionally, to aid understanding of the implementation we provide a tutorial for bundling in the form of a sample data set and script for the R statistical computing language. For our synthetic experiment, bundling achieved a 35% reduction in overall computational cost and had a limited negative impact on predicted probability distributions of the model parameters. Strategies for minimizing error in the bundling approximation, for enforcing similarity among the sets of model parametrizations, and for identifying convergence of the likelihood function are also presented.
Unsteady Analysis of Inlet-Compressor Acoustic Interactions Using Coupled 3-D and 1-D CFD Codes
NASA Technical Reports Server (NTRS)
Suresh, A.; Cole, G. L.
2000-01-01
It is well known that the dynamic response of a mixed compression supersonic inlet is very sensitive to the boundary condition imposed at the subsonic exit (engine face) of the inlet. In previous work, a 3-D computational fluid dynamics (CFD) inlet code (NPARC) was coupled at the engine face to a 3-D turbomachinery code (ADPAC) simulating an isolated rotor and the coupled simulation used to study the unsteady response of the inlet. The main problem with this approach is that the high fidelity turbomachinery simulation becomes prohibitively expensive as more stages are included in the simulation. In this paper, an alternative approach is explored, wherein the inlet code is coupled to a lesser fidelity 1-D transient compressor code (DYNTECC) which simulates the whole compressor. The specific application chosen for this evaluation is the collapsing bump experiment performed at the University of Cincinnati, wherein reflections of a large-amplitude acoustic pulse from a compressor were measured. The metrics for comparison are the pulse strength (time integral of the pulse amplitude) and wave form (shape). When the compressor is modeled by stage characteristics the computed strength is about ten percent greater than that for the experiment, but the wave shapes are in poor agreement. An alternate approach that uses a fixed rise in duct total pressure and temperature (so-called 'lossy' duct) to simulate a compressor gives good pulse shapes but the strength is about 30 percent low.
Knowledge Based Cloud FE Simulation of Sheet Metal Forming Processes.
Zhou, Du; Yuan, Xi; Gao, Haoxiang; Wang, Ailing; Liu, Jun; El Fakir, Omer; Politis, Denis J; Wang, Liliang; Lin, Jianguo
2016-12-13
The use of Finite Element (FE) simulation software to adequately predict the outcome of sheet metal forming processes is crucial to enhancing the efficiency and lowering the development time of such processes, whilst reducing costs involved in trial-and-error prototyping. Recent focus on the substitution of steel components with aluminum alloy alternatives in the automotive and aerospace sectors has increased the need to simulate the forming behavior of such alloys for ever more complex component geometries. However these alloys, and in particular their high strength variants, exhibit limited formability at room temperature, and high temperature manufacturing technologies have been developed to form them. Consequently, advanced constitutive models are required to reflect the associated temperature and strain rate effects. Simulating such behavior is computationally very expensive using conventional FE simulation techniques. This paper presents a novel Knowledge Based Cloud FE (KBC-FE) simulation technique that combines advanced material and friction models with conventional FE simulations in an efficient manner thus enhancing the capability of commercial simulation software packages. The application of these methods is demonstrated through two example case studies, namely: the prediction of a material's forming limit under hot stamping conditions, and the tool life prediction under multi-cycle loading conditions.
Theory for the solvation of nonpolar solutes in water
NASA Astrophysics Data System (ADS)
Urbic, T.; Vlachy, V.; Kalyuzhnyi, Yu. V.; Dill, K. A.
2007-11-01
We recently developed an angle-dependent Wertheim integral equation theory (IET) of the Mercedes-Benz (MB) model of pure water [Silverstein et al., J. Am. Chem. Soc. 120, 3166 (1998)]. Our approach treats explicitly the coupled orientational constraints within water molecules. The analytical theory offers the advantage of being less computationally expensive than Monte Carlo simulations by two orders of magnitude. Here we apply the angle-dependent IET to studying the hydrophobic effect, the transfer of a nonpolar solute into MB water. We find that the theory reproduces the Monte Carlo results qualitatively for cold water and quantitatively for hot water.
Multidisciplinary optimization of an HSCT wing using a response surface methodology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Giunta, A.A.; Grossman, B.; Mason, W.H.
1994-12-31
Aerospace vehicle design is traditionally divided into three phases: conceptual, preliminary, and detailed. Each of these design phases entails a particular level of accuracy and computational expense. While there are several computer programs which perform inexpensive conceptual-level aircraft multidisciplinary design optimization (MDO), aircraft MDO remains prohibitively expensive using preliminary- and detailed-level analysis tools. This occurs due to the expense of computational analyses and because gradient-based optimization requires the analysis of hundreds or thousands of aircraft configurations to estimate design sensitivity information. A further hindrance to aircraft MDO is the problem of numerical noise which occurs frequently in engineering computations. Computer models produce numerical noise as a result of the incomplete convergence of iterative processes, round-off errors, and modeling errors. Such numerical noise is typically manifested as a high frequency, low amplitude variation in the results obtained from the computer models. Optimization attempted using noisy computer models may result in the erroneous calculation of design sensitivities and may slow or prevent convergence to an optimal design.
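The response-surface idea can be illustrated with the minimal sketch below, which fits a full quadratic polynomial by least squares to noisy outputs of a synthetic two-variable analysis so that optimization can work on a smooth surrogate; the design variables and noise model are placeholders.

    # Hedged sketch of the response-surface idea: fit a quadratic polynomial by
    # least squares to noisy analysis outputs, giving a smooth, cheap surrogate
    # that filters out high-frequency numerical noise.
    import numpy as np

    rng = np.random.default_rng(4)

    def noisy_analysis(x1, x2):
        """Smooth trend plus high-frequency, low-amplitude 'numerical noise'."""
        return (x1 - 1.0) ** 2 + 2.0 * (x2 + 0.5) ** 2 + 0.01 * np.sin(400.0 * x1 * x2)

    x1, x2 = rng.uniform(-2, 2, 60), rng.uniform(-2, 2, 60)
    y = noisy_analysis(x1, x2)

    # Full quadratic basis in two variables: 1, x1, x2, x1^2, x1*x2, x2^2.
    A = np.column_stack([np.ones_like(x1), x1, x2, x1 ** 2, x1 * x2, x2 ** 2])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)

    # The fitted surrogate is cheap to evaluate and smooth enough for gradient use.
    x1g, x2g = 0.9, -0.4
    surrogate = coef @ np.array([1.0, x1g, x2g, x1g ** 2, x1g * x2g, x2g ** 2])
    print("surrogate vs analysis at test point:", round(surrogate, 4),
          round(noisy_analysis(x1g, x2g), 4))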
NASA Astrophysics Data System (ADS)
Sagui, Celeste; Pedersen, Lee G.; Darden, Thomas A.
2004-01-01
The accurate simulation of biologically active macromolecules faces serious limitations that originate in the treatment of electrostatics in the empirical force fields. The current use of "partial charges" is a significant source of errors, since these vary widely with different conformations. By contrast, the molecular electrostatic potential (MEP) obtained through the use of a distributed multipole moment description, has been shown to converge to the quantum MEP outside the van der Waals surface, when higher order multipoles are used. However, in spite of the considerable improvement to the representation of the electronic cloud, higher order multipoles are not part of current classical biomolecular force fields due to the excessive computational cost. In this paper we present an efficient formalism for the treatment of higher order multipoles in Cartesian tensor formalism. The Ewald "direct sum" is evaluated through a McMurchie-Davidson formalism [L. McMurchie and E. Davidson, J. Comput. Phys. 26, 218 (1978)]. The "reciprocal sum" has been implemented in three different ways: using an Ewald scheme, a particle mesh Ewald (PME) method, and a multigrid-based approach. We find that even though the use of the McMurchie-Davidson formalism considerably reduces the cost of the calculation with respect to the standard matrix implementation of multipole interactions, the calculation in direct space remains expensive. When most of the calculation is moved to reciprocal space via the PME method, the cost of a calculation where all multipolar interactions (up to hexadecapole-hexadecapole) are included is only about 8.5 times more expensive than a regular AMBER 7 [D. A. Pearlman et al., Comput. Phys. Commun. 91, 1 (1995)] implementation with only charge-charge interactions. The multigrid implementation is slower but shows very promising results for parallelization. It provides a natural way to interface with continuous, Gaussian-based electrostatics in the future. It is hoped that this new formalism will facilitate the systematic implementation of higher order multipoles in classical biomolecular force fields.
Self-consistent perturbation theory for two dimensional twisted bilayers
NASA Astrophysics Data System (ADS)
Shirodkar, Sharmila N.; Tritsaris, Georgios A.; Kaxiras, Efthimios
Theoretical modeling and ab-initio simulations of two dimensional heterostructures with arbitrary angles of rotation between layers involve unrealistically large and expensive calculations. To overcome this shortcoming, we develop a methodology for weakly interacting heterostructures that treats the effect of one layer on the other as a perturbation and restricts the calculations to their primitive cells, thus avoiding computationally expensive supercells. We start by approximating the interaction potential between the twisted bilayers to that of a hypothetical configuration (viz. ideally stacked untwisted layers), which produces band structures in reasonable agreement with full-scale ab-initio calculations for commensurate and twisted bilayers of graphene (Gr) and Gr/hexagonal boron nitride (h-BN) heterostructures. We then self-consistently calculate the charge density and, hence, the interaction potential of the heterostructures. In this work, we test our model for bilayers of various combinations of Gr, h-BN and transition metal dichalcogenides, and discuss the advantages and shortcomings of the self-consistently calculated interaction potential.
Faller, Christina E.; Raman, E. Prabhu; MacKerell, Alexander D.; Guvench, Olgun
2015-01-01
Fragment-based drug design (FBDD) involves screening low molecular weight molecules (“fragments”) that correspond to functional groups found in larger drug-like molecules to determine their binding to target proteins or nucleic acids. Based on the principle of thermodynamic additivity, two fragments that bind non-overlapping nearby sites on the target can be combined to yield a new molecule whose binding free energy is the sum of those of the fragments. Experimental FBDD approaches, like NMR and X-ray crystallography, have proven very useful but can be expensive in terms of time, materials, and labor. Accordingly, a variety of computational FBDD approaches have been developed that provide different levels of detail and accuracy. The Site Identification by Ligand Competitive Saturation (SILCS) method of computational FBDD uses all-atom explicit-solvent molecular dynamics (MD) simulations to identify fragment binding. The target is “soaked” in an aqueous solution with multiple fragments having different identities. The resulting computational competition assay reveals what small molecule types are most likely to bind which regions of the target. From SILCS simulations, 3D probability maps of fragment binding called “FragMaps” can be produced. Based on the probabilities relative to bulk, SILCS FragMaps can be used to determine “Grid Free Energies (GFEs),” which provide per-atom contributions to fragment binding affinities. For essentially no additional computational overhead relative to the production of the FragMaps, GFEs can be used to compute Ligand Grid Free Energies (LGFEs) for arbitrarily complex molecules, and these LGFEs can be used to rank-order the molecules in accordance with binding affinities. PMID:25709034
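As a rough illustration of the FragMap-to-GFE conversion described above, the sketch below (not the SILCS implementation; array shapes, the normalization of voxel probabilities, and the nominal value of kBT are assumptions) converts voxel occupancy counts for one fragment type into a grid free energy map and sums per-atom contributions into an LGFE by nearest-voxel lookup:

```python
import numpy as np

KB_T = 0.593  # kcal/mol at roughly 300 K

def grid_free_energy(counts, bulk_probability, kBT=KB_T):
    """Convert voxel occupancy counts for one fragment type into a GFE map:
    GFE = -kBT * ln(P_voxel / P_bulk); voxels with no counts are left at 0."""
    p_voxel = counts / counts.sum()
    gfe = np.zeros_like(p_voxel, dtype=float)
    occupied = p_voxel > 0
    gfe[occupied] = -kBT * np.log(p_voxel[occupied] / bulk_probability)
    return gfe

def ligand_grid_free_energy(atom_coords, atom_types, gfe_maps, origin, spacing):
    """Sum per-atom GFE contributions by looking up each atom's voxel in the
    FragMap matching its atom type (assumes all atoms lie inside the grid)."""
    lgfe = 0.0
    for xyz, atype in zip(atom_coords, atom_types):
        idx = tuple(np.floor((np.asarray(xyz) - origin) / spacing).astype(int))
        lgfe += gfe_maps[atype][idx]
    return lgfe
```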
CAPRI: Using a Geometric Foundation for Computational Analysis and Design
NASA Technical Reports Server (NTRS)
Haimes, Robert
2002-01-01
CAPRI (Computational Analysis Programming Interface) is a software development tool intended to make computerized design, simulation and analysis faster and more efficient. The computational steps traditionally taken for most engineering analysis (Computational Fluid Dynamics (CFD), structural analysis, etc.) are: Surface Generation, usually by employing a Computer Aided Design (CAD) system; Grid Generation, preparing the volume for the simulation; Flow Solver, producing the results at the specified operational point; and Post-processing Visualization, interactively attempting to understand the results. It should be noted that the structures problem is more tractable than CFD; there are fewer mesh topologies used and the grids are not as fine (this problem space does not have the length scaling issues of fluids). For CFD, these steps have worked well in the past for simple steady-state simulations, at the expense of much user interaction. The data was transmitted between phases via files. In most cases, the output from a CAD system could be written to IGES files. The output from grid generators and solvers does not really have standards, though a couple of file formats cover a subset of the data (e.g., PLOT3D for gridding, and the upcoming CGNS). The user would have to patch up the data or translate from one format to another to move to the next step; sometimes this could take days. Instead of this serial approach to analysis, CAPRI takes a geometry-centric approach. CAPRI is a software building tool-kit that refers to two ideas: (1) a simplified, object-oriented, hierarchical view of a solid part integrating both geometry and topology definitions, and (2) programming access to this part or assembly and any attached data. The connection to the geometry is made through an Application Programming Interface (API) and not a file system.
Modeling Endovascular Coils as Heterogeneous Porous Media
NASA Astrophysics Data System (ADS)
Yadollahi Farsani, H.; Herrmann, M.; Chong, B.; Frakes, D.
2016-12-01
Minimally invasive surgeries are the state-of-the-art treatments for many pathologies. Treating brain aneurysms is no exception; invasive neurovascular clipping is no longer the only option, and endovascular coiling has become the most common treatment. Coiling isolates the aneurysm from blood circulation by promoting thrombosis within the aneurysm. One approach to studying intra-aneurysmal hemodynamics consists of virtually deploying finite element coil models and then performing computational fluid dynamics. However, this approach is often computationally expensive and requires extensive resources. The porous medium approach has been considered as an alternative to conventional coil modeling because it lessens the complexity of computational fluid dynamics simulations by reducing the number of mesh elements needed to discretize the domain. There have been a limited number of attempts at treating endovascular coils as homogeneous porous media. However, the heterogeneity associated with coil configurations requires a more accurately defined porous medium in which the porosity and permeability change throughout the domain. We implemented this approach by introducing a lattice of sample volumes and utilizing techniques available in the field of interactive computer graphics. We observed that the introduction of the heterogeneity assumption was associated with significant changes in simulated aneurysmal flow velocities as compared to the homogeneous case. Moreover, as the sample volume size was decreased, the flow velocities approached an asymptotic value, showing the importance of the sample volume size selection. These results demonstrate that the homogeneous assumption for porous media that are inherently heterogeneous can lead to considerable errors. Additionally, this modeling approach allowed us to simulate post-treatment flows without considering the explicit geometry of a deployed endovascular coil mass, greatly simplifying computation.
Implicit integration methods for dislocation dynamics
Gardner, D. J.; Woodward, C. S.; Reynolds, D. R.; ...
2015-01-20
In dislocation dynamics, strain hardening simulations require integrating stiff systems of ordinary differential equations in time with expensive force calculations, discontinuous topological events, and rapidly changing problem size. Current solvers in use often result in small time steps and long simulation times. Faster solvers may help dislocation dynamics simulations accumulate plastic strains at strain rates comparable to experimental observations. This paper investigates the viability of high order implicit time integrators and robust nonlinear solvers to reduce simulation run times while maintaining the accuracy of the computed solution. In particular, implicit Runge-Kutta time integrators are explored as a way of providing greater accuracy over a larger time step than is typically achieved with the standard second-order trapezoidal method. In addition, both accelerated fixed point and Newton's method are investigated to provide fast and effective solves for the nonlinear systems that must be resolved within each time step. Results show that integrators of third order are the most effective, while accelerated fixed point and Newton's method both improve solver performance over the standard fixed point method used for the solution of the nonlinear systems.
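A minimal sketch of the kind of implicit step being compared, assuming SciPy is available: the trapezoidal-rule residual is solved once with a Newton-type solver and once with plain fixed-point iteration. The right-hand side f, the tolerances, and the solver choice are placeholders, not the dislocation dynamics code itself:

```python
import numpy as np
from scipy.optimize import fsolve

def trapezoidal_step(f, y_n, t_n, dt):
    """One step of the implicit trapezoidal rule:
    y_{n+1} = y_n + dt/2 * (f(t_n, y_n) + f(t_{n+1}, y_{n+1}))."""
    f_n = f(t_n, y_n)
    residual = lambda y: y - y_n - 0.5 * dt * (f_n + f(t_n + dt, y))
    return fsolve(residual, y_n)  # Newton-like solve (MINPACK hybrid method)

def fixed_point_step(f, y_n, t_n, dt, tol=1e-10, max_iter=100):
    """Same step solved by plain fixed-point iteration, the baseline that
    Newton and accelerated fixed-point solvers are measured against."""
    f_n = f(t_n, y_n)
    y = np.array(y_n, dtype=float)
    for _ in range(max_iter):
        y_new = y_n + 0.5 * dt * (f_n + f(t_n + dt, y))
        if np.linalg.norm(y_new - y) < tol:
            break
        y = y_new
    return y
```

For a stiff right-hand side, the fixed-point variant needs a much smaller dt to converge, which is the behavior the paper's solver comparison targets.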
Partnership for Edge Physics (EPSI), University of Texas Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moser, Robert; Carey, Varis; Michoski, Craig
Simulations of tokamak plasmas require a number of inputs whose values are uncertain. The effects of these input uncertainties on the reliability of model predictions are of great importance when validating predictions by comparison to experimental observations, and when using the predictions for design and operation of devices. However, high fidelity simulations of tokamak plasmas, particularly those aimed at characterization of the edge plasma physics, are computationally expensive, so lower cost surrogates are required to enable practical uncertainty estimates. Two surrogate modeling techniques have been explored in the context of tokamak plasma simulations using the XGC family of plasma simulation codes. The first is a response surface surrogate, and the second is an augmented surrogate relying on scenario extrapolation. In addition, to reduce the costs of the XGC simulations, a particle resampling algorithm was developed, which allows marker particle distributions to be adjusted to maintain optimal importance sampling. This means that the total number of particles in a simulation, and therefore its cost, can be reduced while maintaining the same accuracy.
Virtual planning for craniomaxillofacial surgery--7 years of experience.
Adolphs, Nicolai; Haberl, Ernst-Johannes; Liu, Weichen; Keeve, Erwin; Menneking, Horst; Hoffmeister, Bodo
2014-07-01
Contemporary computer-assisted surgery systems increasingly allow for virtual simulation of even complex surgical procedures, with ever more realistic predictions. Preoperative workflows are established and different commercial software solutions are available. The potential and feasibility of virtual craniomaxillofacial surgery as an additional planning tool were assessed retrospectively by comparing predictions and surgical results. Since 2006, virtual simulation has been performed in selected patient cases affected by complex craniomaxillofacial disorders (n = 8) in addition to standard surgical planning based on patient-specific 3D models. Virtual planning could be performed for all levels of the craniomaxillofacial framework within a reasonable preoperative workflow. Simulation of even complex skeletal displacements corresponded well with the real surgical result, and soft tissue simulation proved to be helpful. In combination with classic 3D models showing the underlying skeletal pathology, virtual simulation improved planning and transfer of craniomaxillofacial corrections. Additional work and expenses may be justified by increased possibilities of visualisation, information, instruction and documentation in selected craniomaxillofacial procedures.
A physical-based gas-surface interaction model for rarefied gas flow simulation
NASA Astrophysics Data System (ADS)
Liang, Tengfei; Li, Qi; Ye, Wenjing
2018-01-01
Empirical gas-surface interaction models, such as the Maxwell model and the Cercignani-Lampis model, are widely used as the boundary condition in rarefied gas flow simulations. The accuracy of these models in predicting the macroscopic behavior of rarefied gas flows is less satisfactory in some cases, especially highly non-equilibrium ones. Molecular dynamics (MD) simulations can accurately resolve the gas-surface interaction process at the atomic scale, and hence can predict the macroscopic behavior accurately. They are, however, too computationally expensive to be applied to real problems. In this work, a statistical physical-based gas-surface interaction model, which complies with the basic relations of a boundary condition, is developed based on the framework of the washboard model. By virtue of its physical basis, this new model is capable of capturing some important relations and trends that the classic empirical models fail to model correctly. As such, the new model is much more accurate than the classic models, while remaining more efficient than MD simulations. Therefore, it can serve as a more accurate and efficient boundary condition for rarefied gas flow simulations.
Development of Computational Aeroacoustics Code for Jet Noise and Flow Prediction
NASA Astrophysics Data System (ADS)
Keith, Theo G., Jr.; Hixon, Duane R.
2002-07-01
Accurate prediction of jet fan and exhaust plume flow and noise generation and propagation is very important in developing advanced aircraft engines that will pass current and future noise regulations. In jet fan flows as well as exhaust plumes, two major sources of noise are present: large-scale, coherent instabilities and small-scale turbulent eddies. In previous work for the NASA Glenn Research Center, three strategies have been explored in an effort to computationally predict the noise radiation from supersonic jet exhaust plumes. In order from the least to the most computationally expensive, these are: 1) linearized Euler equations (LEE), 2) Very Large Eddy Simulation (VLES), and 3) Large Eddy Simulation (LES). The first method solves the linearized Euler equations, obtained by linearizing about a given mean flow and neglecting viscous effects. In this way, the noise from large-scale instabilities can be found for a given mean flow. The linearized Euler equations are computationally inexpensive and have produced good noise results for supersonic jets where the large-scale instability noise dominates, as well as for the tone noise from a jet engine blade row. However, these linear equations do not predict the absolute magnitude of the noise; only the relative magnitude is predicted. Also, the predicted disturbances do not modify the mean flow, removing a physical mechanism by which the amplitude of the disturbance may be controlled. Recent research on isolated airfoils indicates that this may not affect the solution greatly at low frequencies. The second method addresses some of the concerns raised by the LEE method. In this approach, called Very Large Eddy Simulation (VLES), the unsteady Reynolds-averaged Navier-Stokes equations are solved directly using a high-accuracy computational aeroacoustics numerical scheme. With the addition of a two-equation turbulence model and the use of a relatively coarse grid, the numerical solution is effectively filtered into a directly calculated mean flow with the small-scale turbulence being modeled, and an unsteady large-scale component that is also directly calculated. In this way, the unsteady disturbances are calculated in a nonlinear way, with a direct effect on the mean flow. This method is not as fast as the LEE approach but has many advantages to recommend it; however, like the LEE approach, only the effect of the largest unsteady structures is captured. An initial calculation was performed on a supersonic jet exhaust plume, with promising results, but the calculation was hampered by the explicit time marching scheme that was employed. This explicit scheme required a very small time step to resolve the nozzle boundary layer, which caused a long run time. Current work is focused on testing a lower-order implicit time marching method to combat this problem.
A method for spectral DNS of low Rm channel flows based on the least dissipative modes
NASA Astrophysics Data System (ADS)
Kornet, Kacper; Pothérat, Alban
2015-10-01
We put forward a new type of spectral method for the direct numerical simulation of flows where anisotropy or very fine boundary layers are present. The main idea is to take advantage of the fact that such structures are dissipative and that their presence should reduce the number of degrees of freedom of the flow, when, paradoxically, their fine resolution incurs extra computational cost in most current methods. The principle of this method is to use a functional basis whose elements already include these fine structures, so as to avoid these extra costs. This leads us to develop an algorithm to implement a spectral method for arbitrary functional bases, in particular non-orthogonal ones. We construct a basic implementation of this algorithm to simulate magnetohydrodynamic (MHD) channel flows with an externally imposed, transverse magnetic field, where very thin boundary layers are known to develop along the channel walls. In this case, the sought functional basis can be built out of the eigenfunctions of the dissipation operator, which incorporate these boundary layers, and it turns out to be non-orthogonal. We validate this new scheme against numerical simulations of freely decaying MHD turbulence based on a finite volume code and find that it provides accurate results. Its ability to fully resolve wall-bounded turbulence with a number of modes close to that required by the dynamics is demonstrated on a simple example. This opens the way to full-blown simulations of MHD turbulence under very high magnetic fields, which until now were too computationally expensive. In contrast to traditional methods, the computational cost of the proposed method does not depend on the intensity of the magnetic field.
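A toy sketch of the one extra ingredient a non-orthogonal basis requires: projections are obtained by solving with the Gram (mass) matrix rather than by simple inner products. Plain monomials stand in here for the eigenfunctions of the dissipation operator used in the paper, and the quadrature is a simple grid sum:

```python
import numpy as np

# Non-orthogonal basis on [-1, 1]: plain monomials x**k (dense Gram matrix).
x = np.linspace(-1.0, 1.0, 2001)
w = np.gradient(x)                            # grid-spacing quadrature weights
basis = np.stack([x**k for k in range(8)])    # shape (n_modes, n_points)

def inner(u, v):
    """Discrete L2 inner product on the grid."""
    return np.sum(u * v * w)

# Gram ("mass") matrix and projection of a target profile onto the basis.
M = np.array([[inner(bi, bj) for bj in basis] for bi in basis])
f = np.exp(-10 * (x + 0.9)**2)                # a boundary-layer-like profile
b = np.array([inner(bi, f) for bi in basis])
coeffs = np.linalg.solve(M, b)                # non-orthogonality handled by M a = b
f_approx = coeffs @ basis
print("L2 projection error:", np.sqrt(inner(f - f_approx, f - f_approx)))
```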
Hallock, Michael J.; Stone, John E.; Roberts, Elijah; Fry, Corey; Luthey-Schulten, Zaida
2014-01-01
Simulation of in vivo cellular processes with the reaction-diffusion master equation (RDME) is a computationally expensive task. Our previous software enabled simulation of inhomogeneous biochemical systems for small bacteria over long time scales using the MPD-RDME method on a single GPU. Simulations of larger eukaryotic systems exceed the on-board memory capacity of individual GPUs, and long time simulations of modest-sized cells such as yeast are impractical on a single GPU. We present a new multi-GPU parallel implementation of the MPD-RDME method based on a spatial decomposition approach that supports dynamic load balancing for workstations containing GPUs of varying performance and memory capacity. We take advantage of high-performance features of CUDA for peer-to-peer GPU memory transfers and evaluate the performance of our algorithms on state-of-the-art GPU devices. We present parallel efficiency and performance results for simulations using multiple GPUs as system size, particle counts, and number of reactions grow. We also demonstrate multi-GPU performance in simulations of the Min protein system in E. coli. Moreover, our multi-GPU decomposition and load balancing approach can be generalized to other lattice-based problems. PMID:24882911
Hallock, Michael J; Stone, John E; Roberts, Elijah; Fry, Corey; Luthey-Schulten, Zaida
2014-05-01
Simulation of in vivo cellular processes with the reaction-diffusion master equation (RDME) is a computationally expensive task. Our previous software enabled simulation of inhomogeneous biochemical systems for small bacteria over long time scales using the MPD-RDME method on a single GPU. Simulations of larger eukaryotic systems exceed the on-board memory capacity of individual GPUs, and long time simulations of modest-sized cells such as yeast are impractical on a single GPU. We present a new multi-GPU parallel implementation of the MPD-RDME method based on a spatial decomposition approach that supports dynamic load balancing for workstations containing GPUs of varying performance and memory capacity. We take advantage of high-performance features of CUDA for peer-to-peer GPU memory transfers and evaluate the performance of our algorithms on state-of-the-art GPU devices. We present parallel efficiency and performance results for simulations using multiple GPUs as system size, particle counts, and number of reactions grow. We also demonstrate multi-GPU performance in simulations of the Min protein system in E. coli. Moreover, our multi-GPU decomposition and load balancing approach can be generalized to other lattice-based problems.
NASA Astrophysics Data System (ADS)
Lu, D.; Ricciuto, D. M.; Evans, K. J.
2017-12-01
Data-worth analysis plays an essential role in improving the understanding of the subsurface system, in developing and refining subsurface models, and in supporting rational water resources management. However, data-worth analysis is computationally expensive, as it requires quantifying parameter uncertainty, prediction uncertainty, and both current and potential data uncertainties. Assessment of these uncertainties in large-scale stochastic subsurface simulations using standard Monte Carlo (MC) sampling or advanced surrogate modeling is extremely computationally intensive, sometimes even infeasible. In this work, we propose an efficient Bayesian analysis of data-worth using a multilevel Monte Carlo (MLMC) method. Compared to standard MC, which requires a significantly large number of high-fidelity model executions to achieve a prescribed accuracy in estimating expectations, MLMC can substantially reduce the computational cost through the use of multifidelity approximations. As the data-worth analysis involves a great number of expectation estimations, the cost savings from MLMC can be substantial. While the proposed MLMC-based data-worth analysis is broadly applicable, we apply it to a highly heterogeneous oil reservoir simulation to select an optimal candidate data set that gives the largest uncertainty reduction in predicting mass flow rates at four production wells. The choices made by the MLMC estimation are validated by the actual measurements of the potential data and are consistent with the estimation obtained from standard MC. Compared to standard MC, however, MLMC greatly reduces the computational cost of the uncertainty reduction estimation, saving up to 600 days of computing time when one processor is used.
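A minimal sketch of the multilevel telescoping estimator that underlies this kind of analysis. The toy "simulator" below is a randomized quadrature rule standing in for the reservoir model; the level definitions and sample allocations are illustrative only:

```python
import numpy as np

def mlmc_estimate(samplers, n_samples):
    """Multilevel Monte Carlo estimate of E[Q_L]:
    E[Q_L] ~= E[Q_0] + sum_l E[Q_l - Q_{l-1}], with most samples on cheap levels.
    samplers[l](n) returns paired arrays (Q_l, Q_{l-1}) evaluated on the same
    random inputs, with Q_{-1} taken as 0 on level 0."""
    estimate = 0.0
    for level, n in enumerate(n_samples):
        q_fine, q_coarse = samplers[level](n)
        estimate += np.mean(q_fine - q_coarse)
    return estimate

def make_level(l):
    """Toy level-l 'model': average of sin(pi*x) over 2**(l+2) shifted grid points."""
    def sampler(n):
        rng = np.random.default_rng(l)
        fine, coarse = [], []
        for _ in range(n):
            u = rng.random()                       # shared random input per pair
            grid_f = (np.arange(2**(l + 2)) + u) / 2**(l + 2)
            fine.append(np.mean(np.sin(np.pi * grid_f)))
            if l == 0:
                coarse.append(0.0)
            else:
                grid_c = (np.arange(2**(l + 1)) + u) / 2**(l + 1)
                coarse.append(np.mean(np.sin(np.pi * grid_c)))
        return np.array(fine), np.array(coarse)
    return sampler

# Many cheap coarse samples, few expensive fine samples; result is close to 2/pi.
print(mlmc_estimate([make_level(l) for l in range(4)], [400, 100, 25, 6]))
```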
Recovery Schemes for Primitive Variables in General-relativistic Magnetohydrodynamics
NASA Astrophysics Data System (ADS)
Siegel, Daniel M.; Mösta, Philipp; Desai, Dhruv; Wu, Samantha
2018-05-01
General-relativistic magnetohydrodynamic (GRMHD) simulations are an important tool to study a variety of astrophysical systems such as neutron star mergers, core-collapse supernovae, and accretion onto compact objects. A conservative GRMHD scheme numerically evolves a set of conservation equations for “conserved” quantities and requires the computation of certain primitive variables at every time step. This recovery procedure constitutes a core part of any conservative GRMHD scheme and it is closely tied to the equation of state (EOS) of the fluid. In the quest to include nuclear physics, weak interactions, and neutrino physics, state-of-the-art GRMHD simulations employ finite-temperature, composition-dependent EOSs. While different schemes have individually been proposed, the recovery problem still remains a major source of error, failure, and inefficiency in GRMHD simulations with advanced microphysics. The strengths and weaknesses of the different schemes when compared to each other remain unclear. Here we present the first systematic comparison of various recovery schemes used in different dynamical spacetime GRMHD codes for both analytic and tabulated microphysical EOSs. We assess the schemes in terms of (i) speed, (ii) accuracy, and (iii) robustness. We find large variations among the different schemes and that there is not a single ideal scheme. While the computationally most efficient schemes are less robust, the most robust schemes are computationally less efficient. More robust schemes may require an order of magnitude more calls to the EOS, which are computationally expensive. We propose an optimal strategy of an efficient three-dimensional Newton–Raphson scheme and a slower but more robust one-dimensional scheme as a fall-back.
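A generic sketch of the proposed two-stage strategy, with placeholder residual and Jacobian functions standing in for the EOS-dependent conserved-to-primitive relations; this illustrates only the control flow (fast multidimensional Newton-Raphson with a robust bracketed one-dimensional fallback), not the GRMHD equations themselves:

```python
import numpy as np
from scipy.optimize import brentq

def recover_primitives(conserved, residual_3d, jacobian_3d, residual_1d, bracket,
                       guess, tol=1e-12, max_iter=50):
    """Try an efficient 3D Newton-Raphson solve first; if it stalls or leaves
    the physical regime, fall back to a robust bracketed 1D solve (Brent).
    The residual/Jacobian callables encode the EOS-dependent relations and are
    placeholders here. The fallback branch returns the scalar unknown only."""
    x = np.asarray(guess, dtype=float)
    for _ in range(max_iter):
        r = residual_3d(x, conserved)
        if np.linalg.norm(r) < tol:
            return x, "newton"
        try:
            x = x - np.linalg.solve(jacobian_3d(x, conserved), r)
        except np.linalg.LinAlgError:
            break
        if not np.all(np.isfinite(x)):
            break
    root = brentq(lambda z: residual_1d(z, conserved), *bracket, xtol=tol)
    return root, "fallback"
```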
Lazy Updating of hubs can enable more realistic models by speeding up stochastic simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ehlert, Kurt; Loewe, Laurence, E-mail: loewe@wisc.edu; Wisconsin Institute for Discovery, University of Wisconsin-Madison, Madison, Wisconsin 53715
2014-11-28
To respect the nature of discrete parts in a system, stochastic simulation algorithms (SSAs) must update for each action (i) all part counts and (ii) each action's probability of occurring next and its timing. This makes it expensive to simulate biological networks with well-connected "hubs" such as ATP that affect many actions. Temperature and volume also affect many actions and may be changed significantly in small steps by the network itself during fever and cell growth, respectively. Such trends matter for evolutionary questions, as cell volume determines doubling times and fever may affect survival, both key traits for biological evolution. Yet simulations often ignore such trends and assume constant environments to avoid many costly probability updates. Such computational convenience precludes analyses of important aspects of evolution. Here we present "Lazy Updating," an add-on for SSAs designed to reduce the cost of simulating hubs. When a hub changes, Lazy Updating postpones all probability updates for reactions depending on this hub until a threshold is crossed. Speedup is substantial if most computing time is spent on such updates. We implemented Lazy Updating for the Sorting Direct Method, and it is easily integrated into other SSAs such as Gillespie's Direct Method or the Next Reaction Method. Testing on several toy models and a cellular metabolism model showed >10× faster simulations for its use cases, with a small loss of accuracy. Thus we see Lazy Updating as a valuable tool for some special but important simulation problems that are difficult to address efficiently otherwise.
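A schematic sketch of the Lazy Updating idea grafted onto a plain direct-method SSA loop. The reaction data structure, the relative drift threshold, and the shortcut of refreshing all non-hub propensities after every event are simplifications for illustration, not the paper's Sorting Direct Method implementation:

```python
import numpy as np

def lazy_direct_method(x0, reactions, hub, t_end, threshold=0.05, rng=None):
    """Gillespie direct method with a simple Lazy Updating rule: propensities of
    reactions that depend on `hub` are recomputed only when the hub count has
    drifted by more than `threshold` (relative) since their last refresh.
    Each reaction is (rate_fn, stoichiometry_dict, depends_on_hub)."""
    rng = rng or np.random.default_rng()
    x, t = dict(x0), 0.0
    a = np.array([rate(x) for rate, _, _ in reactions])   # propensities
    hub_ref = max(x[hub], 1)
    while t < t_end:
        a_total = a.sum()
        if a_total <= 0:
            break
        t += rng.exponential(1.0 / a_total)
        j = rng.choice(len(reactions), p=a / a_total)
        for species, change in reactions[j][1].items():
            x[species] += change
        # Non-hub-dependent propensities are refreshed every step (kept simple here).
        for i, (rate, _, lazy) in enumerate(reactions):
            if not lazy:
                a[i] = rate(x)
        # Lazy rule: refresh hub-dependent propensities only after a large hub drift.
        if abs(x[hub] - hub_ref) > threshold * hub_ref:
            for i, (rate, _, lazy) in enumerate(reactions):
                if lazy:
                    a[i] = rate(x)
            hub_ref = max(x[hub], 1)
    return t, x
```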
NASA Technical Reports Server (NTRS)
Fletcher, Lauren E.; Aldridge, Ann M.; Wheelwright, Charles; Maida, James
1997-01-01
Task illumination has a major impact on human performance: what a person can perceive in his environment significantly affects his ability to perform tasks, especially in space's harsh environment. Training for lighting conditions in space has long depended on physical models and simulations to emulate the effect of lighting, but such tests are expensive and time-consuming. To evaluate lighting conditions not easily simulated on Earth, personnel at NASA Johnson Space Center's (JSC) Graphics Research and Analysis Facility (GRAF) have been developing computerized simulations of various illumination conditions using the ray-tracing program Radiance, developed by Greg Ward at Lawrence Berkeley Laboratory. Because these computer simulations are only as accurate as the data used, accurate information about the reflectance properties of materials and light distributions is needed. JSC's Lighting Environment Test Facility (LETF) personnel gathered material reflectance properties for a large number of paints, metals, and cloths used in the Space Shuttle and Space Station programs, and processed these data into the reflectance parameters needed for the computer simulations. They also gathered lamp distribution data for most of the light sources used, and validated the ability to accurately simulate lighting levels by comparing predictions with measurements for several ground-based tests. The result of this study is a database of material reflectance properties for a wide variety of materials, and lighting information for most of the standard light sources used in the Shuttle/Station programs. The combination of the Radiance program and GRAF's graphics capability forms a validated computerized lighting simulation capability for NASA.
Fast recovery of free energy landscapes via diffusion-map-directed molecular dynamics.
Preto, Jordane; Clementi, Cecilia
2014-09-28
The reaction pathways characterizing macromolecular systems of biological interest are associated with high free energy barriers. Resorting to standard all-atom molecular dynamics (MD) to explore such critical regions may be inappropriate, as the time needed to observe the relevant transitions can be remarkably long. In this paper, we present a new method called Extended Diffusion-Map-directed Molecular Dynamics (extended DM-d-MD) used to enhance the sampling of MD trajectories in such a way as to rapidly cover all important regions of the free energy landscape, including deep metastable states and critical transition paths. Moreover, extended DM-d-MD was combined with a reweighting scheme that enables on-the-fly recovery of information about the Boltzmann distribution. Our algorithm was successfully applied to two systems, alanine dipeptide and alanine-12. Due to the enhanced sampling, the Boltzmann distribution is recovered much faster than in plain MD simulations. For alanine dipeptide, we report a speedup of one order of magnitude with respect to plain MD simulations. For alanine-12, our algorithm allows us to highlight all important unfolded basins in several days of computation, whereas a single misfolding event is barely observable within the same amount of computational time by plain MD simulations. Our method is reaction-coordinate free, shows little dependence on a priori knowledge of the system, and can be implemented in such a way that the biased steps are not computationally expensive with respect to MD simulations, thus making our approach well adapted for larger complex systems about which little information is known.
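A bare-bones diffusion-map construction of the kind DM-d-MD relies on to identify slow collective coordinates from sampled configurations. This sketch omits density normalization and sparsification, which a production implementation would likely need; the kernel bandwidth epsilon is a free parameter:

```python
import numpy as np

def diffusion_map(points, epsilon, n_components=2):
    """Gaussian kernel -> row-normalized Markov matrix -> leading nontrivial
    eigenvectors approximate the slow collective coordinates of the sampled data."""
    d2 = np.sum((points[:, None, :] - points[None, :, :])**2, axis=-1)
    K = np.exp(-d2 / epsilon)
    P = K / K.sum(axis=1, keepdims=True)      # row-stochastic transition matrix
    evals, evecs = np.linalg.eig(P)
    order = np.argsort(-evals.real)
    # Skip the trivial constant eigenvector (eigenvalue 1).
    keep = order[1:n_components + 1]
    return evecs.real[:, keep], evals.real[keep]
```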
TADSim: Discrete Event-based Performance Prediction for Temperature Accelerated Dynamics
Mniszewski, Susan M.; Junghans, Christoph; Voter, Arthur F.; ...
2015-04-16
Next-generation high-performance computing will require more scalable and flexible performance prediction tools to evaluate software--hardware co-design choices relevant to scientific applications and hardware architectures. Here, we present a new class of tools called application simulators: parameterized fast-running proxies of large-scale scientific applications using parallel discrete event simulation. Parameterized choices for the algorithmic method and hardware options provide a rich space for design exploration and allow us to quickly find well-performing software--hardware combinations. We demonstrate our approach with a TADSim simulator that models the temperature-accelerated dynamics (TAD) method, an algorithmically complex and parameter-rich member of the accelerated molecular dynamics (AMD) family of molecular dynamics methods. The essence of the TAD application is captured without the computational expense and resource usage of the full code. We accomplish this by identifying the time-intensive elements, quantifying algorithm steps in terms of those elements, abstracting them out, and replacing them by the passage of time. We use TADSim to quickly characterize the runtime performance and algorithmic behavior for the otherwise long-running simulation code. We extend TADSim to model algorithm extensions, such as speculative spawning of the compute-bound stages, and predict performance improvements without having to implement such a method. Validation against the actual TAD code shows close agreement for the evolution of an example physical system, a silver surface. Finally, focused parameter scans have allowed us to study algorithm parameter choices over far more scenarios than would be possible with the actual simulation. This has led to interesting performance-related insights and suggested extensions.
NASA Astrophysics Data System (ADS)
Huang, Xiaomeng; Tang, Qiang; Tseng, Yuheng; Hu, Yong; Baker, Allison H.; Bryan, Frank O.; Dennis, John; Fu, Haohuan; Yang, Guangwen
2016-11-01
In the Community Earth System Model (CESM), the ocean model is computationally expensive for high-resolution grids and is often the least scalable component for high-resolution production experiments. The major bottleneck is that the barotropic solver scales poorly at high core counts. We design a new barotropic solver to accelerate the high-resolution ocean simulation. The novel solver adopts a Chebyshev-type iterative method to reduce the global communication cost in conjunction with an effective block preconditioner to further reduce the iterations. The algorithm and its computational complexity are theoretically analyzed and compared with other existing methods. We confirm the significant reduction of the global communication time with a competitive convergence rate using a series of idealized tests. Numerical experiments using the CESM 0.1° global ocean model show that the proposed approach results in a factor of 1.7 speed-up over the original method with no loss of accuracy, achieving 10.5 simulated years per wall-clock day on 16 875 cores.
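A compact sketch of a preconditioned Chebyshev iteration, the class of communication-avoiding solver adopted here: unlike conjugate gradients, the loop needs no global dot products, only matrix-vector products and preconditioner applications. The eigenvalue bounds and the preconditioner are user-supplied placeholders rather than the paper's block preconditioner:

```python
import numpy as np

def preconditioned_chebyshev(A, b, apply_M_inv, lam_min, lam_max, x0=None, n_iter=50):
    """Chebyshev iteration for A x = b with preconditioner M (applied through
    apply_M_inv), given bounds [lam_min, lam_max] on the spectrum of M^{-1} A."""
    x = np.zeros_like(b) if x0 is None else x0.copy()
    theta = 0.5 * (lam_max + lam_min)
    delta = 0.5 * (lam_max - lam_min)
    sigma = theta / delta
    rho = 1.0 / sigma
    r = b - A @ x
    d = apply_M_inv(r) / theta
    for _ in range(n_iter):
        x = x + d
        r = r - A @ d
        rho_new = 1.0 / (2.0 * sigma - rho)
        d = rho_new * rho * d + (2.0 * rho_new / delta) * apply_M_inv(r)
        rho = rho_new
    return x
```

Because the iteration count is fixed in advance, convergence checks (and their global reductions) can be done rarely or not at all, which is what reduces the communication cost at high core counts.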
Choudhuri, Samir; Bharadwaj, Somnath; Roy, Nirupam; Ghosh, Abhik; Ali, Sk Saiyad
2016-06-11
It is important to correctly subtract point sources from radio-interferometric data in order to measure the power spectrum of diffuse radiation like the Galactic synchrotron or the Epoch of Reionization 21-cm signal. It is computationally very expensive and challenging to image a very large area and accurately subtract all the point sources from the image. The problem is particularly severe at the sidelobes and the outer parts of the main lobe, where the antenna response is highly frequency dependent and the calibration also differs from that of the phase centre. Here, we show that it is possible to overcome this problem by tapering the sky response. Using simulated 150 MHz observations, we demonstrate that it is possible to suppress the contribution due to point sources from the outer parts by using the Tapered Gridded Estimator to measure the angular power spectrum C_ℓ of the sky signal. We also show from the simulation that this method can self-consistently compute the noise bias and accurately subtract it to provide an unbiased estimate of C_ℓ.
GRAPE-6A: A Single-Card GRAPE-6 for Parallel PC-GRAPE Cluster Systems
NASA Astrophysics Data System (ADS)
Fukushige, Toshiyuki; Makino, Junichiro; Kawai, Atsushi
2005-12-01
In this paper, we describe the design and performance of GRAPE-6A, a special-purpose computer for gravitational many-body simulations. It was designed to be used with a PC cluster, in which each node has one GRAPE-6A. Such a configuration is particularly cost-effective in running parallel tree algorithms. Though the use of parallel tree algorithms was possible with the original GRAPE-6 hardware, it was not very cost-effective since a single GRAPE-6 board was still too fast and too expensive. Therefore, we designed GRAPE-6A as a single PCI card to minimize the reproduction cost and to optimize the computing speed. The peak performance is 130 Gflops for one GRAPE-6A board and 3.1 Tflops for our 24 node cluster. We describe the implementation of the tree, TreePM and individual timestep algorithms on both a single GRAPE-6A system and GRAPE-6A cluster. Using the tree algorithm on our 16-node GRAPE-6A system, we can complete a collisionless simulation with 100 million particles (8000 steps) within 10 days.
Computer graphics testbed to simulate and test vision systems for space applications
NASA Technical Reports Server (NTRS)
Cheatham, John B.; Wu, Chris K.; Lin, Y. H.
1991-01-01
A system was developed for displaying computer graphics images of space objects, and its use was demonstrated as a testbed for evaluating vision systems for space applications. In order to evaluate vision systems, it is desirable to be able to control all factors involved in creating the images used for processing by the vision system. Considerable time and expense are involved in building accurate physical models of space objects. Also, precise location of the model relative to the viewer and accurate location of the light source require additional effort. As part of this project, graphics models of space objects such as the Solarmax satellite were created for which the user can control the light direction and the relative position of the object and the viewer. The work is also aimed at providing control of hue, shading, noise and shadows for use in demonstrating and testing image processing techniques. The simulated camera data can provide XYZ coordinates, pitch, yaw, and roll for the models. A physical model is also being used to provide comparison of camera images with the graphics images.
Findings and Challenges in Fine-Resolution Large-Scale Hydrological Modeling
NASA Astrophysics Data System (ADS)
Her, Y. G.
2017-12-01
Fine-resolution large-scale (FL) modeling can provide the overall picture of the hydrological cycle and transport while taking into account unique local conditions in the simulation. It can also help develop water resources management plans consistent across spatial scales by describing the spatial consequences of decisions and hydrological events extensively. FL modeling is expected to become common in the near future as global-scale remotely sensed data are emerging and computing resources have advanced rapidly. There are several spatially distributed models available for hydrological analyses. Some of them rely on numerical methods such as finite difference/element methods (FDM/FEM), which require excessive computing resources (implicit scheme) to manipulate large matrices or small simulation time intervals (explicit scheme) to maintain the stability of the solution when describing two-dimensional overland processes. Others make unrealistic assumptions such as constant overland flow velocity to reduce the computational load of the simulation. Thus, simulation efficiency often comes at the expense of precision and reliability in FL modeling. Here, we introduce a new FL continuous hydrological model and its application to four watersheds of different landscapes and sizes, from 3.5 km2 to 2,800 km2, at a spatial resolution of 30 m on an hourly basis. The model provided acceptable accuracy statistics in reproducing hydrological observations made in the watersheds. The modeling outputs, including maps of simulated travel time, runoff depth, soil water content, and groundwater recharge, were animated, visualizing the dynamics of hydrological processes occurring in the watersheds during and between storm events. Findings and challenges are discussed in the context of modeling efficiency, accuracy, and reproducibility, which we found can be improved, respectively, by employing advanced computing techniques and hydrological understanding, by using remotely sensed hydrological observations such as soil moisture and radar rainfall depth, and by sharing the model and its code in the public domain.
A Multi-Fidelity Surrogate Model for the Equation of State for Mixtures of Real Gases
NASA Astrophysics Data System (ADS)
Ouellet, Frederick; Park, Chanyoung; Koneru, Rahul; Balachandar, S.; Rollin, Bertrand
2017-11-01
The explosive dispersal of particles is a complex multiphase and multi-species fluid flow problem. In these flows, the products of detonated explosives must be treated as real gases while the ideal gas equation of state is used for the ambient air. As the products expand outward, they mix with the air and create a region where both state equations must be satisfied. One of the most accurate, yet expensive, methods to handle this problem is an algorithm that iterates between both state equations until both pressure and thermal equilibrium are achieved inside of each computational cell. This work creates a multi-fidelity surrogate model to replace this process. This is achieved by using a Kriging model to produce a curve fit which interpolates selected data from the iterative algorithm. The surrogate is optimized for computing speed and model accuracy by varying the number of sampling points chosen to construct the model. The performance of the surrogate with respect to the iterative method is tested in simulations using a finite volume code. The model's computational speed and accuracy are analyzed to show the benefits of this novel approach. This work was supported by the U.S. Department of Energy, National Nuclear Security Administration, Advanced Simulation and Computing Program, as a Cooperative Agreement under the Predictive Science Academic Alliance Program, under Contract No. DE-NA00023.
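A minimal Kriging (zero-mean Gaussian process) surrogate of the type described, trained on samples from the expensive equilibrium iteration. The choice of inputs (for example mixture composition, density and internal energy), the kernel, and the hyperparameters are purely illustrative assumptions, not the paper's model:

```python
import numpy as np

def rbf_kernel(X1, X2, length_scale=1.0):
    """Squared-exponential covariance between two sets of input points."""
    d2 = np.sum((X1[:, None, :] - X2[None, :, :])**2, axis=-1)
    return np.exp(-0.5 * d2 / length_scale**2)

class KrigingSurrogate:
    """Simple zero-mean Kriging interpolator: fit on (inputs, equilibrated
    pressure) pairs generated by the iterative equilibrium algorithm, then
    evaluate in place of that algorithm inside the flow solver."""
    def __init__(self, X, y, length_scale=1.0, noise=1e-10):
        self.X, self.length_scale = X, length_scale
        K = rbf_kernel(X, X, length_scale) + noise * np.eye(len(X))
        self.alpha = np.linalg.solve(K, y)

    def __call__(self, X_new):
        return rbf_kernel(np.atleast_2d(X_new), self.X, self.length_scale) @ self.alpha
```

Varying the number of training points then trades the surrogate's accuracy against its construction cost, which is the optimization described in the abstract.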
Textbook Multigrid Efficiency for Computational Fluid Dynamics Simulations
NASA Technical Reports Server (NTRS)
Brandt, Achi; Thomas, James L.; Diskin, Boris
2001-01-01
Considerable progress over the past thirty years has been made in the development of large-scale computational fluid dynamics (CFD) solvers for the Euler and Navier-Stokes equations. Computations are used routinely to design the cruise shapes of transport aircraft through complex-geometry simulations involving the solution of 25-100 million equations; in this arena the number of wind-tunnel tests for a new design has been substantially reduced. However, simulations of the entire flight envelope of the vehicle, including maximum lift, buffet onset, flutter, and control effectiveness, have not been as successful in eliminating the reliance on wind-tunnel testing. These simulations involve unsteady flows with more separation and stronger shock waves than at cruise. The main reasons limiting further inroads of CFD into the design process are: (1) the reliability of turbulence models; and (2) the time and expense of the numerical simulation. Because of the prohibitive resolution requirements of direct simulations at high Reynolds numbers, transition and turbulence modeling is expected to remain an issue for the near term. The focus of this paper addresses the latter problem by attempting to attain optimal efficiencies in solving the governing equations. Typically, current CFD codes based on the use of multigrid acceleration techniques and multistage Runge-Kutta time-stepping schemes are able to converge lift and drag values for cruise configurations within approximately 1000 residual evaluations. An optimally convergent method is defined as having textbook multigrid efficiency (TME), meaning the solutions to the governing system of equations are attained in a computational work which is a small (less than 10) multiple of the operation count in the discretized system of equations (residual equations). In this paper, a distributed relaxation approach to achieving TME for the Reynolds-averaged Navier-Stokes (RANS) equations is discussed, along with the foundations that form the basis of this approach. Because the governing equations are a set of coupled nonlinear conservation equations with discontinuities (shocks, slip lines, etc.) and singularities (flow- or grid-induced), the difficulties are many. This paper summarizes recent progress towards the attainment of TME in basic CFD simulations.
NASA Astrophysics Data System (ADS)
Majumdar, Suman; Mellema, Garrelt; Datta, Kanan K.; Jensen, Hannes; Choudhury, T. Roy; Bharadwaj, Somnath; Friedrich, Martina M.
2014-10-01
We present a detailed comparison of three different simulations of the epoch of reionization (EoR). The radiative transfer simulation (C2-RAY) among them is our benchmark. Radiative transfer codes can produce realistic results, but are computationally expensive. We compare it with two seminumerical techniques: one using the same haloes as C2-RAY as its sources (Sem-Num), and one using a conditional Press-Schechter scheme (CPS+GS). These are vastly more computationally efficient than C2-RAY, but rely on simpler physical assumptions. We evaluate these simulations in terms of their ability to reproduce the history and morphology of reionization. We find that both Sem-Num and CPS+GS can produce an ionization history and morphology that is very close to C2-RAY, with Sem-Num performing slightly better compared to CPS+GS. We also study different redshift-space observables of the 21-cm signal from EoR: the variance, power spectrum and its various angular multipole moments. We find that both seminumerical models perform reasonably well in predicting these observables at length scales relevant for present and future experiments. However, Sem-Num performs slightly better than CPS+GS in producing the reionization history, which is necessary for interpreting the future observations. The CPS+GS scheme, however, has the advantage that it is not restricted by the mass resolution of the dark matter density field.
Modeling chemical vapor deposition of silicon dioxide in microreactors at atmospheric pressure
NASA Astrophysics Data System (ADS)
Konakov, S. A.; Krzhizhanovskaya, V. V.
2015-01-01
We developed a multiphysics mathematical model for simulation of silicon dioxide Chemical Vapor Deposition (CVD) from tetraethyl orthosilicate (TEOS) and oxygen mixture in a microreactor at atmospheric pressure. Microfluidics is a promising technology with numerous applications in chemical synthesis due to its high heat and mass transfer efficiency and well-controlled flow parameters. Experimental studies of CVD microreactor technology are slow and expensive. Analytical solution of the governing equations is impossible due to the complexity of intertwined non-linear physical and chemical processes. Computer simulation is the most effective tool for design and optimization of microreactors. Our computational fluid dynamics model employs mass, momentum and energy balance equations for a laminar transient flow of a chemically reacting gas mixture at low Reynolds number. Simulation results show the influence of microreactor configuration and process parameters on SiO2 deposition rate and uniformity. We simulated three microreactors with the central channel diameter of 5, 10, 20 micrometers, varying gas flow rate in the range of 5-100 microliters per hour and temperature in the range of 300-800 °C. For each microchannel diameter we found an optimal set of process parameters providing the best quality of deposited material. The model will be used for optimization of the microreactor configuration and technological parameters to facilitate the experimental stage of this research.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoang, Tuan L.; Physical and Life Sciences Directorate, Lawrence Livermore National Laboratory, CA 94550; Marian, Jaime, E-mail: jmarian@ucla.edu
2015-11-01
An improved version of a recently developed stochastic cluster dynamics (SCD) method (Marian and Bulatov, 2012) [6] is introduced as an alternative to rate theory (RT) methods for solving coupled ordinary differential equation (ODE) systems for irradiation damage simulations. SCD circumvents by design the curse of dimensionality of the variable space that renders traditional ODE-based RT approaches inefficient when handling complex defect population comprised of multiple (more than two) defect species. Several improvements introduced here enable efficient and accurate simulations of irradiated materials up to realistic (high) damage doses characteristic of next-generation nuclear systems. The first improvement is a procedure for efficiently updating the defect reaction-network and event selection in the context of a dynamically expanding reaction-network. Next is a novel implementation of the τ-leaping method that speeds up SCD simulations by advancing the state of the reaction network in large time increments when appropriate. Lastly, a volume rescaling procedure is introduced to control the computational complexity of the expanding reaction-network through occasional reductions of the defect population while maintaining accurate statistics. The enhanced SCD method is then applied to model defect cluster accumulation in iron thin films subjected to triple ion-beam (Fe3+, He+ and H+) irradiations, for which standard RT or spatially-resolved kinetic Monte Carlo simulations are prohibitively expensive.
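For reference, the basic τ-leaping update reads roughly as follows. This is a generic sketch only; the SCD implementation additionally selects τ adaptively and operates on a dynamically expanding reaction network:

```python
import numpy as np

def tau_leap_step(x, propensity_fns, stoich_matrix, tau, rng):
    """One tau-leaping update: fire each reaction channel a Poisson(a_j * tau)
    number of times instead of simulating every event individually.
    x: species counts; stoich_matrix[j]: state change vector of channel j."""
    a = np.array([f(x) for f in propensity_fns])
    k = rng.poisson(a * tau)             # firings per channel over the leap
    x_new = x + k @ stoich_matrix
    return np.maximum(x_new, 0)          # crude guard against negative counts
```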
Statistical emulation of landslide-induced tsunamis at the Rockall Bank, NE Atlantic
Guillas, S.; Georgiopoulou, A.; Dias, F.
2017-01-01
Statistical methods constitute a useful approach to understand and quantify the uncertainty that governs complex tsunami mechanisms. Numerical experiments may often have a high computational cost. This forms a limiting factor for performing uncertainty and sensitivity analyses, where numerous simulations are required. Statistical emulators, as surrogates of these simulators, can provide predictions of the physical process in a much faster and computationally inexpensive way. They can form a prominent solution to explore thousands of scenarios that would be otherwise numerically expensive and difficult to achieve. In this work, we build a statistical emulator of the deterministic codes used to simulate submarine sliding and tsunami generation at the Rockall Bank, NE Atlantic Ocean, in two stages. First we calibrate, against observations of the landslide deposits, the parameters used in the landslide simulations. This calibration is performed under a Bayesian framework using Gaussian Process (GP) emulators to approximate the landslide model, and the discrepancy function between model and observations. Distributions of the calibrated input parameters are obtained as a result of the calibration. In a second step, a GP emulator is built to mimic the coupled landslide-tsunami numerical process. The emulator propagates the uncertainties in the distributions of the calibrated input parameters inferred from the first step to the outputs. As a result, a quantification of the uncertainty of the maximum free surface elevation at specified locations is obtained. PMID:28484339
NASA Astrophysics Data System (ADS)
Tian, Liang; Wilkinson, Richard; Yang, Zhibing; Power, Henry; Fagerlund, Fritjof; Niemi, Auli
2017-08-01
We explore the use of Gaussian process emulators (GPE) in the numerical simulation of CO2 injection into a deep heterogeneous aquifer. The model domain is a two-dimensional, log-normally distributed stochastic permeability field. We first estimate the cumulative distribution functions (CDFs) of the CO2 breakthrough time and the total CO2 mass using a computationally expensive Monte Carlo (MC) simulation. We then show that we can accurately reproduce these CDF estimates with a GPE, using only a small fraction of the computational cost required by traditional MC simulation. In order to build a GPE that can predict the simulator output from a permeability field consisting of thousands of values, we use a truncated Karhunen-Loève (K-L) expansion of the permeability field, which enables the application of the Bayesian functional regression approach. We perform a cross-validation exercise to give insight into the optimization of the experimental design for selected scenarios: we find that a training set of a few hundred runs is sufficient and that as few as 15 K-L components are adequate. Our work demonstrates that GPE with truncated K-L expansion can be effectively applied to uncertainty analysis associated with modelling of multiphase flow and transport processes in heterogeneous media.
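A sketch of the two ingredients named above, under stated assumptions: an exponential covariance model for the log-permeability (an assumption), truncated K-L modes obtained by eigen-decomposition, and a GP emulator trained on the low-dimensional K-L coefficients, here using scikit-learn as one possible regression backend:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def kl_modes(coords, variance, corr_length, n_modes):
    """Truncated Karhunen-Loeve modes of an exponential covariance on a set of
    grid points; a log-permeability realization is then
    log k(x) = mu + sum_i sqrt(lam_i) * phi_i(x) * xi_i, with xi_i ~ N(0, 1)."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    C = variance * np.exp(-d / corr_length)
    lam, phi = np.linalg.eigh(C)
    idx = np.argsort(lam)[::-1][:n_modes]
    return lam[idx], phi[:, idx]

def train_emulator(xi_train, q_train):
    """xi_train: (n_runs, n_modes) K-L coefficients of the training permeability
    fields; q_train: simulator output per run (e.g. CO2 breakthrough time)."""
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=np.ones(xi_train.shape[1])))
    return gp.fit(xi_train, q_train)
```

The emulator then predicts the output for thousands of new coefficient vectors at negligible cost, which is what makes the CDF estimates affordable.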
Fermion-to-qubit mappings with varying resource requirements for quantum simulation
NASA Astrophysics Data System (ADS)
Steudtner, Mark; Wehner, Stephanie
2018-06-01
The mapping of fermionic states onto qubit states, as well as the mapping of fermionic Hamiltonians into quantum gates, enables us to simulate electronic systems with a quantum computer. Benefiting the understanding of many-body systems in chemistry and physics, quantum simulation is one of the great promises of the coming age of quantum computers. Interestingly, the minimal requirement of qubits for simulating fermions seems to be agnostic of the actual number of particles as well as other symmetries. This leads to qubit requirements that are well above the minimal requirements suggested by combinatorial considerations. In this work, we develop methods that allow us to trade off qubit requirements against the complexity of the resulting quantum circuit. We first show that any classical code used to map the state of a fermionic Fock space to qubits gives rise to a mapping of fermionic models to quantum gates. As an illustrative example, we present a mapping based on a nonlinear classical error correcting code, which leads to significant qubit savings albeit at the expense of additional quantum gates. We proceed to use this framework to present a number of simpler mappings that lead to qubit savings with a more modest increase in gate difficulty. We discuss the role of symmetries such as particle conservation, and savings that could be obtained if an experimental platform could easily realize multi-controlled gates.
Statistical emulation of landslide-induced tsunamis at the Rockall Bank, NE Atlantic.
Salmanidou, D M; Guillas, S; Georgiopoulou, A; Dias, F
2017-04-01
Statistical methods constitute a useful approach to understand and quantify the uncertainty that governs complex tsunami mechanisms. Numerical experiments may often have a high computational cost. This forms a limiting factor for performing uncertainty and sensitivity analyses, where numerous simulations are required. Statistical emulators, as surrogates of these simulators, can provide predictions of the physical process in a much faster and computationally inexpensive way. They can form a prominent solution to explore thousands of scenarios that would be otherwise numerically expensive and difficult to achieve. In this work, we build a statistical emulator of the deterministic codes used to simulate submarine sliding and tsunami generation at the Rockall Bank, NE Atlantic Ocean, in two stages. First we calibrate, against observations of the landslide deposits, the parameters used in the landslide simulations. This calibration is performed under a Bayesian framework using Gaussian Process (GP) emulators to approximate the landslide model, and the discrepancy function between model and observations. Distributions of the calibrated input parameters are obtained as a result of the calibration. In a second step, a GP emulator is built to mimic the coupled landslide-tsunami numerical process. The emulator propagates the uncertainties in the distributions of the calibrated input parameters inferred from the first step to the outputs. As a result, a quantification of the uncertainty of the maximum free surface elevation at specified locations is obtained.
Effects of Topography-based Subgrid Structures on Land Surface Modeling
NASA Astrophysics Data System (ADS)
Tesfa, T. K.; Ruby, L.; Brunke, M.; Thornton, P. E.; Zeng, X.; Ghan, S. J.
2017-12-01
Topography has major control on land surface processes through its influence on atmospheric forcing, soil and vegetation properties, network topology and drainage area. Consequently, accurate climate and land surface simulations in mountainous regions cannot be achieved without considering the effects of topographic spatial heterogeneity. To test a computationally less expensive hyper-resolution land surface modeling approach, we developed topography-based landunits within a hierarchical subgrid spatial structure to improve representation of land surface processes in the ACME Land Model (ALM) with minimal increase in computational demand, while improving the ability to capture the spatial heterogeneity of atmospheric forcing and land cover influenced by topography. This study focuses on evaluation of the impacts of the new spatial structures on modeling land surface processes. As a first step, we compare ALM simulations with and without subgrid topography and driven by grid cell mean atmospheric forcing to isolate the impacts of the subgrid topography on the simulated land surface states and fluxes. Recognizing that subgrid topography also has important effects on atmospheric processes that control temperature, radiation, and precipitation, methods are being developed to downscale atmospheric forcings. Hence in the second step, the impacts of the subgrid topographic structure on land surface modeling will be evaluated by including spatial downscaling of the atmospheric forcings. Preliminary results on the atmospheric downscaling and the effects of the new spatial structures on the ALM simulations will be presented.
NASA Astrophysics Data System (ADS)
Orlić, Ivica; Mekterović, Darko; Mekterović, Igor; Ivošević, Tatjana
2015-11-01
VIBA-Lab is a computer program originally developed by the author and co-workers at the National University of Singapore (NUS) as an interactive software package for simulation of Particle Induced X-ray Emission and Rutherford Backscattering Spectra. The original program has been redeveloped into VIBA-Lab 3.0, in which the user can perform semi-quantitative analysis by comparing simulated and measured spectra as well as simulate 2D elemental maps for a given 3D sample composition. The latest version has a new and more versatile user interface. It also has the latest data set of fundamental parameters such as Coster-Kronig transition rates, fluorescence yields, mass absorption coefficients and ionization cross sections for K and L lines in a wider energy range than the original program. Our short-term plan is to introduce a routine for quantitative analysis for multiple PIXE and XRF excitations. VIBA-Lab is an excellent teaching tool for students and researchers in using PIXE and RBS techniques. At the same time, the program helps when planning an experiment and when optimizing experimental parameters such as incident ions, their energy, detector specifications, filters, geometry, etc. By "running" a virtual experiment the user can test various scenarios until the optimal PIXE and BS spectra are obtained, and in this way save a lot of expensive machine time.
NASA Astrophysics Data System (ADS)
Hoang, Tuan L.; Marian, Jaime; Bulatov, Vasily V.; Hosemann, Peter
2015-11-01
An improved version of a recently developed stochastic cluster dynamics (SCD) method (Marian and Bulatov, 2012) [6] is introduced as an alternative to rate theory (RT) methods for solving coupled ordinary differential equation (ODE) systems for irradiation damage simulations. SCD circumvents by design the curse of dimensionality of the variable space that renders traditional ODE-based RT approaches inefficient when handling complex defect populations comprising multiple (more than two) defect species. Several improvements introduced here enable efficient and accurate simulations of irradiated materials up to realistic (high) damage doses characteristic of next-generation nuclear systems. The first improvement is a procedure for efficiently updating the defect reaction network and selecting events in the context of a dynamically expanding reaction network. Next is a novel implementation of the τ-leaping method that speeds up SCD simulations by advancing the state of the reaction network in large time increments when appropriate. Lastly, a volume rescaling procedure is introduced to control the computational complexity of the expanding reaction network through occasional reductions of the defect population while maintaining accurate statistics. The enhanced SCD method is then applied to model defect cluster accumulation in iron thin films subjected to triple ion-beam (Fe3+, He+ and H+) irradiations, for which standard RT or spatially resolved kinetic Monte Carlo simulations are prohibitively expensive.
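The τ-leaping step described above can be illustrated with a deliberately small, hypothetical defect network (toy species, stoichiometries, and rate constants); the sketch advances the state in fixed leaps by drawing Poisson-distributed numbers of reaction firings, which is the essence of the speed-up, but it is not the SCD implementation.

    import numpy as np

    rng = np.random.default_rng(1)

    # toy species: [vacancies V, interstitials I, di-vacancy clusters V2]
    x = np.array([1000, 1000, 0], dtype=np.int64)

    # stoichiometry of 3 toy reactions (rows) acting on the 3 species (columns)
    S = np.array([[-1, -1,  0],   # V + I  -> annihilation
                  [-2,  0,  1],   # V + V  -> V2
                  [ 2,  0, -1]])  # V2     -> V + V

    def propensities(x):
        k = np.array([1e-4, 5e-5, 1e-2])      # invented rate constants
        return np.array([k[0] * x[0] * x[1],
                         k[1] * x[0] * (x[0] - 1) / 2.0,
                         k[2] * x[2]])

    t, t_end, tau = 0.0, 10.0, 0.05
    while t < t_end:
        firings = rng.poisson(propensities(x) * tau)   # many events per leap
        x = np.maximum(x + S.T @ firings, 0)           # crude guard against negatives
        t += tau
    print("final populations (V, I, V2):", x)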
A Multi-Fidelity Surrogate Model for Handling Real Gas Equations of State
NASA Astrophysics Data System (ADS)
Ouellet, Frederick; Park, Chanyoung; Rollin, Bertrand; Balachandar, S."bala"
2016-11-01
The explosive dispersal of particles is an example of a complex multiphase and multi-species fluid flow problem. This problem has many engineering applications including particle-laden explosives. In these flows, the detonation products of the explosive cannot be treated as a perfect gas so a real gas equation of state is used to close the governing equations (unlike air, which uses the ideal gas equation for closure). As the products expand outward from the detonation point, they mix with ambient air and create a mixing region where both of the state equations must be satisfied. One of the more accurate, yet computationally expensive, methods to deal with this is a scheme that iterates between the two equations of state until pressure and thermal equilibrium are achieved inside of each computational cell. This work strives to create a multi-fidelity surrogate model of this process. We then study the performance of the model with respect to the iterative method by performing both gas-only and particle laden flow simulations using an Eulerian-Lagrangian approach with a finite volume code. Specifically, the model's (i) computational speed, (ii) memory requirements and (iii) computational accuracy are analyzed to show the benefits of this novel modeling approach. This work was supported by the U.S. Department of Energy, National Nuclear Security Administration, Advanced Simulation and Computing Program, as a Cooperative Agreement under the Predictive Science Academic Alliance Program, under Contract No. DE-NA00023.
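A minimal sketch of the multi-fidelity surrogate idea follows: a cheap low-fidelity closure (ideal gas) is corrected by a Gaussian Process trained on the discrepancy with a handful of expensive "high-fidelity" evaluations, which here stand in for the iterative two-equation-of-state equilibrium solve. All functions and constants are illustrative assumptions, not the authors' model.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    def low_fidelity_pressure(rho, e):
        gamma = 1.4                        # cheap ideal-gas closure
        return (gamma - 1.0) * rho * e

    def high_fidelity_pressure(rho, e):
        # invented stand-in for the expensive iterative two-EOS equilibrium solve
        gamma, p_inf = 1.35, 2.0e5
        return (gamma - 1.0) * rho * e - gamma * p_inf

    rng = np.random.default_rng(2)
    X = rng.uniform([1.0, 1.0e5], [10.0, 1.0e6], size=(40, 2))   # (density, energy) samples
    d = np.array([high_fidelity_pressure(*x) - low_fidelity_pressure(*x) for x in X])

    gp = GaussianProcessRegressor(kernel=RBF(length_scale=[2.0, 2.0e5]),
                                  normalize_y=True).fit(X, d)

    def surrogate_pressure(rho, e):
        # low-fidelity prediction plus the learned discrepancy correction
        return low_fidelity_pressure(rho, e) + gp.predict([[rho, e]])[0]

    print(surrogate_pressure(5.0, 5.0e5), high_fidelity_pressure(5.0, 5.0e5))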
Bayesian Model Selection under Time Constraints
NASA Astrophysics Data System (ADS)
Hoege, M.; Nowak, W.; Illman, W. A.
2017-12-01
Bayesian model selection (BMS) provides a consistent framework for rating and comparing models in multi-model inference. In cases where models of vastly different complexity compete with each other, we also face vastly different computational runtimes of such models. For instance, time series of a quantity of interest can be simulated by an autoregressive process model that takes even less than a second for one run, or by a partial differential equations-based model with runtimes up to several hours or even days. The classical BMS is based on a quantity called Bayesian model evidence (BME). It determines the model weights in the selection process and reflects a trade-off between the bias of a model and its complexity. However, in practice, the runtime of models is another relevant factor for model selection. Hence, we believe that it should be included, leading to an overall trade-off problem between bias, variance and computing effort. We approach this triple trade-off from the viewpoint of our ability to generate realizations of the models under a given computational budget. One way to obtain BME values is through sampling-based integration techniques. We start from the fact that more expensive models can be sampled far less under time constraints than faster models (in direct proportion to their runtime). The computed evidence in favor of a more expensive model is statistically less significant than the evidence computed in favor of a faster model, since sampling-based strategies are always subject to statistical sampling error. We present a straightforward way to include this imbalance in the model weights that form the basis for model selection. Our approach follows directly from the idea of insufficient significance. It is based on a computationally cheap bootstrapping error estimate of model evidence and is easy to implement. The approach is illustrated in a small synthetic modeling study.
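The sketch below illustrates, with hypothetical models and a toy Gaussian likelihood, the budget-limited evidence estimation described above: each model receives Monte Carlo samples in inverse proportion to its runtime, and a simple bootstrap quantifies the sampling error of the resulting BME estimate.

    import numpy as np

    rng = np.random.default_rng(3)
    data = rng.normal(0.3, 1.0, size=50)           # synthetic observations

    def log_likelihood(theta, data):
        return -0.5 * np.sum((data - theta) ** 2)  # unit-variance Gaussian errors

    def bme_with_bootstrap(n_samples, n_boot=200):
        theta = rng.normal(0.0, 1.0, size=n_samples)            # prior samples
        lik = np.exp([log_likelihood(t, data) for t in theta])
        bme = lik.mean()                                        # Monte Carlo evidence estimate
        boot = [rng.choice(lik, size=n_samples).mean() for _ in range(n_boot)]
        return bme, np.std(boot)

    budget_seconds = 3600.0
    runtimes = {"fast_AR_model": 0.5, "slow_PDE_model": 120.0}  # hypothetical runtimes [s]
    for name, runtime in runtimes.items():
        n = max(int(budget_seconds / runtime), 2)               # affordable sample count
        bme, err = bme_with_bootstrap(n)
        print(f"{name}: n={n}, BME={bme:.3e} +/- {err:.1e}")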
Advanced Signal Processing for Integrated LES-RANS Simulations: Anti-aliasing Filters
NASA Technical Reports Server (NTRS)
Schlueter, J. U.
2003-01-01
Currently, a wide variety of flow phenomena are addressed with numerical simulations. Many flow solvers are optimized to simulate a limited spectrum of flow effects effectively, such as single parts of a flow system, but are either inadequate or too expensive to be applied to a very complex problem. As an example, the flow through a gas turbine can be considered. In the compressor and the turbine section, the flow solver has to be able to handle the moving blades, model the wall turbulence, and predict the pressure and density distribution properly. This can be done by a flow solver based on the Reynolds-Averaged Navier-Stokes (RANS) approach. On the other hand, the flow in the combustion chamber is governed by large scale turbulence, chemical reactions, and the presence of fuel spray. Experience shows that these phenomena require an unsteady approach. Hence, for the combustor, the use of a Large Eddy Simulation (LES) flow solver is desirable. While many design problems of a single flow passage can be addressed by separate computations, only the simultaneous computation of all parts can guarantee the proper prediction of multi-component phenomena, such as compressor/combustor instability and combustor/turbine hot-streak migration. Therefore, a promising strategy to perform full aero-thermal simulations of gas-turbine engines is the use of a RANS flow solver for the compressor sections, an LES flow solver for the combustor, and again a RANS flow solver for the turbine section.
Statistical methodologies for the control of dynamic remapping
NASA Technical Reports Server (NTRS)
Saltz, J. H.; Nicol, D. M.
1986-01-01
Following an initial mapping of a problem onto a multiprocessor machine or computer network, system performance often deteriorates with time. In order to maintain high performance, it may be necessary to remap the problem. The decision to remap must take into account measurements of performance deterioration, the cost of remapping, and the estimated benefits achieved by remapping. We examine the tradeoff between the costs and the benefits of remapping two qualitatively different kinds of problems. One problem assumes that performance deteriorates gradually, the other assumes that performance deteriorates suddenly. We consider a variety of policies for governing when to remap. In order to evaluate these policies, statistical models of problem behaviors are developed. Simulation results are presented which compare simple policies with computationally expensive optimal decision policies; these results demonstrate that for each problem type, the proposed simple policies are effective and robust.
DOE Office of Scientific and Technical Information (OSTI.GOV)
I. W. Ginsberg
Multiresolutional decompositions known as spectral fingerprints are often used to extract spectral features from multispectral/hyperspectral data. In this study, the authors investigate the use of wavelet-based algorithms for generating spectral fingerprints. The wavelet-based algorithms are compared to the currently used method, traditional convolution with first-derivative Gaussian filters. The comparison analysis consists of two parts: (a) the computational expense of the new method is compared with the computational costs of the current method and (b) the outputs of the wavelet-based methods are compared with those of the current method to determine any practical differences in the resulting spectral fingerprints. The results show that the wavelet-based algorithms can greatly reduce the computational expense of generating spectral fingerprints, while practically no differences exist in the resulting fingerprints. The analysis is conducted on a database of hyperspectral signatures, namely, Hyperspectral Digital Image Collection Experiment (HYDICE) signatures. The reduction in computational expense is by a factor of about 30, and the average Euclidean distance between resulting fingerprints is on the order of 0.02.
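A minimal sketch of the two fingerprinting routes, applied to a synthetic signature rather than HYDICE data, might look as follows (assuming SciPy and the PyWavelets package; the filter scales and wavelet family are arbitrary choices):

    import numpy as np
    from scipy.ndimage import gaussian_filter1d
    import pywt

    rng = np.random.default_rng(4)
    spectrum = np.cumsum(rng.normal(size=512))     # toy hyperspectral signature

    # current method: convolution with first-derivative Gaussians at several scales
    scales = [2, 4, 8, 16]
    fingerprint_gauss = np.stack(
        [gaussian_filter1d(spectrum, sigma=s, order=1) for s in scales])

    # wavelet-based alternative: multiresolution detail coefficients
    fingerprint_wave = pywt.wavedec(spectrum, "db2", level=4)[1:]   # detail levels only

    print("Gaussian fingerprint shape:", fingerprint_gauss.shape)
    print("wavelet detail lengths:", [len(c) for c in fingerprint_wave])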
Robust, Optimal Water Infrastructure Planning Under Deep Uncertainty Using Metamodels
NASA Astrophysics Data System (ADS)
Maier, H. R.; Beh, E. H. Y.; Zheng, F.; Dandy, G. C.; Kapelan, Z.
2015-12-01
Optimal long-term planning plays an important role in many water infrastructure problems. However, this task is complicated by deep uncertainty about future conditions, such as the impact of population dynamics and climate change. One way to deal with this uncertainty is by means of robustness, which aims to ensure that water infrastructure performs adequately under a range of plausible future conditions. However, as robustness calculations require computationally expensive system models to be run for a large number of scenarios, it is generally computationally intractable to include robustness as an objective in the development of optimal long-term infrastructure plans. In order to overcome this shortcoming, an approach is developed that uses metamodels instead of computationally expensive simulation models in robustness calculations. The approach is demonstrated for the optimal sequencing of water supply augmentation options for the southern portion of the water supply for Adelaide, South Australia. A 100-year planning horizon is subdivided into ten equal decision stages for the purpose of sequencing various water supply augmentation options, including desalination, stormwater harvesting and household rainwater tanks. The objectives include the minimization of average present value of supply augmentation costs, the minimization of average present value of greenhouse gas emissions and the maximization of supply robustness. The uncertain variables are rainfall, per capita water consumption and population. Decision variables are the implementation stages of the different water supply augmentation options. Artificial neural networks are used as metamodels to enable all objectives to be calculated in a computationally efficient manner at each of the decision stages. The results illustrate the importance of identifying optimal staged solutions to ensure robustness and sustainability of water supply into an uncertain long-term future.
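A minimal sketch of the metamodel-based robustness calculation, with an invented stand-in for the water supply simulator and made-up decision and scenario variables, is given below; the point is only that once the network is trained on a modest number of expensive runs, thousands of scenarios can be screened cheaply.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(5)

    def expensive_supply_model(decisions, scenario):
        # invented stand-in for the water supply simulator: a reliability score
        return 1.0 - 0.3 * np.abs(decisions - scenario).mean()

    # training set from a modest number of expensive runs:
    # 3 decision variables and 3 scenario variables per sample
    X = rng.uniform(0.0, 1.0, size=(300, 6))
    y = np.array([expensive_supply_model(x[:3], x[3:]) for x in X])
    meta = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000,
                        random_state=0).fit(X, y)

    # robustness of one candidate plan = fraction of plausible futures it satisfies
    plan = np.array([0.4, 0.6, 0.5])
    scenarios = rng.uniform(0.0, 1.0, size=(10000, 3))   # e.g. rainfall, demand, population
    pred = meta.predict(np.hstack([np.tile(plan, (len(scenarios), 1)), scenarios]))
    print("estimated robustness:", np.mean(pred > 0.85))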
NASA Astrophysics Data System (ADS)
Saxena, Nishank; Hofmann, Ronny; Alpak, Faruk O.; Berg, Steffen; Dietderich, Jesse; Agarwal, Umang; Tandon, Kunj; Hunter, Sander; Freeman, Justin; Wilson, Ove Bjorn
2017-11-01
We generate a novel reference dataset to quantify the impact of numerical solvers, boundary conditions, and simulation platforms. We consider a variety of microstructures ranging from idealized pipes to digital rocks. Pore throats of the digital rocks considered are large enough to be well resolved with state-of-the-art micro-computerized tomography technology. Permeability is computed using multiple numerical engines, 12 in total, including Lattice-Boltzmann, computational fluid dynamics, voxel-based, fast semi-analytical, and known empirical models. Thus, we provide a measure of uncertainty associated with flow computations of digital media. Moreover, the reference and standards dataset generated is the first of its kind and can be used to test and improve new fluid flow algorithms. We find that there is an overall good agreement between solvers for idealized cross-section shape pipes. As expected, the disagreement increases with the complexity of the pore space. Numerical solutions for pipes with sinusoidal variation of cross section show larger variability compared to pipes of constant cross-section shape. We notice relatively larger variability in the computed permeability of digital rocks, with coefficients of variation of up to 25% between various solvers. Still, these differences are small given other subsurface uncertainties. The observed differences between solvers can be attributed to several causes, including differences in boundary conditions, numerical convergence criteria, and parameterization of fundamental physics equations. Solvers that perform additional meshing of irregular pore shapes require an additional step in practical workflows which involves skill and can introduce further uncertainty. Computation times for digital rocks vary from minutes to several days depending on the algorithm and available computational resources. We find that more stringent convergence criteria can improve solver accuracy but at the expense of longer computation time.
Automatic design and manufacture of robotic lifeforms.
Lipson, H; Pollack, J B
2000-08-31
Biological life is in control of its own means of reproduction, which generally involves complex, autocatalysing chemical reactions. But this autonomy of design and manufacture has not yet been realized artificially. Robots are still laboriously designed and constructed by teams of human engineers, usually at considerable expense. Few robots are available because these costs must be absorbed through mass production, which is justified only for toys, weapons and industrial systems such as automatic teller machines. Here we report the results of a combined computational and experimental approach in which simple electromechanical systems are evolved through simulations from basic building blocks (bars, actuators and artificial neurons); the 'fittest' machines (defined by their locomotive ability) are then fabricated robotically using rapid manufacturing technology. We thus achieve autonomy of design and construction using evolution in a 'limited universe' physical simulation coupled to automatic fabrication.
NASA Astrophysics Data System (ADS)
Rachmat, Haris; Ibrahim, M. Rasidi; Hasan, Sulaiman bin
2017-04-01
Ultrasonic vibration assisted turning is an advanced machining technology. The design of the tool holder is a crucial step to ensure that it can withstand all of the forces arising in the turning process. Because a direct experimental approach is expensive, this paper predicts the tool holder displacement and effective stress computationally using finite element simulation. SS201 and AISI 1045 materials were considered, with sharp and ramp corner flexure hinges in the design. The results show that AISI 1045 with a ramp corner flexure hinge is the best choice for production: the displacement is around 11.3 microns, the effective stress is 1.71e+008 N/m2, and the factor of safety is 3.10.
Discretization of the induced-charge boundary integral equation.
Bardhan, Jaydeep P; Eisenberg, Robert S; Gillespie, Dirk
2009-07-01
Boundary-element methods (BEMs) for solving integral equations numerically have been used in many fields to compute the induced charges at dielectric boundaries. In this paper, we consider a more accurate implementation of BEM in the context of ions in aqueous solution near proteins, but our results are applicable more generally. The ions that modulate protein function are often within a few angstroms of the protein, which leads to the significant accumulation of polarization charge at the protein-solvent interface. Computing the induced charge accurately and quickly poses a numerical challenge in solving a popular integral equation using BEM. In particular, the accuracy of simulations can depend strongly on seemingly minor details of how the entries of the BEM matrix are calculated. We demonstrate that when the dielectric interface is discretized into flat tiles, the qualocation method of Tausch [IEEE Trans Comput.-Comput.-Aided Des. 20, 1398 (2001)] to compute the BEM matrix elements is always more accurate than the traditional centroid-collocation method. Qualocation is not more expensive to implement than collocation and can save significant computational time by reducing the number of boundary elements needed to discretize the dielectric interfaces.
Metamodels for Computer-Based Engineering Design: Survey and Recommendations
NASA Technical Reports Server (NTRS)
Simpson, Timothy W.; Peplinski, Jesse; Koch, Patrick N.; Allen, Janet K.
1997-01-01
The use of statistical techniques to build approximations of expensive computer analysis codes pervades much of today's engineering design. These statistical approximations, or metamodels, are used to replace the actual expensive computer analyses, facilitating multidisciplinary, multiobjective optimization and concept exploration. In this paper we review several of these techniques including design of experiments, response surface methodology, Taguchi methods, neural networks, inductive learning, and kriging. We survey their existing application in engineering design and then address the dangers of applying traditional statistical techniques to approximate deterministic computer analysis codes. We conclude with recommendations for the appropriate use of statistical approximation techniques in given situations and how common pitfalls can be avoided.
Michel, Miriam; Egender, Friedemann; Heßling, Vera; Dähnert, Ingo; Gebauer, Roman
2016-01-01
Background Postoperative junctional ectopic tachycardia (JET) occurs frequently after pediatric cardiac surgery. R-wave synchronized atrial (AVT) pacing is used to re-establish atrioventricular synchrony. AVT pacing is complex, with technical pitfalls. We sought to establish and to test a low-cost simulation model suitable for training and analysis in AVT pacing. Methods A simulation model was developed based on a JET simulator, a simulation doll, a cardiac monitor, and a pacemaker. A computer program simulated electrocardiograms. Ten experienced pediatric cardiologists tested the model. Their performance was analyzed using a testing protocol with 10 working steps. Results Four testers found the simulation model realistic; six found it very realistic. Nine claimed that the trial had improved their skills. All testers considered the model useful in teaching AVT pacing. The simulation test identified 5 working steps in which major mistakes in performance may impede safe and effective AVT pacing, and thus permitted targeted training. The components of the model (excluding the monitor and pacemaker) cost less than $50. Assembly and training-session expenses were trivial. Conclusions A realistic, low-cost simulation model of AVT pacing is described. The model is suitable for teaching and analyzing AVT pacing technique. PMID:26943363
Bulbous head formation in bidisperse shallow granular flows over inclined planes
NASA Astrophysics Data System (ADS)
Denissen, I.; Thornton, A.; Weinhart, T.; Luding, S.
2017-12-01
Predicting the behaviour of hazardous natural granular flows (e.g. debris flows and pyroclastic flows) is vital for an accurate assessment of the risks posed by such events. In these situations, an inversely graded vertical particle-size distribution develops, with larger particles on top of smaller particles. As the surface velocity of such flows is larger than the mean velocity, the larger material is then transported to the flow front. This creates a downstream size-segregation structure, resulting in a flow front composed purely of large particles, which are generally more frictional in geophysical flows. Thus, this segregation process reduces the mobility of the flow front, resulting in the formation of a so-called bulbous head. One of the main challenges of simulating these hazardous natural granular flows is the enormous number of particles they contain, which makes discrete particle simulations too computationally expensive to be practically useful. Continuum methods are able to simulate the bulk flow and segregation behaviour of such flows, but have to make averaging approximations that reduce the huge number of degrees of freedom to a few continuum fields. Small-scale periodic discrete particle simulations can be used to determine the material parameters needed for the continuum model. In this presentation, we use a depth-averaged model to predict the flow profile for particulate chute flows, based on flow height, depth-averaged velocity and particle-size distribution [1], and show that the bulbous head structure naturally emerges from this model. The long-time behaviour of this solution of the depth-averaged continuum model converges to a novel travelling wave solution [2]. Furthermore, we validate this framework against computationally expensive 3D particle simulations, where we see surprisingly good agreement between both approaches, considering the approximations made in the continuum model. We conclude by showing that the travelling distance and height of a bidisperse granular avalanche can be well predicted by our continuum model. REFERENCES [1] M. J. Woodhouse, A. R. Thornton, C. G. Johnson, B. P. Kokelaar, J. M. N. T. Gray, J. Fluid Mech., 709, 543-580 (2012) [2] I.F.C. Denissen, T. Weinhart, A. Te Voortwis, S. Luding, J. M. N. T. Gray, A. R. Thornton, under review with J. Fluid Mech. (2017)
Huang, Weidong; Li, Kun; Wang, Gan; Wang, Yingzhe
2013-01-01
In this article, we present a newly designed inverse umbrella surface aerator and test its performance in driving flow of an oxidation ditch. Results show that it has a better performance in driving the oxidation ditch than the original one with higher average velocity and more uniform flow field. We also present a computational fluid dynamics model for predicting the flow field in an oxidation ditch driven by a surface aerator. The improved momentum source term approach to simulate the flow field of the oxidation ditch driven by an inverse umbrella surface aerator was developed and validated through experiments. Four kinds of turbulent models were investigated with the approach, including the standard k−ɛ model, RNG k−ɛ model, realizable k−ɛ model, and Reynolds stress model, and the predicted data were compared with those calculated with the multiple rotating reference frame approach (MRF) and sliding mesh approach (SM). Results of the momentum source term approach are in good agreement with the experimental data, and its prediction accuracy is better than MRF, close to SM. It is also found that the momentum source term approach has lower computational expenses, is simpler to preprocess, and is easier to use. PMID:24302850
Two-dimensional CFD modeling of wave rotor flow dynamics
NASA Technical Reports Server (NTRS)
Welch, Gerard E.; Chima, Rodrick V.
1994-01-01
A two-dimensional Navier-Stokes solver developed for detailed study of wave rotor flow dynamics is described. The CFD model is helping characterize important loss mechanisms within the wave rotor. The wave rotor stationary ports and the moving rotor passages are resolved on multiple computational grid blocks. The finite-volume form of the thin-layer Navier-Stokes equations with laminar viscosity are integrated in time using a four-stage Runge-Kutta scheme. Roe's approximate Riemann solution scheme or the computationally less expensive advection upstream splitting method (AUSM) flux-splitting scheme is used to effect upwind-differencing of the inviscid flux terms, using cell interface primitive variables set by MUSCL-type interpolation. The diffusion terms are central-differenced. The solver is validated using a steady shock/laminar boundary layer interaction problem and an unsteady, inviscid wave rotor passage gradual opening problem. A model inlet port/passage charging problem is simulated and key features of the unsteady wave rotor flow field are identified. Lastly, the medium pressure inlet port and high pressure outlet port portion of the NASA Lewis Research Center experimental divider cycle is simulated and computed results are compared with experimental measurements. The model accurately predicts the wave timing within the rotor passages and the distribution of flow variables in the stationary inlet port region.
Two-dimensional CFD modeling of wave rotor flow dynamics
NASA Technical Reports Server (NTRS)
Welch, Gerard E.; Chima, Rodrick V.
1993-01-01
A two-dimensional Navier-Stokes solver developed for detailed study of wave rotor flow dynamics is described. The CFD model is helping characterize important loss mechanisms within the wave rotor. The wave rotor stationary ports and the moving rotor passages are resolved on multiple computational grid blocks. The finite-volume form of the thin-layer Navier-Stokes equations with laminar viscosity are integrated in time using a four-stage Runge-Kutta scheme. The Roe approximate Riemann solution scheme or the computationally less expensive Advection Upstream Splitting Method (AUSM) flux-splitting scheme are used to effect upwind-differencing of the inviscid flux terms, using cell interface primitive variables set by MUSCL-type interpolation. The diffusion terms are central-differenced. The solver is validated using a steady shock/laminar boundary layer interaction problem and an unsteady, inviscid wave rotor passage gradual opening problem. A model inlet port/passage charging problem is simulated and key features of the unsteady wave rotor flow field are identified. Lastly, the medium pressure inlet port and high pressure outlet port portion of the NASA Lewis Research Center experimental divider cycle is simulated and computed results are compared with experimental measurements. The model accurately predicts the wave timing within the rotor passage and the distribution of flow variables in the stationary inlet port region.
Scemama, Anthony; Caffarel, Michel; Oseret, Emmanuel; Jalby, William
2013-04-30
Various strategies to implement efficiently quantum Monte Carlo (QMC) simulations for large chemical systems are presented. These include: (i) the introduction of an efficient algorithm to calculate the computationally expensive Slater matrices. This novel scheme is based on the use of the highly localized character of atomic Gaussian basis functions (not the molecular orbitals as usually done), (ii) the possibility of keeping the memory footprint minimal, (iii) the important enhancement of single-core performance when efficient optimization tools are used, and (iv) the definition of a universal, dynamic, fault-tolerant, and load-balanced framework adapted to all kinds of computational platforms (massively parallel machines, clusters, or distributed grids). These strategies have been implemented in the QMC=Chem code developed at Toulouse and illustrated with numerical applications on small peptides of increasing sizes (158, 434, 1056, and 1731 electrons). Using 10-80 k computing cores of the Curie machine (GENCI-TGCC-CEA, France), QMC=Chem has been shown to be capable of running at the petascale level, thus demonstrating that for this machine a large part of the peak performance can be achieved. Implementation of large-scale QMC simulations for future exascale platforms with a comparable level of efficiency is expected to be feasible. Copyright © 2013 Wiley Periodicals, Inc.
Shape optimization of self-avoiding curves
NASA Astrophysics Data System (ADS)
Walker, Shawn W.
2016-04-01
This paper presents a softened notion of proximity (or self-avoidance) for curves. We then derive a sensitivity result, based on shape differential calculus, for the proximity. This is combined with a gradient-based optimization approach to compute three-dimensional, parameterized curves that minimize the sum of an elastic (bending) energy and a proximity energy that maintains self-avoidance by a penalization technique. Minimizers are computed by a sequential-quadratic-programming (SQP) method where the bending energy and proximity energy are approximated by a finite element method. We then apply this method to two problems. First, we simulate adsorbed polymer strands that are constrained to be bound to a surface and be (locally) inextensible. This is a basic model of semi-flexible polymers adsorbed onto a surface (a current topic in material science). Several examples of minimizing curve shapes on a variety of surfaces are shown. An advantage of the method is that it can be much faster than using molecular dynamics for simulating polymer strands on surfaces. Second, we apply our proximity penalization to the computation of ideal knots. We present a heuristic scheme, utilizing the SQP method above, for minimizing rope-length and apply it in the case of the trefoil knot. Applications of this method could be for generating good initial guesses to a more accurate (but expensive) knot-tightening algorithm.
NASA Astrophysics Data System (ADS)
Bird, Robert; Nystrom, David; Albright, Brian
2017-10-01
The ability of scientific simulations to effectively deliver performant computation is increasingly being challenged by successive generations of high-performance computing architectures. Code development to support efficient computation on these modern architectures is both expensive and highly complex; if it is approached without due care, it may also not be directly transferable between subsequent hardware generations. Previous works have discussed techniques to support the process of adapting a legacy code for modern hardware generations, but despite the breakthroughs in the areas of mini-app development, portable performance, and cache-oblivious algorithms the problem still remains largely unsolved. In this work we demonstrate how a focus on platform-agnostic modern code development can be applied to Particle-in-Cell (PIC) simulations to facilitate effective scientific delivery. This work builds directly on our previous work optimizing VPIC, in which we replaced intrinsics-based vectorization with compiler-generated auto-vectorization to improve the performance and portability of VPIC. In this work we present the use of a specialized SIMD queue for processing some particle operations, and also preview a GPU-capable OpenMP variant of VPIC. Finally, we include lessons learned. Work performed under the auspices of the U.S. Dept. of Energy by the Los Alamos National Security, LLC, Los Alamos National Laboratory under contract DE-AC52-06NA25396 and supported by the LANL LDRD program.
AN ADVANCED LEAKAGE SCHEME FOR NEUTRINO TREATMENT IN ASTROPHYSICAL SIMULATIONS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perego, A.; Cabezón, R. M.; Käppeli, R., E-mail: albino.perego@physik.tu-darmstadt.de
We present an Advanced Spectral Leakage (ASL) scheme to model neutrinos in the context of core-collapse supernovae (CCSNe) and compact binary mergers. Based on previous gray leakage schemes, the ASL scheme computes the neutrino cooling rates by interpolating local production and diffusion rates (relevant in optically thin and thick regimes, respectively) separately for discretized values of the neutrino energy. Neutrino trapped components are also modeled, based on equilibrium and timescale arguments. The better accuracy achieved by the spectral treatment allows a more reliable computation of neutrino heating rates in optically thin conditions. The scheme has been calibrated and tested against Boltzmann transport in the context of Newtonian spherically symmetric models of CCSNe. ASL shows a very good qualitative and a partial quantitative agreement for key quantities from collapse to a few hundreds of milliseconds after core bounce. We have proved the adaptability and flexibility of our ASL scheme, coupling it to an axisymmetric Eulerian and to a three-dimensional smoothed particle hydrodynamics code to simulate core collapse. Therefore, the neutrino treatment presented here is ideal for large parameter-space explorations, parametric studies, high-resolution tests, code developments, and long-term modeling of asymmetric configurations, where more detailed neutrino treatments are not available or are currently computationally too expensive.
ASP-G: an ASP-based method for finding attractors in genetic regulatory networks
Mushthofa, Mushthofa; Torres, Gustavo; Van de Peer, Yves; Marchal, Kathleen; De Cock, Martine
2014-01-01
Motivation: Boolean network models are suitable to simulate GRNs in the absence of detailed kinetic information. However, reducing the biological reality implies making assumptions on how genes interact (interaction rules) and how their state is updated during the simulation (update scheme). The exact choice of the assumptions largely determines the outcome of the simulations. In most cases, however, the biologically correct assumptions are unknown. An ideal simulation thus implies testing different rules and schemes to determine those that best capture an observed biological phenomenon. This is not trivial because most current methods to simulate Boolean network models of GRNs and to compute their attractors impose specific assumptions that cannot be easily altered, as they are built into the system. Results: To allow for a more flexible simulation framework, we developed ASP-G. We show the correctness of ASP-G in simulating Boolean network models and obtaining attractors under different assumptions by successfully recapitulating the detection of attractors of previously published studies. We also provide an example of how performing simulations of network models under different settings helps determine the assumptions under which a certain conclusion holds. The main added value of ASP-G is in its modularity and declarativity, making it more flexible and less error-prone than traditional approaches. The declarative nature of ASP-G comes at the expense of being slower than the more dedicated systems but still achieves a good efficiency with respect to computational time. Availability and implementation: The source code of ASP-G is available at http://bioinformatics.intec.ugent.be/kmarchal/Supplementary_Information_Musthofa_2014/asp-g.zip. Contact: Kathleen.Marchal@UGent.be or Martine.DeCock@UGent.be Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25028722
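The underlying task ASP-G addresses can be sketched, for a three-gene network with invented interaction rules and a synchronous update scheme, as follows (this is a plain enumeration sketch, not the answer-set-programming approach of ASP-G):

    from itertools import product

    def update(state):
        a, b, c = state                 # synchronous update of all three genes
        return (b and not c,            # invented rule for gene A
                a or c,                 # invented rule for gene B
                not a)                  # invented rule for gene C

    def canonical(cycle):
        i = cycle.index(min(cycle))     # rotate so equivalent cycles compare equal
        return cycle[i:] + cycle[:i]

    def attractor_from(state):
        seen = {}
        while state not in seen:
            seen[state] = len(seen)
            state = update(state)
        order = sorted(seen, key=seen.get)
        return canonical(tuple(order[seen[state]:]))   # the repeating tail is the attractor

    attractors = {attractor_from(s) for s in product([False, True], repeat=3)}
    for att in attractors:
        print("attractor of length", len(att), ":", att)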
Sawle, Lucas; Ghosh, Kingshuk
2016-02-09
All-atom molecular dynamics simulations need convergence tests to evaluate the quality of data. The notion of "true" convergence is elusive, and one can only hope to satisfy self-consistency checks (SCC). There are multiple SCC criteria, and their assessment of all-atom simulations of the native state for real globular proteins is sparse. Here, we present a systematic study of different SCC algorithms, both in terms of their ability to detect the lack of self-consistency and their computational demand, for the all-atom native state simulations of four globular proteins (CSP, CheA, CheW, and BPTI). Somewhat surprisingly, we notice some of the most stringent SCC criteria, e.g., the criteria demanding similarity of the cluster probability distribution between the first and the second halves of the trajectory or the comparison of fluctuations between different blocks using covariance overlap measure, can require tens of microseconds of simulation even for proteins with less than 100 amino acids. We notice such long simulation times can sometimes be associated with traps, but these traps cannot be detected by some of the common SCC methods. We suggest an additional, and simple, SCC algorithm to quickly detect such traps by monitoring the constancy of the cluster entropy (CCE). CCE is a necessary but not sufficient criterion, and additional SCC algorithms must be combined with it. Furthermore, as seen in the explicit solvent simulation of a 1 ms long trajectory of BPTI [1], passing self-consistency checks at an earlier stage may be misleading due to conformational changes taking place later in the simulation, resulting in different, but segregated regions of SCC. Although there is a hierarchy of complex SCC algorithms, caution must be exercised in their application with the knowledge of their limitations and computational expense.
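A minimal sketch of the proposed constancy-of-cluster-entropy check, applied to synthetic two-dimensional "trajectory" data rather than real MD frames, could look like this (the clustering method and window fractions are assumptions):

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(6)
    # toy "trajectory" that discovers a second basin halfway through (a trap escape)
    traj = np.vstack([rng.normal(0.0, 0.3, size=(5000, 2)),
                      rng.normal(2.0, 0.3, size=(5000, 2))])

    labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(traj)

    def cluster_entropy(lab):
        p = np.bincount(lab, minlength=4) / len(lab)
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    # a converged trajectory would show a roughly constant entropy with trajectory length
    for frac in (0.25, 0.5, 0.75, 1.0):
        n = int(frac * len(labels))
        print(f"first {frac:.0%} of frames: cluster entropy = {cluster_entropy(labels[:n]):.3f}")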
FASTPM: a new scheme for fast simulations of dark matter and haloes
NASA Astrophysics Data System (ADS)
Feng, Yu; Chu, Man-Yat; Seljak, Uroš; McDonald, Patrick
2016-12-01
We introduce FASTPM, a highly scalable approximated particle mesh (PM) N-body solver, which implements the PM scheme enforcing correct linear displacement (1LPT) evolution via modified kick and drift factors. Employing a two-dimensional domain decomposition scheme, FASTPM scales extremely well with a very large number of CPUs. In contrast to the Comoving-Lagrangian (COLA) approach, we do not need to split the force or separately track the 2LPT solution, reducing the code complexity and memory requirements. We compare FASTPM with different numbers of steps (Ns) and force resolution factors (B) against three benchmarks: halo mass function from a friends-of-friends halo finder; halo and dark matter power spectrum; and cross-correlation coefficient (or stochasticity), relative to a high-resolution TREEPM simulation. We show that the modified time stepping scheme reduces the halo stochasticity when compared to COLA with the same number of steps and force resolution. While increasing Ns and B improves the transfer function and cross-correlation coefficient, for many applications FASTPM achieves sufficient accuracy at low Ns and B. For example, the Ns = 10 and B = 2 simulation provides a substantial saving (a factor of 10) of computing time relative to the Ns = 40, B = 3 simulation, yet the halo benchmarks are very similar at z = 0. We find that for abundance-matched haloes the stochasticity remains low even for Ns = 5. FASTPM compares well against less expensive schemes, being only 7 (4) times more expensive than the 2LPT initial condition generator for Ns = 10 (Ns = 5). Some of the applications where FASTPM can be useful are generating a large number of mocks, producing non-linear statistics where one varies a large number of nuisance or cosmological parameters, or serving as part of an initial conditions solver.
Comparing Macroscale and Microscale Simulations of Porous Battery Electrodes
Higa, Kenneth; Wu, Shao-Ling; Parkinson, Dilworth Y.; ...
2017-06-22
This article describes a vertically-integrated exploration of NMC electrode rate limitations, combining experiments with corresponding macroscale (macro-homogeneous) and microscale models. Parameters common to both models were obtained from experiments or based on published results. Positive electrode tortuosity was the sole fitting parameter used in the macroscale model, while the microscale model used no fitting parameters, instead relying on microstructural domains generated from X-ray microtomography of pristine electrode material held under compression while immersed in electrolyte solution (additionally providing novel observations of electrode wetting). Macroscale simulations showed that the capacity decrease observed at higher rates resulted primarily from solution-phase diffusion resistance. This ability to provide such qualitative insights at low computational costs is a strength of macroscale models, made possible by neglecting electrode spatial details. To explore the consequences of such simplification, the corresponding, computationally-expensive microscale model was constructed. This was found to have limitations preventing quantitatively accurate predictions, for reasons that are discussed in the hope of guiding future work. Nevertheless, the microscale simulation results complement those of the macroscale model by providing a reality-check based on microstructural information; in particular, this novel comparison of the two approaches suggests a reexamination of salt diffusivity measurements.
Ground-motion signature of dynamic ruptures on rough faults
NASA Astrophysics Data System (ADS)
Mai, P. Martin; Galis, Martin; Thingbaijam, Kiran K. S.; Vyas, Jagdish C.
2016-04-01
Natural earthquakes occur on faults characterized by large-scale segmentation and small-scale roughness. This multi-scale geometrical complexity controls the dynamic rupture process, and hence strongly affects the radiated seismic waves and near-field shaking. For a fault system with given segmentation, the question arises as to what conditions produce large-magnitude multi-segment ruptures, as opposed to smaller single-segment events. Similarly, for variable degrees of roughness, ruptures may be arrested prematurely or may break the entire fault. In addition, fault roughness induces rupture incoherence that determines the level of high-frequency radiation. Using HPC-enabled dynamic-rupture simulations, we generate physically self-consistent rough-fault earthquake scenarios (M~6.8) and their associated near-source seismic radiation. Because these computations are too expensive to be conducted routinely for simulation-based seismic hazard assessment, we strive to develop an effective pseudo-dynamic source characterization that produces (almost) the same ground-motion characteristics. Therefore, we examine how variable degrees of fault roughness affect rupture properties and the seismic wavefield, and develop a planar-fault kinematic source representation that emulates the observed dynamic behaviour. We propose an effective workflow for improved pseudo-dynamic source modelling that incorporates rough-fault effects and the associated high-frequency radiation in broadband ground-motion computation for simulation-based seismic hazard assessment.
Designing a practical system for spectral imaging of skylight.
López-Alvarez, Miguel A; Hernández-Andrés, Javier; Romero, Javier; Lee, Raymond L
2005-09-20
In earlier work [J. Opt. Soc. Am. A 21, 13-23 (2004)], we showed that a combination of linear models and optimum Gaussian sensors obtained by an exhaustive search can recover daylight spectra reliably from broadband sensor data. Thus our algorithm and sensors could be used to design an accurate, relatively inexpensive system for spectral imaging of daylight. Here we improve our simulation of the multispectral system by (1) considering the different kinds of noise inherent in electronic devices such as charge-coupled devices (CCDs) or complementary metal-oxide semiconductors (CMOS) and (2) extending our research to a different kind of natural illumination, skylight. Because exhaustive searches are computationally expensive, here we switch to a simulated annealing algorithm to define the optimum sensors for recovering skylight spectra. The annealing algorithm requires us to minimize a single cost function, and so we develop one that calculates both the spectral and colorimetric similarity of any pair of skylight spectra. We show that the simulated annealing algorithm yields results similar to the exhaustive search but with much less computational effort. Our technique lets us study the properties of optimum sensors in the presence of noise, one side effect of which is that adding more sensors may not improve the spectral recovery.
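A minimal simulated-annealing sketch with an invented cost function (the real one combines spectral and colorimetric similarity of recovered skylight spectra) shows the basic loop of perturbing sensor parameters, accepting worse solutions with a temperature-dependent probability, and cooling geometrically:

    import numpy as np

    rng = np.random.default_rng(7)

    def cost(params):
        # invented stand-in for the spectral + colorimetric recovery error
        centres, widths = params[:3], params[3:]
        return (np.sum((np.sort(centres) - np.array([450.0, 550.0, 650.0])) ** 2)
                + np.sum((widths - 40.0) ** 2))

    params = np.array([400.0, 500.0, 600.0, 20.0, 20.0, 20.0])   # centres [nm], widths [nm]
    best, best_cost = params.copy(), cost(params)
    T = 100.0
    while T > 1e-3:
        candidate = params + rng.normal(scale=[5, 5, 5, 2, 2, 2])
        dc = cost(candidate) - cost(params)
        if dc < 0 or rng.random() < np.exp(-dc / T):   # Metropolis acceptance rule
            params = candidate
            if cost(params) < best_cost:
                best, best_cost = params.copy(), cost(params)
        T *= 0.995                                     # slow geometric cooling
    print("best sensor parameters:", np.round(best, 1), " cost:", round(best_cost, 2))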
Computational investigation of flow control by means of tubercles on Darrieus wind turbine blades
NASA Astrophysics Data System (ADS)
Sevinç, K.; Özdamar, G.; Şentürk, U.; Özdamar, A.
2015-09-01
This work presents the current status of a computational study of boundary layer control on a vertical axis wind turbine blade by modifying the blade geometry for use in wind energy conversion. The control method is a passive one, comprising the implementation of the tubercle geometry of a humpback whale flipper onto the leading edge of the blades. The baseline design is an H-type, three-bladed Darrieus turbine with a NACA 0015 cross-section. The finite-volume-based software ANSYS Fluent was used in the simulations. Using the optimum control parameters for a NACA 634-021 profile given by Johari et al. (2006), the turbine blades were modified. Three-dimensional, unsteady, turbulent simulations for the blade were conducted to look for a possible improvement in performance. The flow structure on the blades was investigated and flow phenomena such as separation and stall were examined to understand their impact on the overall performance. For a tip speed ratio of 2.12, good agreement was obtained in the validation of the baseline model, with a relative error in time-averaged power coefficient of 1.05%. Modified turbine simulations with a less expensive but less accurate turbulence model yielded a decrease in power coefficient. Results are shown comparatively.
James, Andrew I.; Jawitz, James W.; Munoz-Carpena, Rafael
2009-01-01
A model to simulate transport of materials in surface water and ground water has been developed to numerically approximate solutions to the advection-dispersion equation. This model, known as the Transport and Reaction Simulation Engine (TaRSE), uses an algorithm that incorporates a time-splitting technique where the advective part of the equation is solved separately from the dispersive part. An explicit finite-volume Godunov method is used to approximate the advective part, while a mixed-finite element technique is used to approximate the dispersive part. The dispersive part uses an implicit discretization, which allows it to run stably with a larger time step than the explicit advective step. The potential exists to develop algorithms that run several advective steps, and then one dispersive step that encompasses the time interval of the advective steps. Because the dispersive step is computationally most expensive, schemes can be implemented that are more computationally efficient than non-time-split algorithms. This technique enables scientists to solve problems with high grid Peclet numbers, such as transport problems with sharp solute fronts, without spurious oscillations in the numerical approximation to the solution and with virtually no artificial diffusion.
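The time-splitting idea can be sketched in one dimension as several explicit upwind (Godunov-type) advection substeps followed by a single implicit diffusion step; the grid, velocity, and dispersion coefficient below are arbitrary illustrative values, and this is not the TaRSE discretization (which uses finite volumes and mixed finite elements on unstructured meshes):

    import numpy as np
    from scipy.linalg import solve_banded

    nx, dx = 200, 1.0
    u, D = 0.5, 0.05                  # advection velocity, dispersion coefficient
    dt_adv = 0.8 * dx / u             # CFL-limited explicit advective step
    n_sub = 4                         # advective substeps per dispersive step
    dt_dif = n_sub * dt_adv           # one larger implicit dispersive step

    c = np.zeros(nx)
    c[20:40] = 1.0                    # sharp solute front

    # banded matrix for backward-Euler diffusion with zero-flux ends
    r = D * dt_dif / dx**2
    ab = np.zeros((3, nx))
    ab[0, 1:] = -r                    # superdiagonal
    ab[1, :] = 1.0 + 2.0 * r          # main diagonal
    ab[2, :-1] = -r                   # subdiagonal
    ab[1, 0] = ab[1, -1] = 1.0 + r    # reflective (Neumann) boundaries

    for step in range(30):
        for _ in range(n_sub):        # explicit upwind (Godunov-type) advection
            c[1:] -= u * dt_adv / dx * (c[1:] - c[:-1])
        c = solve_banded((1, 1), ab, c)   # single implicit dispersion solve
    print("solute mass after transport:", c.sum() * dx)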
Numerical simulation of circular cylinders in free-fall
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romero-Gomez, Pedro; Richmond, Marshall C.
2016-02-01
In this work, we combined the use of (i) overset meshes, (ii) a 6 degree-of-freedom (6-DOF) motion solver, and (iii) an eddy-resolving flow simulation approach to resolve the drag and secondary movement of large-sized cylinders settling in a quiescent fluid at moderate terminal Reynolds numbers (1,500 < Re < 28,000). These three strategies were implemented in a series of computational fluid dynamics (CFD) solutions to describe the fluid-structure interactions and the resulting effects on the cylinder motion. Using the drag coefficient, oscillation period, and maximum angular displacement as baselines, the findings show good agreement between the present CFD results and corresponding data of published laboratory experiments. We discussed the computational expense incurred in using the present modeling approach. We also conducted a preceding simulation of flow past a fixed cylinder at Re = 3,900, which tested the influence of the turbulence approach (time-averaging vs eddy-resolving) and the meshing strategy (continuous vs. overset) on the numerical results. The outputs indicated a strong effect of the former and an insignificant influence of the latter. The long-term motivation for the present study is the need to understand the motion of an autonomous sensor of cylindrical shape used to measure the hydraulic conditions occurring in operating hydropower turbines.
Full Coupling Between the Atmosphere, Surface, and Subsurface for Integrated Hydrologic Simulation
NASA Astrophysics Data System (ADS)
Davison, Jason Hamilton; Hwang, Hyoun-Tae; Sudicky, Edward A.; Mallia, Derek V.; Lin, John C.
2018-01-01
An ever increasing community of earth system modelers is incorporating new physical processes into numerical models. This trend is facilitated by advancements in computational resources, improvements in simulation skill, and the desire to build numerical simulators that represent the water cycle with greater fidelity. In this quest to develop a state-of-the-art water cycle model, we coupled HydroGeoSphere (HGS), a 3-D control-volume finite element surface and variably saturated subsurface flow model that includes evapotranspiration processes, to the Weather Research and Forecasting (WRF) Model, a 3-D finite difference nonhydrostatic mesoscale atmospheric model. The two-way coupled model, referred to as HGS-WRF, exchanges the actual evapotranspiration fluxes and soil saturations calculated by HGS to WRF; conversely, the potential evapotranspiration and precipitation fluxes from WRF are passed to HGS. The flexible HGS-WRF coupling method allows for unique meshes used by each model, while maintaining mass and energy conservation between the domains. Furthermore, the HGS-WRF coupling implements a subtime stepping algorithm to minimize computational expense. As a demonstration of HGS-WRF's capabilities, we applied it to the California Basin and found a strong connection between the depth to the groundwater table and the latent heat fluxes across the land surface.
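A highly simplified, hypothetical sketch of a two-way coupling loop with subtime stepping follows: a toy land model takes several short steps inside each toy atmospheric step, exchanging actual evapotranspiration and moisture one way and precipitation and potential evapotranspiration the other. The functions, time steps, and units are invented and bear no relation to the HGS-WRF code.

    dt_atm, n_sub = 600.0, 6          # atmospheric step [s], land-model substeps

    def atmosphere_step(actual_et, moisture):
        # toy feedback: drier soil -> less precipitation, higher potential ET
        precip = 5e-6 if moisture > 0.35 else 1e-6
        pet = 2e-6 + 0.5 * actual_et
        return precip, pet

    def land_substep(moisture, precip, pet, dt):
        actual_et = min(pet, moisture * 1e-5)        # moisture-limited evapotranspiration
        return moisture + (precip - actual_et) * dt, actual_et

    soil_moisture, precip, pet, actual_et = 0.30, 0.0, 0.0, 0.0
    for step in range(24):            # four hours of coupled toy simulation
        for _ in range(n_sub):        # subtime stepping on the land side
            soil_moisture, actual_et = land_substep(
                soil_moisture, precip, pet, dt_atm / n_sub)
        precip, pet = atmosphere_step(actual_et, soil_moisture)   # exchange at coupling interval
    print("final soil moisture:", round(soil_moisture, 4))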
Simulating Operations at a Spaceport
NASA Technical Reports Server (NTRS)
Nevins, Michael R.
2007-01-01
SPACESIM is a computer program for detailed simulation of operations at a spaceport. SPACESIM is being developed to greatly improve existing spaceports and to aid in designing, building, and operating future spaceports, given that there is a worldwide trend in spaceport operations from very expensive, research- oriented launches to more frequent commercial launches. From an operational perspective, future spaceports are expected to resemble current airports and seaports, for which it is necessary to resolve issues of safety, security, efficient movement of machinery and people, cost effectiveness, timeliness, and maximizing effectiveness in utilization of resources. Simulations can be performed, for example, to (1) simultaneously analyze launches of reusable and expendable rockets and identify bottlenecks arising from competition for limited resources or (2) perform what-if scenario analyses to identify optimal scenarios prior to making large capital investments. SPACESIM includes an object-oriented discrete-event-simulation engine. (Discrete- event simulation has been used to assess processes at modern seaports.) The simulation engine is built upon the Java programming language for maximum portability. Extensible Markup Language (XML) is used for storage of data to enable industry-standard interchange of data with other software. A graphical user interface facilitates creation of scenarios and analysis of data.
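The discrete-event kernel at the heart of such a simulator can be sketched with a priority heap of time-stamped events; the launch-request names, durations, and the single shared pad below are invented for illustration and are not SPACESIM itself.

    import heapq
    import random

    random.seed(0)
    events = []                                    # priority heap of (time, kind, name)
    for i in range(10):                            # ten launch requests arrive
        heapq.heappush(events, (random.uniform(0, 100), "request", f"mission-{i}"))

    pad_free_at = 0.0
    completed = []
    while events:
        t, kind, name = heapq.heappop(events)      # advance to the next event in time
        if kind == "request":
            start = max(t, pad_free_at)            # wait if the shared pad is busy
            duration = random.uniform(5, 15)       # pad occupancy for this launch
            pad_free_at = start + duration
            heapq.heappush(events, (pad_free_at, "complete", name))
        else:
            completed.append((name, round(t, 1)))

    print("launch completions:", completed)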
High fidelity simulations of infrared imagery with animated characters
NASA Astrophysics Data System (ADS)
Näsström, F.; Persson, A.; Bergström, D.; Berggren, J.; Hedström, J.; Allvar, J.; Karlsson, M.
2012-06-01
High fidelity simulations of IR signatures and imagery tend to be slow and do not have effective support for animation of characters. Simplified rendering methods based on computer graphics methods can be used to overcome these limitations. This paper presents a method to combine these tools and produce simulated high fidelity thermal IR data of animated people in terrain. Infrared signatures for human characters have been calculated using RadThermIR. To handle multiple character models, these calculations use a simplified material model for the anatomy and clothing. Weather and temperature conditions match the IR-texture used in the terrain model. The calculated signatures are applied to the animated 3D characters that, together with the terrain model, are used to produce high fidelity IR imagery of people or crowds. For high level animation control and crowd simulations, HLAS (High Level Animation System) has been developed. There are tools available to create and visualize skeleton based animations, but tools that allow control of the animated characters on a higher level, e.g. for crowd simulation, are usually expensive and closed source. We need the flexibility of HLAS to add animation into an HLA enabled sensor system simulation framework.
NASA Astrophysics Data System (ADS)
Cuzzone, Joshua K.; Morlighem, Mathieu; Larour, Eric; Schlegel, Nicole; Seroussi, Helene
2018-05-01
Paleoclimate proxies are being used in conjunction with ice sheet modeling experiments to determine how the Greenland ice sheet responded to past changes, particularly during the last deglaciation. Although these comparisons have been a critical component in our understanding of the Greenland ice sheet sensitivity to past warming, they often rely on modeling experiments that favor minimizing computational expense over increased model physics. Over Paleoclimate timescales, simulating the thermal structure of the ice sheet has large implications on the modeled ice viscosity, which can feedback onto the basal sliding and ice flow. To accurately capture the thermal field, models often require a high number of vertical layers. This is not the case for the stress balance computation, however, where a high vertical resolution is not necessary. Consequently, since stress balance and thermal equations are generally performed on the same mesh, more time is spent on the stress balance computation than is otherwise necessary. For these reasons, running a higher-order ice sheet model (e.g., Blatter-Pattyn) over timescales equivalent to the paleoclimate record has not been possible without incurring a large computational expense. To mitigate this issue, we propose a method that can be implemented within ice sheet models, whereby the vertical interpolation along the z axis relies on higher-order polynomials, rather than the traditional linear interpolation. This method is tested within the Ice Sheet System Model (ISSM) using quadratic and cubic finite elements for the vertical interpolation on an idealized case and a realistic Greenland configuration. A transient experiment for the ice thickness evolution of a single-dome ice sheet demonstrates improved accuracy using the higher-order vertical interpolation compared to models using the linear vertical interpolation, despite having fewer degrees of freedom. This method is also shown to improve a model's ability to capture sharp thermal gradients in an ice sheet particularly close to the bed, when compared to models using a linear vertical interpolation. This is corroborated in a thermal steady-state simulation of the Greenland ice sheet using a higher-order model. In general, we find that using a higher-order vertical interpolation decreases the need for a high number of vertical layers, while dramatically reducing model runtime for transient simulations. Results indicate that when using a higher-order vertical interpolation, runtimes for a transient ice sheet relaxation are upwards of 5 to 7 times faster than using a model which has a linear vertical interpolation, and this thus requires a higher number of vertical layers to achieve a similar result in simulated ice volume, basal temperature, and ice divide thickness. The findings suggest that this method will allow higher-order models to be used in studies investigating ice sheet behavior over paleoclimate timescales at a fraction of the computational cost than would otherwise be needed for a model using a linear vertical interpolation.
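The core numerical point can be sketched on a toy temperature profile with a sharp near-bed gradient: with the same few vertical layers, a quadratic interpolation can track the profile more closely than a linear one (the profile, layer count, and use of SciPy's generic spline interpolant are assumptions for illustration).

    import numpy as np
    from scipy.interpolate import interp1d

    def true_profile(z):                    # sharp thermal gradient near the bed (z = 0)
        return -10.0 + 12.0 * np.exp(-8.0 * z)

    z_coarse = np.linspace(0.0, 1.0, 6)     # only a few vertical layers
    z_fine = np.linspace(0.0, 1.0, 200)
    T_coarse = true_profile(z_coarse)

    linear = interp1d(z_coarse, T_coarse, kind="linear")(z_fine)
    quadratic = interp1d(z_coarse, T_coarse, kind="quadratic")(z_fine)

    truth = true_profile(z_fine)
    print("max error, linear interpolation   :", np.max(np.abs(linear - truth)))
    print("max error, quadratic interpolation:", np.max(np.abs(quadratic - truth)))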
Neilson, Matthew P; Mackenzie, John A; Webb, Steven D; Insall, Robert H
2010-11-01
In this paper we present a computational tool that enables the simulation of mathematical models of cell migration and chemotaxis on an evolving cell membrane. Recent models require the numerical solution of systems of reaction-diffusion equations on the evolving cell membrane and then the solution state is used to drive the evolution of the cell edge. Previous work involved moving the cell edge using a level set method (LSM). However, the LSM is computationally very expensive, which severely limits the practical usefulness of the algorithm. To address this issue, we have employed the parameterised finite element method (PFEM) as an alternative method for evolving a cell boundary. We show that the PFEM is far more efficient and robust than the LSM. We therefore suggest that the PFEM potentially has an essential role to play in computational modelling efforts towards the understanding of many of the complex issues related to chemotaxis.
Dish layouts analysis method for concentrative solar power plant.
Xu, Jinshan; Gan, Shaocong; Li, Song; Ruan, Zhongyuan; Chen, Shengyong; Wang, Yong; Gui, Changgui; Wan, Bin
2016-01-01
Designs that maximize the use of solar radiation for a given reflective area without increasing the investment expense are important to solar power plant construction. We provide a method that allows one to compute the shaded area at any given time as well as the total shading effect over a day. By establishing a local coordinate system with the origin at the apex of a parabolic dish and the z-axis pointing to the sun, only neighboring dishes with [Formula: see text] would shade onto the dish when in tracking mode. This procedure reduces the required computational resources, simplifies the calculation and allows a quick search for the optimum layout by considering all aspects leading to an optimized arrangement: aspect ratio, shifting and rotation. Computer simulations, done with information on a dish Stirling system as well as DNI data released by NREL, show that regular spacing is not an optimal layout; shifting and rotating the columns by a certain amount can bring more benefits.
A Sequential Optimization Sampling Method for Metamodels with Radial Basis Functions
Pan, Guang; Ye, Pengcheng; Yang, Zhidong
2014-01-01
Metamodels have been widely used in engineering design to facilitate analysis and optimization of complex systems that involve computationally expensive simulation programs. The accuracy of metamodels is strongly affected by the sampling method. In this paper, a new sequential optimization sampling method is proposed. Based on the new sampling method, metamodels are constructed repeatedly through the addition of sampling points, namely, extremum points of the metamodels and minimum points of a density function, so that increasingly accurate metamodels are obtained. The validity and effectiveness of the proposed sampling method are examined on typical numerical examples. PMID:25133206
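As a generic illustration of the kind of radial basis function metamodel referred to in the title (not the authors' implementation; the Gaussian kernel width `epsilon` and the stand-in test function are assumptions):

```python
import numpy as np

def fit_rbf(X, y, epsilon=1.0):
    """Fit a Gaussian RBF interpolant through samples (X, y)."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # pairwise distances
    Phi = np.exp(-(epsilon * d) ** 2)                            # Gaussian kernel matrix
    return np.linalg.solve(Phi, y)                               # interpolation weights

def eval_rbf(Xnew, X, weights, epsilon=1.0):
    d = np.linalg.norm(Xnew[:, None, :] - X[None, :, :], axis=-1)
    return np.exp(-(epsilon * d) ** 2) @ weights

# Usage: cheap metamodel of an "expensive" 2-D test function (assumed for illustration)
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(30, 2))
y = np.sin(X[:, 0]) * np.cos(X[:, 1])          # stand-in for an expensive simulation
w = fit_rbf(X, y)
print(eval_rbf(np.array([[0.5, 0.5]]), X, w))  # surrogate prediction at a new point
```

Sequential sampling methods of the type described above would then append new points (e.g., surrogate extrema) to `X` and refit.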
NASA Technical Reports Server (NTRS)
Beecken, Brian P.; Kleinman, Randall R.
2004-01-01
New developments in infrared sensor technology have potentially made possible a new space-based system that can measure far-infrared radiation at lower costs (mass, power and expense). The Stationary Imaging Fourier Transform Spectrometer (SIFTS), proposed by NASA Langley Research Center, makes use of new detector array technology. A mathematical model which simulates resolution and spectral range relationships has been developed for analyzing the utility of such a radically new approach to spectroscopy. Calculations with this forward model emulate the effects of a detector array on the ability to retrieve accurate spectral features. Initial computations indicate significant attenuation at high wavenumbers.
Real-Time Aerodynamic Parameter Estimation without Air Flow Angle Measurements
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
2010-01-01
A technique for estimating aerodynamic parameters in real time from flight data without air flow angle measurements is described and demonstrated. The method is applied to simulated F-16 data, and to flight data from a subscale jet transport aircraft. Modeling results obtained with the new approach using flight data without air flow angle measurements were compared to modeling results computed conventionally using flight data that included air flow angle measurements. Comparisons demonstrated that the new technique can provide accurate aerodynamic modeling results without air flow angle measurements, which are often difficult and expensive to obtain. Implications for efficient flight testing and flight safety are discussed.
NASA Astrophysics Data System (ADS)
Mohd Sakri, F.; Mat Ali, M. S.; Sheikh Salim, S. A. Z.
2016-10-01
The fluid physics of liquid draining inside a tank is easily accessible using numerical simulation. However, numerical simulation is expensive when the liquid draining involves a multi-phase problem. Since an accurate numerical simulation can only be obtained if a proper method for error estimation is applied, this paper provides a systematic assessment of the error due to grid convergence using OpenFOAM. OpenFOAM is an open-source CFD toolbox that is well known among researchers and institutions because it is freely available and ready to use. In this study, three grid resolutions are used: coarse, medium and fine. The Grid Convergence Index (GCI) is applied to estimate the error due to grid sensitivity. A monotonic convergence condition is obtained in this study, showing that the grid convergence error is progressively reduced. The fine grid has a GCI value below 1%, and the value extrapolated with Richardson extrapolation lies within the range of the computed GCI.
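As a hedged illustration of the standard three-grid GCI procedure (Roache's formulation; the sample values and the safety factor Fs = 1.25 are assumptions, not the paper's data):

```python
import math

def gci_three_grids(f_fine, f_med, f_coarse, r, Fs=1.25):
    """Observed order, Richardson-extrapolated value, and fine-grid GCI
    for three solutions on systematically refined grids (refinement ratio r)."""
    p = math.log(abs(f_coarse - f_med) / abs(f_med - f_fine)) / math.log(r)  # observed order
    f_exact = f_fine + (f_fine - f_med) / (r**p - 1.0)                       # Richardson extrapolation
    e21 = abs((f_med - f_fine) / f_fine)                                     # relative error, fine vs medium
    gci_fine = Fs * e21 / (r**p - 1.0)                                       # fine-grid GCI
    return p, f_exact, gci_fine

# Example with made-up drain-time values on coarse/medium/fine grids, r = 2
print(gci_three_grids(f_fine=10.02, f_med=10.10, f_coarse=10.35, r=2))
```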
Risk Assessment of Carbon Sequestration into A Naturally Fractured Reservoir at Kevin Dome, Montana
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nguyen, Minh; Onishi, Tsubasa; Carey, James William
In this report, we describe risk assessment work done using the National Risk Assessment Partnership (NRAP) applied to CO2 storage at Kevin Dome, Montana. Geologic CO2 sequestration in saline aquifers poses certain risks including CO2/brine leakage through wells or non-sealing faults into groundwater or to the land surface. These risks are difficult to quantify due to data availability and uncertainty. One solution is to explore the consequences of these limitations by running large numbers of numerical simulations on the primary CO2 injection reservoir, shallow reservoirs/aquifers, faults, and wells to assess leakage risks and uncertainties. However, a large number of full-physics simulations is usually too computationally expensive. The NRAP integrated assessment model (NRAP-IAM) uses reduced order models (ROMs) developed from full-physics simulations to address this issue. A powerful stochastic framework allows NRAP-IAM to explore complex interactions among many uncertain variables and evaluate the likely performance of potential sequestration sites.
NASA Astrophysics Data System (ADS)
Kreyca, J. F.; Falahati, A.; Kozeschnik, E.
2016-03-01
For industry, the mechanical properties of a material in the form of flow curves are essential input data for finite element simulations. Current practice is to obtain flow curves experimentally and to apply fitting procedures to obtain constitutive equations that describe the material response to external loading as a function of temperature and strain rate. Unfortunately, the experimental procedure for characterizing flow curves is complex and expensive, which is why the prediction of flow curves by computer modelling is becoming increasingly important. In the present work, we introduce a state-parameter-based model that is capable of predicting the flow curves of an A6061 aluminium alloy in different heat-treatment conditions. The model is implemented in the thermo-kinetic software package MatCalc and takes into account precipitation kinetics, subgrain formation, dynamic recovery by spontaneous annihilation and dislocation climb. To validate the simulation results, a series of compression tests is performed on the thermo-mechanical simulator Gleeble 1500.
Nonlinear to Linear Elastic Code Coupling in 2-D Axisymmetric Media.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Preston, Leiph
Explosions within the earth nonlinearly deform the local media, but at typical seismological observation distances, the seismic waves can be considered linear. Although nonlinear algorithms can simulate explosions in the very near field well, these codes are computationally expensive and inaccurate at propagating these signals to great distances. A linearized wave propagation code, coupled to a nonlinear code, provides an efficient mechanism to both accurately simulate the explosion itself and to propagate these signals to distant receivers. To this end we have coupled Sandia's nonlinear simulation algorithm CTH to a linearized elastic wave propagation code for 2-D axisymmetric media (axiElasti) by passing information from the nonlinear to the linear code via time-varying boundary conditions. In this report, we first develop the 2-D axisymmetric elastic wave equations in cylindrical coordinates. Next we show how we design the time-varying boundary conditions passing information from CTH to axiElasti, and finally we demonstrate the coupling code via a simple study of the elastic radius.
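For reference, a standard form of the axisymmetric elastic equations of motion in cylindrical coordinates (r, z) for displacements (u_r, u_z); this is the textbook form and not necessarily the exact notation of the report:

```latex
\rho\,\frac{\partial^2 u_r}{\partial t^2}
  = \frac{\partial \sigma_{rr}}{\partial r}
  + \frac{\partial \sigma_{rz}}{\partial z}
  + \frac{\sigma_{rr}-\sigma_{\theta\theta}}{r},
\qquad
\rho\,\frac{\partial^2 u_z}{\partial t^2}
  = \frac{\partial \sigma_{rz}}{\partial r}
  + \frac{\partial \sigma_{zz}}{\partial z}
  + \frac{\sigma_{rz}}{r},
\qquad
\sigma_{ij} = \lambda\,\delta_{ij}\,\nabla\!\cdot\!\mathbf{u} + 2\mu\,\varepsilon_{ij}.
```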
Computer-aided applications of nanoscale smart materials for biomedical applications.
Rakesh, L; Howell, B A; Chai, M; Mueller, A; Kujawski, M; Fan, D; Ravi, S; Slominski, C
2008-10-01
Nanotechnology has the potential to impact the treatment of many diseases that currently plague society: cancer, AIDS, dementia of various kinds and so on. Nanoscale smart materials, such as carbon nanotubes, C(60), dendrimers and cyclodextrins, hold great promise for use in the development of better diagnostics, drug delivery and the alteration of biological function. Although experimentation is being used to explore the potential offered by these materials, it is by its very nature expensive in terms of time, resources and expertise. Insight with respect to the behavior of these materials in the presence of biological entities can be obtained much more rapidly by molecular dynamics simulation. Furthermore, the results of simulation may be used to guide experimentation so that it is much more productive than it might be in the absence of such information. The interactions of several nanoscale structures with biological macromolecules can already be probed effectively using molecular dynamics simulation. The results obtained should form the basis for significant new developments in the treatment of disease.
Incorporation of a two metre long PET scanner in STIR
NASA Astrophysics Data System (ADS)
Tsoumpas, C.; Brain, C.; Dyke, T.; Gold, D.
2015-09-01
The Explorer project aims to investigate the potential benefits of a total-body 2 metre long PET scanner. The following investigation incorporates this scanner in the STIR library and demonstrates the capabilities and weaknesses of existing reconstruction (FBP and OSEM) and single scatter simulation algorithms. It was found that sensible images are reconstructed but at the expense of high memory and processing time demands. FBP requires 4 hours on a core; OSEM requires 2 hours per iteration if run in parallel on 15 cores of a high-performance computer. The single scatter simulation algorithm shows that on a short scale, up to a fifth of the scanner length, the assumption that the scatter between direct rings is similar to the scatter between the oblique rings is approximately valid. However, for more extreme cases this assumption is no longer valid, which illustrates that consideration of the oblique rings within the single scatter simulation will be necessary, if this scatter correction is the method of choice.
Hydrocode and Molecular Dynamics modelling of uniaxial shock wave experiments on Silicon
NASA Astrophysics Data System (ADS)
Stubley, Paul; McGonegle, David; Patel, Shamim; Suggit, Matthew; Wark, Justin; Higginbotham, Andrew; Comley, Andrew; Foster, John; Rothman, Steve; Eggert, Jon; Kalantar, Dan; Smith, Ray
2015-06-01
Recent experiments have provided further evidence that the response of silicon to shock compression has anomalous properties, not described by the usual two-wave elastic-plastic response. A recent experimental campaign on the Orion laser in particular has indicated a complex multi-wave response. While Molecular Dynamics (MD) simulations can offer detailed insight into the response of crystals to uniaxial compression, they are extremely computationally expensive. For this reason, we are adapting a simple quasi-2D hydrodynamics code to capture phase change under uniaxial compression, and the intervening mixed phase region, keeping track of the stresses and strains in each of the phases. This strain information is important because a large number of shock experiments use diffraction as a key diagnostic, and these diffraction patterns depend solely on the elastic strains in the sample. We present here a comparison of the new hydrodynamics code with MD simulations, and show that the simulated diffraction taken from the code agrees qualitatively with measured diffraction from our recent Orion campaign.
The impact of population aging on medical expenses: A big data study based on the life table.
Wang, Changying; Li, Fen; Wang, Linan; Zhou, Wentao; Zhu, Bifan; Zhang, Xiaoxi; Ding, Lingling; He, Zhimin; Song, Peipei; Jin, Chunlin
2018-01-09
This study shed light on the amount and structure of health-care utilization and medical expenses of Shanghai permanent residents based on big data, simulated lifetime medical expenses by combining expense data with a life table model, and explored the dynamic pattern of aging on medical expenditures. Taking 5 years as the class interval, the study collected and descriptively analyzed the medical services utilization and medical expense information for all ages of Shanghai permanent residents in 2015, and simulated lifetime medical expenses using the current life table and cross-sectional expenditure data. The results showed that in 2015, outpatient and emergency visits per capita in the elderly group (aged 60 and over) were 4.1 and 4.5 times higher than in the childhood group (aged 1-14) and the youth and adult group (aged 15-59), respectively; hospitalizations per capita in the elderly group were 3.0 and 3.5 times higher than in the childhood group and the youth and adult group, respectively. For people surviving to the 60-64 years group, the expected medical expenses over the rest of their lives (105,447 purchasing power parity Dollars) accounted for 75.6% of their lifetime expenses. A similar study in Michigan, US, showed that the expenses of the population aged 65 and over accounted for 1/2 of lifetime medical expenses, which is much lower than in Shanghai. The medical expenses of the advanced elderly group (aged 80 and over) accounted for 38.8% of their lifetime expenses, including 38.2% in outpatient and emergency care and 39.5% in hospitalization, the latter being slightly higher. There is room to economize on the medical expenditures of elderly people in Shanghai; in particular, controlling hospitalization expenses is the key to saving medical expenses of elderly people aged 80 and over.
Using quantum chemistry muscle to flex massive systems: How to respond to something perturbing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bertoni, Colleen
Computational chemistry uses the theoretical advances of quantum mechanics and the algorithmic and hardware advances of computer science to give insight into chemical problems. It is currently possible to do highly accurate quantum chemistry calculations, but the most accurate methods are very computationally expensive. Thus it is only feasible to do highly accurate calculations on small molecules, since typically more computationally efficient methods are also less accurate. The overall goal of my dissertation work has been to try to decrease the computational expense of calculations without decreasing the accuracy. In particular, my dissertation work focuses on fragmentation methods, intermolecular interaction methods, analytic gradients, and taking advantage of new hardware.
NASA Astrophysics Data System (ADS)
Pitton, Giuseppe; Quaini, Annalisa; Rozza, Gianluigi
2017-09-01
We focus on reducing the computational costs associated with the hydrodynamic stability of solutions of the incompressible Navier-Stokes equations for a Newtonian and viscous fluid in contraction-expansion channels. In particular, we are interested in studying steady bifurcations, occurring when non-unique stable solutions appear as physical and/or geometric control parameters are varied. The formulation of the stability problem requires solving an eigenvalue problem for a partial differential operator. An alternative to this approach is the direct simulation of the flow to characterize the asymptotic behavior of the solution. Both approaches can be extremely expensive in terms of computational time. We propose to apply Reduced Order Modeling (ROM) techniques to reduce the demanding computational costs associated with the detection of a type of steady bifurcations in fluid dynamics. The application that motivated the present study is the onset of asymmetries (i.e., symmetry breaking bifurcation) in blood flow through a regurgitant mitral valve, depending on the Reynolds number and the regurgitant mitral valve orifice shape.
Efficient Stochastic Inversion Using Adjoint Models and Kernel-PCA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thimmisetty, Charanraj A.; Zhao, Wenju; Chen, Xiao
2017-10-18
Performing stochastic inversion on a computationally expensive forward simulation model with a high-dimensional uncertain parameter space (e.g. a spatial random field) is computationally prohibitive even when gradient information can be computed efficiently. Moreover, the ‘nonlinear’ mapping from parameters to observables generally gives rise to non-Gaussian posteriors even with Gaussian priors, thus hampering the use of efficient inversion algorithms designed for models with Gaussian assumptions. In this paper, we propose a novel Bayesian stochastic inversion methodology, which is characterized by a tight coupling between the gradient-based Langevin Markov Chain Monte Carlo (LMCMC) method and a kernel principal component analysis (KPCA). This approach addresses the ‘curse-of-dimensionality’ via KPCA to identify a low-dimensional feature space within the high-dimensional and nonlinearly correlated parameter space. In addition, non-Gaussian posterior distributions are estimated via an efficient LMCMC method on the projected low-dimensional feature space. We will demonstrate this computational framework by integrating and adapting our recent data-driven statistics-on-manifolds constructions and reduction-through-projection techniques to a linear elasticity model.
Time-Shifted Boundary Conditions Used for Navier-Stokes Aeroelastic Solver
NASA Technical Reports Server (NTRS)
Srivastava, Rakesh
1999-01-01
Under the Advanced Subsonic Technology (AST) Program, an aeroelastic analysis code (TURBO-AE) based on the Navier-Stokes equations is currently under development at NASA Lewis Research Center's Machine Dynamics Branch. For a blade row, aeroelastic instability can occur at any of the possible interblade phase angles (IBPAs). Analyzing small IBPAs is very computationally expensive because a large number of blade passages must be simulated. To reduce the computational cost of these analyses, we used time-shifted, or phase-lagged, boundary conditions in the TURBO-AE code. These conditions can be used to reduce the computational domain to a single blade passage by requiring the boundary conditions across the passage to be lagged depending on the IBPA being analyzed. The time-shifted boundary conditions currently implemented are based on the direct-store method. This method requires large amounts of data to be stored over a period of the oscillation cycle. On CRAY computers this is not a major problem because solid-state devices can be used for fast input and output to read and write the data to disk instead of storing it in core memory.
Conway, J; Sharkey, R
2002-10-01
The Faculty of Nursing, University of Newcastle, Australia, has been keen to initiate strategies that enhance student learning and nursing practice. Two such strategies are problem-based learning (PBL) and clinical practice. The Faculty has maintained a comparatively high proportion of the undergraduate hours in the clinical setting in times when financial constraints suggest that simulations and on-campus laboratory experiences may be less expensive. Increasingly, computer-based technologies are becoming sufficiently refined to support the exploration of nursing practice in a non-traditional lecture/tutorial environment. In 1998, a group of faculty members proposed that computer-mediated instruction would provide an opportunity for partnership between students, academics and clinicians that would promote more positive outcomes for all and maintain the integrity of the PBL approach. This paper discusses the similarities between problem-based and practice-based learning and presents the findings of an evaluative study of the implementation of a practice-based learning model that uses computer-mediated communication to promote integration of practice experiences with the broader goals of the undergraduate curriculum.
Code of Federal Regulations, 2014 CFR
2014-10-01
... (Class A Telephone Companies). 36.311 Section 36.311 Telecommunication FEDERAL COMMUNICATIONS COMMISSION..., office equipment, and general purpose computers. (b) The expenses in these account are apportioned among...
Code of Federal Regulations, 2013 CFR
2013-10-01
... (Class A Telephone Companies). 36.311 Section 36.311 Telecommunication FEDERAL COMMUNICATIONS COMMISSION..., office equipment, and general purpose computers. (b) The expenses in these account are apportioned among...
Code of Federal Regulations, 2012 CFR
2012-10-01
... (Class A Telephone Companies). 36.311 Section 36.311 Telecommunication FEDERAL COMMUNICATIONS COMMISSION..., office equipment, and general purpose computers. (b) The expenses in these account are apportioned among...
RuleMonkey: software for stochastic simulation of rule-based models
2010-01-01
Background The system-level dynamics of many molecular interactions, particularly protein-protein interactions, can be conveniently represented using reaction rules, which can be specified using model-specification languages, such as the BioNetGen language (BNGL). A set of rules implicitly defines a (bio)chemical reaction network. The reaction network implied by a set of rules is often very large, and as a result, generation of the network implied by rules tends to be computationally expensive. Moreover, the cost of many commonly used methods for simulating network dynamics is a function of network size. Together these factors have limited application of the rule-based modeling approach. Recently, several methods for simulating rule-based models have been developed that avoid the expensive step of network generation. The cost of these "network-free" simulation methods is independent of the number of reactions implied by rules. Software implementing such methods is now needed for the simulation and analysis of rule-based models of biochemical systems. Results Here, we present a software tool called RuleMonkey, which implements a network-free method for simulation of rule-based models that is similar to Gillespie's method. The method is suitable for rule-based models that can be encoded in BNGL, including models with rules that have global application conditions, such as rules for intramolecular association reactions. In addition, the method is rejection free, unlike other network-free methods that introduce null events, i.e., steps in the simulation procedure that do not change the state of the reaction system being simulated. We verify that RuleMonkey produces correct simulation results, and we compare its performance against DYNSTOC, another BNGL-compliant tool for network-free simulation of rule-based models. We also compare RuleMonkey against problem-specific codes implementing network-free simulation methods. Conclusions RuleMonkey enables the simulation of rule-based models for which the underlying reaction networks are large. It is typically faster than DYNSTOC for benchmark problems that we have examined. RuleMonkey is freely available as a stand-alone application http://public.tgen.org/rulemonkey. It is also available as a simulation engine within GetBonNie, a web-based environment for building, analyzing and sharing rule-based models. PMID:20673321
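For orientation, a minimal sketch of Gillespie's direct method for a fixed reaction network; network-free, rule-based engines such as RuleMonkey instead evaluate rule propensities on the fly, so this is only an illustration of the underlying stochastic simulation idea, and the two-reaction system shown is an assumption:

```python
import numpy as np

def gillespie_direct(x0, stoich, rates, t_end, seed=0):
    """Gillespie's direct SSA for a small reaction network.
    x0: initial counts; stoich: state-change vectors; rates: propensity functions."""
    rng = np.random.default_rng(seed)
    t, x, history = 0.0, np.array(x0, dtype=float), [(0.0, list(x0))]
    while t < t_end:
        a = np.array([r(x) for r in rates])       # propensities for the current state
        a0 = a.sum()
        if a0 <= 0:
            break                                 # no reaction can fire
        t += rng.exponential(1.0 / a0)            # time to next reaction
        j = rng.choice(len(rates), p=a / a0)      # which reaction fires
        x += stoich[j]
        history.append((t, list(x)))
    return history

# Example: A -> B (rate k1*A) and B -> A (rate k2*B), assumed for illustration
k1, k2 = 1.0, 0.5
traj = gillespie_direct(
    x0=[100, 0],
    stoich=[np.array([-1, 1]), np.array([1, -1])],
    rates=[lambda x: k1 * x[0], lambda x: k2 * x[1]],
    t_end=5.0,
)
print(traj[-1])
```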
Study of hypervelocity projectile impact on thick metal plates
Roy, Shawoon K.; Trabia, Mohamed; O’Toole, Brendan; ...
2016-01-01
Hypervelocity impacts generate extreme pressure and shock waves in impacted targets that undergo severe localized deformation within a few microseconds. These impact experiments pose unique challenges in terms of obtaining accurate measurements. Similarly, simulating these experiments is not straightforward. This paper proposes an approach to experimentally measure the velocity of the back surface of an A36 steel plate impacted by a projectile. All experiments used a combination of a two-stage light-gas gun and the photonic Doppler velocimetry (PDV) technique. The experimental data were used to benchmark and verify computational studies. Two different finite-element methods were used to simulate the experiments: Lagrangian-based smooth particle hydrodynamics (SPH) and an Eulerian-based hydrocode. Both codes used the Johnson-Cook material model and the Mie-Grüneisen equation of state. Experiments and simulations were compared based on the physical damage area and the back surface velocity. Finally, the results of this study showed that the proposed simulation approaches could be used to reduce the need for expensive experiments.
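For reference, the standard Johnson-Cook flow-stress form used in such simulations (generic notation; the material constants A, B, C, n, m are material-specific and not given here):

```latex
\sigma_y = \left(A + B\,\varepsilon_p^{\,n}\right)
           \left(1 + C \ln \dot{\varepsilon}^{*}\right)
           \left(1 - T^{*m}\right),
\qquad
T^{*} = \frac{T - T_{\mathrm{room}}}{T_{\mathrm{melt}} - T_{\mathrm{room}}},
```

where ε_p is the equivalent plastic strain and ε̇* the dimensionless strain rate; the Mie-Grüneisen equation of state then relates pressure to density and internal energy in the shocked material.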
A new framework for the analysis of continental-scale convection-resolving climate simulations
NASA Astrophysics Data System (ADS)
Leutwyler, D.; Charpilloz, C.; Arteaga, A.; Ban, N.; Di Girolamo, S.; Fuhrer, O.; Hoefler, T.; Schulthess, T. C.; Christoph, S.
2017-12-01
High-resolution climate simulations at horizontal resolutions of O(1-4 km) allow explicit treatment of deep convection (thunderstorms and rain showers). Explicitly treating convection by the governing equations reduces uncertainties associated with parametrization schemes and allows a model formulation closer to physical first principles [1,2]. But kilometer-scale climate simulations with long integration periods and large computational domains are expensive, and data storage becomes unbearably voluminous. Hence new approaches to performing analysis are required. In the crCLIM project we propose a new climate modeling framework that allows scientists to conduct analysis at high spatial and temporal resolution. We tackle the computational cost by using the largest available supercomputers, such as hybrid CPU-GPU architectures. For this the COSMO model has been adapted to run on such architectures [2]. We then alleviate the I/O bottleneck by employing a simulation data-virtualizer (SDaVi) that allows storage (space) to be traded off for computational effort (time). This is achieved by caching the simulation outputs and efficiently launching re-simulations in case of cache misses. All this is done transparently to the analysis applications [3]. For the re-runs this approach requires a bit-reproducible version of COSMO, that is to say, a model that produces identical results on different architectures to ensure coherent recomputation of the requested data [4]. In this contribution we present a version of SDaVi, a first performance model, and a strategy to obtain bit-reproducibility across hardware architectures. [1] N. Ban, J. Schmidli, C. Schär. Evaluation of the convection-resolving regional climate modeling approach in decade-long simulations. J. Geophys. Res. Atmos., 7889-7907, 2014. [2] D. Leutwyler, O. Fuhrer, X. Lapillonne, D. Lüthi, C. Schär. Towards European-scale convection-resolving climate simulations with GPUs: a study with COSMO 4.19. Geosci. Model Dev, 3393-3412, 2016. [3] S. Di Girolamo, P. Schmid, T. Schulthess, T. Hoefler. Virtualized Big Data: Reproducing Simulation Output on Demand. Submit. to the 23rd ACM Symposium on PPoPP 18, Vienna, Austria. [4] A. Arteaga, O. Fuhrer, T. Hoefler. Designing Bit-Reproducible Portable High-Performance Applications. IEEE 28th IPDPS, 2014.
A First Look at the Upcoming SISO Space Reference FOM
NASA Technical Reports Server (NTRS)
Mueller, Bjorn; Crues, Edwin Z.; Dexter, Dan; Garro, Alfredo; Skuratovskiy, Anton; Vankov, Alexander
2016-01-01
Spaceflight is difficult, dangerous and expensive; human spaceflight even more so. In order to mitigate some of the danger and expense, professionals in the space domain have relied, and continue to rely, on computer simulation. Simulation is used at every level including concept, design, analysis, construction, testing, training and ultimately flight. As space systems have grown more complex, new simulation technologies have been developed, adopted and applied. Distributed simulation is one of those technologies. Distributed simulation provides a base technology for segmenting these complex space systems into smaller, and usually simpler, component systems or subsystems. This segmentation also supports the separation of responsibilities between participating organizations. This segmentation is particularly useful for complex space systems like the International Space Station (ISS), which is composed of many elements from many nations along with visiting vehicles from many nations. This is likely to be the case for future human space exploration activities. Over the years, a number of distributed simulations have been built within the space domain. While many use the High Level Architecture (HLA) to provide the infrastructure for interoperability, HLA without a Federation Object Model (FOM) is insufficient by itself to ensure interoperability. As a result, the Simulation Interoperability Standards Organization (SISO) is developing a Space Reference FOM. The Space Reference FOM Product Development Group is composed of members from several countries. They contribute experiences from projects within NASA, ESA and other organizations and represent government, academia and industry. The initial version of the Space Reference FOM is focusing on time and space and will provide the following: (i) a flexible positioning system using reference frames for arbitrary bodies in space, (ii) naming conventions for well-known reference frames, (iii) definitions of common time scales, (iv) federation agreements for common types of time management with focus on time-stepped simulation, and (v) support for physical entities, such as space vehicles and astronauts. The Space Reference FOM is expected to make collaboration politically, contractually and technically easier. It is also expected to make collaboration easier to manage and extend.
Best bang for your buck: GPU nodes for GROMACS biomolecular simulations
Páll, Szilárd; Fechner, Martin; Esztermann, Ansgar; de Groot, Bert L.; Grubmüller, Helmut
2015-01-01
The molecular dynamics simulation package GROMACS runs efficiently on a wide variety of hardware from commodity workstations to high performance computing clusters. Hardware features are well‐exploited with a combination of single instruction multiple data, multithreading, and message passing interface (MPI)‐based single program multiple data/multiple program multiple data parallelism while graphics processing units (GPUs) can be used as accelerators to compute interactions off‐loaded from the CPU. Here, we evaluate which hardware produces trajectories with GROMACS 4.6 or 5.0 in the most economical way. We have assembled and benchmarked compute nodes with various CPU/GPU combinations to identify optimal compositions in terms of raw trajectory production rate, performance‐to‐price ratio, energy efficiency, and several other criteria. Although hardware prices are naturally subject to trends and fluctuations, general tendencies are clearly visible. Adding any type of GPU significantly boosts a node's simulation performance. For inexpensive consumer‐class GPUs this improvement equally reflects in the performance‐to‐price ratio. Although memory issues in consumer‐class GPUs could pass unnoticed as these cards do not support error checking and correction memory, unreliable GPUs can be sorted out with memory checking tools. Apart from the obvious determinants for cost‐efficiency like hardware expenses and raw performance, the energy consumption of a node is a major cost factor. Over the typical hardware lifetime until replacement of a few years, the costs for electrical power and cooling can become larger than the costs of the hardware itself. Taking that into account, nodes with a well‐balanced ratio of CPU and consumer‐class GPU resources produce the maximum amount of GROMACS trajectory over their lifetime. © 2015 The Authors. Journal of Computational Chemistry Published by Wiley Periodicals, Inc. PMID:26238484
Best bang for your buck: GPU nodes for GROMACS biomolecular simulations.
Kutzner, Carsten; Páll, Szilárd; Fechner, Martin; Esztermann, Ansgar; de Groot, Bert L; Grubmüller, Helmut
2015-10-05
The molecular dynamics simulation package GROMACS runs efficiently on a wide variety of hardware from commodity workstations to high performance computing clusters. Hardware features are well-exploited with a combination of single instruction multiple data, multithreading, and message passing interface (MPI)-based single program multiple data/multiple program multiple data parallelism while graphics processing units (GPUs) can be used as accelerators to compute interactions off-loaded from the CPU. Here, we evaluate which hardware produces trajectories with GROMACS 4.6 or 5.0 in the most economical way. We have assembled and benchmarked compute nodes with various CPU/GPU combinations to identify optimal compositions in terms of raw trajectory production rate, performance-to-price ratio, energy efficiency, and several other criteria. Although hardware prices are naturally subject to trends and fluctuations, general tendencies are clearly visible. Adding any type of GPU significantly boosts a node's simulation performance. For inexpensive consumer-class GPUs this improvement equally reflects in the performance-to-price ratio. Although memory issues in consumer-class GPUs could pass unnoticed as these cards do not support error checking and correction memory, unreliable GPUs can be sorted out with memory checking tools. Apart from the obvious determinants for cost-efficiency like hardware expenses and raw performance, the energy consumption of a node is a major cost factor. Over the typical hardware lifetime until replacement of a few years, the costs for electrical power and cooling can become larger than the costs of the hardware itself. Taking that into account, nodes with a well-balanced ratio of CPU and consumer-class GPU resources produce the maximum amount of GROMACS trajectory over their lifetime. © 2015 The Authors. Journal of Computational Chemistry Published by Wiley Periodicals, Inc.
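To illustrate the cost-efficiency argument made above, a hedged back-of-the-envelope calculation of trajectory produced per unit total cost over a node's lifetime; every number below is an illustrative assumption, not a figure from the paper:

```python
# Hedged sketch: ns of trajectory per unit of total cost over a node's lifetime.
hardware_cost_eur = 2500.0          # assumed node purchase price
power_draw_kw = 0.45                # assumed average draw under load (incl. cooling overhead)
energy_price_eur_per_kwh = 0.25     # assumed electricity price
lifetime_years = 4                  # assumed replacement cycle
perf_ns_per_day = 85.0              # assumed GROMACS throughput for a benchmark system

hours = lifetime_years * 365 * 24
energy_cost = power_draw_kw * hours * energy_price_eur_per_kwh
total_cost = hardware_cost_eur + energy_cost
trajectory_ns = perf_ns_per_day * lifetime_years * 365

print(f"energy cost over lifetime: {energy_cost:.0f} EUR")   # here larger than the hardware cost
print(f"ns of trajectory per EUR:  {trajectory_ns / total_cost:.2f}")
```

With these assumed numbers the lifetime energy bill already exceeds the hardware price, which is the effect the abstract describes.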
Northrop, Paul W. C.; Pathak, Manan; Rife, Derek; ...
2015-03-09
Lithium-ion batteries are an important technology to facilitate efficient energy storage and enable a shift from petroleum-based energy to more environmentally benign sources. Such systems can be utilized most efficiently if good understanding of performance can be achieved for a range of operating conditions. Mathematical models can be useful to predict battery behavior to allow for optimization of design and control. An analytical solution is ideally preferred to solve the equations of a mathematical model, as it eliminates the error that arises when using numerical techniques and is usually computationally cheap. An analytical solution provides insight into the behavior of the system and also explicitly shows the effects of different parameters on the behavior. However, most engineering models, including the majority of battery models, cannot be solved analytically due to non-linearities in the equations and state-dependent transport and kinetic parameters. The numerical method used to solve the system of equations describing a battery operation can have a significant impact on the computational cost of the simulation. In this paper, a reformulation of the porous electrode pseudo-three-dimensional (P3D) model, which significantly reduces the computational cost of lithium-ion battery simulation while maintaining high accuracy, is discussed. This reformulation enables the use of the P3D model in applications that would otherwise be too computationally expensive to justify its use, such as online control, optimization, and parameter estimation. Furthermore, the P3D model has proven to be robust enough to allow for the inclusion of additional physical phenomena as understanding improves. In this study, the reformulated model is used to allow for more complicated physical phenomena to be considered for study, including thermal effects.
Development of Comprehensive Reduced Kinetic Models for Supersonic Reacting Shear Layer Simulations
NASA Technical Reports Server (NTRS)
Zambon, A. C.; Chelliah, H. K.; Drummond, J. P.
2006-01-01
Large-scale simulations of multi-dimensional unsteady turbulent reacting flows with detailed chemistry and transport can be computationally extremely intensive even on distributed computing architectures. With the development of suitable reduced chemical kinetic models, the number of scalar variables to be integrated can be decreased, leading to a significant reduction in the computational time required for the simulation with limited loss of accuracy in the results. A general MATLAB-based automated mechanism reduction procedure is presented to reduce any complex starting mechanism (detailed or skeletal) with minimal human intervention. Based on the application of the quasi steady-state (QSS) approximation for certain chemical species and on the elimination of the fast reaction rates in the mechanism, several comprehensive reduced models, capable of handling different fuels such as C2H4, CH4 and H2, have been developed and thoroughly tested for several combustion problems (ignition, propagation and extinction) and physical conditions (reactant compositions, temperatures, and pressures). A key feature of the present reduction procedure is the explicit solution of the concentrations of the QSS species, needed for the evaluation of the elementary reaction rates. In contrast, previous approaches relied on an implicit solution due to the strong coupling between QSS species, requiring computationally expensive inner iterations. A novel algorithm, based on the definition of a QSS species coupling matrix, is presented to (i) introduce appropriate truncations to the QSS algebraic relations and (ii) identify the optimal sequence for the explicit solution of the concentration of the QSS species. With the automatic generation of the relevant source code, the resulting reduced models can be readily implemented into numerical codes.
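As a generic illustration of the ordering step described above (not the authors' MATLAB tool), a QSS-species coupling matrix can be topologically sorted so that each QSS concentration is computed explicitly from species already solved; the small coupling matrix below is an assumption:

```python
from collections import deque

def explicit_solution_order(coupling):
    """Topological order (Kahn's algorithm) for QSS species.
    coupling[i][j] is True if species i's QSS expression needs species j."""
    n = len(coupling)
    indeg = [sum(coupling[i]) for i in range(n)]                    # unresolved dependencies
    users = [[i for i in range(n) if coupling[i][j]] for j in range(n)]
    queue = deque(i for i in range(n) if indeg[i] == 0)
    order = []
    while queue:
        j = queue.popleft()
        order.append(j)
        for i in users[j]:                                          # species i no longer waits on j
            indeg[i] -= 1
            if indeg[i] == 0:
                queue.append(i)
    if len(order) < n:
        raise ValueError("cyclic QSS coupling: truncate the algebraic relations first")
    return order

# Hypothetical coupling: species 1 depends on 0, species 2 depends on 0 and 1
coupling = [[False, False, False],
            [True,  False, False],
            [True,  True,  False]]
print(explicit_solution_order(coupling))   # e.g. [0, 1, 2]
```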
SAChES: Scalable Adaptive Chain-Ensemble Sampling.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Swiler, Laura Painton; Ray, Jaideep; Ebeida, Mohamed Salah
We present the development of a parallel Markov Chain Monte Carlo (MCMC) method called SAChES, Scalable Adaptive Chain-Ensemble Sampling. This capability is targeted at Bayesian calibration of computationally expensive simulation models. SAChES involves a hybrid of two methods: Differential Evolution Monte Carlo followed by Adaptive Metropolis. Both methods involve parallel chains. Differential evolution allows one to explore high-dimensional parameter spaces using loosely coupled (i.e., largely asynchronous) chains. Loose coupling allows the use of large chain ensembles, with far more chains than the number of parameters to explore. This reduces per-chain sampling burden, enables high-dimensional inversions and the use of computationally expensive forward models. The large number of chains can also ameliorate the impact of silent errors, which may affect only a few chains. The chain ensemble can also be sampled to provide an initial condition when an aberrant chain is re-spawned. Adaptive Metropolis takes the best points from the differential evolution and efficiently hones in on the posterior density. The multitude of chains in SAChES is leveraged to (1) enable efficient exploration of the parameter space; and (2) ensure robustness to silent errors which may be unavoidable in extreme-scale computational platforms of the future. This report outlines SAChES, describes four papers that are the result of the project, and discusses some additional results.
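As a hedged sketch of the differential-evolution proposal typically used in such ensemble MCMC samplers (ter Braak's DE-MC; this is a generic illustration, not the SAChES code, and the Gaussian toy posterior is an assumption):

```python
import numpy as np

def de_mc_step(chains, log_post, gamma=None, eps=1e-4, rng=None):
    """One differential-evolution Metropolis update of every chain in the ensemble."""
    rng = rng or np.random.default_rng()
    n_chains, d = chains.shape
    gamma = gamma or 2.38 / np.sqrt(2 * d)            # standard DE-MC scaling
    for i in range(n_chains):
        a, b = rng.choice([k for k in range(n_chains) if k != i], size=2, replace=False)
        proposal = chains[i] + gamma * (chains[a] - chains[b]) + eps * rng.standard_normal(d)
        if np.log(rng.uniform()) < log_post(proposal) - log_post(chains[i]):
            chains[i] = proposal                      # Metropolis accept
    return chains

# Toy usage: sample a 3-D standard Gaussian posterior with 12 loosely coupled chains
log_post = lambda x: -0.5 * np.sum(x**2)
chains = np.random.default_rng(1).standard_normal((12, 3))
for _ in range(200):
    chains = de_mc_step(chains, log_post)
print(chains.mean(axis=0))
```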
Calibration of an agricultural-hydrological model (RZWQM2) using surrogate global optimization
Xi, Maolong; Lu, Dan; Gui, Dongwei; ...
2016-11-27
Robust calibration of an agricultural-hydrological model is critical for simulating crop yield and water quality and making reasonable agricultural management decisions. However, calibration of agricultural-hydrological system models is challenging because of model complexity, the existence of strong parameter correlation, and significant computational requirements. Therefore, only a limited number of simulations can be allowed in any attempt to find a near-optimal solution within an affordable time, which greatly restricts the successful application of the model. The goal of this study is to locate the optimal solution of the Root Zone Water Quality Model (RZWQM2) given a limited simulation time, so as to improve the model simulation and help make rational and effective agricultural-hydrological decisions. To this end, we propose a computationally efficient global optimization procedure using sparse-grid based surrogates. We first used advanced sparse grid (SG) interpolation to construct a surrogate system of the actual RZWQM2, and then we calibrated the surrogate model using the global optimization algorithm Quantum-behaved Particle Swarm Optimization (QPSO). As the surrogate model is a polynomial with fast evaluation, it can be efficiently evaluated a sufficiently large number of times during the optimization, which facilitates the global search. We calibrated seven model parameters against five years of yield, drain flow, and NO3-N loss data from a subsurface-drained corn-soybean field in Iowa. Results indicate that an accurate surrogate model can be created for the RZWQM2 with a relatively small number of SG points (i.e., RZWQM2 runs). Compared to the conventional QPSO algorithm, our surrogate-based optimization method can achieve a smaller objective function value and better calibration performance using fewer expensive RZWQM2 executions, which greatly improves computational efficiency.
Calibration of an agricultural-hydrological model (RZWQM2) using surrogate global optimization
NASA Astrophysics Data System (ADS)
Xi, Maolong; Lu, Dan; Gui, Dongwei; Qi, Zhiming; Zhang, Guannan
2017-01-01
Robust calibration of an agricultural-hydrological model is critical for simulating crop yield and water quality and making reasonable agricultural management decisions. However, calibration of agricultural-hydrological system models is challenging because of model complexity, the existence of strong parameter correlation, and significant computational requirements. Therefore, only a limited number of simulations can be allowed in any attempt to find a near-optimal solution within an affordable time, which greatly restricts the successful application of the model. The goal of this study is to locate the optimal solution of the Root Zone Water Quality Model (RZWQM2) given a limited simulation time, so as to improve the model simulation and help make rational and effective agricultural-hydrological decisions. To this end, we propose a computationally efficient global optimization procedure using sparse-grid based surrogates. We first used advanced sparse grid (SG) interpolation to construct a surrogate system of the actual RZWQM2, and then we calibrated the surrogate model using the global optimization algorithm Quantum-behaved Particle Swarm Optimization (QPSO). As the surrogate model is a polynomial with fast evaluation, it can be efficiently evaluated a sufficiently large number of times during the optimization, which facilitates the global search. We calibrated seven model parameters against five years of yield, drain flow, and NO3-N loss data from a subsurface-drained corn-soybean field in Iowa. Results indicate that an accurate surrogate model can be created for the RZWQM2 with a relatively small number of SG points (i.e., RZWQM2 runs). Compared to the conventional QPSO algorithm, our surrogate-based optimization method can achieve a smaller objective function value and better calibration performance using fewer expensive RZWQM2 executions, which greatly improves computational efficiency.
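As a generic sketch of the surrogate-assisted global calibration idea (a polynomial surrogate fit to a limited budget of expensive runs, then searched with a cheap global method; this is not the sparse-grid/QPSO implementation used in the study, and the quadratic test "model" is an assumption):

```python
import numpy as np
from itertools import combinations_with_replacement

def expensive_model(theta):
    """Stand-in for an expensive RZWQM2-style run (assumed for illustration)."""
    return np.sum((theta - 0.3) ** 2) + 0.1 * np.sin(10 * theta).sum()

def poly_features(X, degree=2):
    """Polynomial basis up to the given total degree."""
    n, d = X.shape
    cols = [np.ones(n)]
    for k in range(1, degree + 1):
        for idx in combinations_with_replacement(range(d), k):
            cols.append(np.prod(X[:, idx], axis=1))
    return np.column_stack(cols)

rng = np.random.default_rng(0)
d = 3                                               # number of calibrated parameters
X = rng.uniform(0, 1, size=(40, d))                 # limited budget of expensive runs
y = np.array([expensive_model(x) for x in X])
coef, *_ = np.linalg.lstsq(poly_features(X), y, rcond=None)   # fit the cheap surrogate

# Global search on the surrogate only: dense random sampling stands in for QPSO here
cand = rng.uniform(0, 1, size=(200000, d))
best = cand[np.argmin(poly_features(cand) @ coef)]
print("surrogate optimum:", best, "true objective:", expensive_model(best))
```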
Integration of Extended MHD and Kinetic Effects in Global Magnetosphere Models
NASA Astrophysics Data System (ADS)
Germaschewski, K.; Wang, L.; Maynard, K. R. M.; Raeder, J.; Bhattacharjee, A.
2015-12-01
Computational models of Earth's geospace environment are an important tool to investigate the science of the coupled solar-wind -- magnetosphere -- ionosphere system, complementing satellite and ground observations with a global perspective. They are also crucial in understanding and predicting space weather, in particular under extreme conditions. Traditionally, global models have employed the one-fluid MHD approximation, which captures large-scale dynamics quite well. However, in Earth's nearly collisionless plasma environment it breaks down on small scales, where ion and electron dynamics and kinetic effects become important and greatly change the reconnection dynamics. A number of approaches have recently been taken to advance global modeling, e.g., including multiple ion species, adding Hall physics in a Generalized Ohm's Law, embedding local PIC simulations into a larger fluid domain, and also some work on simulating the entire system with hybrid or fully kinetic models, the latter however being too computationally expensive to be run at realistic parameters. We will present an alternate approach, i.e., a multi-fluid moment model that is derived rigorously from the Vlasov-Maxwell system. The advantage is that the computational cost remains manageable, as we are still solving fluid equations. While the evolution equation for each moment is exact, it depends on the next higher-order moment, so that truncating the hierarchy and closing the system to capture the essential kinetic physics is crucial. We implement 5-moment (density, momentum, scalar pressure) and 10-moment (including the pressure tensor) versions of the model, and use local approximations for the heat flux to close the system. We test these closures by local simulations where we can compare directly to PIC / hybrid codes, and employ them in global simulations using the next-generation OpenGGCM to contrast them with MHD / Hall-MHD results and compare with observations.
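For reference, a commonly used form of the five-moment equations per species s (heat flux neglected; this is the textbook closure, not necessarily the exact form implemented in OpenGGCM):

```latex
\frac{\partial n_s}{\partial t} + \nabla\cdot(n_s \mathbf{u}_s) = 0,
\qquad
m_s n_s\!\left(\frac{\partial \mathbf{u}_s}{\partial t} + \mathbf{u}_s\cdot\nabla\mathbf{u}_s\right)
  = q_s n_s\left(\mathbf{E} + \mathbf{u}_s\times\mathbf{B}\right) - \nabla p_s,
\qquad
\frac{\partial p_s}{\partial t} + \mathbf{u}_s\cdot\nabla p_s + \gamma\, p_s\,\nabla\cdot\mathbf{u}_s = 0 .
```

The ten-moment version evolves the full pressure tensor instead of the scalar p_s, with a local approximation for the heat-flux term providing the closure.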
Reducing the computational footprint for real-time BCPNN learning
Vogginger, Bernhard; Schüffny, René; Lansner, Anders; Cederström, Love; Partzsch, Johannes; Höppner, Sebastian
2015-01-01
The implementation of synaptic plasticity in neural simulation or neuromorphic hardware is usually very resource-intensive, often requiring a compromise between efficiency and flexibility. A versatile, but computationally-expensive plasticity mechanism is provided by the Bayesian Confidence Propagation Neural Network (BCPNN) paradigm. Building upon Bayesian statistics, and having clear links to biological plasticity processes, the BCPNN learning rule has been applied in many fields, ranging from data classification, associative memory, reward-based learning, probabilistic inference to cortical attractor memory networks. In the spike-based version of this learning rule the pre-, postsynaptic and coincident activity is traced in three low-pass-filtering stages, requiring a total of eight state variables, whose dynamics are typically simulated with the fixed step size Euler method. We derive analytic solutions allowing an efficient event-driven implementation of this learning rule. Further speedup is achieved by first rewriting the model which reduces the number of basic arithmetic operations per update to one half, and second by using look-up tables for the frequently calculated exponential decay. Ultimately, in a typical use case, the simulation using our approach is more than one order of magnitude faster than with the fixed step size Euler method. Aiming for a small memory footprint per BCPNN synapse, we also evaluate the use of fixed-point numbers for the state variables, and assess the number of bits required to achieve same or better accuracy than with the conventional explicit Euler method. All of this will allow a real-time simulation of a reduced cortex model based on BCPNN in high performance computing. More important, with the analytic solution at hand and due to the reduced memory bandwidth, the learning rule can be efficiently implemented in dedicated or existing digital neuromorphic hardware. PMID:25657618
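As an illustration of the event-driven idea described above: a generic exponentially decaying trace updated analytically only at spike times, with an optional look-up table for the decay factor. The time constant, increment, and table resolution are assumptions, not the BCPNN parameters:

```python
import numpy as np

TAU = 20.0                                    # trace time constant (ms), assumed
DT_LUT = 0.1                                  # look-up table resolution (ms), assumed
decay_lut = np.exp(-np.arange(0, 1000) * DT_LUT / TAU)   # precomputed decay factors

def decay(dt):
    """Exponential decay factor, read from the table when possible."""
    idx = int(round(dt / DT_LUT))
    return decay_lut[idx] if idx < len(decay_lut) else np.exp(-dt / TAU)

class EventDrivenTrace:
    """Low-pass-filtered spike trace updated only when an event arrives."""
    def __init__(self):
        self.z, self.t_last = 0.0, 0.0

    def on_spike(self, t, increment=1.0):
        self.z = self.z * decay(t - self.t_last) + increment   # analytic decay since last event
        self.t_last = t
        return self.z

trace = EventDrivenTrace()
for t in (5.0, 12.5, 13.0, 40.0):             # spike times in ms
    print(t, trace.on_spike(t))
```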
Calibration of an agricultural-hydrological model (RZWQM2) using surrogate global optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xi, Maolong; Lu, Dan; Gui, Dongwei
Robust calibration of an agricultural-hydrological model is critical for simulating crop yield and water quality and making reasonable agricultural management decisions. However, calibration of agricultural-hydrological system models is challenging because of model complexity, the existence of strong parameter correlation, and significant computational requirements. Therefore, only a limited number of simulations can be allowed in any attempt to find a near-optimal solution within an affordable time, which greatly restricts the successful application of the model. The goal of this study is to locate the optimal solution of the Root Zone Water Quality Model (RZWQM2) given a limited simulation time, so as to improve the model simulation and help make rational and effective agricultural-hydrological decisions. To this end, we propose a computationally efficient global optimization procedure using sparse-grid based surrogates. We first used advanced sparse grid (SG) interpolation to construct a surrogate system of the actual RZWQM2, and then we calibrated the surrogate model using the global optimization algorithm Quantum-behaved Particle Swarm Optimization (QPSO). As the surrogate model is a polynomial with fast evaluation, it can be efficiently evaluated a sufficiently large number of times during the optimization, which facilitates the global search. We calibrated seven model parameters against five years of yield, drain flow, and NO3-N loss data from a subsurface-drained corn-soybean field in Iowa. Results indicate that an accurate surrogate model can be created for the RZWQM2 with a relatively small number of SG points (i.e., RZWQM2 runs). Compared to the conventional QPSO algorithm, our surrogate-based optimization method can achieve a smaller objective function value and better calibration performance using fewer expensive RZWQM2 executions, which greatly improves computational efficiency.
Numerical Simulations of Hypersonic Boundary Layer Transition
NASA Astrophysics Data System (ADS)
Bartkowicz, Matthew David
Numerical schemes for supersonic flows tend to use large amounts of artificial viscosity for stability. This tends to damp out the small-scale structures in the flow. Recently some low-dissipation methods have been proposed which selectively eliminate the artificial viscosity in regions that do not require it. This work builds upon the low-dissipation method of Subbareddy and Candler, which uses the flux vector splitting method of Steger and Warming but identifies the dissipation portion in order to eliminate it. Computing accurate fluxes typically relies on large grid stencils or coupled linear systems that become computationally expensive to solve. Unstructured grids allow CFD solutions to be obtained on complex geometries; unfortunately, it then becomes difficult to create a large stencil or the coupled linear system. Accurate solutions require grids that quickly become too large to be feasible. In this thesis a method is proposed to obtain more accurate solutions using relatively local data, making it suitable for unstructured grids composed of hexahedral elements. Fluxes are reconstructed using local gradients to extend the range of data used. The method is then validated on several test problems. Simulations of boundary layer transition are then performed. An elliptic cone at Mach 8 is simulated based on an experiment at the Princeton Gasdynamics Laboratory. A simulated acoustic noise boundary condition is imposed to model the noisy conditions of the wind tunnel, and the transitioning boundary layer is observed. A computation of an isolated roughness element is done based on an experiment in Purdue's Mach 6 quiet wind tunnel. The mechanism for transition is identified as an instability in the upstream separation region and a comparison is made to experimental data. In the CFD a fully turbulent boundary layer is observed downstream.
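For reference, the Steger-Warming flux vector splitting on which the low-dissipation scheme builds, in its standard form (generic notation that may differ from the thesis):

```latex
\mathbf{F} = \mathbf{A}\,\mathbf{U}, \qquad
\mathbf{A} = \mathbf{R}\,\boldsymbol{\Lambda}\,\mathbf{R}^{-1}, \qquad
\boldsymbol{\Lambda}^{\pm} = \tfrac{1}{2}\!\left(\boldsymbol{\Lambda} \pm |\boldsymbol{\Lambda}|\right),
\qquad
\mathbf{F}^{\pm} = \mathbf{R}\,\boldsymbol{\Lambda}^{\pm}\,\mathbf{R}^{-1}\,\mathbf{U},
\qquad
\mathbf{F}_{i+1/2} = \mathbf{F}^{+}(\mathbf{U}_L) + \mathbf{F}^{-}(\mathbf{U}_R).
```

Low-dissipation variants isolate the upwind (dissipative) part of this interface flux and remove it in smooth regions where it is not needed for stability.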
Reliability based design optimization: Formulations and methodologies
NASA Astrophysics Data System (ADS)
Agarwal, Harish
Modern products ranging from simple components to complex systems should be designed to be optimal and reliable. The challenge of modern engineering is to ensure that manufacturing costs are reduced and design cycle times are minimized while achieving requirements for performance and reliability. If the market for the product is competitive, improved quality and reliability can generate very strong competitive advantages. Simulation based design plays an important role in designing almost any kind of automotive, aerospace, and consumer products under these competitive conditions. Single discipline simulations used for analysis are being coupled together to create complex coupled simulation tools. This investigation focuses on the development of efficient and robust methodologies for reliability based design optimization in a simulation based design environment. Original contributions of this research are the development of a novel efficient and robust unilevel methodology for reliability based design optimization, the development of an innovative decoupled reliability based design optimization methodology, the application of homotopy techniques in unilevel reliability based design optimization methodology, and the development of a new framework for reliability based design optimization under epistemic uncertainty. The unilevel methodology for reliability based design optimization is shown to be mathematically equivalent to the traditional nested formulation. Numerical test problems show that the unilevel methodology can reduce computational cost by at least 50% as compared to the nested approach. The decoupled reliability based design optimization methodology is an approximate technique to obtain consistent reliable designs at lesser computational expense. Test problems show that the methodology is computationally efficient compared to the nested approach. A framework for performing reliability based design optimization under epistemic uncertainty is also developed. A trust region managed sequential approximate optimization methodology is employed for this purpose. Results from numerical test studies indicate that the methodology can be used for performing design optimization under severe uncertainty.
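For context, the standard nested RBDO formulation that such methodologies reformulate (generic notation, not copied from the thesis):

```latex
\min_{\mathbf{d}} \; f(\mathbf{d})
\quad \text{s.t.} \quad
\Pr\!\left[g_i(\mathbf{d}, \mathbf{X}) \le 0\right] \le \Phi\!\left(-\beta_i^{t}\right), \quad i = 1,\dots,m,
```

where X collects the random variables and β_i^t is the target reliability index for the i-th limit state; each probability estimate normally requires an inner reliability analysis, which is why single-level (unilevel) and decoupled reformulations can cut the computational cost substantially.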
Reducing the computational footprint for real-time BCPNN learning.
Vogginger, Bernhard; Schüffny, René; Lansner, Anders; Cederström, Love; Partzsch, Johannes; Höppner, Sebastian
2015-01-01
The implementation of synaptic plasticity in neural simulation or neuromorphic hardware is usually very resource-intensive, often requiring a compromise between efficiency and flexibility. A versatile, but computationally-expensive plasticity mechanism is provided by the Bayesian Confidence Propagation Neural Network (BCPNN) paradigm. Building upon Bayesian statistics, and having clear links to biological plasticity processes, the BCPNN learning rule has been applied in many fields, ranging from data classification, associative memory, reward-based learning, probabilistic inference to cortical attractor memory networks. In the spike-based version of this learning rule the pre-, postsynaptic and coincident activity is traced in three low-pass-filtering stages, requiring a total of eight state variables, whose dynamics are typically simulated with the fixed step size Euler method. We derive analytic solutions allowing an efficient event-driven implementation of this learning rule. Further speedup is achieved by first rewriting the model which reduces the number of basic arithmetic operations per update to one half, and second by using look-up tables for the frequently calculated exponential decay. Ultimately, in a typical use case, the simulation using our approach is more than one order of magnitude faster than with the fixed step size Euler method. Aiming for a small memory footprint per BCPNN synapse, we also evaluate the use of fixed-point numbers for the state variables, and assess the number of bits required to achieve same or better accuracy than with the conventional explicit Euler method. All of this will allow a real-time simulation of a reduced cortex model based on BCPNN in high performance computing. More important, with the analytic solution at hand and due to the reduced memory bandwidth, the learning rule can be efficiently implemented in dedicated or existing digital neuromorphic hardware.
Software for Acoustic Rendering
NASA Technical Reports Server (NTRS)
Miller, Joel D.
2003-01-01
SLAB is a software system that can be run on a personal computer to simulate an acoustic environment in real time. SLAB was developed to enable computational experimentation in which one can exert low-level control over a variety of signal-processing parameters, related to spatialization, for conducting psychoacoustic studies. Among the parameters that can be manipulated are the number and position of reflections, the fidelity (that is, the number of taps in finite-impulse-response filters), the system latency, and the update rate of the filters. Another goal in the development of SLAB was to provide an inexpensive means of dynamic synthesis of virtual audio over headphones, without need for special-purpose signal-processing hardware. SLAB has a modular, object-oriented design that affords the flexibility and extensibility needed to accommodate a variety of computational experiments and signal-flow structures. SLAB's spatial renderer has a fixed signal-flow architecture corresponding to a set of parallel signal paths from each source to a listener. This fixed architecture can be regarded as a compromise that optimizes efficiency at the expense of complete flexibility. Such a compromise is necessary, given the design goal of enabling computational psychoacoustic experimentation on inexpensive personal computers.
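As a rough illustration of the fixed signal-flow idea (a set of parallel source-to-listener paths, each with its own delay and short FIR filter), the sketch below sums a few such paths with NumPy; the function name, path parameters, and filter coefficients are invented and do not represent SLAB's actual renderer or API.

```python
import numpy as np

def render_paths(src, paths, fs=44100):
    """Sum a set of parallel source-to-listener paths, each modelled as a
    pure delay plus a short FIR filter (a stand-in for HRTF/reflection
    filtering). `paths` is a list of (delay_seconds, fir_taps) pairs;
    an illustrative signal flow, not the actual SLAB architecture."""
    max_delay = max(int(round(d * fs)) for d, _ in paths)
    out = np.zeros(len(src) + max_delay + max(len(h) for _, h in paths) - 1)
    for delay_s, fir in paths:
        d = int(round(delay_s * fs))
        filtered = np.convolve(src, np.asarray(fir))
        out[d:d + len(filtered)] += filtered
    return out

# Example: a direct path and two attenuated, delayed reflections.
fs = 44100
t = np.arange(fs // 10) / fs
src = np.sin(2 * np.pi * 440 * t)                 # 100 ms test tone
paths = [
    (0.000, [1.0]),                               # direct path
    (0.012, [0.35, 0.15]),                        # first reflection
    (0.023, [0.20, 0.10, 0.05]),                  # second reflection
]
mix = render_paths(src, paths, fs)
print(mix.shape)
```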
Toward an in-situ analytics and diagnostics framework for earth system models
NASA Astrophysics Data System (ADS)
Anantharaj, Valentine; Wolf, Matthew; Rasch, Philip; Klasky, Scott; Williams, Dean; Jacob, Rob; Ma, Po-Lun; Kuo, Kwo-Sen
2017-04-01
The development roadmaps for many earth system models (ESMs) aim for a globally cloud-resolving model targeting the pre-exascale and exascale systems of the future. The ESMs will also incorporate more complex physics, chemistry, and biology, thereby vastly increasing the fidelity of the information content simulated by the model. We will then be faced with an unprecedented volume of simulation output that would need to be processed and analyzed concurrently to derive valuable scientific results. We are already at this threshold with the current generation of ESMs running higher-resolution simulations. Currently, the nominal I/O throughput in the Community Earth System Model (CESM) via the Parallel IO (PIO) library is around 100 MB/s. High-frequency output alone would add roughly 1 GB per simulated hour, translating to roughly 4 minutes of wallclock time per simulated day, 24.33 wallclock hours per simulated model year, and 1,752,000 core-hours of charge per simulated model year on the Titan supercomputer at the Oak Ridge Leadership Computing Facility. There is also a pending need for a 3X larger volume of simulation output. Meanwhile, many ESMs use instrument simulators to run forward models to compare model simulations against satellite and ground-based instruments, such as radars and radiometers. The CFMIP Observation Simulator Package (COSP) is used in CESM as well as in the Accelerated Climate Model for Energy (ACME), one of the ESMs specifically targeting current and emerging leadership-class computing platforms. These simulators can be computationally expensive, accounting for as much as 30% of the computational cost. Hence the data are often written to output files that are then used for offline calculations. Again, the I/O bottleneck becomes a limitation. Detection and attribution studies also use large volumes of data for pattern recognition and feature extraction to analyze weather and climate phenomena such as tropical cyclones, atmospheric rivers, and blizzards. It is evident that ESMs need an in-situ framework to decouple the diagnostics and analytics from the prognostics and physics computations of the models, so that the diagnostic computations can be performed concurrently without limiting model throughput. We are designing a science-driven online analytics framework for earth system models. Our approach is to adopt several data workflow technologies, such as the Adaptable IO System (ADIOS) being developed under the U.S. Exascale Computing Project (ECP), and to integrate them to allow for extreme-performance IO, in situ workflow integration, and science-driven analytics and visualization, all in an easy-to-use computational framework. This will allow science teams to write data 100-1000 times faster and to move seamlessly from post-processing the output for validation and verification purposes to performing these calculations in situ. We can readily envision a near-term future in which earth system models like ACME and CESM will have to address not only the volume of the data but also its velocity. The earth system models of the future in the exascale era, as they incorporate more complex physics at higher resolutions, will be able to analyze more simulation content without having to compromise targeted model throughput.
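The quoted I/O figures can be checked with a short back-of-the-envelope calculation; note that the assumed job size of 72,000 cores (e.g., 4,500 Titan nodes with 16 cores each) is an inference made here to reproduce the quoted core-hour charge, not a number stated in the abstract.

```python
# Back-of-the-envelope check of the quoted I/O cost figures (1 GB = 1000 MB).
io_rate_mb_s = 100.0                 # nominal PIO throughput
extra_output_gb_per_sim_hour = 1.0   # high-frequency output volume

sec_per_sim_hour = extra_output_gb_per_sim_hour * 1000 / io_rate_mb_s   # 10 s
min_per_sim_day = sec_per_sim_hour * 24 / 60                            # 4 min
hours_per_sim_year = min_per_sim_day * 365 / 60                         # ~24.33 h

cores = 72_000   # assumed job size (e.g. 4,500 Titan nodes x 16 cores)
core_hours_per_sim_year = hours_per_sim_year * cores

print(f"{sec_per_sim_hour:.1f} s/sim-hour, {min_per_sim_day:.1f} min/sim-day, "
      f"{hours_per_sim_year:.2f} h/sim-year, {core_hours_per_sim_year:,.0f} core-hours")
```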
Directional view interpolation for compensation of sparse angular sampling in cone-beam CT.
Bertram, Matthias; Wiegert, Jens; Schafer, Dirk; Aach, Til; Rose, Georg
2009-07-01
In flat detector cone-beam computed tomography and related applications, sparse angular sampling frequently leads to characteristic streak artifacts. To overcome this problem, it has been suggested to generate additional views by means of interpolation. The practicality of this approach is investigated in combination with a dedicated method for angular interpolation of 3-D sinogram data. For this purpose, a novel dedicated shape-driven directional interpolation algorithm based on a structure tensor approach is developed. Quantitative evaluation shows that this method clearly outperforms conventional scene-based interpolation schemes. Furthermore, the image quality trade-offs associated with the use of interpolated intermediate views are systematically evaluated for simulated and clinical cone-beam computed tomography data sets of the human head. It is found that utilization of directionally interpolated views significantly reduces streak artifacts and noise, at the expense of small introduced image blur.
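The structure-tensor ingredient of such shape-driven directional interpolation can be sketched as follows; this is a generic orientation estimate on a 2-D slice, with invented function names and smoothing scales, and it is not the authors' algorithm, which also performs the actual interpolation along the estimated directions in the 3-D sinogram.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def local_orientation(img, sigma_grad=1.0, sigma_tensor=3.0):
    """Estimate the dominant local orientation of 2-D data (e.g. a
    detector-row slice of the 3-D sinogram) from the smoothed structure
    tensor; directional interpolation would then average along this
    orientation rather than straight across view angles."""
    gy, gx = np.gradient(gaussian_filter(img, sigma_grad))
    jxx = gaussian_filter(gx * gx, sigma_tensor)
    jyy = gaussian_filter(gy * gy, sigma_tensor)
    jxy = gaussian_filter(gx * gy, sigma_tensor)
    # Orientation of the eigenvector belonging to the smaller eigenvalue,
    # i.e. the direction along which intensity varies least.
    theta = 0.5 * np.arctan2(2.0 * jxy, jxx - jyy) + np.pi / 2.0
    coherence = np.sqrt((jxx - jyy) ** 2 + 4.0 * jxy ** 2) / (jxx + jyy + 1e-12)
    return theta, coherence

# Example on a synthetic sinogram-like pattern with a diagonal structure.
y, x = np.mgrid[0:128, 0:128]
sino = np.sin(0.2 * (x + 0.5 * y))
theta, coh = local_orientation(sino)
print(theta.shape, float(coh.mean()))
```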
NASA Astrophysics Data System (ADS)
Hamza, Karim; Shalaby, Mohamed
2014-09-01
This article presents a framework for simulation-based design optimization of computationally expensive problems, where economizing the generation of sample designs is highly desirable. One popular approach for such problems is efficient global optimization (EGO), where an initial set of design samples is used to construct a kriging model, which is then used to generate new 'infill' sample designs in regions of the search space where there is a high expectation of improvement. This article addresses one of the limitations of EGO, namely that generating infill samples can become a difficult optimization problem in its own right, and also allows multiple samples to be generated at a time in order to take advantage of parallel computing when evaluating the new samples. The proposed approach is tested on analytical functions and then applied to the vehicle crashworthiness design of a full Geo Metro model undergoing frontal crash conditions.
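For reference, a minimal sketch of the standard expected-improvement infill criterion that EGO builds on is given below, with a naive "take the top-k candidates" batch selection standing in for the article's multi-sample infill strategy; the toy objective, kernel choice, and candidate grid are assumptions.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expected_improvement(X_cand, gp, f_best):
    """Standard EGO infill criterion: expected amount by which a candidate
    is predicted to improve on the best observed objective value."""
    mu, sigma = gp.predict(X_cand, return_std=True)
    sigma = np.maximum(sigma, 1e-12)
    z = (f_best - mu) / sigma
    return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

# Toy 1-D expensive function and a small initial design.
f = lambda x: np.sin(3 * x) + 0.5 * x          # stand-in for the simulation
X = np.array([[0.2], [1.0], [2.4], [3.0]])
y = f(X).ravel()

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)

# Pick several infill points at once (for parallel evaluation) by taking
# the top-k EI candidates from a dense grid, a simple stand-in for the
# batch-infill strategies the article develops.
cand = np.linspace(0.0, 3.0, 301).reshape(-1, 1)
ei = expected_improvement(cand, gp, y.min())
batch = cand[np.argsort(ei)[-3:]]
print(batch.ravel())
```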
Gaussian process surrogates for failure detection: A Bayesian experimental design approach
NASA Astrophysics Data System (ADS)
Wang, Hongqiao; Lin, Guang; Li, Jinglai
2016-05-01
An important task of uncertainty quantification is to identify the probability of undesired events, in particular system failures, caused by various sources of uncertainty. In this work we consider the construction of Gaussian process surrogates for failure detection and failure probability estimation. We focus on the situation in which the underlying computer models are extremely expensive, so that determining the sampling points in the state space is of essential importance. We formulate the problem as an optimal experimental design problem for Bayesian inference of the limit state (i.e., the failure boundary) and propose an efficient numerical scheme to solve the resulting optimization problem. Notably, the proposed limit-state inference method is capable of determining multiple sampling points at a time, and thus it is well suited for problems where multiple computer simulations can be performed in parallel. The accuracy and performance of the proposed method are demonstrated by both academic and practical examples.
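The general idea (a Gaussian process surrogate of the limit state, refined at the points where the sign of the limit state is most uncertain, then used to estimate the failure probability by Monte Carlo) can be sketched as follows; the toy limit state and the simple misclassification-probability criterion used to pick new points are stand-ins, not the paper's Bayesian experimental design, which additionally selects multiple points per iteration.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Toy limit state: failure when g(x) < 0.
g = lambda x: 4.0 - x[:, 0] ** 2 - x[:, 1]

rng = np.random.default_rng(1)
X_mc = rng.normal(size=(20_000, 2))          # Monte Carlo population
X = rng.uniform(-3, 3, size=(12, 2))         # initial (expensive) evaluations
y = g(X)

for _ in range(20):                          # active-learning loop
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True).fit(X, y)
    mu, sigma = gp.predict(X_mc, return_std=True)
    # Probability that the surrogate misclassifies the sign of g at each
    # Monte Carlo point; add the most ambiguous point to the design
    # (a misclassification criterion, used here as a simple stand-in for
    # the paper's Bayesian experimental design).
    p_mis = norm.cdf(-np.abs(mu) / np.maximum(sigma, 1e-12))
    x_new = X_mc[np.argmax(p_mis)]
    X = np.vstack([X, x_new])
    y = np.append(y, g(x_new[None, :]))

pf_surrogate = np.mean(mu < 0)
pf_true = np.mean(g(X_mc) < 0)
print(f"surrogate P_f ~ {pf_surrogate:.4f}, true P_f ~ {pf_true:.4f}")
```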
Enhanced Wang Landau sampling of adsorbed protein conformations.
Radhakrishna, Mithun; Sharma, Sumit; Kumar, Sanat K
2012-03-21
Using computer simulations to model the folding of proteins into their native states is computationally expensive due to the extraordinarily low degeneracy of the ground state. In this paper, we develop an efficient way to sample these folded conformations using Wang Landau sampling coupled with the configurational bias method (which uses an unphysical "temperature" that lies between the collapse and folding transition temperatures of the protein). This method speeds up the folding process by roughly an order of magnitude over existing algorithms for the sequences studied. We apply this method to study the adsorption of intrinsically disordered hydrophobic polar protein fragments on a hydrophobic surface. We find that these fragments, which are unstructured in the bulk, acquire secondary structure upon adsorption onto a strong hydrophobic surface. Apparently, the presence of a hydrophobic surface allows these random coil fragments to fold by providing hydrophobic contacts that were lost in protein fragmentation. © 2012 American Institute of Physics
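A minimal Wang-Landau flat-histogram sampler is sketched below on a periodic 1-D Ising chain rather than the hydrophobic-polar lattice protein; the system, move set, flatness threshold, and stopping criterion are generic illustrative choices, and the configurational-bias moves and unphysical sampling "temperature" described in the abstract are not reproduced.

```python
import math, random

# Minimal Wang-Landau flat-histogram sampler on a periodic 1-D Ising chain
# (a stand-in for the lattice-protein system in the paper).
L = 16
spins = [1] * L
energy = lambda s: -sum(s[i] * s[(i + 1) % L] for i in range(L))

ln_g, hist = {}, {}          # running log density of states and visit histogram
ln_f = 1.0                   # modification factor, halved (in log) when flat
E = energy(spins)
random.seed(0)

while ln_f > 1e-6:
    for _ in range(10_000):
        i = random.randrange(L)
        dE = 2 * spins[i] * (spins[(i - 1) % L] + spins[(i + 1) % L])
        E_new = E + dE
        # Accept with probability min(1, g(E_old) / g(E_new)).
        if random.random() < math.exp(min(0.0, ln_g.get(E, 0.0) - ln_g.get(E_new, 0.0))):
            spins[i] *= -1
            E = E_new
        ln_g[E] = ln_g.get(E, 0.0) + ln_f
        hist[E] = hist.get(E, 0) + 1
    counts = list(hist.values())
    if min(counts) > 0.8 * (sum(counts) / len(counts)):   # flatness check
        ln_f *= 0.5
        hist = {}

print({e: round(v - min(ln_g.values()), 2) for e, v in sorted(ln_g.items())})
```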