Multidimensional generalized-ensemble algorithms for complex systems.
Mitsutake, Ayori; Okamoto, Yuko
2009-06-07
We give general formulations of the multidimensional multicanonical algorithm, simulated tempering, and replica-exchange method. We generalize the original potential energy function E(0) by adding any physical quantity V of interest as a new energy term. These multidimensional generalized-ensemble algorithms then perform a random walk not only in E(0) space but also in V space. Among the three algorithms, the replica-exchange method is the easiest to perform because the weight factor is just a product of regular Boltzmann-like factors, while the weight factors for the multicanonical algorithm and simulated tempering are not a priori known. We give a simple procedure for obtaining the weight factors for these two latter algorithms, which uses a short replica-exchange simulation and the multiple-histogram reweighting techniques. As an example of the application of these algorithms, we have performed a two-dimensional replica-exchange simulation and a two-dimensional simulated-tempering simulation using an alpha-helical peptide system. From these simulations, we study the helix-coil transitions of the peptide in the gas phase and in aqueous solution.
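For readers who want the mechanics, the exchange criterion described above reduces to a short Metropolis test. A minimal Python sketch (function and variable names are ours, not the paper's) for swapping two replicas whose parameter sets (beta, lambda) define the generalized Hamiltonian H = E0 + lambda*V:

import math, random

def accept_2d_swap(beta_i, lam_i, E0_i, V_i, beta_j, lam_j, E0_j, V_j):
    """Metropolis test for exchanging configurations between two replicas
    with Hamiltonians H_m = E0 + lam_m * V at inverse temperatures beta_m.
    Returns True if the swap is accepted."""
    # Log-ratio of the product weights exp(-beta*H) after vs. before the swap.
    delta = ((beta_j - beta_i) * (E0_j - E0_i)
             + (beta_j * lam_j - beta_i * lam_i) * (V_j - V_i))
    return delta >= 0.0 or random.random() < math.exp(delta)

With all lam_m set to zero this reduces to the familiar one-dimensional temperature-exchange criterion.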
Mori, Yoshiharu; Okumura, Hisashi
2015-12-05
Simulated tempering (ST) is a useful method to enhance sampling in molecular simulations. When ST is used, the Metropolis algorithm, which satisfies the detailed balance condition, is usually applied to calculate the transition probability. Recently, an alternative method that satisfies the global balance condition instead of the detailed balance condition has been proposed by Suwa and Todo. In this study, an ST method with the Suwa-Todo algorithm is proposed. Molecular dynamics simulations with ST are performed with three algorithms (the Metropolis, heat bath, and Suwa-Todo algorithms) to calculate the transition probability. Among the three algorithms, the Suwa-Todo algorithm yields the highest acceptance ratio and the shortest autocorrelation time, suggesting that sampling by an ST simulation with the Suwa-Todo algorithm is the most efficient. In addition, because the acceptance ratio of the Suwa-Todo algorithm is higher than that of the Metropolis algorithm, the number of temperature states can be reduced by 25% for the Suwa-Todo algorithm when compared with the Metropolis algorithm.
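The Suwa-Todo update itself requires the geometric weight-allocation construction of the original paper, but the two baselines it is compared against are easy to state. A minimal Python sketch of a simulated-tempering temperature move under the Metropolis and heat-bath rules, assuming the weight factors g_m are given (all names are ours):

import math, random

def st_log_weights(E, betas, g):
    """Log simulated-tempering weights, log w_m = -beta_m * E + g_m, for the
    current configuration energy E; the g_m are the ST weight factors."""
    return [-b * E + gm for b, gm in zip(betas, g)]

def metropolis_temperature_move(m, logw):
    """Propose a jump to a random neighboring temperature index and accept
    with probability min(1, w_n / w_m)."""
    n = m + random.choice([-1, 1])
    if n < 0 or n >= len(logw):
        return m
    accept = random.random() < math.exp(min(0.0, logw[n] - logw[m]))
    return n if accept else m

def heat_bath_temperature_move(logw):
    """Draw the new temperature index from the full conditional
    p(n) = w_n / sum_k w_k, independently of the current index."""
    mx = max(logw)
    w = [math.exp(lw - mx) for lw in logw]
    r = random.random() * sum(w)
    acc = 0.0
    for n, wn in enumerate(w):
        acc += wn
        if r < acc:
            return n
    return len(w) - 1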
Efficient Simulation of Explicitly Solvated Proteins in the Well-Tempered Ensemble.
Deighan, Michael; Bonomi, Massimiliano; Pfaendtner, Jim
2012-07-10
Herein, we report a significant reduction in the cost of combined parallel tempering and metadynamics simulations (PTMetaD). The efficiency boost is achieved using the recently proposed well-tempered ensemble (WTE) algorithm. We studied the convergence of PTMetaD-WTE conformational sampling and free energy reconstruction for an explicitly solvated 20-residue tryptophan-cage protein (trp-cage). A set of PTMetaD-WTE simulations was compared to a corresponding standard PTMetaD simulation. The properties of PTMetaD-WTE and the convergence of the calculations were compared, and the roles of the number of replicas, the total simulation time, and the adjustable WTE parameter γ were studied.
Diffusion control for a tempered anomalous diffusion system using fractional-order PI controllers.
Juan Chen; Zhuang, Bo; Chen, YangQuan; Cui, Baotong
2017-05-09
This paper is concerned with the diffusion control problem for a tempered anomalous diffusion system based on fractional-order PI controllers. The contribution of this paper is to introduce fractional-order PI controllers into the tempered anomalous diffusion system for mobile actuator motion and spraying control. For the proposed control force, a convergence analysis of the system described by the mobile actuator dynamical equations is presented based on Lyapunov stability arguments. Moreover, a new Centroidal Voronoi Tessellation (CVT) algorithm based on fractional-order PI controllers, henceforth called the FOPI-based CVT algorithm, is provided together with a modified simulation platform called Fractional-Order Diffusion Mobile Actuator-Sensor 2-Dimension Fractional-Order Proportional Integral (FO-Diff-MAS2D-FOPI). Finally, extensive numerical simulations of the tempered anomalous diffusion process are presented to verify the effectiveness of the proposed fractional-order PI controllers.
Shen, Lin; Xie, Liangxu; Yang, Mingjun
2017-04-01
Conformational sampling under a rugged energy landscape is always a challenge in computer simulations. The recently developed integrated tempering sampling method, together with its selective variant (SITS), has emerged as a powerful tool for exploring the free energy landscape or functional motions of various systems. The estimation of weighting factors constitutes a critical step in these methods and requires accurate calculation of the partition function ratios between different thermodynamic states. In this work, we propose a new adaptive update algorithm to compute the weighting factors based on the weighted histogram analysis method (WHAM). The adaptive-WHAM algorithm with SITS is then applied to study the thermodynamic properties of several representative peptide systems solvated in an explicit water box. The performance of the new algorithm is validated in simulations of these solvated peptide systems. We anticipate more applications of this coupled optimization and production algorithm to other complicated systems such as biochemical reactions in solution.
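Estimating such weighting factors comes down to log partition-function ratios from pooled samples. A minimal self-consistent WHAM iteration in Python, in its multistate-reweighting form (variable names are ours; the paper's adaptive update differs in how and when the estimates are refreshed):

import numpy as np

def wham_free_energies(E, n_samples, betas, n_iter=1000, tol=1e-7):
    """Self-consistent WHAM iteration for the dimensionless free energies
    f_k (log partition-function ratios) of K temperature states.
    E: potential energies of all samples pooled from all simulations;
    n_samples: sample count N_k per state; betas: inverse temperatures."""
    E = np.asarray(E, float)
    n_samples = np.asarray(n_samples, float)
    betas = np.asarray(betas, float)
    f = np.zeros(len(betas))
    logw = -np.outer(betas, E)                  # (K, M): -beta_k * E_x
    for _ in range(n_iter):
        # log sum_l N_l exp(f_l - beta_l * E_x), one value per sample x
        denom = np.logaddexp.reduce(
            np.log(n_samples)[:, None] + f[:, None] + logw, axis=0)
        f_new = -np.logaddexp.reduce(logw - denom[None, :], axis=1)
        f_new -= f_new[0]                       # fix the gauge f_0 = 0
        if np.max(np.abs(f_new - f)) < tol:
            return f_new
        f = f_new
    return f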
Rauscher, Sarah; Neale, Chris; Pomès, Régis
2009-10-13
Generalized-ensemble algorithms in temperature space have become popular tools to enhance conformational sampling in biomolecular simulations. A random walk in temperature leads to a corresponding random walk in potential energy, which can be used to cross over energetic barriers and overcome the problem of quasi-nonergodicity. In this paper, we introduce two novel methods: simulated tempering distributed replica sampling (STDR) and virtual replica exchange (VREX). These methods are designed to address the practical issues inherent in the replica exchange (RE), simulated tempering (ST), and serial replica exchange (SREM) algorithms. RE requires a large, dedicated, and homogeneous cluster of CPUs to function efficiently when applied to complex systems. ST and SREM both have the drawback of requiring extensive initial simulations, possibly adaptive, for the calculation of weight factors or potential energy distribution functions. STDR and VREX alleviate the need for lengthy initial simulations, and for synchronization and extensive communication between replicas. Both methods are therefore suitable for distributed or heterogeneous computing platforms. We perform an objective comparison of all five algorithms in terms of both implementation issues and sampling efficiency. We use disordered peptides in explicit water as test systems, for a total simulation time of over 42 μs. Efficiency is defined in terms of both structural convergence and temperature diffusion, and we show that these definitions of efficiency are in fact correlated. Importantly, we find that ST-based methods exhibit faster temperature diffusion and correspondingly faster convergence of structural properties compared to RE-based methods. Within the RE-based methods, VREX is superior to both SREM and RE. On the basis of our observations, we conclude that ST is ideal for simple systems, while STDR is well-suited for complex systems.
Jo, Sunhwan; Jiang, Wei
2015-12-01
Replica Exchange with Solute Tempering (REST2) is a powerful sampling enhancement algorithm for molecular dynamics (MD) in that it needs a significantly smaller number of replicas yet achieves higher sampling efficiency than the standard temperature exchange algorithm. In this paper, we extend the applicability of REST2 to quantitative biophysical simulations through a robust and generic implementation in the highly scalable MD software NAMD. The rescaling procedure for the force field parameters controlling the REST2 "hot region" is implemented into NAMD at the source code level. A user can conveniently select the hot region through VMD and write the selection information into a PDB file. The rescaling keyword/parameter is exposed through the NAMD Tcl script interface, which enables an on-the-fly simulation parameter change. Our implementation of REST2 is within communication-enabled Tcl script built on top of Charm++; thus, the communication overhead of an exchange attempt is vanishingly small. Such a generic implementation facilitates seamless cooperation between REST2 and other modules of NAMD to provide enhanced sampling for complex biomolecular simulations. Three challenging applications, including a native REST2 simulation of a peptide folding-unfolding transition, free energy perturbation/REST2 for the absolute binding affinity of a protein-ligand complex, and umbrella sampling/REST2 Hamiltonian exchange for free energy landscape calculation, were carried out on an IBM Blue Gene/Q supercomputer to demonstrate the efficacy of REST2 based on the present implementation.
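The core of the REST2 rescaling can be illustrated compactly. In the published prescription, hot-region intramolecular interactions are scaled by lambda = beta_m/beta_0 and hot-cold cross interactions by sqrt(lambda). A hedged Python sketch of the nonbonded part (not the NAMD source-level implementation; dihedral scaling is omitted, and geometric-mean combination rules are assumed):

import math

def rest2_scale(charges, epsilons, hot, beta_m, beta_0):
    """Rescale nonbonded force-field parameters for one REST2 replica.
    hot: set of atom indices in the hot region; lam = beta_m / beta_0.
    Scaling hot-atom charges by sqrt(lam) makes hot-hot electrostatics
    scale by lam and hot-cold by sqrt(lam); with geometric-mean combination
    rules the same holds for LJ when hot-atom epsilons are scaled by lam."""
    lam = beta_m / beta_0
    q = [qi * math.sqrt(lam) if i in hot else qi
         for i, qi in enumerate(charges)]
    eps = [ei * lam if i in hot else ei for i, ei in enumerate(epsilons)]
    return q, eps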
Linking Well-Tempered Metadynamics Simulations with Experiments
Barducci, Alessandro; Bonomi, Massimiliano; Parrinello, Michele
2010-01-01
Linking experiments with the atomistic resolution provided by molecular dynamics simulations can shed light on the structure and dynamics of protein-disordered states. The sampling limitations of classical molecular dynamics can be overcome using metadynamics, which is based on the introduction of a history-dependent bias on a small number of suitably chosen collective variables. Even if such bias distorts the probability distribution of the other degrees of freedom, the equilibrium Boltzmann distribution can be reconstructed using a recently developed reweighting algorithm. Quantitative comparison with experimental data is thus possible. Here we show the potential of this combined approach by characterizing the conformational ensemble explored by a 13-residue helix-forming peptide by means of a well-tempered metadynamics/parallel tempering approach and comparing the reconstructed nuclear magnetic resonance scalar couplings with experimental data.
Reconstructing the equilibrium Boltzmann distribution from well-tempered metadynamics.
Bonomi, M; Barducci, A; Parrinello, M
2009-08-01
Metadynamics is a widely used and successful method for reconstructing the free-energy surface of complex systems as a function of a small number of suitably chosen collective variables. This is achieved by biasing the dynamics of the system. The bias acting on the collective variables distorts the probability distribution of the other variables. Here we present a simple reweighting algorithm for recovering the unbiased probability distribution of any variable from a well-tempered metadynamics simulation. We show the efficiency of the reweighting procedure by reconstructing the distribution of the four backbone dihedral angles of alanine dipeptide from two- and even one-dimensional metadynamics simulations.
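The exact estimator of this paper evolves together with the bias; a widely used simplification, once the well-tempered bias has nearly converged, is to weight each frame by exp(V_bias(s_i)/kT) using the final bias. A sketch of that simplified reweighting (an approximation in the same spirit, not the paper's full time-dependent algorithm):

import numpy as np

def reweight_final_bias(bias_traj, obs_traj, kT, bins=50):
    """Weight each frame by exp(V_bias(s_i)/kT), with V_bias evaluated from
    the final (nearly converged) well-tempered bias along the trajectory,
    and histogram an arbitrary observable with those weights."""
    bias_traj = np.asarray(bias_traj, float)
    w = np.exp((bias_traj - bias_traj.max()) / kT)   # shift for stability
    hist, edges = np.histogram(obs_traj, bins=bins, weights=w, density=True)
    return hist, edges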
A Bootstrap Metropolis-Hastings Algorithm for Bayesian Analysis of Big Data.
Liang, Faming; Kim, Jinsu; Song, Qifan
2016-01-01
Markov chain Monte Carlo (MCMC) methods have proven to be a very powerful tool for analyzing data of complex structures. However, their computer-intensive nature, which typically requires a large number of iterations and a complete scan of the full dataset for each iteration, precludes their use for big data analysis. In this paper, we propose the so-called bootstrap Metropolis-Hastings (BMH) algorithm, which provides a general framework for taming powerful MCMC methods for big data analysis; that is, the full-data log-likelihood is replaced by a Monte Carlo average of log-likelihoods calculated in parallel from multiple bootstrap samples. The BMH algorithm possesses an embarrassingly parallel structure and avoids repeated scans of the full dataset, and is thus feasible for big data problems. Compared to the popular divide-and-combine method, BMH can be generally more efficient as it can asymptotically integrate the whole data information into a single simulation run. The BMH algorithm is very flexible. Like the Metropolis-Hastings algorithm, it can serve as a basic building block for developing advanced MCMC algorithms that are feasible for big data problems. This is illustrated in the paper by the tempering BMH algorithm, which can be viewed as a combination of parallel tempering and the BMH algorithm. BMH can also be used for model selection and optimization by combining with reversible jump MCMC and simulated annealing, respectively.
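The central substitution is easy to show in code: the Metropolis-Hastings acceptance ratio uses a bootstrap-averaged log-likelihood instead of the full-data one. A hedged Python sketch (serial where a real run would evaluate the bootstrap samples in parallel, and omitting the rescaling needed when bootstrap samples are smaller than the full dataset):

import numpy as np

def bmh_loglik(theta, bootstrap_samples, loglik):
    """BMH surrogate: average the log-likelihood over k bootstrap samples
    (drawn once and reused; evaluated in parallel in a real run)."""
    return np.mean([loglik(theta, s) for s in bootstrap_samples])

def bmh_chain(theta0, bootstrap_samples, loglik, log_prior, prop_sd, n_iter):
    """Random-walk bootstrap Metropolis-Hastings."""
    rng = np.random.default_rng(0)
    theta = np.asarray(theta0, float)
    lp = bmh_loglik(theta, bootstrap_samples, loglik) + log_prior(theta)
    chain = []
    for _ in range(n_iter):
        prop = theta + rng.normal(0.0, prop_sd, size=theta.shape)
        lp_prop = (bmh_loglik(prop, bootstrap_samples, loglik)
                   + log_prior(prop))
        if np.log(rng.random()) < lp_prop - lp:   # MH acceptance
            theta, lp = prop, lp_prop
        chain.append(theta.copy())
    return np.asarray(chain)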
Off-diagonal expansion quantum Monte Carlo
Albash, Tameem; Wagenbreth, Gene; Hen, Itay
2017-12-01
We propose a Monte Carlo algorithm designed to simulate quantum as well as classical systems at equilibrium, bridging the algorithmic gap between quantum and classical thermal simulation algorithms. The method is based on a decomposition of the quantum partition function that can be viewed as a series expansion about its classical part. We argue that the algorithm not only provides a theoretical advancement in the field of quantum Monte Carlo simulations, but is optimally suited to tackle quantum many-body systems that exhibit a range of behaviors from "fully quantum" to "fully classical," in contrast to many existing methods. We demonstrate the advantages, sometimes by orders of magnitude, of the technique by comparing it against existing state-of-the-art schemes such as path integral quantum Monte Carlo and stochastic series expansion. We also illustrate how our method allows for the unification of quantum and classical thermal parallel tempering techniques into a single algorithm and discuss its practical significance.
Hyper-Parallel Tempering Monte Carlo Method and Its Applications
Yan, Qiliang; de Pablo, Juan
2000-03-01
A new generalized hyper-parallel tempering Monte Carlo molecular simulation method is presented for the study of complex fluids. The method is particularly useful for simulation of many-molecule complex systems, where rough energy landscapes and inherently long characteristic relaxation times can pose formidable obstacles to effective sampling of relevant regions of configuration space. The method combines several key elements from expanded ensemble formalisms, parallel tempering, open ensemble simulations, configurational bias techniques, and histogram reweighting analysis of results. It is found to significantly accelerate the diffusion of a complex system through phase space. In this presentation, we demonstrate the effectiveness of the new method by implementing it in grand canonical ensembles for a Lennard-Jones fluid, for the restricted primitive model of electrolyte solutions (RPM), and for polymer solutions and blends. Our results indicate that the new algorithm is capable of overcoming the large free energy barriers associated with phase transitions, thereby greatly facilitating the simulation of coexistence properties. It is also shown that the method can be orders of magnitude more efficient than previously available techniques. More importantly, the method is relatively simple and can be incorporated into existing simulation codes with minor effort.
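For the grand canonical case mentioned above, a configuration swap must account for particle numbers as well as energies. A minimal sketch of the acceptance test, under our simplifying assumption that kinetic prefactors are absorbed into the activities so they cancel in the ratio:

import math, random

def accept_gc_swap(b1, mu1, U1, N1, b2, mu2, U2, N2):
    """Exchange whole configurations (coordinates and particle number)
    between two grand-canonical replicas at (beta_1, mu_1) and (beta_2,
    mu_2); kinetic prefactors are assumed absorbed into the activities."""
    delta = (b1 - b2) * (U1 - U2) - (b1 * mu1 - b2 * mu2) * (N1 - N2)
    return delta >= 0.0 or random.random() < math.exp(delta)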
Karimi, Hamed; Rosenberg, Gili; Katzgraber, Helmut G.
2017-10-01
We present and apply a general-purpose, multistart algorithm for improving the performance of low-energy samplers used for solving optimization problems. The algorithm iteratively fixes the value of a large portion of the variables to values that have a high probability of being optimal. The resulting problems are smaller and less connected, and samplers tend to give better low-energy samples for these problems. The algorithm is trivially parallelizable since each start in the multistart algorithm is independent, and could be applied to any heuristic solver that can be run multiple times to give a sample. We present results for several classes of hard problems solved using simulated annealing, path-integral quantum Monte Carlo, parallel tempering with isoenergetic cluster moves, and a quantum annealer, and show that the success metrics and the scaling are improved substantially. When combined with this algorithm, the quantum annealer's scaling was substantially improved for native Chimera graph problems. In addition, with this algorithm the scaling of the time to solution of the quantum annealer is comparable to the Hamze-de Freitas-Selby algorithm on the weak-strong cluster problems introduced by Boixo et al. Parallel tempering with isoenergetic cluster moves was able to consistently solve three-dimensional spin glass problems with 8000 variables when combined with our method, whereas without our method it could not solve any.
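One round of the variable-fixing idea can be sketched in a few lines: run the underlying sampler from many starts, then clamp every variable whose value is (nearly) unanimous across the low-energy samples and re-solve the reduced problem. Illustrative Python for Ising-type (+/-1) variables; the threshold and sampler are placeholders, not the paper's tuned settings:

def fix_consensus_variables(sampler, n_starts, n_vars, agree=0.9):
    """One reduction round: draw low-energy samples from many independent
    starts of `sampler` (simulated annealing, PIQMC, parallel tempering,
    a quantum annealer, ...), then fix every +/-1 variable whose value is
    (nearly) unanimous across the samples."""
    samples = [sampler() for _ in range(n_starts)]
    fixed = {}
    for i in range(n_vars):
        up = sum(1 for s in samples if s[i] == +1)
        if up >= agree * n_starts:
            fixed[i] = +1
        elif n_starts - up >= agree * n_starts:
            fixed[i] = -1
    return fixed   # clamp these, then re-solve the smaller residual problem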
Ilott, Andrew J; Palucha, Sebastian; Hodgkinson, Paul; Wilson, Mark R
2013-10-10
The well-tempered, smoothly converging form of the metadynamics algorithm has been implemented in classical molecular dynamics simulations and used to obtain an estimate of the free energy surface explored by the molecular rotations in the plastic crystal, octafluoronaphthalene. The biased simulations explore the full energy surface extremely efficiently, more than 4 orders of magnitude faster than unbiased molecular dynamics runs. The metadynamics collective variables used have also been expanded to include the simultaneous orientations of three neighboring octafluoronaphthalene molecules. Analysis of the resultant three-dimensional free energy surface, which is sampled to a very high degree despite its significant complexity, demonstrates that there are strong correlations between the molecular orientations. Although this correlated motion is of limited applicability in terms of exploiting dynamical motion in octafluoronaphthalene, the approach used is extremely well suited to the investigation of the function of crystalline molecular machines.
Study of the temperature configuration of parallel tempering for the traveling salesman problem
Hasegawa, Manabu
The effective temperature configuration of parallel tempering (PT) in finite-time optimization is studied for the solution of the traveling salesman problem. An experimental analysis is conducted to determine the relative importance of the two characteristic temperatures: the specific-heat-peak temperature referred to in the general guidelines and the effective intermediate temperature identified in a recent study on simulated annealing (SA). The results show that operation near the former has no notable significance, contrary to conventional belief, but that operation near the latter plays a crucial role in fulfilling the optimization function of PT. The method shares the same origin of effectiveness as SA and SA-related algorithms.
Mori, Yoshiharu; Okamoto, Yuko
2013-02-01
A simulated tempering method for calculating the free energy of chemical reactions, referred to as simulated-tempering umbrella sampling, is proposed. First-principles molecular dynamics simulations with this simulated tempering were performed to study the intramolecular proton transfer reaction of malonaldehyde in aqueous solution. Conformational sampling in reaction coordinate space can be easily enhanced with this method, and the free energy along a reaction coordinate can be calculated accurately. Moreover, simulated-tempering umbrella sampling provides trajectory data more efficiently than the conventional umbrella sampling method.
Algorithm theoretical basis for GEDI level-4A footprint above ground biomass density.
Kellner, J. R.; Armston, J.; Blair, J. B.; Duncanson, L.; Hancock, S.; Hofton, M. A.; Luthcke, S. B.; Marselis, S.; Tang, H.; Dubayah, R.
2017-12-01
The Global Ecosystem Dynamics Investigation is a NASA Earth-Venture-2 mission that will place a multi-beam waveform lidar instrument on the International Space Station. GEDI data will provide globally representative measurements of vertical height profiles (waveforms) and estimates of above ground carbon stocks throughout the planet's temperate and tropical regions. Here we describe the current algorithm theoretical basis for the L4A footprint above ground biomass data product. The L4A data product is above ground biomass density (AGBD, Mg ha⁻¹) at the scale of individual GEDI footprints (25 m diameter). Footprint AGBD is derived from statistical models that relate waveform height metrics to field-estimated above ground biomass. The field estimates are from long-term permanent plot inventories in which all free-standing woody plants greater than a diameter size threshold have been identified and mapped. We simulated GEDI waveforms from discrete-return airborne lidar data using the GEDI waveform simulator. We associated height metrics from simulated waveforms with field-estimated AGBD at 61 sites in temperate and tropical regions of North and South America, Europe, Africa, Asia and Australia. We evaluated the ability of empirical and physically-based regression and machine learning models to predict AGBD at the footprint level. Our analysis benchmarks the performance of these models in terms of site- and region-specific accuracy and transferability using a globally comprehensive calibration and validation dataset.
Doll, J.; Dupuis, P.; Nyquist, P.
2017-02-08
Parallel tempering, or replica exchange, is a popular method for simulating complex systems. The idea is to run parallel simulations at different temperatures, and at a given swap rate exchange configurations between the parallel simulations. From the perspective of large deviations it is optimal to let the swap rate tend to infinity, and it is possible to construct a corresponding simulation scheme, known as infinite swapping. In this paper we propose a novel use of large deviations for empirical measures for a more detailed analysis of the infinite swapping limit in the setting of continuous time jump Markov processes. Using the large deviations rate function and associated stochastic control problems we consider a diagnostic based on temperature assignments, which can be easily computed during a simulation. We show that the convergence of this diagnostic to its a priori known limit is a necessary condition for the convergence of infinite swapping. The rate function is also used to investigate the impact of asymmetries in the underlying potential landscape, and where in the state space poor sampling is most likely to occur.
Qiao, Qin; Zhang, Hou-Dao; Huang, Xuhui
2016-04-01
Simulated tempering (ST) is a widely used enhanced-sampling method for molecular dynamics simulations. As an expanded ensemble method, ST is a combination of canonical ensembles at different temperatures, and the acceptance probability of cross-temperature transitions is determined by both the temperature difference and the weights of each temperature. One popular way to obtain the weights is to adopt the free energy of each canonical ensemble, which achieves uniform sampling in temperature space. However, this uniform distribution in temperature space may not be optimal, since high temperatures do not always speed up the conformational transitions of interest, as anti-Arrhenius kinetics are prevalent in protein and RNA folding. Here, we propose a new method, Enhancing Pairwise State-transition Weights (EPSW), to obtain the optimal weights by minimizing the round-trip time for transitions among different metastable states at the temperature of interest in ST. The novelty of the EPSW algorithm lies in explicitly considering the kinetics of conformational transitions when optimizing the weights of different temperatures. We further demonstrate the power of EPSW in three different systems: a simple two-temperature model, a two-dimensional model for protein folding with anti-Arrhenius kinetics, and alanine dipeptide. The results from these three systems showed that the new algorithm can substantially accelerate the transitions between conformational states of interest in the ST expanded ensemble and further facilitate the convergence of thermodynamics compared to the widely used free energy weights. We anticipate that this algorithm will be particularly useful for studying functional conformational changes of biological systems where the initial and final states are often known from structural biology experiments.
Exploring the Energy Landscapes of Protein Folding Simulations with Bayesian Computation
Burkoff, Nikolas S.; Várnai, Csilla; Wells, Stephen A.; Wild, David L.
2012-01-01
Nested sampling is a Bayesian sampling technique developed to explore probability distributions localized in an exponentially small area of the parameter space. The algorithm provides both posterior samples and an estimate of the evidence (marginal likelihood) of the model. The nested sampling algorithm also provides an efficient way to calculate free energies and the expectation value of thermodynamic observables at any temperature, through a simple post processing of the output. Previous applications of the algorithm have yielded large efficiency gains over other sampling techniques, including parallel tempering. In this article, we describe a parallel implementation of the nested sampling algorithm and its application to the problem of protein folding in a Gō-like force field of empirical potentials that were designed to stabilize secondary structure elements in room-temperature simulations. We demonstrate the method by conducting folding simulations on a number of small proteins that are commonly used for testing protein-folding procedures. A topological analysis of the posterior samples is performed to produce energy landscape charts, which give a high-level description of the potential energy surface for the protein folding simulations. These charts provide qualitative insights into both the folding process and the nature of the model and force field used.
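The nested sampling loop itself is short: keep N live points, repeatedly discard the lowest-likelihood one while accumulating its evidence contribution, and replace it with a prior draw constrained to higher likelihood. A toy Python version (the constrained draw here is naive rejection sampling, which can stall for tight constraints; production codes use constrained MCMC instead):

import math

def nested_sampling(log_l, prior_sample, n_live=100, n_iter=1000):
    """Skeleton nested-sampling loop. log_l(x): log-likelihood of a point;
    prior_sample(): a draw from the prior. Returns the log evidence."""
    live = [prior_sample() for _ in range(n_live)]
    log_z = -math.inf
    log_w = math.log(1.0 - math.exp(-1.0 / n_live))   # first shell width
    for _ in range(n_iter):
        worst = min(range(n_live), key=lambda k: log_l(live[k]))
        l_star = log_l(live[worst])
        # log_z = logaddexp(log_z, l_star + log_w)
        a, b = log_z, l_star + log_w
        log_z = max(a, b) + math.log1p(math.exp(-abs(a - b)))
        while True:                    # replace worst point, need L > L*
            x = prior_sample()
            if log_l(x) > l_star:
                live[worst] = x
                break
        log_w -= 1.0 / n_live          # shells shrink by exp(-1/n_live)
    return log_z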
Distance between configurations in Markov chain Monte Carlo simulations
Fukuma, Masafumi; Matsumoto, Nobuyuki; Umeda, Naoya
2017-12-01
For a given Markov chain Monte Carlo algorithm, we introduce a distance between two configurations that quantifies the difficulty of transition from one configuration to the other. We argue that the distance takes a universal form for the class of algorithms that generate local moves in configuration space. We explicitly calculate the distance for the Langevin algorithm and show that it indeed has the desired and expected properties of a distance. We further show that the distance for a multimodal distribution is dramatically reduced from a large value by the introduction of a tempering method. We also argue that, when the original distribution is highly multimodal with a large number of degenerate vacua, an anti-de Sitter-like geometry naturally emerges in the extended configuration space.
Parallel tempering for the traveling salesman problem
Percus, Allon; Wang, Richard; Hyman, Jeffrey
We explore the potential of parallel tempering as a combinatorial optimization method, applying it to the traveling salesman problem. We compare simulation results of parallel tempering with a benchmark implementation of simulated annealing, and study how different choices of parameters affect the relative performance of the two methods. We find that a straightforward implementation of parallel tempering can outperform simulated annealing in several crucial respects. When parameters are chosen appropriately, both methods yield close approximation to the actual minimum distance for an instance with 200 nodes. However, parallel tempering yields more consistently accurate results when a series of independent simulations are performed. Our results suggest that parallel tempering might offer a simple but powerful alternative to simulated annealing for combinatorial optimization problems.
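A straightforward implementation of the kind benchmarked above needs only two ingredients: Metropolis 2-opt moves within each replica and occasional swaps between neighboring temperatures. A compact Python sketch (parameter choices and structure are ours, not the paper's):

import math, random

def tour_length(tour, dist):
    return sum(dist[tour[i - 1]][tour[i]] for i in range(len(tour)))

def two_opt_step(tour, dist, beta):
    """One Metropolis 2-opt move: reverse a random segment of the tour."""
    i, j = sorted(random.sample(range(len(tour)), 2))
    if j - i < 2:
        return tour
    new = tour[:i] + tour[i:j][::-1] + tour[j:]
    d = tour_length(new, dist) - tour_length(tour, dist)
    if d <= 0 or random.random() < math.exp(-beta * d):
        return new
    return tour

def parallel_tempering_tsp(dist, betas, sweeps, swap_every=10):
    n = len(dist)
    tours = [random.sample(range(n), n) for _ in betas]
    for sweep in range(sweeps):
        for r, beta in enumerate(betas):
            tours[r] = two_opt_step(tours[r], dist, beta)
        if sweep % swap_every == 0:
            for r in range(len(betas) - 1):   # neighbor swap attempts
                d = (tour_length(tours[r], dist)
                     - tour_length(tours[r + 1], dist))
                arg = min(0.0, (betas[r] - betas[r + 1]) * d)
                if random.random() < math.exp(arg):
                    tours[r], tours[r + 1] = tours[r + 1], tours[r]
    return min(tours, key=lambda t: tour_length(t, dist))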
We analyzed 10 established and 4 new satellite reflectance algorithms for estimating chlorophyll-a (Chl-a) in a temperate reservoir in southwest Ohio using coincident hyperspectral aircraft imagery and dense water truth collected within one hour of image acquisition to develop si...
A Comparison of Techniques for Scheduling Earth-Observing Satellites
Globus, Al; Crawford, James; Lohn, Jason; Pryor, Anna
2004-01-01
Scheduling observations by coordinated fleets of Earth Observing Satellites (EOS) involves large search spaces, complex constraints, and poorly understood bottlenecks, conditions where evolutionary and related algorithms are often effective. However, there are many such algorithms and the best one to use is not clear. Here we compare multiple variants of the genetic algorithm: stochastic hill climbing, simulated annealing, squeaky wheel optimization, and iterated sampling on ten realistically-sized EOS scheduling problems. Schedules are represented by a permutation (non-temporal ordering) of the observation requests. A simple deterministic scheduler assigns times and resources to each observation request in the order indicated by the permutation, discarding those that violate the constraints created by previously scheduled observations. Simulated annealing performs best. Random mutation outperforms a more 'intelligent' mutator. Furthermore, the best mutator, by a small margin, was a novel approach we call temperature-dependent random sampling, which makes large changes in the early stages of evolution and smaller changes towards the end of the search.
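The permutation representation plus greedy decoder is simple to sketch, as is a mutation operator whose strength decays with the annealing temperature, in the spirit of the temperature-dependent random sampling described above. Hedged Python; the conflict test is a placeholder for the real constraint model:

import random

def decode(permutation, requests, conflicts):
    """Greedy decoder: take requests in permutation order; schedule each one
    unless conflicts() says it clashes with what is already scheduled."""
    scheduled = []
    for idx in permutation:
        if not conflicts(requests[idx], scheduled):
            scheduled.append(requests[idx])
    return scheduled

def mutate(permutation, temperature, t_max):
    """Temperature-dependent random sampling: many swaps early in the
    anneal, few near the end."""
    p = list(permutation)
    n_swaps = max(1, int(len(p) * temperature / t_max))
    for _ in range(n_swaps):
        i, j = random.randrange(len(p)), random.randrange(len(p))
        p[i], p[j] = p[j], p[i]
    return p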
Application of the DMRG in two dimensions: a parallel tempering algorithm
Hu, Shijie; Zhao, Jize; Zhang, Xuefeng; Eggert, Sebastian
The Density Matrix Renormalization Group (DMRG) is known to be a powerful algorithm for treating one-dimensional systems. When the DMRG is applied in two dimensions, however, the convergence becomes much less reliable, and "metastable states" typically appear that are unfortunately quite robust even when a very high number of DMRG states is kept. To overcome this problem we have now successfully developed a parallel tempering DMRG algorithm. Similar to parallel tempering in quantum Monte Carlo, this algorithm allows the systematic switching of DMRG states between different model parameters, which is very efficient for solving convergence problems. Using this method we have determined the phase diagram of the xxz model on the anisotropic triangular lattice, which can be realized by hardcore bosons in optical lattices.
Modeling and Simulation of Quenching and Tempering Process in steels
Deng, Xiaohu; Ju, Dongying
Quenching and tempering (Q&T) is a combined heat treatment process used to achieve maximum toughness and ductility at a specified hardness and strength. It is important to develop a mathematical model of the quenching and tempering process so that mechanical property requirements can be met at low cost. This paper presents a modified model to predict structural evolution and hardness distribution during the quenching and tempering of steels. The model takes into account tempering parameters, carbon content, and isothermal and non-isothermal transformations. Moreover, precipitation of transition carbides, decomposition of retained austenite, and precipitation of cementite can be simulated separately. Hardness distributions of the quenched and tempered workpiece are predicted by an experimental regression equation. To validate the model, it is employed to predict the tempering of 80MnCr5 steel. The predicted precipitation dynamics of transition carbides and cementite are consistent with previous experimental and simulated results from the literature. The model is then implemented within the framework of the simulation code COSMAP to simulate microstructure, stress, and distortion in the heat-treated component, and applied to simulate the Q&T process of J55 steel. The calculated results show good agreement with the experimental ones, indicating that the model is effective for simulating the Q&T process of steels.
Wang, Audrey; Price, David T.
2007-03-01
A simple integrated algorithm was developed to relate global climatology to distributions of tree plant functional types (PFT). Multivariate cluster analysis was performed to analyze the statistical homogeneity of the climate space occupied by individual tree PFTs. Forested regions identified from the satellite-based GLC2000 classification were separated into tropical, temperate, and boreal sub-PFTs for use in the Canadian Terrestrial Ecosystem Model (CTEM). Global data sets of monthly minimum temperature, growing degree days, an index of climatic moisture, and estimated PFT cover fractions were then used as variables in the cluster analysis. The statistical results for individual PFT clusters were found consistent with other global-scale classifications of dominant vegetation. As an improvement of the quantification of the climatic limitations on PFT distributions, the results also demonstrated overlapping of PFT cluster boundaries that reflected vegetation transitions, for example, between tropical and temperate biomes. The resulting global database should provide a better basis for simulating the interaction of climate change and terrestrial ecosystem dynamics using global vegetation models.
[Simulation on the seasonal growth patterns of grassland plant communities in northern China].
Zhang, Li; Zheng, Yuan-Run
2008-10-01
Soil moisture is the key factor limiting the productivity of grassland in northern China, which ranges from arid to subhumid regions. In this paper, the seasonal and annual growth, foliage projective cover (FPC), evaporative coefficient (k), and net primary productivity (NPP) of 7 types of grasslands in North China were simulated using a simple model based on well-established ecological processes of water balance and climatic data collected at 460 sites over 40 years. The observed NPPs were used to validate the model, and the simulated NPPs were in high agreement with the observed NPPs. The simulated k, NPP, and FPC decreased from east to west in temperate grasslands, and from southeast to northwest on the Qinghai-Tibet Plateau, reflecting the moisture gradient in northern China. Alpine meadow had the highest k, NPP, and FPC of the 7 types of grasslands; alpine steppe had the second highest FPC but an NPP similar to that of temperate steppe; and the three simulated parameters of temperate desert were the smallest. The simulated results suggested that the livestock density should be lower than 5.2, 2.3, 3.6, 2.1, 1.0, 0.6, and 0.2 sheep unit x hm(-2), while the coverage of rehabilitated vegetation should be about 93%, 79%, 56%, 50%, 44%, 38%, and 37% in alpine meadow, alpine steppe, temperate meadow steppe, temperate steppe, temperate desert steppe, temperate steppe desert, and temperate desert, respectively.
Parallel tempering Monte Carlo simulations of lysozyme orientation on charged surfaces
Xie, Yun; Zhou, Jian; Jiang, Shaoyi
2010-02-01
In this work, the parallel tempering Monte Carlo (PTMC) algorithm is applied to accurately and efficiently identify the global-minimum-energy orientation of a protein adsorbed on a surface in a single simulation. When applying the PTMC method to simulate lysozyme orientation on charged surfaces, it is found that lysozyme is easily adsorbed on negatively charged surfaces with "side-on" and "back-on" orientations. When driven by dominant electrostatic interactions, lysozyme tends to be adsorbed on negatively charged surfaces with the side-on orientation, for which the active site of lysozyme faces sideways. The side-on orientation agrees well with the experimental results where the adsorbed orientation of lysozyme is determined by electrostatic interactions. As the contribution from van der Waals interactions gradually dominates, the back-on orientation becomes the preferred one. For this orientation, the active site of lysozyme faces outward, which conforms to the experimental results where the orientation of adsorbed lysozyme is co-determined by electrostatic and van der Waals interactions. It is also found that, despite its net positive charge, lysozyme can be adsorbed on positively charged surfaces with both "end-on" and back-on orientations, owing to the nonuniform charge distribution over the lysozyme surface and the screening effect from ions in solution. The PTMC simulation method provides a way to determine the preferred orientation of proteins on surfaces for biosensor and biomaterial applications.
Well-Tempered Metadynamics: A Smoothly Converging and Tunable Free-Energy Method
Barducci, Alessandro; Bussi, Giovanni; Parrinello, Michele
2008-01-01
We present a method for determining the free-energy dependence on a selected number of collective variables using an adaptive bias. The formalism provides a unified description which has metadynamics and canonical sampling as limiting cases. Convergence and errors can be rigorously and easily controlled. The parameters of the simulation can be tuned so as to focus the computational effort only on the physically relevant regions of the order parameter space. The algorithm is tested on the reconstruction of an alanine dipeptide free-energy landscape.
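The rule is compact enough to demonstrate on a toy problem: Gaussians are deposited along the trajectory with height damped by exp(-V(s)/(kB*ΔT)), so the bias converges smoothly and the free energy follows from F(s) = -(T+ΔT)/ΔT · V(s). A self-contained Python toy on a 1D double well (units with kB = 1; all parameter values are illustrative, not from the paper):

import math, random

def wtmetad_double_well(n_steps=50000, kT=1.0, dT=9.0, w0=0.1,
                        sigma=0.2, stride=250):
    """Toy well-tempered metadynamics: Metropolis dynamics of one particle
    in the double well U(s) = (s^2 - 1)^2, depositing Gaussians whose
    height is damped by exp(-V(s)/dT) (kB = 1, so dT plays the role of
    kB*DeltaT).  Returns the Gaussian centers and heights; the free energy
    estimate follows from F(s) ~ -(kT + dT)/dT * V(s)."""
    centers, heights = [], []

    def bias(s):   # O(number of hills) per call; fine for a toy
        return sum(h * math.exp(-0.5 * ((s - c) / sigma) ** 2)
                   for c, h in zip(centers, heights))

    def U(s):
        return (s * s - 1.0) ** 2

    s = -1.0
    for step in range(n_steps):
        if step % stride == 0:
            heights.append(w0 * math.exp(-bias(s) / dT))
            centers.append(s)
        t = s + random.gauss(0.0, 0.1)          # trial displacement
        dE = (U(t) + bias(t)) - (U(s) + bias(s))
        if dE <= 0.0 or random.random() < math.exp(-dE / kT):
            s = t
    return centers, heights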
Sidler, Dominik; Cristòfol-Clough, Michael; Riniker, Sereina
2017-06-13
Replica-exchange enveloping distribution sampling (RE-EDS) allows the efficient estimation of free-energy differences between multiple end-states from a single molecular dynamics (MD) simulation. In EDS, a reference state is sampled, which can be tuned by two types of parameters, i.e., smoothness parameter(s) and energy offsets, such that all end-states are sufficiently sampled. However, the choice of these parameters is not trivial. Replica exchange (RE) or parallel tempering is a widely applied technique to enhance sampling. By combining EDS with the RE technique, the parameter choice problem can be simplified and the challenge shifted toward an optimal distribution of the replicas in the smoothness-parameter space. The choice of a certain replica distribution can alter the sampling efficiency significantly. In this work, global round-trip time optimization (GRTO) algorithms are tested for use in RE-EDS simulations. In addition, a local round-trip time optimization (LRTO) algorithm is proposed for systems with slowly adapting environments, where a reliable estimate for the round-trip time is challenging to obtain. The optimization algorithms were applied to RE-EDS simulations of a system of nine small-molecule inhibitors of phenylethanolamine N-methyltransferase (PNMT). The energy offsets were determined using our recently proposed parallel energy-offset (PEOE) estimation scheme. While the multistate GRTO algorithm yielded the best replica distribution for the ligands in water, the multistate LRTO algorithm was found to be the method of choice for the ligands in complex with PNMT. With this, the 36 alchemical free-energy differences between the nine ligands were calculated successfully from a single RE-EDS simulation 10 ns in length. Thus, RE-EDS presents an efficient method for the estimation of relative binding free energies.
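The EDS reference state at the heart of this method has a known closed form, V_R = -(1/(beta*s)) ln Σ_i exp(-beta*s*(V_i - E_i)), where s is the smoothness parameter and the E_i are the energy offsets. A small, numerically stable Python sketch of that formula:

import math

def eds_reference_energy(V, E_offsets, s, beta):
    """EDS reference-state potential
    V_R = -(1/(beta*s)) * ln( sum_i exp(-beta*s*(V_i - E_i)) ),
    evaluated with a log-sum-exp shift for numerical stability.
    V: end-state energies of the current configuration; 0 < s <= 1."""
    args = [-beta * s * (v - e) for v, e in zip(V, E_offsets)]
    m = max(args)
    return -(m + math.log(sum(math.exp(a - m) for a in args))) / (beta * s)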
Free energy calculation of permeant-membrane interactions using molecular dynamics simulations.
Elvati, Paolo; Violi, Angela
2012-01-01
Nanotoxicology, the science concerned with the safe use of nanotechnology and nanostructure design for biological applications, is a field of research that has recently received great attention, as a result of the rapid growth in nanotechnology. Many nanostructures are of a scale and chemical composition similar to many biomolecular environments, and recent papers have reported evident toxicity of selected nanoparticles. Molecular simulations can help develop a mechanistic understanding of how structural properties affect bioactivity. In this chapter, we describe how to compute the free energy of interactions between cellular membranes and benzene, the main constituent of some toxic carbonaceous particles, with well-tempered metadynamics. This algorithm reconstructs the free energy surface and accelerates rare events in a coarse-grained representation of the system.
Lu, Chao; Li, Xubin; Wu, Dongsheng; Zheng, Lianqing; Yang, Wei
2016-01-12
In aqueous solution, solute conformational transitions are governed by intimate interplays of the fluctuations of solute-solute, solute-water, and water-water interactions. To promote molecular fluctuations to enhance sampling of essential conformational changes, a common strategy is to construct an expanded Hamiltonian through a series of Hamiltonian perturbations and thereby broaden the distribution of certain interactions of focus. Due to a lack of active sampling of the configuration response to Hamiltonian transitions, it is challenging for common expanded Hamiltonian methods to robustly explore solvent-mediated rare conformational events. The orthogonal space sampling (OSS) scheme, as exemplified by the orthogonal space random walk and orthogonal space tempering methods, provides a general framework for synchronous acceleration of slow configuration responses. To more effectively sample conformational transitions in aqueous solution, in this work, we devised a generalized orthogonal space tempering (gOST) algorithm. Specifically, in the Hamiltonian perturbation part, a solvent-accessible-surface-area-dependent term is introduced to implicitly perturb near-solute water-water fluctuations; more importantly, in the orthogonal space response part, the generalized force order parameter is extended to a two-dimensional order parameter set, in which essential solute-solvent and solute-solute components are treated separately. The gOST algorithm is evaluated through a molecular dynamics simulation study on the explicitly solvated deca-alanine (Ala10) peptide. On the basis of a fully automated sampling protocol, the gOST simulation enabled repetitive folding and unfolding of the solvated peptide within a single continuous trajectory and allowed for detailed constructions of Ala10 folding/unfolding free energy surfaces. The gOST result reveals that solvent cooperative fluctuations play a pivotal role in Ala10 folding/unfolding transitions. In addition, our assessment analysis suggests that because essential conformational events are mainly driven by the compensating fluctuations of essential solute-solvent and solute-solute interactions, commonly employed "predictive" sampling methods are unlikely to be effective on this seemingly "simple" system. The gOST development presented in this paper illustrates how to employ the OSS scheme for physics-based sampling method designs.
Simulation models can be used to make management decisions when properly parameterized. This study aimed to parameterize the ALMANAC (Agricultural Land Management Alternatives with Numerical Assessment Criteria) crop simulation model for dry bean in the semi-arid temperate areas of Mexico. The par...
Chodera, John D; Shirts, Michael R
2011-11-21
The widespread popularity of replica exchange and expanded ensemble algorithms for simulating complex molecular systems in chemistry and biophysics has generated much interest in discovering new ways to enhance the phase space mixing of these protocols in order to improve sampling of uncorrelated configurations. Here, we demonstrate how both of these classes of algorithms can be considered as special cases of Gibbs sampling within a Markov chain Monte Carlo framework. Gibbs sampling is a well-studied scheme in the field of statistical inference in which different random variables are alternately updated from conditional distributions. While the update of the conformational degrees of freedom by Metropolis Monte Carlo or molecular dynamics unavoidably generates correlated samples, we show how judicious updating of the thermodynamic state indices--corresponding to thermodynamic parameters such as temperature or alchemical coupling variables--can substantially increase mixing while still sampling from the desired distributions. We show how state update methods in common use can lead to suboptimal mixing, and present some simple, inexpensive alternatives that can increase mixing of the overall Markov chain, reducing simulation times necessary to obtain estimates of the desired precision. These improved schemes are demonstrated for several common applications, including an alchemical expanded ensemble simulation, parallel tempering, and multidimensional replica exchange umbrella sampling.
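The Gibbs-sampling view suggests one improvement that is easy to code: instead of proposing only a neighboring temperature and applying a Metropolis test, draw the new state index directly from its full conditional distribution. A minimal Python sketch for an expanded ensemble with reduced energies u_k(x) and log weights g_k (names are ours):

import numpy as np

def gibbs_state_update(reduced_energies, g, rng=np.random.default_rng()):
    """Draw a new thermodynamic state index from its full conditional,
    p(k|x) ~ exp(-u_k(x) + g_k), rather than from a neighbor-only
    Metropolis proposal.  reduced_energies: u_k(x) of the current
    configuration in every state k; g: log weights of the ensemble."""
    logp = -np.asarray(reduced_energies, float) + np.asarray(g, float)
    logp -= logp.max()                 # shift for numerical stability
    p = np.exp(logp)
    p /= p.sum()
    return rng.choice(len(p), p=p)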
Population Annealing Monte Carlo for Frustrated Systems
Amey, Christopher; Machta, Jonathan
Population annealing is a sequential Monte Carlo algorithm that efficiently simulates equilibrium systems with rough free energy landscapes such as spin glasses and glassy fluids. A large population of configurations is initially thermalized at high temperature and then cooled to low temperature according to an annealing schedule. The population is kept in thermal equilibrium at every annealing step via resampling configurations according to their Boltzmann weights. Population annealing is comparable to parallel tempering in terms of efficiency, but has several distinct and useful features. In this talk I will give an introduction to population annealing and present recent progress in understanding its equilibration properties and optimizing it for spin glasses. Results from large-scale population annealing simulations for the Ising spin glass in 3D and 4D will be presented.
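The resampling step that keeps the population in equilibrium fits in a few lines. A hedged Python sketch of one annealing step (multinomial resampling back to a fixed population size; the subsequent MCMC sweeps at the new temperature are omitted):

import numpy as np

def anneal_step(configs, energies, beta_old, beta_new,
                rng=np.random.default_rng()):
    """One population-annealing step: reweight the population from beta_old
    to beta_new and resample it, replicating each configuration in
    proportion to its normalized weight exp(-(beta_new - beta_old)*E_i)."""
    logw = -(beta_new - beta_old) * np.asarray(energies, float)
    w = np.exp(logw - logw.max())      # shift for numerical stability
    w /= w.sum()
    counts = rng.multinomial(len(configs), w)
    return [configs[i] for i, c in enumerate(counts) for _ in range(c)]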
Communication: Multiple atomistic force fields in a single enhanced sampling simulation
Hoang Viet, Man; Derreumaux, Philippe; Nguyen, Phuong H.
2015-07-01
The main concerns of biomolecular dynamics simulations are the convergence of the conformational sampling and the dependence of the results on the force fields. While the first issue can be addressed by employing enhanced sampling techniques such as simulated tempering or replica exchange molecular dynamics, repeating these simulations with different force fields is very time consuming. Here, we propose an automatic method that includes different force fields in a single advanced sampling simulation. Conformational sampling using three all-atom force fields is enhanced by simulated tempering, and by formulating the weight parameters of the simulated tempering method in terms of the energy fluctuations, the system is able to perform a random walk in both temperature and force field spaces. The method is first demonstrated on a 1D system and then validated by the folding of the 10-residue chignolin peptide in explicit water.
Fuzzy Naive Bayesian model for medical diagnostic decision support.
Wagholikar, Kavishwar B; Vijayraghavan, Sundararajan; Deshpande, Ashok W
2009-01-01
This work relates to the development of computational algorithms to provide decision support to physicians. The authors propose a Fuzzy Naive Bayesian (FNB) model for medical diagnosis, which extends the Fuzzy Bayesian approach proposed by Okuda. A physician-interview-based method is described to define the orthogonal fuzzy symptom information system required to apply the model. For the purpose of elaboration and elicitation of characteristics, the algorithm is applied to a simple simulated dataset and compared with the conventional Naive Bayes (NB) approach. As a preliminary evaluation of FNB in a real-world scenario, the comparison is repeated on a real fuzzy dataset of 81 patients diagnosed with infectious diseases. The case study on the simulated dataset elucidates that FNB can be optimal over NB for diagnosing patients with imprecise, fuzzy information, on account of two characteristics: 1) it can model the information that values of some attributes are semantically closer than values of other attributes, and 2) it offers a mechanism to temper exaggerations in patient information. Although the algorithm requires precise training data, its utility for fuzzy training data is argued for. This is supported by the case study on the infectious disease dataset, which indicates optimality of FNB over NB for the infectious disease domain. Further case studies on large datasets are required to establish the utility of FNB.
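The flavor of such a classifier can be sketched in a few lines of Python, scoring each disease by a prior plus membership-weighted log-likelihoods; this is a generic illustration of a fuzzy naive Bayes decision rule under assumed data structures, not the authors' exact formulation.

    import math

    def fuzzy_naive_bayes(memberships, likelihoods, priors):
        """memberships[s][v]: patient's fuzzy grade for value v of symptom s
        (assumed normalized over v); likelihoods[d][s][v]: P(s = v | d) from
        training data; priors[d]: P(d). Returns the best-scoring disease."""
        scores = {}
        for d, prior in priors.items():
            score = math.log(prior)
            for s, grades in memberships.items():
                # expected likelihood of the fuzzy observation under disease d
                like = sum(mu * likelihoods[d][s][v] for v, mu in grades.items())
                score += math.log(max(like, 1e-300))   # guard against log(0)
            scores[d] = score
        return max(scores, key=scores.get)

Averaging the crisp likelihoods over the membership grades is what lets semantically close symptom values share evidence, the first of the two characteristics listed above.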
Multisystem altruistic metadynamics—Well-tempered variant
NASA Astrophysics Data System (ADS)
Hošek, Petr; Kříž, Pavel; Toulcová, Daniela; Spiwok, Vojtěch
2017-03-01
The metadynamics method has been widely used to enhance sampling in molecular simulations. Its original form suffers from two major drawbacks: poor convergence in complex (especially biomolecular) systems and its serial nature. The first drawback has been addressed by the introduction of a convergent variant known as well-tempered metadynamics. The second was addressed by the introduction of a parallel multisystem metadynamics referred to as altruistic metadynamics. Here, we combine both approaches into well-tempered altruistic metadynamics. We provide mathematical arguments and trial simulations to show that it accurately predicts free energy surfaces.
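A minimal Python sketch of the well-tempered bias deposition on which the convergent variant rests; s is the current collective-variable value, and the parameter names are illustrative.

    import math

    class WellTemperedBias:
        def __init__(self, w0, sigma, gamma, kT):
            self.w0, self.sigma = w0, sigma       # initial hill height and width
            self.dT = (gamma - 1.0) * kT          # well-tempered boost temperature
            self.centers, self.heights = [], []

        def value(self, s):
            """Current bias potential V(s): a sum of deposited Gaussians."""
            return sum(h * math.exp(-0.5 * ((s - c) / self.sigma) ** 2)
                       for c, h in zip(self.centers, self.heights))

        def deposit(self, s):
            """Add a hill whose height decays as bias accumulates, so the
            bias converges instead of growing without bound."""
            h = self.w0 * math.exp(-self.value(s) / self.dT)
            self.centers.append(s)
            self.heights.append(h)

In the altruistic, multisystem setting, many walkers deposit into (and feel) a shared bias of this form; the well-tempered rule above is what restores asymptotic convergence to the combined scheme.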
Entropic stabilization of isolated beta-sheets.
Dugourd, Philippe; Antoine, Rodolphe; Breaux, Gary; Broyer, Michel; Jarrold, Martin F
2005-04-06
Temperature-dependent electric deflection measurements have been performed for a series of unsolvated alanine-based peptides (Ac-WA(n)-NH(2), where Ac = acetyl, W = tryptophan, A = alanine, and n = 3, 5, 10, 13, and 15). The measurements are interpreted using Monte Carlo simulations performed with a parallel tempering algorithm. Despite alanine's high helix propensity in solution, the results suggest that unsolvated Ac-WA(n)-NH(2) peptides with n > 10 adopt beta-sheet conformations at room temperature. Previous studies have shown that protonated alanine-based peptides adopt helical or globular conformations in the gas phase, depending on the location of the charge. Thus, the charge more than anything else controls the structure.
Population annealing with weighted averages: A Monte Carlo method for rough free-energy landscapes
NASA Astrophysics Data System (ADS)
Machta, J.
2010-08-01
The population annealing algorithm introduced by Hukushima and Iba is described. Population annealing combines simulated annealing and Boltzmann weighted differential reproduction within a population of replicas to sample equilibrium states. Population annealing gives direct access to the free energy. It is shown that unbiased measurements of observables can be obtained by weighted averages over many runs with weight factors related to the free-energy estimate from the run. Population annealing is well suited to parallelization and may be a useful alternative to parallel tempering for systems with rough free-energy landscapes such as spin glasses. The method is demonstrated for spin glasses.
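In LaTeX form, the run-weighting idea described above can be sketched as follows, where \hat{F}_r is the free-energy estimate from run r and \tilde{A}_r is that run's estimate of an observable A; this is a reconstruction of the scheme for illustration, not a quotation of the paper's equations:

    \bar{A} = \sum_{r=1}^{R} \omega_r \, \tilde{A}_r,
    \qquad
    \omega_r = \frac{e^{-\beta \hat{F}_r}}{\sum_{r'=1}^{R} e^{-\beta \hat{F}_{r'}}}

Runs whose populations found lower free energy dominate the average, which is what removes the bias that any single finite-population run would carry.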
Simulated Annealing in the Variable Landscape
NASA Astrophysics Data System (ADS)
Hasegawa, Manabu; Kim, Chang Ju
An experimental analysis is conducted to test whether an appropriate smoothness-temperature schedule enhances the optimizing ability of the MASSS method, a combination of the Metropolis algorithm (MA) and the search-space smoothing (SSS) method. The test is performed on two types of random traveling salesman problems. The results show that the optimization performance of the MA is substantially improved by a single smoothing alone, and slightly more by a single smoothing with cooling and by a de-smoothing process with heating. The performance is compared to that of the parallel tempering method, and a clear advantage of the smoothing idea is observed, depending on the problem.
Temperature-dependent errors in nuclear lattice simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Dean; Thomson, Richard
2007-06-15
We study the temperature dependence of discretization errors in nuclear lattice simulations. We find that for systems with strong attractive interactions the predominant error arises from the breaking of Galilean invariance. We propose a local 'well-tempered' lattice action which eliminates much of this error. The well-tempered action can be readily implemented in lattice simulations for nuclear systems as well as cold atomic Fermi systems.
A New Paradigm to Identify Reaction Pathways in Gas-phase
2015-04-27
The approach uses a history-dependent bias to favor the exploration of new states. Briefly, the well-tempered metadynamics (WTM) technique was introduced together with Social PeRmutation INvarianT (SPRINT) coordinates. While the idea of adding a bias to overcome energetic barriers is not new, this work builds on the basic algorithm used in metadynamics (META), an already well-tested method.
The Metropolis Monte Carlo method with CUDA enabled Graphic Processing Units
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hall, Clifford; School of Physics, Astronomy, and Computational Sciences, George Mason University, 4400 University Dr., Fairfax, VA 22030; Ji, Weixiao
2014-02-01
We present a CPU–GPU system for runtime acceleration of large molecular simulations using GPU computation and memory swaps. The memory architecture of the GPU can be used both as container for simulation data stored on the graphics card and as floating-point code target, providing an effective means for the manipulation of atomistic or molecular data on the GPU. To fully take advantage of this mechanism, efficient GPU realizations of algorithms used to perform atomistic and molecular simulations are essential. Our system implements a versatile molecular engine, including inter-molecule interactions and orientational variables for performing the Metropolis Monte Carlo (MMC) algorithm, which is one type of Markov chain Monte Carlo. By combining memory objects with floating-point code fragments we have implemented an MMC parallel engine that entirely avoids the communication time of molecular data at runtime. Our runtime acceleration system is a forerunner of a new class of CPU–GPU algorithms exploiting memory concepts combined with threading for avoiding bus bandwidth and communication. The testbed molecular system used here is a condensed phase system of oligopyrrole chains. A benchmark shows a size scaling speedup of 60 for systems with 210,000 pyrrole monomers. Our implementation can easily be combined with MPI to connect in parallel several CPU–GPU duets. Highlights: • We parallelize the Metropolis Monte Carlo (MMC) algorithm on one CPU–GPU duet. • The Adaptive Tempering Monte Carlo employs MMC and profits from this CPU–GPU implementation. • Our benchmark shows a size scaling-up speedup of 62 for systems with 225,000 particles. • The testbed involves a polymeric system of oligopyrroles in the condensed phase. • The CPU–GPU parallelization includes dipole–dipole and Mie–Jones classic potentials.
On the fusion of tuning parameters of fuzzy rules and neural network
NASA Astrophysics Data System (ADS)
Mamuda, Mamman; Sathasivam, Saratha
2017-08-01
Learning a fuzzy rule-based system with a neural network can lead to a precise and valuable understanding of several problems. Fuzzy logic offers a simple way to reach a definite conclusion based upon vague, ambiguous, imprecise, noisy or missing input information. Conventional learning algorithms for tuning the parameters of fuzzy rules using training input-output data usually end in a weak firing state, which weakens the fuzzy rule and makes it unreliable for a multiple-input fuzzy system. In this paper, we introduce a new learning algorithm for tuning the parameters of the fuzzy rules alongside a radial basis function neural network (RBFNN), training on input-output data based on the gradient descent method. With the new learning algorithm, the problem of weak firing under the conventional method is addressed. We illustrate the efficiency of our new learning algorithm by means of numerical examples. MATLAB R2014(a) software was used to simulate our results. The results show that the new learning method has the advantage of training the fuzzy rules without tampering with the fuzzy rule table, which allows a membership function of a rule to be used more than once in the fuzzy rule base.
Chemineau, Philippe; Daveau, Agnès; Cognié, Yves; Aumont, Gilles; Chesneau, Didier
2004-01-01
Background Seasonality of ovulatory activity is observed in European sheep and goat breeds, whereas tropical breeds show almost continuous ovulatory activity. It is not known if these tropical breeds are sensitive or not to temperate photoperiod. This study was therefore designed to determine whether tropical Creole goats and Black-Belly ewes are sensitive to temperate photoperiod. Two groups of adult females in each species, either progeny or directly born from imported embryos, were used and maintained in light-proof rooms under simulated temperate (8 to 16 h of light per day) or tropical (11 – 13 h) photoperiods. Ovulatory activity was determined by blood progesterone assays for more than two years. The experiment lasted 33 months in goats and 25 months in ewes. Results Marked seasonality of ovulatory activity appeared in the temperate group of Creole female goats. The percentage of female goats experiencing at least one ovulation per month dramatically decreased from May to September for the three years (0%, 27% and 0%, respectively). Tropical female goats demonstrated much less seasonality, as the percentage of goats experiencing at least one ovulation per month never went below 56%. These differences were significant. Both groups of temperate and tropical Black-Belly ewes experienced a marked seasonality in their ovulatory activity, with only a slightly significant difference between groups. The percentage of ewes experiencing at least one ovulation per month dropped dramatically in April and rose again in August (tropical ewes) or September (temperate ewes). The percentage of ewes experiencing at least one ovulation per month never went below 8% and 17% (for tropical and temperate ewes respectively) during the spring and summer months. Conclusions An important seasonality in ovulatory activity of tropical Creole goats was observed when females were exposed to a simulated temperate photoperiod. An unexpected finding was that Black-Belly ewes and, to a lesser extent, Creole goats exposed to a simulated tropical photoperiod also showed seasonality in their ovulatory activity. Such results indicate that both species are capable of showing seasonality under the photoperiodic changes of the temperate zone even though they do not originate from these regions. PMID:15333134
Comparison of satellite reflectance algorithms for estimating ...
We analyzed 10 established and 4 new satellite reflectance algorithms for estimating chlorophyll-a (Chl-a) in a temperate reservoir in southwest Ohio using coincident hyperspectral aircraft imagery and dense water truth collected within one hour of image acquisition to develop simple proxies for algal blooms and to facilitate portability between multispectral satellite imagers for regional algal bloom monitoring. Narrow band hyperspectral aircraft images were upscaled spectrally and spatially to simulate 5 current and near future satellite imaging systems. Established and new Chl-a algorithms were then applied to the synthetic satellite images and then compared to calibrated Chl-a water truth measurements collected from 44 sites within one hour of aircraft acquisition of the imagery. Masks based on the spatial resolution of the synthetic satellite imagery were then applied to eliminate mixed pixels including vegetated shorelines. Medium-resolution Landsat and finer resolution data were evaluated against 29 coincident water truth sites. Coarse-resolution MODIS and MERIS-like data were evaluated against 9 coincident water truth sites. Each synthetic satellite data set was then evaluated for the performance of a variety of spectrally appropriate algorithms with regard to the estimation of Chl-a concentrations against the water truth data set. The goal is to inform water resource decisions on the appropriate satellite data acquisition and processing for the es
Pan, Albert C; Weinreich, Thomas M; Piana, Stefano; Shaw, David E
2016-03-08
Molecular dynamics (MD) simulations can describe protein motions in atomic detail, but transitions between protein conformational states sometimes take place on time scales that are infeasible or very expensive to reach by direct simulation. Enhanced sampling methods, the aim of which is to increase the sampling efficiency of MD simulations, have thus been extensively employed. The effectiveness of such methods when applied to complex biological systems like proteins, however, has been difficult to establish because even enhanced sampling simulations of such systems do not typically reach time scales at which convergence is extensive enough to reliably quantify sampling efficiency. Here, we obtain sufficiently converged simulations of three proteins to evaluate the performance of simulated tempering, a member of a widely used class of enhanced sampling methods that use elevated temperature to accelerate sampling. Simulated tempering simulations with individual lengths of up to 100 μs were compared to (previously published) conventional MD simulations with individual lengths of up to 1 ms. With two proteins, BPTI and ubiquitin, we evaluated the efficiency of sampling of conformational states near the native state, and for the third, the villin headpiece, we examined the rate of folding and unfolding. Our comparisons demonstrate that simulated tempering can consistently achieve a substantial sampling speedup of an order of magnitude or more relative to conventional MD.
Bayesian inference on EMRI signals using low frequency approximations
NASA Astrophysics Data System (ADS)
Ali, Asad; Christensen, Nelson; Meyer, Renate; Röver, Christian
2012-07-01
Extreme mass ratio inspirals (EMRIs) are thought to be one of the most exciting gravitational wave sources to be detected with LISA. Due to their complicated nature and weak amplitudes the detection and parameter estimation of such sources is a challenging task. In this paper we present a statistical methodology based on Bayesian inference in which the estimation of parameters is carried out by advanced Markov chain Monte Carlo (MCMC) algorithms such as parallel tempering MCMC. We analysed high and medium mass EMRI systems that fall well inside the low frequency range of LISA. In the context of the Mock LISA Data Challenges, our investigation and results are also the first instance in which a fully Markovian algorithm is applied for EMRI searches. Results show that our algorithm worked well in recovering EMRI signals from different (simulated) LISA data sets having single and multiple EMRI sources and holds great promise for posterior computation under more realistic conditions. The search and estimation methods presented in this paper are general in their nature, and can be applied in any other scenario such as AdLIGO, AdVIRGO and Einstein Telescope with their respective response functions.
[Development of APSIM (agricultural production systems simulator) and its application].
Shen, Yuying; Nan, Zhibiao; Bellotti, Bill; Robertson, Michael; Chen, Wen; Shao, Xinqing
2002-08-01
Soil-crop simulation models are effective tools for supporting decisions on agricultural management. APSIM (Agricultural Production Systems Simulator) was developed to simulate biophysical processes in farming systems, particularly the economic and ecological features of systems under climatic risk. The literature shows that APSIM can be applied across a wide range of climates, including temperate continental, temperate maritime, sub-tropical, arid, and Mediterranean climates, and soil types including clay, duplex soils, vertisols, sandy silt, silt loam, and silty clay loam. More than 20 crops have been simulated well. APSIM is powerful for describing crop structure, crop sequences, yield prediction, and quality control, as well as erosion estimation under different planting patterns.
Comparison of N2O Emissions from Soils at Three Temperate Agricultural Sites
NASA Technical Reports Server (NTRS)
Frolking, S. E.; Mosier, A. R.; Ojima, D. S.; Li, C.; Parton, W. J.; Potter, C. S.; Priesack, E.; Stenger, R.; Haberbosch, C.; Dorsch, P.;
1997-01-01
Nitrous oxide (N2O) flux simulations by four models were compared with year-round field measurements from five temperate agricultural sites in three countries. The field sites included an unfertilized, semi-arid rangeland with low N2O fluxes in eastern Colorado, USA; two fertilizer treatments (urea and nitrate) on a fertilized grass ley cut for silage in Scotland; and two fertilized, cultivated crop fields in Germany where N2O loss during the winter was quite high. The models used were daily trace gas versions of the CENTURY model, DNDC, ExpertN, and the NASA-Ames version of the CASA model. These models included similar components (soil physics, decomposition, plant growth, and nitrogen transformations), but in some cases used very different algorithms for these processes. All models generated similar results for the general cycling of nitrogen through the agro-ecosystems, but simulated nitrogen trace gas fluxes were quite different. In most cases the simulated N2O fluxes were within a factor of about 2 of the observed annual fluxes, but even when models produced similar N2O fluxes they often produced very different estimates of gaseous N loss as nitric oxide (NO), dinitrogen (N2), and ammonia (NH3). Accurate simulation of soil moisture appears to be a key requirement for reliable simulation of N2O emissions. All models simulated the general pattern of low background fluxes with high fluxes following fertilization at the Scottish sites, but they could not (or were not designed to) accurately capture the observed effects of different fertilizer types on N2O flux. None of the models were able to reliably generate large pulses of N2O during brief winter thaws that were observed at the two German sites. All models except DNDC simulated very low N2O fluxes for the dry site in Colorado. The US Trace Gas Network (TRAGNET) has provided a mechanism for this model and site intercomparison. Additional intercomparisons are needed with these and other models and additional data sets; these should include both tropical agro-ecosystems and new agricultural management techniques designed for sustainability.
Simulation of Temperature Field Distribution for Cutting Tempered Glass by Ultraviolet Laser
NASA Astrophysics Data System (ADS)
Yang, B. J.; He, Y. C.; Dai, F.; Lin, X. C.
2017-03-01
The finite element software ANSYS was adopted to simulate the temperature field distribution for laser cutting of tempered glass, and the influence of different process parameters, including laser power, glass thickness and cutting speed, on the temperature field distribution was studied in detail. The results show that the laser power has a greater influence on the temperature field distribution than the other parameters; when the laser power reaches 60 W, the highest temperature reaches 749°C, which is higher than the glass softening temperature. This indicates that the material near the laser spot is melted and the molten slag is quickly removed by the high-energy water beam. Finally, the theoretical finite element analysis was verified by a water-guided laser cutting experiment on tempered glass.
NASA Astrophysics Data System (ADS)
Curotto, E.
2015-12-01
Structural optimizations, classical NVT ensemble, and variational Monte Carlo simulations of ion Stockmayer clusters parameterized to approximate the Li+(CH3NO2)n (n = 1-20) systems are performed. The Metropolis algorithm enhanced by the parallel tempering strategy is used to measure internal energies and heat capacities, and a parallel version of the genetic algorithm is employed to obtain the most important minima. The first solvation sheath is octahedral and this feature remains the dominant theme in the structure of clusters with n ≥ 6. The first "magic number" is identified using the adiabatic solvent dissociation energy, and it marks the completion of the second solvation layer for the lithium ion-nitromethane clusters. It corresponds to the n = 18 system, a solvated ion with the first sheath having octahedral symmetry, weakly bound to an eight-membered and a four-membered ring crowning a vertex of the octahedron. Variational Monte Carlo estimates of the adiabatic solvent dissociation energy reveal that quantum effects further enhance the stability of the n = 18 system relative to its neighbors.
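A minimal Python sketch of the replica-swap move at the heart of the parallel tempering strategy used here; betas, energies and configs describe the current replica ladder, and all names are illustrative.

    import math, random

    def attempt_swap(i, j, betas, energies, configs):
        """Propose exchanging configurations of replicas i and j; accepting
        with probability min(1, exp((beta_i - beta_j) * (E_i - E_j)))
        preserves the joint Boltzmann distribution of the ladder."""
        delta = (betas[i] - betas[j]) * (energies[i] - energies[j])
        if delta >= 0 or random.random() < math.exp(delta):
            configs[i], configs[j] = configs[j], configs[i]
            energies[i], energies[j] = energies[j], energies[i]
            return True
        return False

Swaps let configurations trapped in low-temperature minima escape through the high-temperature replicas, which is what makes the heat-capacity measurements and minima searches of clusters like these tractable.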
Solar radiation-driven inactivation of bacteria, virus and protozoan pathogen models was quantified in simulated drinking water at a temperate latitude (34°S). The water was seeded with Enterococcus faecalis, Clostridium sporogenes spores, and P22 bacteriophage, each at ca 1 x 10...
NASA Astrophysics Data System (ADS)
Lebourgeois, François; Pierrat, Jean-Claude; Perez, Vincent; Piedallu, Christian; Cecchini, Sébastien; Ulrich, Erwin
2010-09-01
After modeling the large-scale climate response patterns of leaf unfolding, leaf coloring and growing season length of evergreen and deciduous French temperate trees, we predicted the effects of eight future climate scenarios on phenological events. We used the ground observations from 103 temperate forests (10 species and 3,708 trees) from the French Renecofor Network and for the period 1997-2006. We applied RandomForest algorithms to predict phenological events from climatic and ecological variables. With the resulting models, we drew maps of phenological events throughout France under present climate and under two climatic change scenarios (A2, B2) and four global circulation models (HadCM3, CGCM2, CSIRO2 and PCM). We compared current observations and predicted values for the periods 2041-2070 and 2071-2100. On average, spring development of oaks precedes that of beech, which precedes that of conifers. Annual cycles in budburst and leaf coloring are highly correlated with January, March-April and October-November weather conditions through temperature, global solar radiation or potential evapotranspiration depending on species. At the end of the twenty-first century, each model predicts earlier budburst (mean: 7 days) and later leaf coloring (mean: 13 days) leading to an average increase in the growing season of about 20 days (for oaks and beech stands). The A2-HadCM3 hypothesis leads to an increase of up to 30 days in many areas. As a consequence of higher predicted warming during autumn than during winter or spring, shifts in leaf coloring dates appear greater than trends in leaf unfolding. At a regional scale, highly differing climatic response patterns were observed.
NASA Astrophysics Data System (ADS)
Hember, R. A.; Kurz, W. A.; Coops, N. C.; Black, T. A.
2010-12-01
Temperate-maritime forests of coastal British Columbia store large amounts of carbon (C) in soil, detritus, and trees. To better understand the sensitivity of these C stocks to climate variability, simulations were conducted using a hybrid version of the model, Physiological Principles Predicting Growth (3-PG), combined with algorithms from the Carbon Budget Model of the Canadian Forest Sector - version 3 (CBM-CFS3) to account for full ecosystem C dynamics. The model was optimized based on a combination of monthly CO2 and H2O flux measurements derived from three eddy-covariance systems and multi-annual stemwood growth (Gsw) and mortality (Msw) derived from 1300 permanent sample plots by means of Markov chain Monte Carlo sampling. The calibrated model serves as an unbiased estimator of stemwood C with enhanced precision over that of strictly-empirical models, minimized reliance on local prescriptions, and the flexibility to study impacts of environmental change on regional C stocks. We report the contribution of each dataset in identifying key physiological parameters and the posterior uncertainty in predictions of net ecosystem production (NEP). The calibrated model was used to spin up pre-industrial C pools and estimate the sensitivity of regional net carbon balance to a gradient of temperature changes, λ=ΔC/ΔT, during three 62-year harvest rotations, spanning 1949-2135. Simulations suggest that regional net primary production, tree mortality, and heterotrophic respiration all began increasing, while NEP began decreasing in response to warming following the 1976 shift in northeast-Pacific climate. We quantified the uncertainty of λ and how it was mediated by initial dead C, tree mortality, precipitation change, and the time horizon in which it was calculated.
Bayesian tomography by interacting Markov chains
NASA Astrophysics Data System (ADS)
Romary, T.
2017-12-01
In seismic tomography, we seek to determine the velocity of the underground from noisy first-arrival travel time observations. In most situations, this is an ill-posed inverse problem that admits several imperfect solutions. Given an a priori distribution over the parameters of the velocity model, the Bayesian formulation allows us to state this problem as a probabilistic one, with a solution in the form of a posterior distribution. The posterior distribution is generally high dimensional and may exhibit multimodality. Moreover, as it is known only up to a constant, the only sensible way to address this problem is to try to generate simulations from the posterior. The natural tools to perform these simulations are Markov chain Monte Carlo (MCMC) methods. Classical implementations of MCMC algorithms generally suffer from slow mixing: the generated states are slow to enter the stationary regime, that is, to fit the observations, and when one mode of the posterior is eventually identified, it may become difficult to visit others. Using a varying temperature parameter that relaxes the constraint on the data may help to enter the stationary regime. Besides, the sequential nature of MCMC makes it ill suited to parallel implementation. Running a large number of chains in parallel may be suboptimal, as the information gathered by each chain is not mutualized. Parallel tempering (PT) can be seen as a first attempt to make parallel chains at different temperatures communicate, but they only exchange information between current states. In this talk, I will show that PT actually belongs to a general class of interacting Markov chain algorithms. I will also show that this class enables the design of interacting schemes that can take advantage of the whole history of the chains, by authorizing exchanges toward already visited states. The algorithms will be illustrated with toy examples and an application to first-arrival traveltime tomography.
Johansen, Richard; Beck, Richard; Nowosad, Jakub; Nietch, Christopher; Xu, Min; Shu, Song; Yang, Bo; Liu, Hongxing; Emery, Erich; Reif, Molly; Harwood, Joseph; Young, Jade; Macke, Dana; Martin, Mark; Stillings, Garrett; Stumpf, Richard; Su, Haibin
2018-06-01
This study evaluated the performances of twenty-nine algorithms that use satellite-based spectral imager data to derive estimates of chlorophyll-a concentrations that, in turn, can be used as an indicator of the general status of algal cell densities and the potential for a harmful algal bloom (HAB). The performance assessment was based on making relative comparisons between two temperate inland lakes: Harsha Lake (7.99 km²) in Southwest Ohio and Taylorsville Lake (11.88 km²) in central Kentucky. Of interest was identifying algorithm-imager combinations that had high correlation with coincident chlorophyll-a surface observations for both lakes, as this suggests portability for regional HAB monitoring. The spectral data utilized to estimate surface water chlorophyll-a concentrations were derived from the airborne Compact Airborne Spectral Imager (CASI) 1500 hyperspectral imager, which was then used to derive synthetic versions of currently operational satellite-based imagers using spatial resampling and spectral binning. The synthetic data mimic the configurations of spectral imagers on current satellites in Earth's orbit, including WorldView-2/3, Sentinel-2, Landsat-8, Moderate-resolution Imaging Spectroradiometer (MODIS), and Medium Resolution Imaging Spectrometer (MERIS). High correlations were found between the direct measurements and the imagery-estimated chlorophyll-a concentrations at both lakes. The results determined that eleven out of the twenty-nine algorithms were considered portable, with r² values greater than 0.5 for both lakes. Even though the two lakes are different in terms of background water quality, size and shape, with Taylorsville being generally less impaired, larger, but much narrower throughout, the results support the portability of utilizing a suite of certain algorithms across multiple sensors to detect potential algal blooms through the use of chlorophyll-a as a proxy. Furthermore, the strong performance of the Sentinel-2 algorithms is exceptionally promising, due to the recent launch of the second satellite in the constellation, which will provide higher temporal resolution for temperate inland water bodies. Additionally, scripts were written for the open-source statistical software R that automate much of the spectral data processing steps. This allows for the simultaneous consideration of numerous algorithms across multiple imagers over an expedited time frame for the near real-time monitoring required for detecting algal blooms and mitigating their adverse impacts. Copyright © 2018 Elsevier B.V. All rights reserved.
Exploiting molecular dynamics in Nested Sampling simulations of small peptides
NASA Astrophysics Data System (ADS)
Burkoff, Nikolas S.; Baldock, Robert J. N.; Várnai, Csilla; Wild, David L.; Csányi, Gábor
2016-04-01
Nested Sampling (NS) is a parameter space sampling algorithm which can be used for sampling the equilibrium thermodynamics of atomistic systems. NS has previously been used to explore the potential energy surface of a coarse-grained protein model and has significantly outperformed parallel tempering when calculating heat capacity curves of Lennard-Jones clusters. The original NS algorithm uses Monte Carlo (MC) moves; however, a variant, Galilean NS, has recently been introduced which allows NS to be incorporated into a molecular dynamics framework, so NS can be used for systems which lack efficient prescribed MC moves. In this work we demonstrate the applicability of Galilean NS to atomistic systems. We present an implementation of Galilean NS using the Amber molecular dynamics package and demonstrate its viability by sampling alanine dipeptide, both in vacuo and implicit solvent. Unlike previous studies of this system, we present the heat capacity curves of alanine dipeptide, whose calculation provides a stringent test for sampling algorithms. We also compare our results with those calculated using replica exchange molecular dynamics (REMD) and find good agreement. We show the computational effort required for accurate heat capacity estimation for small peptides. We also calculate the alanine dipeptide Ramachandran free energy surface for a range of temperatures and use it to compare the results using the latest Amber force field with previous theoretical and experimental results.
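A minimal Python sketch of the NS outer loop for a potential energy surface; sample_prior, perturb (standing in for either MC moves or the Galilean dynamics discussed above) and energy are user-supplied callables, and all names are illustrative.

    import random

    def nested_sampling(sample_prior, perturb, energy, n_live, n_iter):
        """Record a sequence of (energy ceiling, enclosed phase-space
        fraction) pairs by repeatedly replacing the worst live point with
        a clone decorrelated under the constraint E < ceiling."""
        live = [sample_prior() for _ in range(n_live)]
        levels = []
        for m in range(n_iter):
            worst = max(range(n_live), key=lambda k: energy(live[k]))
            e_max = energy(live[worst])
            x_m = (n_live / (n_live + 1.0)) ** (m + 1)   # mean fraction below e_max
            levels.append((e_max, x_m))
            survivor = random.choice([p for k, p in enumerate(live) if k != worst])
            live[worst] = perturb(survivor, e_max)       # decorrelate below e_max
        return levels

From the recorded levels the partition function follows as Z(beta) ~ sum_m (X_{m-1} - X_m) exp(-beta E_m), so a single NS run yields heat-capacity curves at every temperature at once, the property exploited in this paper.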
GPU accelerated population annealing algorithm
NASA Astrophysics Data System (ADS)
Barash, Lev Yu.; Weigel, Martin; Borovský, Michal; Janke, Wolfhard; Shchur, Lev N.
2017-11-01
Population annealing is a promising recent approach for Monte Carlo simulations in statistical physics, in particular for the simulation of systems with complex free-energy landscapes. It is a hybrid method, combining importance sampling through Markov chains with elements of sequential Monte Carlo in the form of population control. While it appears to provide algorithmic capabilities for the simulation of such systems that are roughly comparable to those of more established approaches such as parallel tempering, it is intrinsically much more suitable for massively parallel computing. Here, we tap into this structural advantage and present a highly optimized implementation of the population annealing algorithm on GPUs that promises speed-ups of several orders of magnitude as compared to a serial implementation on CPUs. While the sample code is for simulations of the 2D ferromagnetic Ising model, it should be easily adapted for simulations of other spin models, including disordered systems. Our code includes implementations of some advanced algorithmic features that have only recently been suggested, namely the automatic adaptation of temperature steps and a multi-histogram analysis of the data at different temperatures.
Program Files doi: http://dx.doi.org/10.17632/sgzt4b7b3m.1
Licensing provisions: Creative Commons Attribution license (CC BY 4.0)
Programming language: C, CUDA
External routines/libraries: NVIDIA CUDA Toolkit 6.5 or newer
Nature of problem: The program calculates the internal energy, specific heat, several magnetization moments, entropy and free energy of the 2D Ising model on square lattices of edge length L with periodic boundary conditions as a function of inverse temperature β.
Solution method: The code uses population annealing, a hybrid method combining Markov chain updates with population control, implemented for NVIDIA GPUs using the CUDA language and employing advanced techniques such as multi-spin coding, adaptive temperature steps and multi-histogram reweighting.
Additional comments: Code repository at https://github.com/LevBarash/PAising. The system size and the size of the population of replicas are limited depending on the memory of the GPU device used. For the default parameter values used in the sample programs, L = 64, θ = 100, β0 = 0, βf = 1, Δβ = 0.005, R = 20 000, a typical run time on an NVIDIA Tesla K80 GPU is 151 seconds for the single-spin-coded (SSC) and 17 seconds for the multi-spin-coded (MSC) program (see Section 2 for a description of these parameters).
NASA Astrophysics Data System (ADS)
Yu, Hao; Zhou, Tao
The heat treatment during the manufacturing process of induction bend pipe was simulated. The evolution of ferrite, M/A islands and substructure after tempering at 500-700 °C was characterized by means of optical microscopy, positron annihilation, SEM, TEM, XRD and EBSD. The mechanical performance was evaluated by tensile tests, Charpy V-notch impact tests (-20 °C) and Vickers hardness tests (10 kgf). Microstructure observations showed that fine and homogeneous M/A islands, as well as dislocation packets in the quasi-polygonal ferrite matrix, after tempering at 600-650 °C generated an optimal combination of strength and toughness. After tempering at 700 °C, the yield strength decreased dramatically. EBSD analysis indicated that the effective grain size diminished as the tempering temperature increased, which increases the energy consumed during microcrack propagation and thereby improves impact toughness. Dislocation analysis suggested that the decrease and pile-up of dislocations benefited the combination of strength and toughness.
Sun, Rui; Dama, James F; Tan, Jeffrey S; Rose, John P; Voth, Gregory A
2016-10-11
Metadynamics is an important enhanced sampling technique in molecular dynamics simulation to efficiently explore potential energy surfaces. The recently developed transition-tempered metadynamics (TTMetaD) has been proven to converge asymptotically without sacrificing exploration of the collective variable space in the early stages of simulations, unlike other convergent metadynamics (MetaD) methods. We have applied TTMetaD to study the permeation of drug-like molecules through a lipid bilayer to further investigate the usefulness of this method as applied to problems of relevance to medicinal chemistry. First, ethanol permeation through a lipid bilayer was studied to compare TTMetaD with nontempered metadynamics and well-tempered metadynamics. The bias energies computed from various metadynamics simulations were compared to the potential of mean force calculated from umbrella sampling. Though all of the MetaD simulations agree with one another asymptotically, TTMetaD is able to predict the most accurate and reliable estimate of the potential of mean force for permeation in the early stages of the simulations and is robust to the choice of required additional parameters. We also show that using multiple randomly initialized replicas allows convergence analysis and also provides an efficient means to converge the simulations in shorter wall times and, more unexpectedly, in shorter CPU times; splitting the CPU time between multiple replicas appears to lead to less overall error. After validating the method, we studied the permeation of a more complicated drug-like molecule, trimethoprim. Three sets of TTMetaD simulations with different choices of collective variables were carried out, and all converged within feasible simulation time. The minimum free energy paths showed that TTMetaD was able to predict almost identical permeation mechanisms in each case despite significantly different definitions of collective variables.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shi, Zhiyang; Zhang, Xiong
A dynamic computer simulation is carried out for the climates of 35 cities distributed around the world. The variation of the annual air-conditioning energy loads due to changes in the longwave emissivity and the solar reflectance of the building envelopes is studied to find the most appropriate exterior building finishes in various climates (including tropical, subtropical, mountain plateau, frigid-temperate and temperate climates). Both the longwave emissivity and the solar reflectance are set from 0.1 to 0.9 with an interval of 0.1 in the simulation. The annual air-conditioning energy load trends of each city are listed in a chart. The results show that both the longwave emissivity and the solar reflectance of building envelopes play significant roles in energy saving for buildings. In tropical climates, the optical parameters of the building exterior surface affect building energy saving most significantly. In the mountain plateau climates and the subarctic climates, the impacts on energy saving in buildings due to changes in the longwave emissivity and the solar reflectance are still considerable, but in the temperate continental climates and the temperate maritime climates, only limited effects are seen.
Simulation of carbon isotope discrimination of the terrestrial biosphere
NASA Astrophysics Data System (ADS)
Suits, N. S.; Denning, A. S.; Berry, J. A.; Still, C. J.; Kaduk, J.; Miller, J. B.; Baker, I. T.
2005-03-01
We introduce a multistage model of carbon isotope discrimination during C3 photosynthesis and global maps of C3/C4 plant ratios to an ecophysiological model of the terrestrial biosphere (SiB2) in order to predict the carbon isotope ratios of terrestrial plant carbon globally at a 1° resolution. The model is driven by observed meteorology from the European Centre for Medium-Range Weather Forecasts (ECMWF), constrained by satellite-derived Normalized Difference Vegetation Index (NDVI) and run for the years 1983-1993. Modeled mean annual C3 discrimination during this period is 19.2‰; total mean annual discrimination by the terrestrial biosphere (C3 and C4 plants) is 15.9‰. We test simulation results in three ways. First, we compare the modeled response of C3 discrimination to changes in physiological stress, including daily variations in vapor pressure deficit (vpd) and monthly variations in precipitation, to observed changes in discrimination inferred from Keeling plot intercepts. Second, we compare mean δ13C ratios from selected biomes (Broadleaf, Temperate Broadleaf, Temperate Conifer, and Boreal) to the observed values from Keeling plots at these biomes. Third, we compare simulated zonal δ13C ratios in the Northern Hemisphere (20°N to 60°N) to values predicted from high-frequency variations in measured atmospheric CO2 and δ13C from terrestrially dominated sites within the NOAA-Globalview flask network. The modeled response to changes in vapor pressure deficit compares favorably to observations. Simulated discrimination in tropical forests of the Amazon basin is less sensitive to changes in monthly precipitation than is suggested by some observations. Mean model δ13C ratios for Broadleaf, Temperate Broadleaf, Temperate Conifer, and Boreal biomes compare well with the few measurements available; however, there is more variability in observations than in the simulation, and modeled δ13C values for tropical forests are heavy relative to observations. Simulated zonal δ13C ratios in the Northern Hemisphere capture patterns of zonal δ13C inferred from atmospheric measurements better than previous investigations. Finally, there is still a need for additional constraints to verify that carbon isotope models behave as expected.
Effects of Polymer Conjugation on Hybridization Thermodynamics of Oligonucleic Acids.
Ghobadi, Ahmadreza F; Jayaraman, Arthi
2016-09-15
In this work, we perform coarse-grained (CG) and atomistic simulations to study the effects of polymer conjugation on hybridization/melting thermodynamics of oligonucleic acids (ONAs). We present coarse-grained Langevin molecular dynamics simulations (CG-NVT) to assess the effects of the polymer flexibility, length, and architecture on hybridization/melting of ONAs with different ONA duplex sequences, backbone chemistry, and duplex concentration. In these CG-NVT simulations, we use our recently developed CG model of ONAs in implicit solvent, and treat the conjugated polymer as a CG chain with purely repulsive Weeks-Chandler-Andersen interactions with all other species in the system. We find that 8-100-mer linear polymer conjugation destabilizes 8-mer ONA duplexes with weaker Watson-Crick hydrogen bonding (WC H-bonding) interactions at low duplex concentrations, while the same polymer conjugation has an insignificant impact on 8-mer ONA duplexes with stronger WC H-bonding. To ensure the configurational space is sampled properly in the CG-NVT simulations, we also perform CG well-tempered metadynamics simulations (CG-NVT-MetaD) and analyze the free energy landscape of ONA hybridization for a select few systems. We demonstrate that CG-NVT-MetaD simulation results are consistent with the CG-NVT simulations for the studied systems. To examine the limitations of coarse-graining in capturing ONA-polymer interactions, we perform atomistic parallel tempering metadynamics simulations in the well-tempered ensemble (AA-MetaD) for a 4-mer DNA in explicit water with and without conjugation to 8-mer poly(ethylene glycol) (PEG). AA-MetaD simulations also show that, for a short DNA duplex at T = 300 K, a condition where the DNA duplex is unstable, conjugation with PEG further destabilizes the DNA duplex. We conclude with a comparison of results from these three different types of simulations and discuss their limitations and strengths.
Resonant behavior of the generalized Langevin system with tempered Mittag–Leffler memory kernel
NASA Astrophysics Data System (ADS)
Chen, Yao; Wang, Xudong; Deng, Weihua
2018-05-01
The generalized Langevin equation describes anomalous dynamics. Noise is not only the origin of uncertainty but can also play a positive role in helping to detect signals carrying information, a phenomenon termed stochastic resonance (SR). This paper analyzes the anomalous resonant behaviors of the generalized Langevin system with a multiplicative dichotomous noise and an internal tempered Mittag–Leffler noise. For a system with a fluctuating harmonic potential, we obtain exact expressions for several types of SR, such as the first moment, the amplitude and the autocorrelation function of the output signal, as well as the signal-to-noise ratio. We analyze the influence of the tempering parameter and the memory exponent on the bona fide SR and the general SR. Moreover, we find that the critical memory exponent changes regularly as the tempering parameter increases. Almost all the theoretical results are validated by numerical simulations.
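For concreteness, a generalized Langevin system of the kind analyzed here can be written in LaTeX as below; the kernel parameterization with tempering rate \lambda and Mittag-Leffler function E_\alpha is an assumed illustrative form, not necessarily the authors' exact one:

    m\ddot{x}(t) = -\int_0^t K(t-u)\,\dot{x}(u)\,du - m\,\omega^2(t)\,x(t) + \xi(t),
    \qquad
    K(t) \propto e^{-\lambda t}\, E_\alpha\!\left(-(t/\tau)^{\alpha}\right)

Here \omega^2(t) fluctuates through the multiplicative dichotomous noise, and the internal noise \xi(t) is tied to the kernel by a fluctuation-dissipation relation, \langle \xi(t)\xi(t') \rangle \propto k_B T\, K(|t-t'|); the tempering (\lambda > 0) cuts off the power-law memory at long times.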
TemperSAT: A new efficient fair-sampling random k-SAT solver
NASA Astrophysics Data System (ADS)
Fang, Chao; Zhu, Zheng; Katzgraber, Helmut G.
The set membership problem is of great importance to many applications and, in particular, database searches for target groups. Recently, an approach to speed up set membership searches based on the NP-hard constraint-satisfaction problem (random k-SAT) has been developed. However, the bottleneck of the approach lies in finding the solution to a large SAT formula efficiently and, in particular, a large number of independent solutions is needed to reduce the probability of false positives. Unfortunately, traditional random k-SAT solvers such as WalkSAT are biased when seeking solutions to the Boolean formulas. By porting parallel tempering Monte Carlo to the sampling of binary optimization problems, we introduce a new algorithm (TemperSAT) whose performance is comparable to current state-of-the-art SAT solvers for large k with the added benefit that theoretically it can find many independent solutions quickly. We illustrate our results by comparing to the currently fastest implementation of WalkSAT, WalkSATlm.
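A toy Python sketch of the construction being ported: the energy of a truth assignment is its number of unsatisfied clauses, each replica does Metropolis bit flips at its own temperature, and neighboring replicas swap. This illustrates the general parallel-tempering-on-k-SAT idea, not the TemperSAT implementation itself.

    import math, random

    def unsat_count(clauses, assign):
        """Energy: number of unsatisfied clauses. Clauses use the DIMACS
        convention (literal v > 0 wants variable v True, v < 0 wants False);
        assign is a list of bools indexed from 1 (index 0 unused)."""
        return sum(not any((lit > 0) == assign[abs(lit)] for lit in clause)
                   for clause in clauses)

    def pt_sat_sweep(clauses, replicas, betas):
        """One sweep of Metropolis flips per replica, then neighbor swaps.
        (Recomputing the full energy per flip is naive but keeps it short.)"""
        n_vars = len(replicas[0]) - 1
        for assign, beta in zip(replicas, betas):
            for _ in range(n_vars):
                v = random.randrange(1, n_vars + 1)
                e_old = unsat_count(clauses, assign)
                assign[v] = not assign[v]
                d = unsat_count(clauses, assign) - e_old
                if d > 0 and random.random() >= math.exp(-beta * d):
                    assign[v] = not assign[v]            # reject: undo the flip
        for i in range(len(replicas) - 1):
            d = (betas[i] - betas[i + 1]) * (unsat_count(clauses, replicas[i])
                                             - unsat_count(clauses, replicas[i + 1]))
            if d >= 0 or random.random() < math.exp(d):
                replicas[i], replicas[i + 1] = replicas[i + 1], replicas[i]

Sampling at finite temperature rather than greedily quenching is what lets such a scheme return nearly unbiased, independent solutions, the property the set-membership application needs.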
Jin, Dongliang; Coasne, Benoit
2017-10-24
Different molecular simulation strategies are used to assess the stability of methane hydrate under various temperature and pressure conditions. First, using two water molecular models, free energy calculations consisting of the Einstein molecule approach in combination with semigrand Monte Carlo simulations are used to determine the pressure-temperature phase diagram of methane hydrate. With these calculations, we also estimate the chemical potentials of water and methane and methane occupancy at coexistence. Second, we also consider two other advanced molecular simulation techniques that allow probing the phase diagram of methane hydrate: the direct coexistence method in the Grand Canonical ensemble and the hyperparallel tempering Monte Carlo method. These two direct techniques are found to provide stability conditions that are consistent with the pressure-temperature phase diagram obtained using rigorous free energy calculations. The phase diagram obtained in this work, which is found to be consistent with previous simulation studies, is close to its experimental counterpart provided the TIP4P/Ice model is used to describe the water molecule.
SIMULATED CLIMATE CHANGE EFFECTS ON DISSOLVED OXYGEN CHARACTERISTICS IN ICE-COVERED LAKES. (R824801)
A deterministic, one-dimensional model is presented which simulates daily dissolved oxygen (DO) profiles and associated water temperatures, ice covers and snow covers for dimictic and polymictic lakes of the temperate zone. The lake parameters required as model input are surface ...
Modeling methane emissions by cattle production systems in Mexico
NASA Astrophysics Data System (ADS)
Castelan-Ortega, O. A.; Ku Vera, J.; Molina, L. T.
2013-12-01
Methane emissions from livestock are one of the largest sources of methane in Mexico. The purpose of the present paper is to provide a realistic estimate of the national inventory of methane produced by the enteric fermentation of cattle, based on an integrated simulation model, and to provide estimates of CH4 produced by cattle fed typical diets from the tropical and temperate climates of Mexico. The Mexican cattle population of 23.3 million heads was divided into two groups. The first group (7.8 million heads) represents cattle of the tropical climate regions. The second group (15.5 million heads) comprises the cattle in the temperate climate regions. This approach allows the effect of diet on CH4 production to be incorporated into the analysis, because the quality of forages is lower in the tropics than in temperate regions. The cattle population in every group was subdivided into two categories: cows (COW) and other types of cattle (OTHE), which included calves, heifers, steers and bulls. The daily CH4 production by each category of animal along an average production cycle of 365 days was simulated, instead of using a default emission factor as in the Tier 1 approach. Daily milk yield, live weight changes associated with the lactation, and dry matter intake were simulated for the entire production cycle. The Moe and Tyrrell (1979) model was used to simulate CH4 production for the COW category, the linear model of Mills et al. (2003) for the OTHE category in temperate regions, and the Kurihara et al. (1999) model for the OTHE category in the tropical regions, as it was developed for cattle fed tropical diets. All models were integrated with a cow submodel to form an Integrated Simulation Model (ISM). The AFRC (1993) equations and the lactation curve model of Morant and Gnanasakthy (1989) were used to construct the cow submodel. The ISM simulates on a daily basis the CH4 production, milk yield, live weight changes associated with lactation, and dry matter intake. The total daily CH4 emission per region was calculated by multiplying the number of heads of cattle in each region by their corresponding simulated emission factor, either COW or OTHE, as predicted by the ISM. The total CH4 emission from the Mexican cattle population was then calculated by adding up the daily emissions from each region. The predicted total emission of methane produced by the 23.3 million heads of cattle in Mexico is approximately 2.02 Tg/year, of which 1.28 Tg is produced by cattle in temperate regions and the rest by cattle in the tropics. It was concluded that the modeling approach was suitable for producing a better estimate of the national methane inventory for cattle. It is flexible enough to incorporate more cattle groups or classification schemes and productivity levels.
Huang, Kun; García, Angel E
2014-10-14
The lateral heterogeneity of cellular membranes plays an important role in many biological functions such as signaling and regulating membrane proteins. This heterogeneity can result from preferential interactions between membrane components or interactions with membrane proteins. One major difficulty in molecular dynamics simulations aimed at studying the membrane heterogeneity is that lipids diffuse slowly and collectively in bilayers, and therefore, it is difficult to reach equilibrium in lateral organization in bilayer mixtures. Here, we propose the use of the replica exchange with solute tempering (REST) approach to accelerate lateral relaxation in heterogeneous bilayers. REST is based on the replica exchange method but tempers only the solute, leaving the temperature of the solvent fixed. Since the number of replicas in REST scales approximately only with the degrees of freedom in the solute, REST enables us to enhance the configuration sampling of lipid bilayers with fewer replicas, in comparison with the temperature replica exchange molecular dynamics simulation (T-REMD) where the number of replicas scales with the degrees of freedom of the entire system. We apply the REST method to a cholesterol and 1,2-dipalmitoyl-sn-glycero-3-phosphocholine (DPPC) bilayer mixture and find that the lateral distribution functions of all molecular pair types converge much faster than in the standard MD simulation. The relative diffusion rate between molecules in REST is, on average, an order of magnitude faster than in the standard MD simulation. Although REST was initially proposed to study protein folding and its efficiency in protein folding is still under debate, we find a unique application of REST to accelerate lateral equilibration in mixed lipid membranes and suggest a promising way to probe membrane lateral heterogeneity through molecular dynamics simulation. PMID:25328493
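The core move in REST is a Metropolis exchange between replicas that share one physical temperature but scale the solute interactions differently. A minimal sketch of that acceptance test follows; the REST2-style split into solute-solute, solute-solvent and solvent-solvent terms is an assumption for illustration, not necessarily the exact variant used in this paper.

```python
import numpy as np

KT = 2.494          # kJ/mol at ~300 K
rng = np.random.default_rng(0)

def scaled_energy(lam, e_ss, e_sw, e_ww):
    """Assumed REST2-style scaling: solute-solute terms scaled by lam,
    solute-solvent by sqrt(lam), solvent-solvent left untouched."""
    return lam * e_ss + np.sqrt(lam) * e_sw + e_ww

def rest_swap_accepted(lam_i, lam_j, terms_i, terms_j):
    """Metropolis test for exchanging the configurations of two REST
    replicas i and j; terms_* are (e_ss, e_sw, e_ww) for each frame."""
    delta = (scaled_energy(lam_i, *terms_j) + scaled_energy(lam_j, *terms_i)
             - scaled_energy(lam_i, *terms_i) - scaled_energy(lam_j, *terms_j))
    return rng.random() < min(1.0, np.exp(-delta / KT))
```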
Longhi, Giovanna; Fornili, Sandro L; Turco Liveri, Vincenzo
2015-07-07
Experimental investigations using mass spectrometry have established that surfactant molecules are able to form aggregates in the gas phase. However, there is no general consensus on the organization of these aggregates and how it depends on the aggregation number and surfactant molecular structure. In the present paper we investigate the structural organization of some surfactants in vacuo by molecular dynamics and well-tempered metadynamics simulations, widely exploring the space of their possible conformations. To study how the specific molecular features of such compounds affect their organization, we have considered, as paradigmatic surfactants, the anionic single-chain sodium dodecyl sulfate (SDS), the anionic double-chain sodium bis(2-ethylhexyl) sulfosuccinate (AOT) and the zwitterionic single-chain dodecyl phosphatidyl choline (DPC) within a wide aggregation number range (from 5 to 100). We observe that for low aggregation numbers the aggregates show in vacuo the typical structure of reverse micelles, while for large aggregation numbers a variety of globular aggregates occur that are characterized by the coexistence of interlaced domains formed by the polar or ionic heads and by the alkyl chains of the surfactants. Well-tempered metadynamics simulations allow us to confirm that the structural organizations obtained after 50 ns of molecular dynamics simulations are practically the equilibrium ones. Similarities and differences of surfactant aggregates in vacuo and in apolar media are also discussed.
Essential slow degrees of freedom in protein-surface simulations: A metadynamics investigation.
Prakash, Arushi; Sprenger, K G; Pfaendtner, Jim
2018-03-29
Many proteins exhibit strong binding affinities to surfaces, with binding energies much greater than thermal fluctuations. When modelling these protein-surface systems with classical molecular dynamics (MD) simulations, the large forces that exist at the protein/surface interface generally confine the system to a single free energy minimum. Exploring the full conformational space of the protein, especially finding other stable structures, becomes prohibitively expensive. Coupling MD simulations with metadynamics (enhanced sampling) has quickly become a common method for sampling the adsorption of such proteins. In this paper, we compare three different flavors of metadynamics, specifically well-tempered, parallel-bias, and parallel-tempering in the well-tempered ensemble, to exhaustively sample the conformational surface-binding landscape of the model peptide GGKGG. We investigate the effect of mobile ions and ion charge, as well as the choice of collective variable (CV), on the binding free energy of the peptide. We make the case for explicitly biasing ions to sample the true binding free energy of biomolecules when the ion concentration is high and the binding free energies of the solute and ions are similar. We also make the case for choosing CVs that apply bias to all atoms of the solute to speed up calculations and obtain the maximum possible amount of information about the system. Copyright © 2017 Elsevier Inc. All rights reserved.
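All three flavors compared above build on the same well-tempered bias update, in which hill heights shrink where bias has already accumulated. A minimal one-dimensional sketch, assuming the CV is the coordinate itself, overdamped Langevin dynamics, and placeholder parameter values:

```python
import numpy as np

def run_wtmetad(force, n_steps=20_000, kT=2.494, gamma=10.0,
                w0=1.2, sigma=0.1, stride=200, dt=0.01, seed=1):
    """Minimal 1-D well-tempered metadynamics sketch (CV = x itself).
    Gaussian hills are deposited every `stride` steps with height
    w0 * exp(-V(x)/(kT*(gamma-1))), so deposition slows where bias has
    accumulated and the bias converges to -(1 - 1/gamma)*F(x) + const."""
    rng = np.random.default_rng(seed)
    centers, heights = [], []

    def bias_and_force(x):
        v = fb = 0.0
        for h, c in zip(heights, centers):
            g = h * np.exp(-(x - c) ** 2 / (2.0 * sigma ** 2))
            v += g
            fb += g * (x - c) / sigma ** 2   # -dV/dx of each Gaussian
        return v, fb

    x = 0.0
    for step in range(n_steps):
        v, fb = bias_and_force(x)
        if step % stride == 0:
            heights.append(w0 * np.exp(-v / (kT * (gamma - 1.0))))
            centers.append(x)
        # overdamped Langevin dynamics on the biased potential
        x += dt * (force(x) + fb) + np.sqrt(2.0 * dt * kT) * rng.normal()
    return np.array(centers), np.array(heights)

# usage: double well U(x) = (x^2 - 1)^2, so force = -dU/dx
centers, heights = run_wtmetad(lambda x: -4.0 * x * (x ** 2 - 1.0))
```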
Satellite-based estimation of evapotranspiration in typical forests of China
NASA Astrophysics Data System (ADS)
Wang, Y.; Li, R.
2017-12-01
Evapotranspiration (ET) is the key process affecting the interaction between the land surface and the atmosphere. Satellite remote sensing is the only feasible technique for monitoring terrestrial ET at large scales. The Microwave Emissivity Difference Vegetation Index (EDVI) indicates vegetation water content and can be retrieved under both clear and cloudy skies. Based on EDVI, a quantitative algorithm for ET estimation in China was developed. In this study, we improved the EDVI-based ET algorithm by using datasets from multiple platforms, including the Moderate-Resolution Imaging Spectroradiometer (MODIS), Clouds and the Earth's Radiant Energy System (CERES) and the European Centre for Medium-Range Weather Forecasts (ECMWF). As primary inputs of the algorithm, they are all independent of ground-based measurements. The improved algorithm was tested at three ChinaFlux forest sites: the Dinghushan (DHS) subtropical evergreen broad-leaved forest site, the Qianyanzhou (QYZ) subtropical planted forest site and the Changbaishan (CBS) temperate deciduous broad-leaved and coniferous mixed forest site. Validation against the in-situ measured ETobs from 2003 to 2005 showed that the EDVI-based algorithm can simulate midday ET with reasonable accuracy. In terms of magnitude and seasonal cycle, the estimated ETcal are in very good agreement with the ETobs. The correlation coefficients (R) between ETcal and ETobs during midday vary from 0.51 to 0.80 over the study years, with the annual mean bias (relative bias) ranging from -53.02 W m⁻² (-26.46%) to 34.02 W m⁻² (+23.69%). At the monthly scale, the R between monthly mean ETcal and ETobs reaches 0.83, 0.93 and 0.82 at DHS, QYZ and CBS, with biases of +3.0%, -22.3% and -9.7%, respectively. Contamination from precipitation can partly affect the performance of this algorithm; validation results generally improve after removing samples from rainy days. The results indicate that this EDVI-based algorithm, driven entirely by satellite and reanalysis datasets, has great potential for monitoring terrestrial ET at large spatial scales under both clear and cloudy skies.
Borukhovich, Efim; Du, Guanxing; Stratmann, Matthias; Boeff, Martin; Shchyglo, Oleg; Hartmaier, Alexander; Steinbach, Ingo
2016-01-01
Martensitic steels form a material class with a versatile range of properties that can be selected by varying the processing chain. In order to study and design the desired processing with minimal experimental effort, modeling tools are required. In this work, a full processing cycle from quenching through tempering to mechanical testing is simulated with a single modeling framework that combines the features of the phase-field method and a coupled chemo-mechanical approach. In order to perform the mechanical testing, the mechanical part is extended to the large-deformation case and coupled to crystal plasticity and a linear damage model. The quenching process is governed by the austenite-martensite transformation. In the tempering step, carbon segregation to the grain boundaries and the resulting cementite formation occur. During mechanical testing, the obtained material sample undergoes a large deformation that leads to local failure. The initial formation of the damage zones is observed to happen next to the carbides, while the final damage morphology follows the martensite microstructure. This multi-scale approach can be applied to design optimal microstructures dependent on processing and materials composition. PMID:28773791
Step tracking program for concentrator solar collectors
NASA Astrophysics Data System (ADS)
Ciobanu, D.; Jaliu, C.
2016-08-01
The increasing living standards in developed countries lead to increased energy consumption. The fossil fuel consumption and greenhouse gas emissions that accompany energy production can be reduced by using renewable energy. For instance, solar thermal systems can be used in temperate climates to provide heating during the transitional seasons or cooling during the warmer months. Most solar thermal systems in use contain flat-plate collectors. In order to provide the energy needed by a house cooling system, the cooling machine requires a working fluid at high temperature, which can be supplied by dish concentrator collectors. These collectors are continuously rotated towards the sun by biaxial tracking systems, a process that increases the consumed power. An algorithm for a step tracking program for the orientation of parabolic dish concentrator collectors is proposed in this paper to reduce the power consumed by actuation. The algorithm is illustrated with a case study: a dish concentrator collector to be implemented in Brasov, Romania, a location with a turbidity factor TR equal to 3. The size of the system is imposed by the environment, the diameter of the dish reflector being 3 meters. By applying the proposed algorithm, 60 sub-programs are obtained for the step orientation of the parabolic dish collector over the year. Based on the results of the numerical simulations for the step orientation, the efficiency of direct solar radiation capture on the receptor is up to 99%, while the energy consumption is reduced by almost 80% compared to continuous actuation of the concentrator solar collector.
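The essence of step tracking is to rotate only when the accumulated pointing error would exceed a tolerance, so the actuators stay off most of the time. A minimal sketch using textbook solar-geometry formulas (declination and hour angle; no equation of time or atmospheric refraction); the latitude, dead band and scan resolution are illustrative choices, not the paper's values:

```python
import numpy as np

LAT = np.radians(45.65)  # Brasov, Romania (approximate latitude)

def sun_angles(day_of_year, solar_hour):
    """Simplified solar geometry: altitude and azimuth (degrees)."""
    decl = np.radians(23.45) * np.sin(2 * np.pi * (284 + day_of_year) / 365)
    hour_angle = np.radians(15.0 * (solar_hour - 12.0))
    sin_alt = (np.sin(LAT) * np.sin(decl)
               + np.cos(LAT) * np.cos(decl) * np.cos(hour_angle))
    alt = np.arcsin(sin_alt)
    cos_az = (np.sin(decl) - np.sin(alt) * np.sin(LAT)) / (np.cos(alt) * np.cos(LAT))
    az = np.arccos(np.clip(cos_az, -1.0, 1.0))   # from north, morning side
    if hour_angle > 0:
        az = 2 * np.pi - az                      # mirror for the afternoon
    return np.degrees(alt), np.degrees(az)

def step_schedule(day_of_year, dead_band_deg=2.0):
    """Emit a rotation step only when the tracking error on either axis
    would exceed the dead band; between steps the actuator is off."""
    steps, last = [], None
    for h in np.arange(6.0, 20.0, 1.0 / 60.0):   # one-minute scan
        alt, az = sun_angles(day_of_year, h)
        if alt <= 0:
            continue
        if last is None or max(abs(alt - last[0]), abs(az - last[1])) > dead_band_deg:
            steps.append((round(h, 2), round(alt, 1), round(az, 1)))
            last = (alt, az)
    return steps

print(len(step_schedule(172)), "steps on the summer solstice")
```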
Conservation biogeography of red oaks (Quercus, section Lobatae) in Mexico and Central America.
Torres-Miranda, Andrés; Luna-Vega, Isolda; Oyama, Ken
2011-02-01
Oaks are dominant trees and key species in many temperate and subtropical forests in the world. In this study, we analyzed patterns of distribution of red oaks (Quercus, section Lobatae) occurring in Mexico and Central America to determine areas of species richness and endemism and to propose priority areas for conservation. Patterns of richness and endemism of 75 red oak species were analyzed using three different units. Two complementarity algorithms based on species richness and three algorithms based on species rarity were used to identify important areas for conservation. A simulated annealing analysis was performed to evaluate and formulate effective new reserves for red oaks that are also useful for conserving the ecosystems associated with them, following the systematic conservation planning approach. Two main centers of species richness were detected. The northern Sierra Madre Oriental and Serranías Meridionales of Jalisco had the highest values of endemism. Fourteen areas were considered as priorities for conservation of red oak species based on the 26 priority political entities, 11 floristic units and the priority grid-cells obtained in the complementarity analysis. In the present network of Natural Protected Areas in Mexico and Central America, only 41.3% (31 species) of the red oak species are protected. The simulated annealing analysis indicated that to protect all 75 species of red oaks, 12 current natural protected areas need to be expanded by 120,000 ha of additional land, and 26 new natural protected areas covering 512,500 ha need to be created. Red oaks are a useful model for identifying areas for conservation based on species richness and endemism because of their wide geographic distribution and high number of species. We evaluated and reformulated new reserves for red oaks that are also useful for the conservation of the ecosystems associated with them.
D. Bachelet; J. Lenihan; R. Neilson; R. Drapek; T. Kittel
2005-01-01
The dynamic global vegetation model MC1 was used to examine climate, fire, and ecosystems interactions in Alaska under historical (1922-1996) and future (1997-2100) climate conditions. Projections show that by the end of the 21st century, 75%-90% of the area simulated as tundra in 1922 is replaced by boreal and temperate forest. From 1922 to 1996, simulation results...
Veselý, Lukáš; Buřič, Miloš; Kouba, Antonín
2015-01-01
The spreading of new crayfish species poses a serious risk to freshwater ecosystems: because crayfish are omnivores, they influence more than one level of the trophic chain, and they represent a significant part of the benthic biomass. Both environmental change through global warming and the expansion of the pet trade increase the possibilities of their spreading. We investigated the potential of four “warm water” highly invasive crayfish species to overwinter in the temperate zone, so as to predict whether these species pose a risk to European freshwaters. We used 15 specimens of each of the following species: the red swamp crayfish (Procambarus clarkii), the marbled crayfish (Procambarus fallax f. virginalis), the yabby (Cherax destructor), and the redclaw (Cherax quadricarinatus). Specimens were acclimatized and kept for 6.5 months at temperatures simulating the winter temperature regime of European temperate zone lentic ecosystems. We conclude that the red swamp crayfish, marbled crayfish and yabby have the ability to withstand low winter temperatures relevant for lentic habitats in the European temperate zone, making them a serious invasive threat to freshwater ecosystems. PMID:26572317
Realizing Mitigation Efficiency of European Commercial Forests by Climate Smart Forestry.
Yousefpour, Rasoul; Augustynczik, Andrey Lessa Derci; Reyer, Christopher P O; Lasch-Born, Petra; Suckow, Felicitas; Hanewinkel, Marc
2018-01-10
European temperate and boreal forests sequester up to 12% of Europe's annual carbon emissions. Forest carbon density can be manipulated through management to maximize its climate mitigation potential, and fast-growing tree species may contribute the most to Climate Smart Forestry (CSF) compared to slow-growing hardwoods. This type of CSF takes into account not only forest resource potentials in sequestering carbon, but also the economic impact of regional forest products, and discounts both variables over time. We used the process-based forest model 4C to simulate the growth conditions of European commercial forests and coupled it with an optimization algorithm to simulate the implementation of CSF for 18 European countries encompassing 68.3 million ha of forest (42.4% of total EU-28 forest area). We found a European CSF policy that could sequester 7.3-11.1 billion tons of carbon, projected to be worth 103 to 141 billion euros in the 21st century. An efficient CSF policy would allocate carbon sequestration to European countries with a lower wood price, lower labor costs, high harvest costs, or a mixture thereof to increase its economic efficiency. This policy prioritized the allocation of mitigation efforts to northern, eastern and central European countries and favored the fast-growing conifers Picea abies and Pinus sylvestris over the broadleaves Fagus sylvatica and Quercus species.
LibKiSAO: a Java library for Querying KiSAO.
Zhukova, Anna; Adams, Richard; Laibe, Camille; Le Novère, Nicolas
2012-09-24
The Kinetic Simulation Algorithm Ontology (KiSAO) supplies information about existing algorithms available for the simulation of Systems Biology models, their characteristics, parameters and inter-relationships. KiSAO enables the unambiguous identification of algorithms from simulation descriptions. Information about analogous methods having similar characteristics and about algorithm parameters incorporated into KiSAO is desirable for simulation tools. To retrieve this information programmatically, an application programming interface (API) for KiSAO is needed. We developed libKiSAO, a Java library that enables querying of KiSAO. It implements methods to retrieve information about simulation algorithms stored in KiSAO, their characteristics and parameters, and methods to query the algorithm hierarchy and search for similar algorithms providing comparable results for the same simulation set-up. Using libKiSAO, simulation tools can make logical inferences based on this knowledge and choose the most appropriate algorithm to perform a simulation. LibKiSAO also enables simulation tools to handle a wider range of simulation descriptions by determining which of the available methods are similar and can be used instead of the one indicated in the simulation description if that one is not implemented. LibKiSAO enables Java applications to easily access information about simulation algorithms, their characteristics and parameters stored in the OWL-encoded Kinetic Simulation Algorithm Ontology. LibKiSAO can be used by simulation description editors and simulation tools to improve the reproducibility of computational simulation tasks and facilitate model re-use.
Using sketch-map coordinates to analyze and bias molecular dynamics simulations
Tribello, Gareth A.; Ceriotti, Michele; Parrinello, Michele
2012-01-01
When examining complex problems, such as the folding of proteins, coarse grained descriptions of the system drive our investigation and help us to rationalize the results. Oftentimes collective variables (CVs), derived through some chemical intuition about the process of interest, serve this purpose. Because finding these CVs is the most difficult part of any investigation, we recently developed a dimensionality reduction algorithm, sketch-map, which can be used to build a low-dimensional map of a high-dimensional phase space. In this paper we discuss how these machine-generated CVs can be used to accelerate the exploration of phase space and to reconstruct free-energy landscapes. To do so, we develop a formalism in which high-dimensional configurations are no longer represented by low-dimensional position vectors. Instead, for each configuration we calculate a probability distribution, which has a domain that encompasses the entirety of the low-dimensional space. To construct a biasing potential, we exploit an analogy with metadynamics and use the trajectory to adaptively construct a repulsive, history-dependent bias from the distributions that correspond to the previously visited configurations. This potential forces the system to explore more of phase space by making it desirable to adopt configurations whose distributions do not overlap with the bias. We apply this algorithm to a small model protein and succeed in reproducing the free-energy surface that we obtain from a parallel tempering calculation. PMID:22427357
Haldar, Susanta; Kührová, Petra; Banáš, Pavel; Spiwok, Vojtěch; Šponer, Jiří; Hobza, Pavel; Otyepka, Michal
2015-08-11
RNA hairpins capped by 5'-GNRA-3' or 5'-UNCG-3' tetraloops (TLs) are prominent RNA structural motifs. Despite their small size, a wealth of experimental data, and recent progress in theoretical simulations of their structural dynamics and folding, our understanding of the folding and unfolding processes of these small RNA elements is still limited. Theoretical description of the folding and unfolding processes requires robust sampling, which can be achieved by either an exhaustive time scale in standard molecular dynamics simulations or sophisticated enhanced sampling methods, using temperature acceleration or biasing potentials. Here, we study structural dynamics of 5'-GNRA-3' and 5'-UNCG-3' TLs by 15-μs-long standard simulations and a series of well-tempered metadynamics, attempting to accelerate sampling by bias in a few chosen collective variables (CVs). Both methods provide useful insights. The unfolding and refolding mechanisms of the GNRA TL observed by well-tempered metadynamics agree with the (reverse) folding mechanism suggested by recent replica exchange molecular dynamics simulations. The orientation of the glycosidic bond of the GL4 nucleobase is critical for the UUCG TL folding pathway, and our data strongly support the hypothesis that GL4-anti forms a kinetic trap along the folding pathway. Along with giving useful insight, our study also demonstrates that using only a few CVs apparently does not capture the full folding landscape of the RNA TLs. Despite using several sophisticated selections of the CVs, formation of the loop appears to remain a hidden variable, preventing a full convergence of the metadynamics. Finally, our data suggest that the unfolded state might be overstabilized by the force fields used.
Simulating the onset of spring vegetation growth across the Northern Hemisphere.
Liu, Qiang; Fu, Yongshuo H; Liu, Yongwen; Janssens, Ivan A; Piao, Shilong
2018-03-01
Changes in the spring onset of vegetation growth in response to climate change can profoundly impact climate-biosphere interactions. Thus, robust simulation of spring onset is essential to accurately predict ecosystem responses and feedbacks to ongoing climate change. To date, the ability of vegetation phenology models to reproduce spatiotemporal patterns of spring onset at larger scales has not been thoroughly investigated. In this study, we took advantage of phenology observations via remote sensing to calibrate and evaluate six models, including both one-phase (considering only forcing temperatures) and two-phase (involving forcing, chilling, and photoperiod) models, across the Northern Hemisphere between 1982 and 2012. Overall, we found that the model that integrated the photoperiod effect performed best at capturing spatiotemporal patterns of spring phenology in boreal and temperate forests. By contrast, all of the models performed poorly in simulating the onset of growth in grasslands. These results suggest that the photoperiod plays a role in controlling the onset of growth in most Northern Hemisphere forests, whereas other environmental factors (e.g., precipitation) should be considered when simulating the onset of growth in grasslands. We also found that the one-phase model performed as well as the two-phase models in boreal forests, which implies that the chilling requirement is probably fulfilled across most of the boreal zone. Conversely, two-phase models performed better in temperate forests than the one-phase model, suggesting that photoperiod and chilling play important roles in these temperate forests. Our results highlight the significance of including chilling and photoperiod effects in models of the spring onset of forest growth at large scales, and indicate that the consideration of additional drivers may be required for grasslands. © 2017 John Wiley & Sons Ltd.
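For reference, the one-phase (thermal-time) model class referred to above fits in a few lines; a minimal sketch with placeholder parameter values (in the study these would be calibrated against the remote-sensing phenology):

```python
def one_phase_onset(daily_temp, t_base=5.0, f_crit=120.0, t0=1):
    """One-phase spring-onset model: accumulate daily forcing units
    max(T - t_base, 0) from day-of-year t0; onset is predicted when the
    running sum reaches f_crit. Two-phase models add a chilling sum and/or
    a photoperiod multiplier to this same loop."""
    forcing = 0.0
    for day, temp in enumerate(daily_temp, start=1):
        if day < t0:
            continue
        forcing += max(temp - t_base, 0.0)
        if forcing >= f_crit:
            return day        # predicted onset (day of year)
    return None               # criterion never met that year
```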
Wood phenology: from organ-scale processes to terrestrial ecosystem models
NASA Astrophysics Data System (ADS)
Delpierre, Nicolas; Guillemot, Joannès
2016-04-01
In temperate and boreal trees, a dormancy period prevents organ development during adverse climatic conditions. Whereas the phenology of leaves and flowers has received considerable attention, to date, little is known regarding the phenology of other tree organs such as wood, fine roots, fruits and reserve compounds. In this presentation, we review both the role of environmental drivers in determining the phenology of wood and the models used to predict its phenology in temperate and boreal forest trees. Temperature is a key driver of the resumption of wood activity in spring. There is no such clear dominant environmental cue involved in the cessation of wood formation in autumn, but temperature and water stress appear as prominent factors. We show that wood phenology is a key driver of the interannual variability of wood growth in temperate tree species. Incorporating representations of wood phenology in a terrestrial ecosystem model substantially improved the simulation of wood growth under current climate.
Migration behaviour of silicone moulds in contact with different foodstuffs.
Helling, Ruediger; Kutschbach, Katja; Joachim Simat, Thomas
2010-03-01
Various foodstuffs were prepared in silicone baking moulds and analyzed for siloxane migration using a previously developed and validated ¹H-NMR method. Meat loaf significantly exceeded the overall migration limit of 60 mg kg⁻¹ (10 mg sdm⁻¹) in the first and third experiments. The highest siloxane migration found in a meat loaf after preparation in a commercial mould was 177 mg kg⁻¹. In contrast, milk-based food showed very low or non-detectable migration (<2.4 mg kg⁻¹), even at high fat levels. Similar results were achieved using 50% ethanol as the simulant for milk-based products, as defined in the Plastics Directive 2007/19/EC. After solvent extraction of the moulds, simulating long-term usage, no further migration into the food was detectable, indicating that there is no significant formation of low molecular weight, potentially migrating siloxanes from the elastomer. During repeated usage, the moulds showed a high uptake of fat: up to 8.0 g fat per kg elastomer. Proper tempering of the moulds had a major influence on the migration properties of siloxanes into different foodstuffs. Non-tempered moulds with a high level of volatile organic compounds (1.1%) were shown to have considerably higher migration than the equivalent tempered moulds.
Model-data fusion across ecosystems: from multisite optimizations to global simulations
NASA Astrophysics Data System (ADS)
Kuppel, S.; Peylin, P.; Maignan, F.; Chevallier, F.; Kiely, G.; Montagnani, L.; Cescatti, A.
2014-11-01
This study uses a variational data assimilation framework to simultaneously constrain a global ecosystem model with eddy covariance measurements of daily net ecosystem exchange (NEE) and latent heat (LE) fluxes from a large number of sites grouped in seven plant functional types (PFTs). It is an attempt to bridge the gap between the numerous site-specific parameter optimization works found in the literature and the generic parameterization used by most land surface models within each PFT. The present multisite approach allows us to derive PFT-generic sets of optimized parameters that enhance the agreement between measured and simulated fluxes at most of the sites considered, with performances often comparable to those of the corresponding site-specific optimizations. Besides reducing the PFT-averaged model-data root-mean-square difference (RMSD) and the associated daily output uncertainty, the optimization improves the simulated CO2 balance at tropical and temperate forest sites. The major site-level NEE adjustments at the seasonal scale are reduced amplitude in C3 grasslands and boreal forests, increased seasonality in temperate evergreen forests, and better model-data phasing in temperate deciduous broadleaf forests. Conversely, the poorer performance in tropical evergreen broadleaf forests points to deficiencies in the modelling of phenology and soil water stress for this PFT. An evaluation with data-oriented estimates of photosynthesis (GPP - gross primary productivity) and ecosystem respiration (Reco) rates indicates distinctly improved simulations of both gross fluxes. The multisite parameter sets are then tested against CO2 concentrations measured at 53 locations around the globe, showing significant adjustments of the modelled seasonality of atmospheric CO2 concentration, whose relevance seems PFT-dependent, along with improved interannual variability. Lastly, a global-scale evaluation with remote sensing NDVI (normalized difference vegetation index) measurements indicates an improvement in the simulated seasonal variations of the foliar cover for all considered PFTs.
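The generic shape of such a variational optimization is a quadratic cost combining model-data misfit with a background (prior) term. A toy sketch, assuming a hypothetical two-parameter stand-in model rather than the land surface model actually optimized here:

```python
import numpy as np
from scipy.optimize import minimize

def make_cost(model, p_prior, b_inv, obs, obs_sigma):
    """Variational cost J(p): observation misfit plus a background term
    weighted by the inverse prior covariance b_inv."""
    def cost(p):
        resid = (model(p) - obs) / obs_sigma
        dp = p - p_prior
        return 0.5 * resid @ resid + 0.5 * dp @ b_inv @ dp
    return cost

# toy "flux model": one parameter scales a seasonal cycle, one is an offset
t = np.linspace(0.0, 1.0, 365)
model = lambda p: p[0] * np.sin(2 * np.pi * t) + p[1]
obs = model(np.array([2.0, 0.5])) \
      + np.random.default_rng(2).normal(0.0, 0.3, t.size)

p_prior = np.array([1.0, 0.0])
res = minimize(make_cost(model, p_prior, np.eye(2) * 0.1, obs, 0.3), p_prior)
print(res.x)   # recovers roughly [2.0, 0.5]
```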
Churski, Marcin; Bubnicki, Jakub W; Jędrzejewska, Bogumiła; Kuijper, Dries P J; Cromsigt, Joris P G M
2017-04-01
Plant biomass consumers (mammalian herbivores and fire) are increasingly seen as major drivers of ecosystem structure and function, but the prevailing paradigm in temperate forest ecology is still that forest dynamics are mainly bottom-up resource-controlled. Using conceptual advances from savanna ecology, particularly the demographic bottleneck model, we present a novel view of temperate forest dynamics that integrates consumer and resource control. We used a fully factorial experiment, with varying levels of ungulate herbivory and resource (light) availability, to investigate how these factors shape recruitment of five temperate tree species. We ran simulations to project how inter- and intraspecific differences in height increment under the different experimental scenarios influence long-term recruitment of tree species. Strong herbivore-driven demographic bottlenecks occurred in our temperate forest system, and the bottlenecks were as strong under resource-rich as under resource-poor conditions. Increased browsing by herbivores in resource-rich patches strongly counteracted the increased escape strength of saplings in these patches. This finding is a crucial extension of the demographic bottleneck model, which assumes that increased resource availability allows plants to more easily escape consumer-driven bottlenecks. Our study demonstrates that a more dynamic understanding of consumer-resource interactions is necessary, where consumers and plants both respond to resource availability. © 2016 The Authors. New Phytologist © 2016 New Phytologist Trust.
Exhaustively sampling peptide adsorption with metadynamics.
Deighan, Michael; Pfaendtner, Jim
2013-06-25
Simulating the adsorption of a peptide or protein and obtaining quantitative estimates of thermodynamic observables remains challenging for many reasons. One reason is the dearth of molecular-scale experimental data available for validating such computational models. We also lack simulation methodologies that effectively address the dual challenges of simulating protein adsorption: overcoming strong surface binding and sampling conformational changes. Unbiased classical simulations do not address either of these challenges. Previous attempts that apply enhanced sampling generally focus on only one of the two issues, leaving the other to chance or brute-force computing. To improve our ability to accurately resolve adsorbed protein orientation and conformational states, we have applied the Parallel Tempering Metadynamics in the Well-Tempered Ensemble (PTMetaD-WTE) method to several explicitly solvated protein/surface systems. We simulated the adsorption behavior of two peptides, LKα14 and LKβ15, onto two self-assembled monolayer (SAM) surfaces with carboxyl and methyl terminal functionalities. PTMetaD-WTE proved effective at achieving rapid convergence of the simulations, whose results elucidated different aspects of peptide adsorption, including binding free energies, side-chain orientations, and preferred conformations. We investigated how specific molecular features of the surface/protein interface change the shape of the multidimensional peptide binding free energy landscape. Additionally, we compared our enhanced sampling technique with umbrella sampling and also evaluated three commonly used molecular dynamics force fields.
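The parallel-tempering ingredient of PTMetaD-WTE rests on the standard replica-swap test; a minimal sketch follows (in the well-tempered-ensemble variant the energies entering the test would also include the metadynamics bias, which is omitted here):

```python
import numpy as np

def attempt_pt_swap(beta_i, beta_j, e_i, e_j, rng=np.random.default_rng(3)):
    """Standard parallel-tempering exchange between two replicas: accept a
    configuration swap with probability
    min(1, exp[(beta_i - beta_j) * (E_i - E_j)])."""
    log_alpha = (beta_i - beta_j) * (e_i - e_j)
    return np.log(rng.random()) < log_alpha
```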
Lu, Wei; Fan, Wen Yi; Tian, Tian
2016-05-01
Keeping other parameters fixed at empirical constants, different numerical combinations of the main photosynthetic parameters Vcmax and Jmax were tested to estimate daily GPP by an iterative method. To optimize Vcmax and Jmax in the BEPSHourly model at hourly time steps, daily GPP simulated with the different parameter combinations was compared with flux tower data from the temperate deciduous broad-leaved forest of the Maoershan Forest Farm in Northeast China. Comparing the simulated daily GPP with the observed flux data in 2011, the results showed that the optimal Vcmax and Jmax for the deciduous broad-leaved forest in Northeast China were 41.1 μmol·m⁻²·s⁻¹ and 82.8 μmol·m⁻²·s⁻¹, respectively, with a minimal RMSE of 1.10 g C·m⁻²·d⁻¹ and a maximum R² of 0.95. After Vcmax and Jmax optimization, the BEPSHourly model simulated the seasonal variation of GPP better.
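As an illustration of this kind of two-parameter calibration, a brute-force sketch; gpp_model is a hypothetical stand-in for BEPSHourly, and the search ranges and step sizes are invented:

```python
import numpy as np

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))

def calibrate(gpp_model, flux_gpp, drivers):
    """Grid search over (Vcmax, Jmax) combinations, scoring each simulated
    daily-GPP series against tower-derived GPP by RMSE: the brute-force
    analogue of the iterative optimization described above."""
    best = (None, np.inf)
    for vcmax in np.arange(20.0, 80.0, 0.5):
        for jmax in np.arange(40.0, 160.0, 0.5):
            err = rmse(gpp_model(vcmax, jmax, drivers), flux_gpp)
            if err < best[1]:
                best = ((vcmax, jmax), err)
    return best   # ((Vcmax, Jmax), minimal RMSE)
```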
Terrestrial biosphere changes over the last 120 kyr
NASA Astrophysics Data System (ADS)
Hoogakker, B. A. A.; Smith, R. S.; Singarayer, J. S.; Marchant, R.; Prentice, I. C.; Allen, J. R. M.; Anderson, R. S.; Bhagwat, S. A.; Behling, H.; Borisova, O.; Bush, M.; Correa-Metrio, A.; de Vernal, A.; Finch, J. M.; Fréchette, B.; Lozano-Garcia, S.; Gosling, W. D.; Granoszewski, W.; Grimm, E. C.; Grüger, E.; Hanselman, J.; Harrison, S. P.; Hill, T. R.; Huntley, B.; Jiménez-Moreno, G.; Kershaw, P.; Ledru, M.-P.; Magri, D.; McKenzie, M.; Müller, U.; Nakagawa, T.; Novenko, E.; Penny, D.; Sadori, L.; Scott, L.; Stevenson, J.; Valdes, P. J.; Vandergoes, M.; Velichko, A.; Whitlock, C.; Tzedakis, C.
2016-01-01
A new global synthesis and biomization of long (> 40 kyr) pollen-data records is presented and used with simulations from the HadCM3 and FAMOUS climate models and the BIOME4 vegetation model to analyse the dynamics of the global terrestrial biosphere and carbon storage over the last glacial-interglacial cycle. Simulated biome distributions using BIOME4 driven by HadCM3 and FAMOUS at the global scale over time generally agree well with those inferred from pollen data. Global average areas of grassland and dry shrubland, desert, and tundra biomes show large-scale increases during the Last Glacial Maximum, between ca. 64 and 74 ka BP and cool substages of Marine Isotope Stage 5, at the expense of the tropical forest, warm-temperate forest, and temperate forest biomes. These changes are reflected in BIOME4 simulations of global net primary productivity, showing good agreement between the two models. Such changes are likely to affect terrestrial carbon storage, which in turn influences the stable carbon isotopic composition of seawater as terrestrial carbon is depleted in 13C.
δ15N constraints on long-term nitrogen balances in temperate forests
Perakis, S.S.; Sinkhorn, E.R.; Compton, J.E.
2011-01-01
Biogeochemical theory emphasizes nitrogen (N) limitation and the many factors that can restrict N accumulation in temperate forests, yet lacks a working model of conditions that can promote naturally high N accumulation. We used a dynamic simulation model of ecosystem N and δ15N to evaluate which combination of N input and loss pathways could produce a range of high ecosystem N contents characteristic of forests in the Oregon Coast Range. Total ecosystem N at nine study sites ranged from 8,788 to 22,667 kg ha−1 and carbon (C) ranged from 188 to 460 Mg ha−1, with highest values near the coast. Ecosystem δ15N displayed a curvilinear relationship with ecosystem N content, and largely reflected mineral soil, which accounted for 96–98% of total ecosystem N. Model simulations of ecosystem N balances parameterized with field rates of N leaching required long-term average N inputs that exceed atmospheric deposition and asymbiotic and epiphytic N2-fixation, and that were consistent with cycles of post-fire N2-fixation by early-successional red alder. Soil water δ15N-NO3− patterns suggested a shift in relative N losses from denitrification to nitrate leaching as N accumulated, and simulations identified nitrate leaching as the primary N loss pathway that constrains maximum N accumulation. Whereas current theory emphasizes constraints on biological N2-fixation and disturbance-mediated N losses as factors that limit N accumulation in temperate forests, our results suggest that wildfire can foster substantial long-term N accumulation in ecosystems that are colonized by symbiotic N2-fixing vegetation.
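The core of such a model is a coupled mass balance on N and on the product N·δ15N, with isotopic fractionation assigned to the loss pathways. A deliberately minimal single-pool sketch; all rate constants, the input signature and the denitrification fractionation factor are illustrative, not the paper's calibrated values:

```python
def simulate_n_delta15n(n0=500.0, d0=0.0, years=2000,
                        n_in=20.0, d_in=-1.0,
                        k_leach=0.002, k_denit=0.001, eps_denit=15.0):
    """Single-pool ecosystem N / delta15N box model (annual Euler steps).
    Leaching is treated as non-fractionating; denitrification removes N
    depleted by eps_denit permil, enriching the remaining pool."""
    n, d = n0, d0
    for _ in range(years):
        leach = k_leach * n
        denit = k_denit * n
        # isotope mass balance on the product N * delta
        nd = n * d + n_in * d_in - leach * d - denit * (d - eps_denit)
        n = n + n_in - leach - denit
        d = nd / n
    return n, d   # kg N/ha and permil

print(simulate_n_delta15n())
```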
Phosphorus limits Eucalyptus grandis seedling growth in an unburnt rain forest soil
Tng, David Y. P.; Janos, David P.; Jordan, Gregory J.; Weber, Ellen; Bowman, David M. J. S.
2014-01-01
Although rain forest is characterized as pyrophobic, pyrophilic giant eucalypts grow as rain forest emergents in both temperate and tropical Australia. In temperate Australia, such eucalypts depend on extensive, infrequent fires to produce conditions suitable for seedling growth. Little is known, however, about constraints on seedlings of tropical giant eucalypts. We tested whether seedlings of Eucalyptus grandis experience edaphic constraints similar to their temperate counterparts. We hypothesized that phosphorus addition would alleviate edaphic constraints. We grew seedlings in a factorial experiment combining fumigation (to simulate nutrient release and soil pasteurization by fire), soil type (E. grandis forest versus rain forest soil) and phosphorus addition as factors. We found that phosphorus was the principal factor limiting E. grandis seedling survival and growth in rain forest soil, and that fumigation enhanced survival of seedlings in both E. grandis forest and rain forest soil. We conclude that, similar to edaphic constraints on temperate giant eucalypts, mineral nutrient and biotic attributes of a tropical rain forest soil may hamper E. grandis seedling establishment. In rain forest soil, E. grandis seedlings benefited from conditions akin to a fire-generated ashbed (i.e., an “ashbed effect”). PMID:25339968
Sliding mode controllers for a tempered glass furnace.
Almutairi, Naif B; Zribi, Mohamed
2016-01-01
This paper investigates the design of two sliding mode controllers (SMCs) applied to a tempered glass furnace system. The main objective of the proposed controllers is to regulate the glass plate temperature, the upper-wall temperature and the lower-wall temperature in the furnace to a common desired temperature. The first controller is a conventional sliding mode controller. The key step in the design of this controller is the introduction of a nonlinear transformation that maps the dynamic model of the tempered glass furnace into the generalized controller canonical form; this step facilitates the design of the sliding mode controller. The second controller is based on a state-dependent coefficient (SDC) factorization of the tempered glass furnace dynamic model. Using an SDC factorization, a simplified sliding mode controller is designed. The simulation results indicate that both proposed control schemes perform well. Moreover, the robustness of the control schemes to changes in the system's parameters as well as to disturbances is investigated. In addition, the proposed control schemes are compared with a fuzzy PID controller; the comparison shows that the proposed SDC-based sliding mode controller performs best. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
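To make the control law concrete, a toy sketch of a smoothed sliding mode controller on a first-order thermal model; the plant equation, gains and temperatures are invented stand-ins for the furnace dynamics in the paper:

```python
import numpy as np

def simulate_smc(t_ref=620.0, a=0.02, k=40.0, phi=5.0, dt=0.1, n_steps=5000):
    """Sliding mode control of a toy furnace zone dT/dt = a*(u - T).
    Sliding surface s = T - t_ref; the discontinuous sign(s) term is
    smoothed with tanh(s/phi) to limit chattering near the surface."""
    temps, temp = [], 25.0
    for _ in range(n_steps):
        s = temp - t_ref
        u = t_ref - (k / a) * np.tanh(s / phi)   # nominal + switching term
        temp += dt * a * (u - temp)              # explicit Euler plant step
        temps.append(temp)
    return np.array(temps)

print(simulate_smc()[-1])   # settles at the desired 620
```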
Adamo, Shelley A; Baker, Jillian L; Lovett, Maggie M E; Wilson, Graham
2012-12-01
Climate change will result in warmer temperatures and an increase in the frequency and severity of extreme weather events. Given that higher temperatures increase the reproductive rate of temperate zone insects, insect population growth rates are predicted to increase in the temperate zone in response to climate change. This consensus, however, rests on the assumption that food is freely available. Under conditions of limited food, the reproductive output of the Texan cricket Gryllus texensis (Cade and Otte) was highest at its current normal average temperature and declined with increasing temperature. Moreover, low food availability decreased survival during a simulated heat wave. Therefore, the effects of climate change on this species, and possibly on many others, are likely to hinge on food availability. Extrapolation from our data suggests that G. texensis will show larger yearly fluctuations in population size as climate change continues, and this will also have ecological repercussions. Only those temperate zone insects with a ready supply of food (e.g., agricultural pests) are likely to experience the predicted increase in population growth in response to climate change; food-limited species are likely to experience a population decline.
Application of Simulated Annealing and Related Algorithms to TWTA Design
NASA Technical Reports Server (NTRS)
Radke, Eric M.
2004-01-01
Simulated Annealing (SA) is a stochastic optimization algorithm used to search for global minima in complex design surfaces where exhaustive searches are not computationally feasible. The algorithm is derived by simulating the annealing process, whereby a solid is heated to a liquid state and then cooled slowly to reach thermodynamic equilibrium at each temperature. The idea is that atoms in the solid continually bond and re-bond at various quantum energy levels, and with sufficient cooling time they will rearrange at the minimum energy state to form a perfect crystal. The distribution of energy levels is given by the Boltzmann distribution: as temperature drops, the probability of the presence of high-energy bonds decreases. In searching for an optimal design, local minima and discontinuities are often present in a design surface. SA presents a distinct advantage over other optimization algorithms in its ability to escape from these local minima. Just as high-energy atomic configurations are visited in the actual annealing process in order to eventually reach the minimum energy state, in SA highly non-optimal configurations are visited in order to find otherwise inaccessible global minima. The SA algorithm produces a Markov chain of points in the design space at each temperature, with a monotonically decreasing temperature. The chain starts from a random point, at which the objective function is evaluated. A stochastic perturbation is then made to the parameters of the point to arrive at a proposed new point in the design space, at which the objective function is evaluated as well. If the change in objective function values, ΔE, is negative, the proposed new point is accepted. If ΔE is positive, the proposed new point is accepted according to the Metropolis criterion: ρ(ΔE) = exp(−ΔE/T), where T is the temperature for the current Markov chain. The process then repeats for the remainder of the Markov chain, after which the temperature is decremented and a new chain begins. Eventually (and hopefully), a near-globally optimal solution is attained as T approaches zero. Several exciting variants of SA have recently emerged, including Discrete-State Simulated Annealing (DSSA) and Simulated Tempering (ST). The DSSA algorithm takes the thermodynamic analogy one step further by categorizing objective function evaluations into discrete states. In doing so, many of the case-specific problems associated with fine-tuning the SA algorithm can be avoided; for example, theoretical approximations for the initial and final temperature can be derived independently of the case. In this manner, DSSA provides a scheme that is more robust with respect to widely differing design surfaces. ST differs from SA in that the temperature T becomes an additional random variable in the optimization. The system is also kept in equilibrium as the temperature changes, as opposed to the system being driven out of equilibrium as temperature changes in SA. ST is designed to overcome obstacles in design surfaces where numerous local minima are separated by high barriers. These algorithms are incorporated into the optimal design of the traveling-wave tube amplifier (TWTA). The area under scrutiny is the collector, in which it would be ideal to use negative potential to decelerate the spent electron beam to zero kinetic energy just as it reaches the collector surface.
In reality this is not achievable due to a number of physical limitations, including repulsion and differing levels of kinetic energy among individual electrons. Instead, the collector is designed with multiple stages depressed below ground potential. The design of this multiple-stage collector is the optimization problem of interest. One remaining problem in SA and DSSA is the difficulty of determining when equilibrium has been reached so that the current Markov chain can be terminated. It has been suggested in recent literature that estimating the thermodynamic properties specific heat, entropy, and internal energy from the Boltzmann distribution can provide good indicators of having reached equilibrium at a certain temperature. These properties are tested for their efficacy and implemented in SA and DSSA code with respect to TWTA collector optimization.
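The loop structure described above translates directly into code. A generic sketch with a geometric cooling schedule; the objective, neighborhood move and all schedule constants are placeholders rather than anything specific to the collector problem:

```python
import math
import random

def simulated_annealing(energy, neighbor, x0, t0=100.0, t_min=1e-3,
                        alpha=0.95, chain_len=200, seed=42):
    """Textbook SA skeleton matching the description above: one Markov
    chain per temperature, Metropolis acceptance exp(-dE/T) for uphill
    moves, geometric cooling t <- alpha * t between chains."""
    rng = random.Random(seed)
    x, e = x0, energy(x0)
    best, best_e = x, e
    t = t0
    while t > t_min:
        for _ in range(chain_len):            # Markov chain at fixed T
            x_new = neighbor(x, rng)
            e_new = energy(x_new)
            d_e = e_new - e
            if d_e <= 0 or rng.random() < math.exp(-d_e / t):
                x, e = x_new, e_new
                if e < best_e:
                    best, best_e = x, e
        t *= alpha                            # decrement T, start new chain
    return best, best_e
```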
CESM-simulated 21st Century Changes in Large Scale Crop Water Requirements and Yields
NASA Astrophysics Data System (ADS)
Levis, S.; Badger, A.; Drewniak, B. A.; O'Neill, B. C.; Ren, X.
2014-12-01
We assess potential changes in crop water requirements and corresponding yields relative to the late 20th century in major crop producing regions of the world by using the Community Land Model (CLM) driven with 21st century meteorology from RCP8.5 and RCP4.5 Community Earth System Model (CESM) simulations. The RCP4.5 simulation allows us to explore the potential for averted societal impacts when compared to the RCP8.5 simulation. We consider the possibility of increased yields and improved water use efficiency under conditions of elevated atmospheric CO2 due to the CO2 fertilization effect (also known as the concentration-carbon feedback). We address uncertainty in the current understanding of plant CO2 fertilization by repeating the simulations with and without the CO2 fertilization effect. Simulations without CO2 fertilization represent the radiative effect of elevated CO2 (i.e., warming) without representing the physiological effect of elevated CO2 (enhanced carbon uptake and increased water use efficiency by plants during photosynthesis). Preliminary results suggest that some plants may suffer from increasing heat and drought in much of the world without the CO2 fertilization effect. On the other hand, plants (especially C3) tend to grow more with less water when models include the CO2 fertilization effect. Performing 21st century simulations with and without the CO2 fertilization effect brackets the potential range of outcomes. In this work we use the CLM crop model, which includes specific crop types that differ from the model's default plant functional types in that the crops get planted, harvested, and potentially fertilized and irrigated according to algorithms that attempt to capture human management decisions. We use an updated version of CLM4.5 that includes cotton, rice, sugarcane, spring wheat, spring barley, and spring rye, as well as temperate and tropical maize and soybean.
Development of a Dependency Theory Toolbox for Database Design.
1987-12-01
Much of the theory needed to design and study relational databases exists in the form of published algorithms and theorems. However, hand simulating these algorithms can be a tedious and error-prone chore. Therefore, a toolbox of algorithms and ...
2017-08-10
... simulation models the conformational plasticity along the helix-forming reaction coordinate was limited by free-energy barriers. By comparison the coarse ... revealed. The latter becomes evident in comparing the energy Z-score landscapes, where the CHARMM22 simulation shows a manifold of shuttling ... solvent simulations of calculating the charging free energy of protein conformations [33]. Deviation from the protocol by modification of Born radii ...
Fractal Landscape Algorithms for Environmental Simulations
NASA Astrophysics Data System (ADS)
Mao, H.; Moran, S.
2014-12-01
Natural science and geographical research are now able to take advantage of environmental simulations that more accurately test experimental hypotheses, resulting in deeper understanding. Experiments affected by the natural environment can benefit from 3D landscape simulations capable of reproducing a variety of terrains and environmental phenomena. Such simulations can employ random terrain generation algorithms that dynamically simulate environments to test specific models against a variety of factors. Through the use of noise functions such as Perlin noise and Simplex noise, together with the diamond-square algorithm, computers can generate simulations that model a variety of landscapes and ecosystems. This study shows how these algorithms work together to create realistic landscapes. By seeding values into the diamond-square algorithm, one can control the shape of the landscape. Perlin noise and Simplex noise are also used to simulate moisture and temperature. The smooth gradients created by coherent noise allow more realistic landscapes to be simulated. Terrain generation algorithms can be used in environmental studies and physics simulations. Potential studies that would benefit from such simulations include the geophysical impact of flash floods or drought on a particular region, and regional impacts on low-lying areas due to global warming and rising sea levels. Furthermore, terrain generation algorithms also serve as aesthetic tools to display landscapes (Google Earth) and to simulate planetary landscapes. Hence, they can be used as tools to assist science education. Algorithms used to generate these natural phenomena provide scientists with a different approach to analyzing our world. The random algorithms used in terrain generation not only generate the terrains themselves but are also capable of simulating weather patterns.
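As a concrete reference for the seeding remark above, a compact implementation of the diamond-square step pair; the grid size, corner seeds and roughness decay are the knobs that shape the terrain:

```python
import numpy as np

def diamond_square(n, roughness=0.6, seed=7):
    """Generate a (2**n + 1)-sized heightmap with the diamond-square
    algorithm; the corner seeds control the large-scale shape, and the
    noise amplitude shrinks by `roughness` at each finer scale."""
    size = 2 ** n + 1
    rng = np.random.default_rng(seed)
    h = np.zeros((size, size))
    h[0, 0], h[0, -1], h[-1, 0], h[-1, -1] = rng.uniform(-1, 1, 4)  # seeds
    step, amp = size - 1, 1.0
    while step > 1:
        half = step // 2
        # diamond step: center of each square = mean of 4 corners + noise
        for y in range(half, size, step):
            for x in range(half, size, step):
                h[y, x] = (h[y-half, x-half] + h[y-half, x+half] +
                           h[y+half, x-half] + h[y+half, x+half]) / 4 \
                          + rng.uniform(-amp, amp)
        # square step: edge midpoints = mean of in-bounds neighbours + noise
        for y in range(0, size, half):
            for x in range((y + half) % step, size, step):
                nbrs = [h[y2, x2] for y2, x2 in
                        ((y-half, x), (y+half, x), (y, x-half), (y, x+half))
                        if 0 <= y2 < size and 0 <= x2 < size]
                h[y, x] = sum(nbrs) / len(nbrs) + rng.uniform(-amp, amp)
        step, amp = half, amp * roughness
    return h

terrain = diamond_square(7)    # 129 x 129 heightmap
```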
Extended Hamiltonian approach to continuous tempering
NASA Astrophysics Data System (ADS)
Gobbo, Gianpaolo; Leimkuhler, Benedict J.
2015-06-01
We introduce an enhanced sampling simulation technique based on continuous tempering, i.e., on continuously varying the temperature of the system under investigation. Our approach is mathematically straightforward, being based on an extended Hamiltonian formulation in which an auxiliary degree of freedom, determining the effective temperature, is coupled to the physical system. The physical system and its temperature evolve continuously in time according to the equations of motion derived from the extended Hamiltonian. Due to the Hamiltonian structure, it is easy to show that a particular subset of the configurations of the extended system is distributed according to the canonical ensemble for the physical system at the correct physical temperature.
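One way to see the structure of such an extended Hamiltonian is a toy one-dimensional version, sketched below under stated assumptions: the coupling function lam(a) smoothly interpolates the effective inverse temperature between beta and beta_min, and plain (microcanonical) Hamiltonian dynamics is used, whereas a production scheme would add a thermostat and a carefully designed restraint on the auxiliary variable. This illustrates the idea, not the authors' exact formulation.

```python
import numpy as np

def run_extended(n_steps=50_000, dt=1e-3, m=1.0, m_a=5.0,
                 beta=4.0, beta_min=1.0, seed=11):
    """Velocity-Verlet integration of H = lam(a)*U(q) + p^2/2m + pa^2/2m_a.
    Scaling U by lam(a) <= 1 is equivalent to sampling q at the hotter
    effective inverse temperature beta*lam(a); samples collected where
    lam(a) is ~1 belong to the physical canonical ensemble."""
    u = lambda q: (q ** 2 - 1.0) ** 2            # double-well potential
    du = lambda q: 4.0 * q * (q ** 2 - 1.0)
    r = beta_min / beta
    lam = lambda a: r + (1.0 - r) / (1.0 + a * a)
    dlam = lambda a: -(1.0 - r) * 2.0 * a / (1.0 + a * a) ** 2

    rng = np.random.default_rng(seed)
    q, a = 1.0, 0.0
    p = rng.normal(0.0, np.sqrt(m / beta))
    pa = rng.normal(0.0, np.sqrt(m_a / beta))
    physical = []
    for _ in range(n_steps):
        p -= 0.5 * dt * lam(a) * du(q)           # half kick on q momentum
        pa -= 0.5 * dt * dlam(a) * u(q)          # half kick on a momentum
        q += dt * p / m
        a += dt * pa / m_a
        p -= 0.5 * dt * lam(a) * du(q)
        pa -= 0.5 * dt * dlam(a) * u(q)
        if abs(lam(a) - 1.0) < 1e-2:             # near physical temperature
            physical.append(q)
    return np.array(physical)
```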
Deforestation intensifies hot days
NASA Astrophysics Data System (ADS)
Stoy, Paul C.
2018-05-01
Deforestation often increases land-surface and near-surface temperatures, but climate models struggle to simulate this effect. Research now shows that deforestation has increased the severity of extreme heat in temperate regions of North America and Europe. This points to opportunities to mitigate extreme heat.
NASA Astrophysics Data System (ADS)
Thurner, Martin; Beer, Christian; Carvalhais, Nuno; Forkel, Matthias; Tito Rademacher, Tim; Santoro, Maurizio; Tum, Markus; Schmullius, Christiane
2016-04-01
Long-term vegetation dynamics are one of the key uncertainties of the carbon cycle. There are large differences in simulated vegetation carbon stocks and fluxes, including productivity, respiration and carbon turnover, between global vegetation models. In particular, the implementation of climate-related mortality processes, for instance drought, fire, frost or insect effects, is often missing or insufficient in current models, and the importance of these processes at the global scale is highly uncertain. These shortcomings have been due to the lack of spatially extensive information on vegetation carbon stocks, which cannot be provided by inventory data alone. Instead, we have recently been able to estimate northern boreal and temperate forest carbon stocks based on radar remote sensing data. Our spatially explicit product (0.01° resolution) shows strong agreement with inventory-based estimates at a regional scale and allows for a spatial evaluation of carbon stocks and dynamics simulated by global vegetation models. By combining this state-of-the-art biomass product with NPP datasets originating from remote sensing, we are able to study the relation between carbon turnover rate and a set of climate indices in northern boreal and temperate forests along spatial gradients. We observe an increasing turnover rate with colder winter temperatures and longer winters in boreal forests, suggesting that frost damage and the trade-off between frost adaptation and growth are important mortality processes in this ecosystem. In contrast, turnover rate increases with climatic conditions favouring drought and insect outbreaks in temperate forests. The investigated global vegetation models from the Inter-Sectoral Impact Model Intercomparison Project (ISI-MIP), including HYBRID4, JeDi, JULES, LPJml, ORCHIDEE, SDGVM, and VISIT, are able to reproduce the observation-based spatial climate-turnover rate relationships only to a limited extent. While most of the models compare relatively well in terms of NPP, simulated vegetation carbon stocks are severely biased compared to our biomass dataset. These limitations lead to considerable uncertainties in the estimated vegetation carbon turnover, which contributes substantially to the forest feedback to climate change. Our results are the basis for improving mortality concepts in models and estimating their impact on the land carbon balance.
Multiscale stochastic simulations of chemical reactions with regulated scale separation
NASA Astrophysics Data System (ADS)
Koumoutsakos, Petros; Feigelman, Justin
2013-07-01
We present a coupling of multiscale frameworks with accelerated stochastic simulation algorithms for systems of chemical reactions with disparate propensities. The algorithms regulate the propensities of the fast and slow reactions of the system, using alternating micro and macro sub-steps simulated with accelerated algorithms such as τ-leaping and R-leaping. The proposed algorithms are shown to provide significant speedups in simulations of stiff systems of chemical reactions, with a trade-off in accuracy controlled by a regulating parameter. More importantly, the error of the methods exhibits a cutoff phenomenon that allows for optimal parameter choices. Numerical experiments demonstrate that hybrid algorithms involving accelerated stochastic simulations can, in certain cases, be both faster and more accurate than their stochastic simulation algorithm counterparts.
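For orientation, the accelerated building block itself is compact. A plain τ-leaping sketch on a hypothetical two-species toy system (the multiscale schemes above additionally split reactions into fast and slow sets and regulate their propensities between micro and macro sub-steps):

```python
import numpy as np

def tau_leap(x0, stoich, propensities, tau=0.01, n_steps=500, seed=5):
    """Plain tau-leaping: within each leap of length tau, every reaction j
    fires a Poisson(a_j(x) * tau) number of times, and the state is
    advanced by the summed stoichiometric changes."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    history = [x.copy()]
    for _ in range(n_steps):
        a = propensities(x)                    # reaction rates at state x
        k = rng.poisson(a * tau)               # firings per channel
        x = np.maximum(x + stoich.T @ k, 0.0)  # update, clamp negatives
        history.append(x.copy())
    return np.array(history)

# toy system: S1 -> S2 (rate 5*S1), S2 -> S1 (rate 1*S2)
stoich = np.array([[-1, 1], [1, -1]])          # one row per reaction
props = lambda x: np.array([5.0 * x[0], 1.0 * x[1]])
traj = tau_leap([1000, 0], stoich, props)
```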
Carbon and Water Exchanges in a Chronosequence of Temperate White Pine Forest
NASA Astrophysics Data System (ADS)
Arain, M.; Restrepo, N.; Pejam, M.; Khomik, M.
2003-12-01
Quantification of the carbon sink or source strengths of temperate forest ecosystems, growing in northern mid-latitudes, is essential to resolve uncertainties in the carbon balance of the world's terrestrial ecosystems. Long-term flux measurements are needed to quantify the seasonal and annual variability of carbon and water exchanges from these ecosystems and to relate the variability to environmental and physiological factors. Such long-term measurements are of particular interest for different stand developmental stages. An understanding of environmental control factors is necessary to improve predictive capabilities of terrestrial carbon and water cycles. A long-term, year-round measurement program has been initiated to observe energy, water vapour, and carbon dioxide fluxes in a chronosequence of white pine (Pinus strobus) forests in southeastern Canada. White pine is an important species in the North American landscape because of its ability to adapt to dry environments. White pine grows efficiently on coarse and sandy soils, where other deciduous and conifer species cannot survive. Generally, it is the first woody species to flourish after disturbances such as fire and clearing. The climate at the study site is temperate, with a mean annual temperature of 8 °C and a mean annual precipitation of about 800 mm. The growing season is one of the longest in Canada, with at least 150 frost-free days. Measurements at the site began in June 2002 and are continuing at present. Flux measurements at the 60 year old stand are being made using a closed-path eddy covariance (EC) system, while fluxes at the three younger stands (30, 15 and 1 year old) are being measured over 10 to 20 day periods using a roving open-path EC system. Soil respiration is being measured every 2 weeks across 50-m transects at all four sites using a mobile chamber system (LI-COR 6400). The mature stand was a carbon sink, with an annual NEP value of 140 g C m-2 from June 2002 to May 2003. Gross ecosystem productivity (GEP) and ecosystem respiration (R) for 2002-03 were 1290 and 1150 g C m-2, respectively. A process-based carbon simulation model was created by incorporating canopy physiology (photosynthesis for sunlit and shaded leaves, conductance), plant phenology (leaf out, senescence), and carbon balance (plant and soil respiration, ecosystem productivity) algorithms in the Canadian Land Surface Scheme. In this study, we compare observed and simulated energy, water vapour, and carbon dioxide fluxes of the mature stand with those of the younger stands. This comparison will help to resolve scaling issues in estimating water and carbon budgets from stands to regions.
Efficient hierarchical trans-dimensional Bayesian inversion of magnetotelluric data
NASA Astrophysics Data System (ADS)
Xiang, Enming; Guo, Rongwen; Dosso, Stan E.; Liu, Jianxin; Dong, Hao; Ren, Zhengyong
2018-06-01
This paper develops an efficient hierarchical trans-dimensional (trans-D) Bayesian algorithm to invert magnetotelluric (MT) data for subsurface geoelectrical structure, with unknown geophysical model parameterization (the number of conductivity-layer interfaces) and data-error models parameterized by an auto-regressive (AR) process to account for potential error correlations. The reversible-jump Markov-chain Monte Carlo algorithm, which adds/removes interfaces and AR parameters in birth/death steps, is applied to sample the trans-D posterior probability density for model parameterization, model parameters, error variance and AR parameters, accounting for the uncertainties of model dimension and data-error statistics in the uncertainty estimates of the conductivity profile. To provide efficient sampling over the multiple subspaces of different dimensions, advanced proposal schemes are applied. Parameter perturbations are carried out in principal-component space, defined by eigen-decomposition of the unit-lag model covariance matrix, to minimize the effect of inter-parameter correlations and provide effective perturbation directions and length scales. Parameters of new layers in birth steps are proposed from the prior, instead of focused distributions centred at existing values, to improve birth acceptance rates. Parallel tempering, based on a series of parallel interacting Markov chains with successively relaxed likelihoods, is applied to improve chain mixing over model dimensions. The trans-D inversion is applied in a simulation study to examine the resolution of model structure according to the data information content. The inversion is also applied to a measured MT data set from south-central Australia.
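To make the parallel-tempering ingredient concrete, here is a minimal sketch of one round of replica exchange between adjacent chains whose likelihoods are relaxed by exponents β; the state and log-likelihood containers are placeholders, not the paper's MT implementation.

```python
import numpy as np

def replica_exchange(states, log_likes, betas, rng):
    """One sweep of swap proposals between adjacent tempered chains.

    states    : list of chain states (arbitrary objects)
    log_likes : untempered log-likelihood of each chain's state
    betas     : likelihood exponents, e.g. betas[0] = 1.0 is the target chain
    """
    for i in range(len(states) - 1):
        # Metropolis-Hastings acceptance for exchanging chains i and i+1
        log_alpha = (betas[i] - betas[i + 1]) * (log_likes[i + 1] - log_likes[i])
        if np.log(rng.random()) < log_alpha:
            states[i], states[i + 1] = states[i + 1], states[i]
            log_likes[i], log_likes[i + 1] = log_likes[i + 1], log_likes[i]
    return states, log_likes
```

Swaps let states sampled under relaxed likelihoods migrate into the target chain, which is what improves mixing across model dimensions.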
NASA Astrophysics Data System (ADS)
Fang, Ye; Feng, Sheng; Tam, Ka-Ming; Yun, Zhifeng; Moreno, Juana; Ramanujam, J.; Jarrell, Mark
2014-10-01
Monte Carlo simulations of the Ising model play an important role in the field of computational statistical physics, and they have revealed many properties of the model over the past few decades. However, the effect of frustration due to random disorder, in particular the possible spin glass phase, remains a crucial but poorly understood problem. One of the obstacles in the Monte Carlo simulation of random frustrated systems is their long relaxation times, which makes an efficient parallel implementation on state-of-the-art computation platforms highly desirable. The Graphics Processing Unit (GPU) is such a platform that provides an opportunity to significantly enhance the computational performance and thus gain new insight into this problem. In this paper, we present optimization and tuning approaches for the CUDA implementation of the spin glass simulation on GPUs. We discuss the integration of various design alternatives, such as GPU kernel construction with minimal communication, memory tiling, and look-up tables. We present a binary data format, Compact Asynchronous Multispin Coding (CAMSC), which provides an additional 28.4% speedup compared with the traditionally used Asynchronous Multispin Coding (AMSC). Our overall design sustains a performance of 33.5 ps per spin flip attempt for simulating the three-dimensional Edwards-Anderson model with parallel tempering, which significantly improves the performance over existing GPU implementations.
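As a reference for what the GPU kernels accelerate, below is a scalar (single-replica) Metropolis sweep for the 3D Edwards-Anderson model with ±J couplings; multispin-coding formats such as AMSC and CAMSC pack the spins of many replicas into machine words so that these updates proceed bitwise. The array layout is an assumption for illustration.

```python
import numpy as np

def metropolis_sweep(spins, J, beta, rng):
    """One Metropolis sweep of the 3D Edwards-Anderson spin glass.

    spins : +/-1 array, shape (L, L, L)
    J     : quenched +/-1 couplings; J[d, i, j, k] couples site (i, j, k)
            to its neighbour in the +d direction, shape (3, L, L, L)
    """
    L = spins.shape[0]
    axes = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
    for _ in range(spins.size):
        i, j, k = rng.integers(0, L, size=3)
        h = 0.0  # local field from the six neighbours (periodic boundaries)
        for d, (di, dj, dk) in enumerate(axes):
            h += J[d, i, j, k] * spins[(i + di) % L, (j + dj) % L, (k + dk) % L]
            h += (J[d, (i - di) % L, (j - dj) % L, (k - dk) % L]
                  * spins[(i - di) % L, (j - dj) % L, (k - dk) % L])
        dE = 2.0 * spins[i, j, k] * h
        if dE <= 0.0 or rng.random() < np.exp(-beta * dE):
            spins[i, j, k] = -spins[i, j, k]
    return spins
```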
Sensitivity of DIVWAG to Variations in Weather Parameters
1976-04-01
KEY WORDS: DIVWAG; war game; simulation. ...simulation of a Division Level War Game, to determine the significance of varying battlefield parameters; i.e., artillery parameters, troop and... The only Red artillery weapons doing better in bad weather are the 130mm guns, but this statistic is tempered by the few casualties occurring in
NASA Astrophysics Data System (ADS)
Dettmer, J.; Quijano, J. E.; Dosso, S. E.; Holland, C. W.; Mandolesi, E.
2016-12-01
Geophysical seabed properties are important for the detection and classification of unexploded ordnance. However, current surveying methods such as vertical seismic profiling, coring, or inversion are of limited use when surveying large areas with high spatial sampling density. We consider surveys based on a source and receiver array towed by an autonomous vehicle, which produce large volumes of seabed reflectivity data that contain unprecedented and detailed seabed information. The data are analyzed with a particle filter, which requires efficient reflection-coefficient computation, efficient inversion algorithms and efficient use of computer resources. The filter quantifies the information content of multiple sequential data sets by considering results from previous data along the survey track to inform the importance sampling at the current point. Challenges arise from environmental changes along the track, where the number of sediment layers and their properties change. This is addressed by a trans-dimensional model in the filter which allows layering complexity to change along a track. Efficiency is improved by likelihood tempering of various particle subsets and including exchange moves (parallel tempering). The filter is implemented on a hybrid computer that combines central processing units (CPUs) and graphics processing units (GPUs) to exploit three levels of parallelism: (1) fine-grained parallel computation of spherical reflection coefficients with a GPU implementation of Levin integration; (2) updating particles by concurrent CPU processes which exchange information using automatic load balancing (coarse-grained parallelism); (3) overlapping CPU-GPU communication (a major bottleneck) with GPU computation by staggering CPU access to the multiple GPUs. The algorithm is applied to spherical reflection coefficients for data sets along a 14-km track on the Malta Plateau, Mediterranean Sea. We demonstrate substantial efficiency gains over previous methods. [This research was supported in part by the U.S. Dept of Defense, through the Strategic Environmental Research and Development Program (SERDP).]
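A minimal sketch of the likelihood-tempering idea used to stabilise sequential particle updates: the new data's likelihood is introduced in fractional powers with resampling between stages, rather than in one brittle importance-weighting step. A real filter, including the one described here, would also move particles (e.g. by MCMC) between stages and exchange information across tempered subsets; names are illustrative.

```python
import numpy as np

def tempered_assimilation(particles, log_like, n_stages=4, seed=0):
    """Assimilate one data set by annealing its likelihood over n_stages.

    particles : parameter vectors, shape (N, D)
    log_like  : function mapping the particle array -> log-likelihoods, shape (N,)
    """
    rng = np.random.default_rng(seed)
    N = len(particles)
    ll = log_like(particles)
    for _ in range(n_stages):
        # incremental tempered weights, proportional to L^(1/n_stages)
        w = np.exp((ll - ll.max()) / n_stages)
        w /= w.sum()
        idx = rng.choice(N, size=N, p=w)    # multinomial resampling
        particles, ll = particles[idx], ll[idx]
        # (a full implementation would insert MCMC move steps here)
    return particles
```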
Apfelbeck, Beate; Helm, Barbara; Illera, Juan Carlos; Mortega, Kim G; Smiddy, Patrick; Evans, Neil P
2017-05-22
Latitudinal variation in avian life histories falls along a slow-fast pace of life continuum: tropical species produce small clutches, but have a high survival probability, while in temperate species the opposite pattern is found. This study investigated whether differential investment into reproduction and survival of tropical and temperate species is paralleled by differences in the secretion of the vertebrate hormone corticosterone (CORT). Depending on circulating concentrations, CORT can act both as a metabolic hormone (low to medium levels) and as a stress hormone (high levels) and, thereby, influence reproductive decisions. Baseline and stress-induced CORT was measured across sequential stages of the breeding season in males and females of closely related taxa of stonechats (Saxicola spp.) from a wide distribution area. We compared stonechats from 13 sites, representing the Canary Islands, European temperate and East African tropical areas. Stonechats are highly seasonal breeders at all these sites, but vary between tropical and temperate regions with regard to reproductive investment and presumably also survival. In accordance with life-history theory, during parental stages, post-capture (baseline) CORT was overall lower in tropical than in temperate stonechats. However, during mating stages, tropical males had elevated post-capture (baseline) CORT concentrations, which did not differ from those of temperate males. Female and male mates of a pair showed correlated levels of post-capture CORT when sampled after simulated territorial intrusions. In contrast to the hypothesis that species with low reproduction and high annual survival should be more risk-sensitive, tropical stonechats had lower stress-induced CORT concentrations than temperate stonechats. We also found relatively high post-capture (baseline) and stress-induced CORT concentrations in slow-paced Canary Islands stonechats. Our data support and refine the view that baseline CORT facilitates energetically demanding activities in males and females and reflects investment into reproduction. Low parental workload was associated with lower post-capture (baseline) CORT, as expected for a slow pace of life in tropical species. On a finer resolution, however, this tropical-temperate contrast did not generally hold. Post-capture (baseline) CORT was higher during mating stages in particular in tropical males, possibly to support the energetic needs of mate-guarding. Counter to predictions based on life-history theory, our data do not confirm the hypothesis that long-lived tropical populations have higher stress-induced CORT concentrations than short-lived temperate populations. Instead, in the predator-rich tropical environments of African stonechats, a dampened stress response during parental stages may increase survival probabilities of young. Overall our data further support an association between life history and baseline CORT, but challenge the role of stress-induced CORT as a mediator of tropical-temperate variation in life history.
Human impact on wildfires varies between regions and with vegetation productivity
NASA Astrophysics Data System (ADS)
Lasslop, Gitta; Kloster, Silvia
2017-11-01
We assess the influence of humans on burned area simulated with a dynamic global vegetation model. The human impact in the model is based on population density and cropland fraction, which were identified as important drivers of burned area in analyses of global datasets and are commonly used in global models. After an evaluation of the sensitivity to these two variables, we extend the model by including an additional effect of the cropland fraction on fire duration. The general pattern of human influence is similar in both model versions: the strongest human impact is found in regions with intermediate productivity, where fire occurrence is not limited by fuel load or climatic conditions. Human effects in the model increase burned area in the tropics, while in temperate regions burned area is reduced. While the population density is similar on average for the tropical and temperate regions, the cropland fraction is higher in temperate regions and leads to a strong suppression of fire. The model shows a low human impact in the boreal region, where both population density and cropland fraction are very low and the climatic conditions, as well as the vegetation productivity, limit fire. Previous studies attributed a decrease in fire activity found in global charcoal datasets to human activity. This is confirmed by our simulations, which show a decrease in burned area only when the human influence on fire is accounted for, and not with natural effects on fires alone. We assess how the vegetation-fire feedback influences the results by comparing simulations with dynamic vegetation biogeography to simulations with prescribed vegetation. The vegetation-fire feedback increases the human impact on burned area by 10% for present-day conditions. These results emphasize that projections of burned area need to account for the interactions between fire, climate, vegetation and humans.
NASA Astrophysics Data System (ADS)
Song, Xia; Hoffman, Forrest M.; Iversen, Colleen M.; Yin, Yunhe; Kumar, Jitendra; Ma, Chun; Xu, Xiaofeng
2017-09-01
Earth system models (ESMs) have been widely used for projecting global vegetation carbon dynamics, yet how well ESMs perform in simulating vegetation carbon density remains untested. We compiled observational data of vegetation carbon density from the literature and existing data sets to evaluate nine ESMs at site, biome, latitude, and global scales. Three variables (root carbon density, including fine and coarse roots; total vegetation carbon density; and the root:total vegetation carbon ratio, the R/T ratio) were chosen for ESM evaluation. The ESMs performed well in simulating the spatial distribution of carbon densities in root (
Duality quantum algorithm efficiently simulates open quantum systems
Wei, Shi-Jie; Ruan, Dong; Long, Gui-Lu
2016-01-01
Because of inevitable coupling with the environment, nearly all practical quantum systems are open systems, where the evolution is not necessarily unitary. In this paper, we propose a duality quantum algorithm for simulating the Hamiltonian evolution of an open quantum system. In contrast to unitary evolution in a usual quantum computer, the evolution operator in a duality quantum computer is a linear combination of unitary operators. In this duality quantum algorithm, the time evolution of the open quantum system is realized by using Kraus operators, which are naturally implemented in a duality quantum computer. This duality quantum algorithm has two distinct advantages compared to existing quantum simulation algorithms with unitary evolution operations. First, the query complexity of the algorithm is O(d³), in contrast to O(d⁴) in the existing unitary simulation algorithm, where d is the dimension of the open quantum system. Second, by using a truncated Taylor series of the evolution operators, this duality quantum algorithm provides an exponential improvement in precision compared with the previous unitary simulation algorithm. PMID:27464855
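For readers unfamiliar with Kraus operators, the short classical sketch below evolves a one-qubit density matrix through an amplitude-damping channel, ρ → Σₖ KₖρKₖ†; this is the kind of non-unitary map the duality algorithm realises on quantum hardware. The channel choice and decay probability are illustrative.

```python
import numpy as np

def apply_channel(rho, kraus_ops):
    """Evolve a density matrix under a channel given by Kraus operators."""
    return sum(K @ rho @ K.conj().T for K in kraus_ops)

p = 0.3  # decay probability for one time step (illustrative)
K0 = np.array([[1.0, 0.0], [0.0, np.sqrt(1.0 - p)]], dtype=complex)
K1 = np.array([[0.0, np.sqrt(p)], [0.0, 0.0]], dtype=complex)

rho = np.array([[0.0, 0.0], [0.0, 1.0]], dtype=complex)  # excited state |1><1|
rho_out = apply_channel(rho, [K0, K1])

assert np.isclose(np.trace(rho_out).real, 1.0)  # the map is trace-preserving
print(rho_out)  # population has partially relaxed towards |0><0|
```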
NASA Astrophysics Data System (ADS)
Ewers, B. E.; Bretfeld, M.; Millar, D.; Hall, J. S.; Beverly, D.; Hall, J. S.; Ogden, F. L.; Mackay, D. S.
2016-12-01
Process-based models of tree impacts on the hydrologic cycle must include not only plant hydraulic limitations but also photosynthetic controls, because plants lose water to gain carbon. The Terrestrial Regional Ecosystem Exchange Simulator (TREES) is one such model. TREES includes a Bayesian model-data fusion approach that provides rigorous tests of patterns in tree transpiration data against biophysical processes in the model. TREES has been extensively tested against many temperate tree data sets, including those experiencing severe and lethal drought. We test TREES against sap flow-scaled transpiration data from 76 tropical trees (representing 42 different species) in secondary forests of three different ages (8, 25, and 80+ years) located in the Panama Canal Watershed. These data were collected during the third driest El Niño-Southern Oscillation (ENSO) event on record in Panama, during 2015/2016. Tree transpiration response to vapor pressure deficit and solar radiation was the same in the two older forests, but showed an additional response to limited soil moisture in the youngest forest. Volumetric water content at 30 and 50 cm depths was 8% lower in the 8 year old forest than in the 80+ year old forest. TREES could not simulate this difference in soil moisture without increasing simulated root area. TREES simulations were improved by including light response curves of leaf photosynthesis, root vulnerability to cavitation, and canopy position impacts on light. TREES was able to simulate the anisohydric (loose stomatal regulation of leaf water potential) and isohydric (tight stomatal regulation) behaviour of the 73 tree species a priori, indicating that species-level information is not required. Analyses of posterior probability distributions indicate that TREES model predictions of individual tree transpiration would likely be improved with more detailed root and soil moisture data in all forest ages, with the most improvement likely in the 8 year old forest. Our results suggest that a biophysical tree transpiration model developed in temperate forests can be applied to the tropics and could be used to improve predictions of evapotranspiration from changing land cover in tropical hydrology models.
Synchronization Algorithms for Co-Simulation of Power Grid and Communication Networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ciraci, Selim; Daily, Jeffrey A.; Agarwal, Khushbu
2014-09-11
The ongoing modernization of power grids consists of integrating them with communication networks in order to achieve robust and resilient control of grid operations. To understand the operation of the new smart grid, one approach is to use simulation software. Unfortunately, current power grid simulators at best utilize inadequate approximations to simulate communication networks, if at all. Cooperative simulation of specialized power grid and communication network simulators promises to more accurately reproduce the interactions of real smart grid deployments. However, co-simulation is a challenging problem. A co-simulation must manage the exchange of information, including the synchronization of simulator clocks, between all simulators while maintaining adequate computational performance. This paper describes two new conservative algorithms for reducing the overhead of time synchronization, namely Active Set Conservative and Reactive Conservative. We provide a detailed analysis of their performance characteristics with respect to the current state of the art, including both conservative and optimistic synchronization algorithms. In addition, we provide guidelines for selecting the appropriate synchronization algorithm based on the requirements of the co-simulation. The newly proposed algorithms are shown to achieve as much as 14% and 63% improvement, respectively, over the existing conservative algorithm.
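The conservative principle underlying both algorithms can be stated compactly: a simulator may only advance to a time it can prove safe, given its peers' clocks and a lookahead bound on how soon a peer can affect it. A minimal sketch of that baseline rule (not the Active Set Conservative or Reactive Conservative variants themselves):

```python
def safe_advance_time(local_next_event, peer_clocks, lookahead):
    """Greatest time this simulator may safely reach without risking a
    causality violation from a message sent by a peer."""
    # lower bound on the time stamp of any future incoming event
    lbts = min(t + lookahead for t in peer_clocks)
    return min(local_next_event, lbts)

# Example: our next event is at t=10.0, peers are at t=9.5 and t=9.8,
# and no peer can affect us sooner than 0.4 time units after its clock.
print(safe_advance_time(10.0, [9.5, 9.8], 0.4))  # -> 9.9
```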
Yang, Y Isaac; Zhang, Jun; Che, Xing; Yang, Lijiang; Gao, Yi Qin
2016-03-07
In order to efficiently overcome high free energy barriers embedded in a complex energy landscape and calculate overall thermodynamic properties using molecular dynamics simulations, we developed and implemented a sampling strategy that combines metadynamics with the (selective) integrated tempering sampling (ITS/SITS) method. The dominant local minima on the potential energy surface (PES) are partially exalted by accumulating history-dependent potentials, as in metadynamics, and the sampling over the entire PES is further enhanced by ITS/SITS. With this hybrid method, the simulated system can be rapidly driven across the dominant barrier along selected collective coordinates. Then, ITS/SITS ensures a fast convergence of the sampling over the entire PES and an efficient calculation of the overall thermodynamic properties of the simulated system. To test the accuracy and efficiency of this method, we first benchmarked it on the calculation of the ϕ-ψ distribution of alanine dipeptide in explicit solvent. We further applied it to examine the design of template molecules for aromatic meta-C-H activation in solutions and to investigate solution conformations of the nonapeptide Bradykinin, which involve slow cis-trans isomerizations of three proline residues.
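A minimal sketch of the metadynamics half of the hybrid: Gaussians deposited at visited values of a collective variable accumulate into the history-dependent bias that flattens the dominant minima, while the ITS/SITS enhancement over the full PES (omitted here) handles the remaining degrees of freedom. Parameters are illustrative.

```python
import numpy as np

def metadynamics_bias(cv_history, cv_grid, height=1.0, sigma=0.1):
    """History-dependent bias: one Gaussian per visited collective-variable
    value, discouraging revisits of already-sampled regions."""
    bias = np.zeros_like(cv_grid)
    for s0 in cv_history:
        bias += height * np.exp(-((cv_grid - s0) ** 2) / (2.0 * sigma**2))
    return bias

grid = np.linspace(-np.pi, np.pi, 200)
visited = [-1.0, -0.9, -1.1, -1.0]             # a trajectory stuck in one basin
print(metadynamics_bias(visited, grid).max())  # bias piles up over that basin
```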
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sheng, Zheng, E-mail: 19994035@sina.com; Wang, Jun; Zhou, Bihua
2014-03-15
This paper introduces a novel hybrid optimization algorithm to estimate the parameters of chaotic systems. In order to deal with the weaknesses of the traditional cuckoo search algorithm, the proposed adaptive cuckoo search with simulated annealing algorithm is presented, which incorporates an adaptive parameter-adjusting operation and a simulated annealing operation in the cuckoo search algorithm. Normally, the parameters of the cuckoo search algorithm are kept constant, which may decrease the efficiency of the algorithm. For the purpose of balancing and enhancing the accuracy and convergence rate of the cuckoo search algorithm, the adaptive operation is presented to tune the parameters properly. In addition, the local search capability of the cuckoo search algorithm is relatively weak, which may decrease the quality of the optimization. The simulated annealing operation is therefore merged into the cuckoo search algorithm to enhance the local search ability and improve the accuracy and reliability of the results. The functionality of the proposed hybrid algorithm is investigated through the Lorenz chaotic system under noiseless and noisy conditions, respectively. The numerical results demonstrate that the method can estimate parameters efficiently and accurately under both conditions. Finally, the results are compared with the traditional cuckoo search algorithm, the genetic algorithm, and the particle swarm optimization algorithm. Simulation results demonstrate the effectiveness and superior performance of the proposed algorithm.
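The simulated-annealing ingredient reduces to a single acceptance rule plus a cooling schedule; a minimal sketch follows, with the cuckoo-search move that proposes candidates stubbed out as a random perturbation.

```python
import math
import random

def accept(candidate_cost, current_cost, T, rng=random):
    """SA acceptance: always take improvements; accept worse solutions with
    probability exp(-delta/T) so the search can escape local optima."""
    delta = candidate_cost - current_cost
    return delta <= 0.0 or rng.random() < math.exp(-delta / T)

cost, T = 100.0, 1.0
for _ in range(1000):
    candidate = cost + random.uniform(-1.0, 1.0)  # stand-in for a cuckoo-search move
    if accept(candidate, cost, T):
        cost = candidate
    T *= 0.995  # geometric cooling
print(cost)
```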
Scaling Laws in Arctic Permafrost River Basins: Statistical Signature in Transition
NASA Astrophysics Data System (ADS)
Rowland, J. C.; Gangodagamage, C.; Wilson, C. J.; Prancevic, J. P.; Brumby, S. P.; Marsh, P.; Crosby, B. T.
2011-12-01
The Arctic landscape has been shown to be fundamentally different from the temperate landscape in many ways. Long winters and cold temperatures have led to the development of permafrost, perennially frozen ground, which controls geomorphic processes and the structure of the Arctic landscape. Climate warming is causing changes in permafrost and the active layer (the seasonally thawed surface layer) that are driving an increase in thermal erosion, including thermokarst (collapsed soil), retrogressive thaw slumps, and gullies. These geomorphic anomalies in Arctic landscapes have not been well quantified, even though some of the landscape's geomorphic and hydrologic characteristics and changes are detectable by our existing sensor networks. We currently lack understanding of the fundamental fluvio-thermal-erosional processes that underpin Arctic landscape structure and form, which limits our ability to develop models to predict the landscape response to current and future climate change. In this work, we seek a unified framework that can explain why permafrost landscapes are different from temperate landscapes. We use high resolution LIDAR data to analyze Arctic geomorphic processes at scales of less than 1 m and demonstrate our ability to quantify the fundamental difference in the Arctic landscape. We first simulate Arctic hillslopes from a stochastic space-filling network and demonstrate that the flow-path convergent properties of the Arctic landscape can be effectively captured by this simple model, which represents a landscape flow-path arrangement on a relatively impervious frozen soil layer. Further, we use a novel data processing algorithm to analyze landscape attributes such as slope, curvature, flow accumulation, elevation drops and other geomorphic properties, and show that the pattern of diffusion- and advection-dominated soil transport processes (the diffusion/advection regime transition) in the Arctic landscape is substantially different from the pattern in temperate landscapes. Our results suggest that Arctic landscapes are characterized by relatively undissected, long planar hillslopes, which convey sediment to quasi-fluvial valleys through long (~1 km) flow paths. Further, we also document that broad planar hillslopes abruptly converge, forcing rapid subsurface flow accumulation at channel heads. This topographic characteristic can successfully be used to explain the position of erosion features. Finally, we estimate landscape model parameters for the Arctic landscape that can be used for model development and validation purposes.
Annual Changes of Paddy Rice Planting Areas in Northeastern Asia from MODIS images in 2000-2014
NASA Astrophysics Data System (ADS)
Xiao, X.; Zhang, G.; Dong, J.; Menarguez, M. A.; Kou, W.; Jin, C.; Qin, Y.; Zhou, Y.; Wang, J.; Moore, B., III
2014-12-01
Knowledge of the area and spatial distribution of paddy rice is important for assessment of food security, management of water resources, estimation of greenhouse gas (methane) emissions, and understanding of avian influenza virus transmission. Over the past two decades, paddy rice cultivation has expanded northward in temperate and cold temperate zones, particularly in Northeastern China. There is a need to quantify and map changes in paddy rice planting areas in Northeastern Asia (Japan, North and South Korea, and northeast China) at annual intervals. We developed a pixel- and phenology-based image analysis system, MODIS-RICE, to map paddy rice in Northeastern Asia using multi-temporal MODIS thermal and surface reflectance imagery. Paddy rice fields during the flooding and transplanting phases have unique physical and spectral characteristics, which makes it possible to develop an automated and robust algorithm to track the flooding and transplanting phases of paddy rice fields over time. In this presentation, we show the MODIS-based annual maps of paddy rice planting area in Northeastern Asia from 2000 to 2014 (500-m spatial resolution). Accuracy assessments using high-resolution images show that the resultant paddy rice map of Northeastern Asia has an accuracy comparable to existing products, including the 2010 Landsat-based National Land Cover Dataset (NLCD) of China, the 2010 RapidEye-based paddy rice map of North Korea, and the 2010 AVNIR-2-based National Land Cover Dataset of Japan, in terms of both area and spatial pattern of paddy rice. This study demonstrates that our MODIS-RICE system, which uses both thermal and optical MODIS data over a year, is a simple and robust tool to identify and map paddy rice fields in temperate and cold temperate zones.
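The flooding/transplanting signal exploited by algorithms of this kind is commonly expressed as the land surface water index (LSWI) approaching or exceeding the vegetation index; a minimal sketch, with the 0.05 offset taken as an assumption from the broader MODIS rice-mapping literature rather than from this abstract:

```python
def is_flooding_or_transplanting(evi, lswi, offset=0.05):
    """Flag a pixel when the water signal rivals the vegetation signal,
    the characteristic spectral mark of flooded, newly transplanted paddies."""
    return lswi + offset >= evi

print(is_flooding_or_transplanting(evi=0.25, lswi=0.22))  # True
```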
Lampoudi, Sotiria; Gillespie, Dan T; Petzold, Linda R
2009-03-07
The Inhomogeneous Stochastic Simulation Algorithm (ISSA) is a variant of the stochastic simulation algorithm in which the spatially inhomogeneous volume of the system is divided into homogeneous subvolumes, and the chemical reactions in those subvolumes are augmented by diffusive transfers of molecules between adjacent subvolumes. The ISSA can be prohibitively slow when the system is such that diffusive transfers occur much more frequently than chemical reactions. In this paper we present the Multinomial Simulation Algorithm (MSA), which is designed to, on the one hand, outperform the ISSA when diffusive transfer events outnumber reaction events, and on the other, to handle small reactant populations with greater accuracy than deterministic-stochastic hybrid algorithms. The MSA treats reactions in the usual ISSA fashion, but uses appropriately conditioned binomial random variables for representing the net numbers of molecules diffusing from any given subvolume to a neighbor within a prescribed distance. Simulation results illustrate the benefits of the algorithm.
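To make the MSA's central move concrete, the sketch below draws the aggregate outflow from a subvolume as a binomial random variable and splits it among neighbours; the actual MSA conditions these draws more carefully so that transfers within the prescribed distance remain consistent. Parameters are illustrative.

```python
import numpy as np

def diffusive_outflow(n_molecules, p_leave, n_neighbors, rng):
    """Aggregate diffusion step for one species in one subvolume.

    n_molecules : current population of the subvolume
    p_leave     : probability a molecule diffuses out during the time step
    n_neighbors : adjacent subvolumes sharing the outflow
    """
    leaving = rng.binomial(n_molecules, p_leave)              # total outflow
    per_neighbor = rng.multinomial(leaving, [1.0 / n_neighbors] * n_neighbors)
    return leaving, per_neighbor

rng = np.random.default_rng(0)
print(diffusive_outflow(1000, 0.2, 4, rng))
```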
2009-06-01
additionally be utilized to cover a wider spectral range. In recent years, the long-wave IR (LWIR: 8-12 μm) region of the electromagnetic spectrum has been... LWIR region, and they can be sensed by their apparent temperatures and spectral signatures in the LWIR. Currently, there are three main material...technologies for photonic IR photodetectors in the LWIR region. The HgCdTe (MCT) detector is the current state of the art due to its high responsivity
NASA Astrophysics Data System (ADS)
Thomas, R. Q.; Williams, M.
2014-09-01
Carbon (C) and nitrogen (N) cycles are coupled in terrestrial ecosystems through multiple processes including photosynthesis, tissue allocation, respiration, N fixation, N uptake, and decomposition of litter and soil organic matter. Capturing the constraint of N on terrestrial C uptake and storage has been a focus of the Earth System Modeling community. However, there is little understanding of the trade-offs and sensitivities of allocating C and N to different tissues in order to optimize the productivity of plants. Here we describe a new, simple model of ecosystem C-N cycling and interactions (ACONITE), which builds on theory related to plant economics in order to predict key ecosystem properties (leaf area index, leaf C : N, N fixation, and plant C use efficiency) based on the outcome of assessments of the marginal change in net C or N uptake associated with a change in allocation of C or N to plant tissues. We simulated and evaluated steady-state ecosystem stocks and fluxes in three different forest ecosystem types (tropical evergreen, temperate deciduous, and temperate evergreen). Leaf C : N differed among the three ecosystem types (temperate deciduous < tropical evergreen < temperate evergreen), a result that compared well to observations from a global database describing plant traits. Gross primary productivity (GPP) and net primary productivity (NPP) estimates compared well to observed fluxes at the simulation sites. Simulated N fixation at steady state, calculated based on the relative demand for N and the marginal return on C investment to acquire N, was an order of magnitude higher in the tropical forest than in the temperate forest, consistent with observations. A sensitivity analysis revealed that the parameterization of the relationship between leaf N and leaf respiration had the largest influence on leaf area index and leaf C : N. A parameter governing how photosynthesis scales with day length had the largest influence on total vegetation C, GPP, and NPP. Multiple parameters associated with photosynthesis, respiration, and N uptake influenced the rate of N fixation. Overall, our ability to constrain leaf area index and allow spatially and temporally variable leaf C : N can help address challenges in simulating these properties in ecosystem and Earth System models. Furthermore, this simple approach with emergent properties based on coupled C-N dynamics has potential for use in research that uses data-assimilation methods to integrate data on both the C and N cycles to improve C flux forecasts.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cowan, Nicolas B.; Voigt, Aiko; Abbot, Dorian S., E-mail: n-cowan@nortwestern.edu
In order to understand the climate on terrestrial planets orbiting nearby Sun-like stars, one would like to know their thermal inertia. We use a global climate model to simulate the thermal phase variations of Earth analogs and test whether these data could distinguish between planets with different heat storage and heat transport characteristics. In particular, we consider a temperate climate with polar ice caps (like the modern Earth) and a snowball state where the oceans are globally covered in ice. We first quantitatively study the periodic radiative forcing from, and climatic response to, rotation, obliquity, and eccentricity. Orbital eccentricity and seasonal changes in albedo cause variations in the global-mean absorbed flux. The responses of the two climates to these global seasons indicate that the temperate planet has 3× the bulk heat capacity of the snowball planet due to the presence of liquid water oceans. The obliquity seasons in the temperate simulation are weaker than one would expect based on thermal inertia alone; this is due to cross-equatorial oceanic and atmospheric energy transport. Thermal inertia and cross-equatorial heat transport have qualitatively different effects on obliquity seasons, insofar as heat transport tends to reduce seasonal amplitude without inducing a phase lag. For an Earth-like planet, however, this effect is masked by the mixing of signals from low thermal inertia regions (sea ice and land) with that from high thermal inertia regions (oceans), which also produces a damped response with small phase lag. We then simulate thermal light curves as they would appear to a high-contrast imaging mission (TPF-I/Darwin). In order of importance to the present simulations, which use modern-Earth orbital parameters, the three drivers of thermal phase variations are (1) obliquity seasons, (2) the diurnal cycle, and (3) global seasons. Obliquity seasons are the dominant source of phase variations for most viewing angles. A pole-on observer would measure peak-to-trough amplitudes of 13% and 47% for the temperate and snowball climates, respectively. Diurnal heating is important for equatorial observers (~5% phase variations), because the obliquity effects cancel to first order from that vantage. Finally, we compare the prospects of optical versus thermal direct imaging missions for constraining the climate on exoplanets and conclude that while zero- and one-dimensional models are best served by thermal measurements, second-order models accounting for seasons and planetary thermal inertia would require both optical and thermal observations.
NASA Astrophysics Data System (ADS)
Rock, Gilles; Fischer, Kim; Schlerf, Martin; Gerhards, Max; Udelhoven, Thomas
2017-04-01
The development and optimization of image processing algorithms requires the availability of datasets depicting every step from the Earth's surface to the sensor's detector. The lack of ground-truth data makes it necessary to develop algorithms on simulated data. The simulation of hyperspectral remote sensing data is a useful tool for a variety of tasks, such as the design of systems, the understanding of the image formation process, and the development and validation of data processing algorithms. An end-to-end simulator has been set up consisting of a forward simulator, a backward simulator and a validation module. The forward simulator derives radiance datasets based on laboratory sample spectra, applies atmospheric contributions using radiative transfer equations, and simulates the instrument response using configurable sensor models. This is followed by the backward simulation branch, consisting of an atmospheric correction (AC), a temperature and emissivity separation (TES) or a hybrid AC and TES algorithm. An independent validation module allows the comparison between input and output datasets and the benchmarking of different processing algorithms. In this study, hyperspectral thermal infrared scenes of a variety of surfaces have been simulated to analyze existing AC and TES algorithms. The ARTEMISS algorithm was optimized and benchmarked against the original implementations. The errors in TES were found to be related to incorrect water vapor retrieval. The atmospheric characterization could be optimized, resulting in increased accuracies in temperature and emissivity retrieval. Airborne datasets of different spectral resolutions were simulated from terrestrial HyperCam-LW measurements. The simulated airborne radiance spectra were subjected to atmospheric correction and TES and further used for a plant species classification study analyzing effects related to noise and mixed pixels.
Modelling the spread of innovation in wild birds.
Shultz, Thomas R; Montrey, Marcel; Aplin, Lucy M
2017-06-01
We apply three plausible algorithms in agent-based computer simulations to recent experiments on social learning in wild birds. Although some of the phenomena are simulated by all three learning algorithms, several manifestations of social conformity bias are simulated by only the approximate majority (AM) algorithm, which has roots in chemistry, molecular biology and theoretical computer science. The simulations generate testable predictions and provide several explanatory insights into the diffusion of innovation through a population. The AM algorithm's success raises the possibility of its usefulness in studying group dynamics more generally, in several different scientific domains. Our differential-equation model matches simulation results and provides mathematical insights into the dynamics of these algorithms. © 2017 The Author(s).
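The approximate majority (AM) algorithm itself fits in a few lines as a population protocol over two opinions plus an undecided state; a minimal sketch of the usual three-state rules (the mapping onto the birds' social-learning data is, of course, richer):

```python
import random

def approximate_majority(pop, steps, seed=0):
    """AM population protocol: 'A' and 'B' are opinions, 'U' is undecided.
    Random pairwise interactions drive the population to consensus on the
    initial majority, reproducing a conformity-like dynamic."""
    rng = random.Random(seed)
    for _ in range(steps):
        i, j = rng.sample(range(len(pop)), 2)
        a, b = pop[i], pop[j]
        if {a, b} == {"A", "B"}:
            pop[j] = "U"          # conflicting opinions: one agent goes blank
        elif a in ("A", "B") and b == "U":
            pop[j] = a            # a decided agent converts an undecided one
    return pop

pop = ["A"] * 55 + ["B"] * 45
out = approximate_majority(pop, steps=20000)
print(out.count("A"), out.count("B"), out.count("U"))  # A-consensus w.h.p.
```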
NASA Technical Reports Server (NTRS)
Jain, A.; Man, G. K.
1993-01-01
This paper describes the Dynamics Algorithms for Real-Time Simulation (DARTS) real-time hardware-in-the-loop dynamics simulator for the National Aeronautics and Space Administration's Cassini spacecraft. The spacecraft model consists of a central flexible body with a number of articulated rigid-body appendages. The demanding performance requirements from the spacecraft control system require the use of a high fidelity simulator for control system design and testing. The DARTS algorithm provides a new algorithmic and hardware approach to the solution of this hardware-in-the-loop simulation problem. It is based upon the efficient spatial algebra dynamics for flexible multibody systems. A parallel and vectorized version of this algorithm is implemented on a low-cost, multiprocessor computer to meet the simulation timing requirements.
NASA Astrophysics Data System (ADS)
Marchetti, Luca; Priami, Corrado; Thanh, Vo Hong
2016-07-01
This paper introduces HRSSA (Hybrid Rejection-based Stochastic Simulation Algorithm), a new efficient hybrid stochastic simulation algorithm for spatially homogeneous biochemical reaction networks. HRSSA is built on top of RSSA, an exact stochastic simulation algorithm which relies on propensity bounds to select next reaction firings and to reduce the average number of reaction propensity updates needed during the simulation. HRSSA exploits the computational advantage of propensity bounds to manage time-varying transition propensities and to apply dynamic partitioning of reactions, which constitute the two most significant bottlenecks of hybrid simulation. A comprehensive set of simulation benchmarks is provided for evaluating performance and accuracy of HRSSA against other state of the art algorithms.
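A minimal sketch of the rejection step that RSSA, and therefore HRSSA, is built on: candidate reactions are drawn from upper propensity bounds, and the exact propensity is evaluated only when the bounds cannot decide acceptance. Bound maintenance and the hybrid partitioning are omitted; names are illustrative.

```python
import numpy as np

def rssa_select_reaction(state, a_low, a_up, exact_propensity, rng):
    """Select the next reaction using lower/upper propensity bounds.

    a_low, a_up      : per-reaction bounds with a_low <= a(x) <= a_up
    exact_propensity : function (j, state) -> a_j(state), called lazily
    """
    prob = a_up / a_up.sum()
    while True:
        j = rng.choice(len(a_up), p=prob)   # candidate from the upper bounds
        u = rng.random() * a_up[j]
        if u <= a_low[j]:
            return j                        # accepted without an exact evaluation
        if u <= exact_propensity(j, state):
            return j                        # accepted after evaluating a_j(x)
        # otherwise rejected: draw a fresh candidate
```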
Synchronisation under shocks: The Lévy Kuramoto model
NASA Astrophysics Data System (ADS)
Roberts, Dale; Kalloniatis, Alexander C.
2018-04-01
We study the Kuramoto model of identical oscillators on Erdős-Rényi (ER) and Barabási-Albert (BA) scale-free networks, examining the dynamics when perturbed by a Lévy noise. Lévy noise exhibits heavier tails than Gaussian noise while allowing for their tempering in a controlled manner. This allows us to understand how 'shocks' influence individual-oscillator and collective system behaviour of a paradigmatic complex system. Skewed α-stable Lévy noise, equivalent to fractional diffusion perturbations, is considered, overlaid by exponential tempering of rate λ. In an earlier paper we found that synchrony takes a variety of forms for identical Kuramoto oscillators subject to stable Lévy noise, not seen in the Gaussian case, and changing with α: a noise-induced drift, a smooth α dependence of the point of cross-over of the synchronisation points of ER and BA networks, and a severe loss of synchronisation at low values of α. In the presence of tempering we observe, both analytically and numerically, a dramatic change to the α < 1 behaviour, where synchronisation is sustained over a larger range of values of the 'noise strength' σ, improved compared to the α > 1 tempered cases. Analytically we study the system close to the phase-synchronised fixed point and solve the tempered fractional Fokker-Planck equation. There we observe that densities show stronger support in the basin of attraction at low α for fixed coupling, σ and tempering λ. We then perform numerical simulations for networks of size N = 1000 and average degree d̄ = 10. There, we compute the order parameter r as a function of σ for fixed α and λ and observe values of r ≈ 1 over larger ranges of σ for α < 1 and λ ≠ 0. In addition we observe drifts of both positive and negative slope for different α and λ when native frequencies are equal, and confirm a sustainment of synchronisation down to low values of α. We propose a mechanism for this in terms of the basic shape of the tempered stable Lévy densities for various α and how it feeds into the Kuramoto oscillator dynamics, and illustrate this with examples of specific paths.
Competitive evaluation of failure detection algorithms for strapdown redundant inertial instruments
NASA Technical Reports Server (NTRS)
Wilcox, J. C.
1973-01-01
Algorithms for failure detection, isolation, and correction of redundant inertial instruments in the strapdown dodecahedron configuration are competitively evaluated in a digital computer simulation that subjects them to identical environments. Their performance is compared in terms of orientation and inertial velocity errors and in terms of missed and false alarms. The algorithms appear in the simulation program in modular form, so that they may be readily extracted for use elsewhere. The simulation program and its inputs and outputs are described. The algorithms, along with an eighth algorithm that was not simulated, are also compared analytically to show the relationships among them.
Freezing Transition Studies Through Constrained Cell Model Simulation
NASA Astrophysics Data System (ADS)
Nayhouse, Michael; Kwon, Joseph Sang-Il; Heng, Vincent R.; Amlani, Ankur M.; Orkoulas, G.
2014-10-01
In the present work, a simulation method based on cell models is used to deduce the fluid-solid transition of a system of particles that interact via a pair potential. The simulations are implemented under constant-pressure conditions on a generalized version of the constrained cell model. The constrained cell model is constructed by dividing the volume into Wigner-Seitz cells and confining each particle in a single cell. This model is a special case of a more general cell model which is formed by introducing an additional field variable that controls the number of particles per cell and, thus, the relative stability of the solid against the fluid phase. High field values force configurations with one particle per cell and thus favor the solid phase. Fluid-solid coexistence on the isotherm that corresponds to a reduced temperature of 2 is determined from constant-pressure simulations of the generalized cell model using tempering and histogram reweighting techniques. The entire fluid-solid phase boundary is determined through a thermodynamic integration technique based on histogram reweighting, using the previous coexistence point as a reference point. The vapor-liquid phase diagram is obtained from constant-pressure simulations of the unconstrained system using tempering and histogram reweighting. The phase diagram of the system is found to contain a stable critical point and a triple point. The phase diagram of the corresponding constrained cell model is also found to contain both a stable critical point and a triple point.
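The histogram-reweighting step that these coexistence calculations rely on is compact in its single-histogram form; a sketch for the canonical case is given below (the paper's constant-pressure simulations would reweight in both energy and volume).

```python
import numpy as np

def reweighted_average(energies, observable, beta_sim, beta_new):
    """Single-histogram reweighting: reuse samples collected at beta_sim to
    estimate <observable> at a nearby beta_new."""
    log_w = -(beta_new - beta_sim) * energies
    w = np.exp(log_w - log_w.max())   # stabilise before exponentiating
    w /= w.sum()
    return np.dot(w, observable)      # e.g. observable = energies gives <E>
```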
Simulator for concurrent processing data flow architectures
NASA Technical Reports Server (NTRS)
Malekpour, Mahyar R.; Stoughton, John W.; Mielke, Roland R.
1992-01-01
A software simulator capable of simulating the execution of an algorithm graph on a given system under the Algorithm to Architecture Mapping Model (ATAMM) rules is presented. ATAMM is capable of modeling the execution of large-grained algorithms on distributed data flow architectures. Investigating the behavior and determining the performance of an ATAMM-based system requires the aid of software tools. The ATAMM Simulator presented here is capable of determining the performance of a system without having to build a hardware prototype. Case studies are performed on four algorithms to demonstrate the capabilities of the ATAMM Simulator. Simulated results are shown to be comparable to the experimental results of the Advanced Development Model System.
Assessment of Chlorophyll-a Algorithms Considering Different Trophic Statuses and Optimal Bands.
Salem, Salem Ibrahim; Higa, Hiroto; Kim, Hyungjun; Kobayashi, Hiroshi; Oki, Kazuo; Oki, Taikan
2017-07-31
Numerous algorithms have been proposed to retrieve chlorophyll-a concentrations in Case 2 waters; however, the retrieval accuracy is far from satisfactory. In this research, seven algorithms are assessed with different band combinations of multispectral and hyperspectral bands using linear (LN), quadratic polynomial (QP) and power (PW) regression approaches, resulting in altogether 43 algorithmic combinations. These algorithms are evaluated by using simulated and measured datasets to understand their strengths and limitations. Two simulated datasets comprising 500,000 reflectance spectra each, both based on wide ranges of inherent optical properties (IOPs), are generated for the calibration and validation stages. Results reveal that the regression approach (i.e., LN, QP, and PW) has more influence on the simulated dataset than on the measured one. The algorithms that incorporate linear regression provide the highest retrieval accuracy for the simulated dataset. Results from the simulated datasets reveal that the 3-band (3b) algorithms that incorporate the 665-nm and 680-nm bands and the band-tuning selection approach outperformed the other algorithms, with root mean square errors (RMSE) of 15.87 mg·m−3, 16.25 mg·m−3, and 19.05 mg·m−3, respectively. The spatial distributions of the best performing algorithms, for various combinations of chlorophyll-a (Chla) and non-algal particle (NAP) concentrations, show that 3b_tuning_QP and 3b_680_QP outperform the other algorithms in terms of minimum RMSE frequency, at 33.19% and 60.52%, respectively. However, the two algorithms failed to accurately retrieve Chla for many combinations of Chla and NAP, particularly for low Chla and NAP concentrations. In addition, the spatial distribution emphasizes that no single algorithm can provide outstanding accuracy for Chla retrieval and that multiple algorithms should be combined to reduce the error. Comparing the results of the measured and simulated datasets reveals that the algorithms that incorporate the 665-nm band outperform the other algorithms for the measured dataset (RMSE = 36.84 mg·m−3), while the algorithms that incorporate the band-tuning approach provide the highest retrieval accuracy for the simulated dataset (RMSE = 25.05 mg·m−3).
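For reference, the generic 3-band model evaluated in studies of this kind combines three reflectances as (1/R(λ1) − 1/R(λ2))·R(λ3), and the resulting index is regressed (LN, QP, or PW) against measured Chla; a minimal sketch with illustrative band reflectances and regression coefficients:

```python
def three_band_index(r1, r2, r3):
    """Generic 3-band index; band positions (e.g. R1 at 665 nm) are either
    fixed or tuned per dataset before calibration."""
    return (1.0 / r1 - 1.0 / r2) * r3

# Quadratic-polynomial (QP) calibration, coefficients purely illustrative
a, b, c = 120.0, 60.0, 5.0
x = three_band_index(r1=0.020, r2=0.025, r3=0.018)
chla = a * x**2 + b * x + c   # mg m^-3
print(round(chla, 2))
```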
Characterisation of VOC, SVOC, and PM emissions from peat burnt in laboratory simulations
Peat, or organic soil, is a vast store of organic carbon, widely distributed from polar temperate to equatorial regions. Drainage for agriculture and drought are drying vast areas of peat, exposing it to increasing fire risk, which may be exacerbated by climate change. This has ...
Workflow of the Grover algorithm simulation incorporating CUDA and GPGPU
NASA Astrophysics Data System (ADS)
Lu, Xiangwen; Yuan, Jiabin; Zhang, Weiwei
2013-09-01
The Grover quantum search algorithm, one of only a few representative quantum algorithms, can speed up many classical algorithms that use search heuristics. No true quantum computer has yet been developed; for the present, simulation is one effective means of verifying the search algorithm. In this work, we focus on the simulation workflow using a compute unified device architecture (CUDA). Two simulation workflow schemes are proposed. These schemes combine the characteristics of the Grover algorithm and the parallelism of general-purpose computing on graphics processing units (GPGPU). We also analyze the optimization of memory space and memory access from this perspective. We implemented four programs on CUDA to evaluate the performance of the schemes and the optimization. Through experimentation, we analyzed the organization of threads suited to Grover algorithm simulations, compared the storage costs of the four programs, and validated the effectiveness of the optimization. Experimental results also showed that the best-performing program on CUDA outperformed the serial libquantum program on a CPU with a speedup of up to 23 times (12 times on average), depending on the scale of the simulation.
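Independent of GPU specifics, the state to be parallelised is just a length-2ⁿ amplitude vector; the sketch below simulates Grover iterations directly (each line of the loop is what a CUDA kernel would distribute across threads).

```python
import numpy as np

def grover_search(n_qubits, marked):
    """State-vector simulation of Grover's algorithm for a single marked item."""
    N = 2**n_qubits
    psi = np.full(N, 1.0 / np.sqrt(N))            # uniform superposition
    n_iter = int(np.floor(np.pi / 4.0 * np.sqrt(N)))
    for _ in range(n_iter):
        psi[marked] = -psi[marked]                # oracle: phase-flip the target
        psi = 2.0 * psi.mean() - psi              # diffusion: invert about the mean
    return int(np.argmax(np.abs(psi) ** 2))

print(grover_search(10, marked=123))  # -> 123 with high probability
```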
An Improved SoC Test Scheduling Method Based on Simulated Annealing Algorithm
NASA Astrophysics Data System (ADS)
Zheng, Jingjing; Shen, Zhihang; Gao, Huaien; Chen, Bianna; Zheng, Weida; Xiong, Xiaoming
2017-02-01
In this paper, we propose an improved SoC test scheduling method based on the simulated annealing algorithm (SA). Our method first perturbs the IP core assignment of each TAM to produce a new candidate solution for SA, then allocates the TAM width for each TAM using a greedy algorithm and computes the corresponding testing time. Core assignments are accepted according to the simulated annealing criterion until the optimum solution is attained. We ran test scheduling experiments with the international reference circuits provided by the International Test Conference 2002 (ITC'02); the results show that our algorithm is superior to the conventional integer linear programming algorithm (ILP), the simulated annealing algorithm (SA) and the genetic algorithm (GA). When the TAM width reaches 48, 56 and 64, the testing time based on our algorithm is less than that of the classic methods, with optimization rates of 30.74%, 3.32%, and 16.13% respectively. Moreover, the testing time based on our algorithm is very close to that of the improved genetic algorithm (IGA), which is the current state of the art.
Synchronization Of Parallel Discrete Event Simulations
NASA Technical Reports Server (NTRS)
Steinman, Jeffrey S.
1992-01-01
Adaptive, parallel, discrete-event-simulation-synchronization algorithm, Breathing Time Buckets, developed in Synchronous Parallel Environment for Emulation and Discrete Event Simulation (SPEEDES) operating system. Algorithm allows parallel simulations to process events optimistically in fluctuating time cycles that naturally adapt while simulation in progress. Combines best of optimistic and conservative synchronization strategies while avoiding major disadvantages of each. Well suited for modeling communication networks, large-scale war games, simulated flights of aircraft, simulations of computer equipment, mathematical modeling, interactive engineering simulations, and depictions of flows of information.
Effectiveness of simulation for improvement in self-efficacy among novice nurses: a meta-analysis.
Franklin, Ashley E; Lee, Christopher S
2014-11-01
The influence of simulation on self-efficacy for novice nurses has been reported inconsistently in the literature. Effect sizes across studies were synthesized using random-effects meta-analyses. Simulation improved self-efficacy in one-group, pretest-posttest studies (Hedges' g=1.21, 95% CI [0.63, 1.78]; p<0.001). Simulation also was favored over control teaching interventions in improving self-efficacy in studies with experimental designs (Hedges' g=0.27, 95% CI [0.1, 0.44]; p=0.002). In nonexperimental designs, consistent conclusions about the influence of simulation were tempered by significant between-study differences in effects. Simulation is effective at increasing self-efficacy among novice nurses, compared with traditional control groups. Copyright 2014, SLACK Incorporated.
Selected-node stochastic simulation algorithm
NASA Astrophysics Data System (ADS)
Duso, Lorenzo; Zechner, Christoph
2018-04-01
Stochastic simulations of biochemical networks are of vital importance for understanding complex dynamics in cells and tissues. However, existing methods to perform such simulations are associated with computational difficulties and addressing those remains a daunting challenge to the present. Here we introduce the selected-node stochastic simulation algorithm (snSSA), which allows us to exclusively simulate an arbitrary, selected subset of molecular species of a possibly large and complex reaction network. The algorithm is based on an analytical elimination of chemical species, thereby avoiding explicit simulation of the associated chemical events. These species are instead described continuously in terms of statistical moments derived from a stochastic filtering equation, resulting in a substantial speedup when compared to Gillespie's stochastic simulation algorithm (SSA). Moreover, we show that statistics obtained via snSSA profit from a variance reduction, which can significantly lower the number of Monte Carlo samples needed to achieve a certain performance. We demonstrate the algorithm using several biological case studies for which the simulation time could be reduced by orders of magnitude.
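For context, the baseline that snSSA accelerates is Gillespie's direct-method SSA; a minimal sketch for a toy birth-death network, with assumed rate constants not taken from the paper, is:

    import numpy as np

    def gillespie_birth_death(k_birth=10.0, k_death=0.1, x0=0, t_end=50.0, seed=1):
        """Direct-method SSA for 0 -> X (rate k_birth) and X -> 0 (rate k_death*X)."""
        rng = rng = np.random.default_rng(seed)
        t, x = 0.0, x0
        times, states = [t], [x]
        while t < t_end:
            a = np.array([k_birth, k_death * x])  # reaction propensities
            a0 = a.sum()
            t += rng.exponential(1.0 / a0)        # time to next reaction
            if rng.random() < a[0] / a0:          # choose which reaction fires
                x += 1
            else:
                x -= 1
            times.append(t); states.append(x)
        return np.array(times), np.array(states)

    ts, xs = gillespie_birth_death()
    print("final copy number:", xs[-1])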
GROMACS 4: Algorithms for Highly Efficient, Load-Balanced, and Scalable Molecular Simulation.
Hess, Berk; Kutzner, Carsten; van der Spoel, David; Lindahl, Erik
2008-03-01
Molecular simulation is an extremely useful, but computationally very expensive, tool for studies of chemical and biomolecular systems. Here, we present a new implementation of our molecular simulation toolkit GROMACS which now both achieves extremely high performance on single processors, from algorithmic optimizations and hand-coded routines, and simultaneously scales very well on parallel machines. The code encompasses a minimal-communication domain decomposition algorithm, full dynamic load balancing, a state-of-the-art parallel constraint solver, and efficient virtual site algorithms that allow removal of hydrogen atom degrees of freedom to enable integration time steps up to 5 fs for atomistic simulations, also in parallel. To improve the scaling properties of the common particle mesh Ewald electrostatics algorithms, we have in addition used a Multiple-Program, Multiple-Data approach, with separate node domains responsible for direct and reciprocal space interactions. Not only does this combination of algorithms enable extremely long simulations of large systems, but it also provides high simulation performance on quite modest numbers of standard cluster nodes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marchetti, Luca, E-mail: marchetti@cosbi.eu; Priami, Corrado, E-mail: priami@cosbi.eu; University of Trento, Department of Mathematics
This paper introduces HRSSA (Hybrid Rejection-based Stochastic Simulation Algorithm), a new efficient hybrid stochastic simulation algorithm for spatially homogeneous biochemical reaction networks. HRSSA is built on top of RSSA, an exact stochastic simulation algorithm which relies on propensity bounds to select next reaction firings and to reduce the average number of reaction propensity updates needed during the simulation. HRSSA exploits the computational advantage of propensity bounds to manage time-varying transition propensities and to apply dynamic partitioning of reactions, which constitute the two most significant bottlenecks of hybrid simulation. A comprehensive set of simulation benchmarks is provided for evaluating the performance and accuracy of HRSSA against other state-of-the-art algorithms.
Image reconstruction through thin scattering media by simulated annealing algorithm
NASA Astrophysics Data System (ADS)
Fang, Longjie; Zuo, Haoyi; Pang, Lin; Yang, Zuogang; Zhang, Xicheng; Zhu, Jianhua
2018-07-01
A method for reconstructing the image of an object behind thin scattering media by phase modulation is proposed. The optimized phase mask is obtained by modulating the scattered light with a simulated annealing algorithm. The correlation coefficient is used as the fitness function to evaluate the quality of the reconstructed image. The reconstructed images optimized by the simulated annealing algorithm and by a genetic algorithm are compared in detail. The experimental results show that our proposed method achieves better definition and higher speed than the genetic algorithm.
An algorithm for the automatic synchronization of Omega receivers
NASA Technical Reports Server (NTRS)
Stonestreet, W. M.; Marzetta, T. L.
1977-01-01
The Omega navigation system and the requirement for receiver synchronization are discussed. A description of the synchronization algorithm is provided. The numerical simulation and its associated assumptions are examined, and results of the simulation are presented. The suggested form of the synchronization algorithm and the suggested receiver design values are surveyed. A Fortran listing of the synchronization algorithm used in the simulation is also included.
Absolute Humidity and the Seasonality of Influenza (Invited)
NASA Astrophysics Data System (ADS)
Shaman, J. L.; Pitzer, V.; Viboud, C.; Grenfell, B.; Goldstein, E.; Lipsitch, M.
2010-12-01
Much of the observed wintertime increase of mortality in temperate regions is attributed to seasonal influenza. A recent re-analysis of laboratory experiments indicates that absolute humidity strongly modulates the airborne survival and transmission of the influenza virus. Here we show that the onset of increased wintertime influenza-related mortality in the United States is associated with anomalously low absolute humidity levels during the prior weeks. We then use an epidemiological model, in which observed absolute humidity conditions temper influenza transmission rates, to successfully simulate the seasonal cycle of observed influenza-related mortality. The model results indicate that direct modulation of influenza transmissibility by absolute humidity alone is sufficient to produce this observed seasonality. These findings provide epidemiological support for the hypothesis that absolute humidity drives seasonal variations of influenza transmission in temperate regions. In addition, we show that variations of the basic and effective reproductive numbers for influenza, caused by seasonal changes in absolute humidity, are consistent with the general timing of pandemic influenza outbreaks observed for 2009 A/H1N1 in temperate regions. Indeed, absolute humidity conditions correctly identify the region of the United States vulnerable to a third, wintertime wave of pandemic influenza. These findings suggest that the timing of pandemic influenza outbreaks is controlled by a combination of absolute humidity conditions, levels of susceptibility and changes in population mixing and contact rates.
Approximate ground states of the random-field Potts model from graph cuts
NASA Astrophysics Data System (ADS)
Kumar, Manoj; Kumar, Ravinder; Weigel, Martin; Banerjee, Varsha; Janke, Wolfhard; Puri, Sanjay
2018-05-01
While the ground-state problem for the random-field Ising model is polynomial, and can be solved using a number of well-known algorithms for maximum flow or graph cut, the analog random-field Potts model corresponds to a multiterminal flow problem that is known to be NP-hard. Hence an efficient exact algorithm is very unlikely to exist. As we show here, it is nevertheless possible to use an embedding of binary degrees of freedom into the Potts spins in combination with graph-cut methods to solve the corresponding ground-state problem approximately in polynomial time. We benchmark this heuristic algorithm using a set of quasiexact ground states found for small systems from long parallel tempering runs. For a not-too-large number q of Potts states, the method based on graph cuts finds the same solutions in a fraction of the time. We employ the new technique to analyze the breakup length of the random-field Potts model in two dimensions.
Jolly, William M; Nemani, Ramakrishna; Running, Steven W
2004-09-01
Some saplings and shrubs growing in the understory of temperate deciduous forests extend their periods of leaf display beyond that of the overstory, resulting in periods when understory radiation, and hence productivity, are not limited by the overstory canopy. To assess the importance of the duration of leaf display for the productivity of understory and overstory trees of deciduous forests in the northeastern United States, we applied the simulation model BIOME-BGC, with climate data for Hubbard Brook Experimental Forest, New Hampshire, USA and mean ecophysiological data for species of deciduous, temperate forests. Extension of the overstory leaf display period increased overstory leaf area index (LAI) by only 3 to 4% and productivity by only 2 to 4%. In contrast, extending the growing season of the understory relative to the overstory by one week in both spring and fall increased understory LAI by 35% and productivity by 32%. A 2-week extension of the growing period in both spring and fall increased understory LAI by 53% and productivity by 55%.
Constitutive Model Constants for Al7075-T651 and Al7075-T6
NASA Astrophysics Data System (ADS)
Brar, N. S.; Joshi, V. S.; Harris, B. W.
2009-12-01
Aluminum 7075-T651 and 7075-T6 are characterized at quasi-static and high strain rates to determine Johnson-Cook (J-C) strength and fracture model constants. Constitutive model constants are required as input to computer codes to simulate projectile (fragment) impact or similar impact events on structural components made of these materials. Although the two tempers show similar elongation at breakage, the ultimate tensile strength of T651 temper is generally lower than the T6 temper. Johnson-Cook strength model constants (A, B, n, C, and m) for the two alloys are determined from high strain rate tension stress-strain data at room and high temperature to 250°C. The Johnson-Cook fracture model constants are determined from quasi-static and medium strain rate as well as high temperature tests on notched and smooth tension specimens. Although the J-C strength model constants are similar, the fracture model constants show wide variations. Details of the experimental method used and the results for the two alloys are presented.
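For reference, the Johnson-Cook strength model whose constants (A, B, n, C, m) are fitted here has the standard form below, quoted from the general literature rather than from this paper:

    \sigma = \left(A + B\,\varepsilon_p^{\,n}\right)\left(1 + C\ln\dot{\varepsilon}^{*}\right)\left(1 - T^{*m}\right),
    \qquad \dot{\varepsilon}^{*} = \dot{\varepsilon}/\dot{\varepsilon}_0,
    \qquad T^{*} = (T - T_{\mathrm{room}})/(T_{\mathrm{melt}} - T_{\mathrm{room}})

Here \varepsilon_p is the equivalent plastic strain, \dot{\varepsilon}_0 a reference strain rate, and the three factors capture strain hardening, strain-rate hardening, and thermal softening, respectively.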
NASA Astrophysics Data System (ADS)
Yelkenci Köse, Simge; Demir, Leyla; Tunalı, Semra; Türsel Eliiyi, Deniz
2015-02-01
In manufacturing systems, optimal buffer allocation has a considerable impact on capacity improvement. This study presents a simulation optimization procedure to solve the buffer allocation problem in a heat exchanger production plant so as to improve the capacity of the system. For optimization, three metaheuristic-based search algorithms, i.e. a binary genetic algorithm (B-GA), a binary simulated annealing algorithm (B-SA), and a binary tabu search algorithm (B-TS), are proposed. These algorithms are integrated with the simulation model of the production line. The simulation model, which captures the stochastic and dynamic nature of the production line, is used as an evaluation function for the proposed metaheuristics. The experimental study with benchmark problem instances from the literature and the real-life problem shows that the proposed B-TS algorithm outperforms B-GA and B-SA in terms of solution quality.
Temporal Gillespie Algorithm: Fast Simulation of Contagion Processes on Time-Varying Networks
Vestergaard, Christian L.; Génois, Mathieu
2015-01-01
Stochastic simulations are one of the cornerstones of the analysis of dynamical processes on complex networks, and are often the only accessible way to explore their behavior. The development of fast algorithms is paramount to allow large-scale simulations. The Gillespie algorithm can be used for fast simulation of stochastic processes, and variants of it have been applied to simulate dynamical processes on static networks. However, its adaptation to temporal networks remains non-trivial. We here present a temporal Gillespie algorithm that solves this problem. Our method is applicable to general Poisson (constant-rate) processes on temporal networks, stochastically exact, and up to multiple orders of magnitude faster than traditional simulation schemes based on rejection sampling. We also show how it can be extended to simulate non-Markovian processes. The algorithm is easily applicable in practice, and as an illustration we detail how to simulate both Poissonian and non-Markovian models of epidemic spreading. Namely, we provide pseudocode and its implementation in C++ for simulating the paradigmatic Susceptible-Infected-Susceptible and Susceptible-Infected-Recovered models and a Susceptible-Infected-Recovered model with non-constant recovery rates. For empirical networks, the temporal Gillespie algorithm is here typically from 10 to 100 times faster than rejection sampling. PMID:26517860
Optimal estimates of free energies from multistate nonequilibrium work data.
Maragakis, Paul; Spichty, Martin; Karplus, Martin
2006-03-17
We derive the optimal estimates of the free energies of an arbitrary number of thermodynamic states from nonequilibrium work measurements; the work data are collected from forward and reverse switching processes and obey a fluctuation theorem. The maximum likelihood formulation properly reweights all pathways contributing to a free energy difference and is directly applicable to simulations and experiments. We demonstrate dramatic gains in efficiency by combining the analysis with parallel tempering simulations for alchemical mutations of model amino acids.
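In the two-state special case, such maximum-likelihood estimators reduce to the Bennett acceptance ratio; a minimal sketch solving the BAR equation by bisection on synthetic Gaussian work data (all numbers illustrative, energies in units of kT) is:

    import numpy as np

    def bar(w_f, w_r, lo=-50.0, hi=50.0, tol=1e-9):
        """Bennett acceptance-ratio estimate of Delta F (in kT) from forward
        work samples w_f (A->B) and reverse work samples w_r (B->A)."""
        m = np.log(len(w_r) / len(w_f))   # sample-size correction
        def g(df):
            lhs = np.sum(1.0 / (1.0 + np.exp(w_f - df - m)))
            rhs = np.sum(1.0 / (1.0 + np.exp(w_r + df + m)))
            return lhs - rhs              # monotone increasing in df
        while hi - lo > tol:              # bisection on the implicit equation
            mid = 0.5 * (lo + hi)
            if g(mid) < 0.0:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    # Synthetic check: true Delta F = 2 kT, Gaussian work with sigma = 2 kT,
    # so <W_F> = dF + sigma^2/2 = 4 and <W_R> = -dF + sigma^2/2 = 0.
    rng = np.random.default_rng(2)
    print(bar(rng.normal(4.0, 2.0, 5000), rng.normal(0.0, 2.0, 5000)))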
A theoretical comparison of evolutionary algorithms and simulated annealing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, W.E.
1995-08-28
This paper theoretically compares the performance of simulated annealing and evolutionary algorithms. Our main result is that, under mild conditions, a wide variety of evolutionary algorithms can be shown to have greater performance than simulated annealing after a sufficiently large number of function evaluations. This class of EAs includes variants of evolution strategies and evolutionary programming, the canonical genetic algorithm, and a variety of genetic algorithms that have been applied to combinatorial optimization problems. The proof of this result is based on a performance analysis of a very general class of stochastic optimization algorithms, which has implications for the performance of a variety of other optimization algorithms.
QCE: A Simulator for Quantum Computer Hardware
NASA Astrophysics Data System (ADS)
Michielsen, Kristel; de Raedt, Hans
2003-09-01
The Quantum Computer Emulator (QCE) described in this paper consists of a simulator of a generic, general purpose quantum computer and a graphical user interface. The latter is used to control the simulator, to define the hardware of the quantum computer and to debug and execute quantum algorithms. QCE runs in a Windows 98/NT/2000/ME/XP environment. It can be used to validate designs of physically realizable quantum processors and as an interactive educational tool to learn about quantum computers and quantum algorithms. A detailed exposition is given of the implementation of the CNOT and the Toffoli gate, the quantum Fourier transform, Grover's database search algorithm, an order finding algorithm, Shor's algorithm, a three-input adder and a number partitioning algorithm. We also review the results of simulations of an NMR-like quantum computer.
Developments in Human Centered Cueing Algorithms for Control of Flight Simulator Motion Systems
NASA Technical Reports Server (NTRS)
Houck, Jacob A.; Telban, Robert J.; Cardullo, Frank M.
1997-01-01
The authors conducted further research with cueing algorithms for control of flight simulator motion systems. A variation of the so-called optimal algorithm was formulated using simulated aircraft angular velocity input as a basis. Models of the human vestibular sensation system, i.e. the semicircular canals and otoliths, are incorporated within the algorithm. Comparisons of angular velocity cueing responses showed a significant improvement over a formulation using angular acceleration input. Results also compared favorably with the coordinated adaptive washout algorithm, yielding similar results for angular velocity cues while eliminating false cues and reducing the tilt rate for longitudinal cues. These results were confirmed in piloted tests on the current motion system at NASA-Langley, the Visual Motion Simulator (VMS). Proposed future developments by the authors in cueing algorithms are outlined. The new motion system, the Cockpit Motion Facility (CMF), where the final evaluation of the cueing algorithms will be conducted, is also described.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Song, Xia; Hoffman, Forrest M.; Iversen, Colleen M.
Earth system models (ESMs) have been widely used for projecting global vegetation carbon dynamics, yet how well ESMs perform in simulating vegetation carbon density remains untested. Here we have compiled observational data of vegetation carbon density from the literature and existing data sets to evaluate nine ESMs at site, biome, latitude, and global scales. Three variables were chosen for ESM evaluation: root carbon density (including fine and coarse roots), total vegetation carbon density, and the root:total vegetation carbon ratio (R/T ratio). The ESMs performed well in simulating the spatial distribution of carbon densities in root (r = 0.71) and total vegetation (r = 0.62). However, the ESMs had significant biases in simulating absolute carbon densities in root and total vegetation biomass across the majority of land ecosystems, especially in tropical and arctic ecosystems. In particular, ESMs significantly overestimated carbon density in root (183%) and total vegetation biomass (167%) in the climate zones of 10°S–10°N. Substantial discrepancies between modeled and observed R/T ratios were found: the R/T ratios from ESMs were relatively constant, approximately 0.2 across all ecosystems, along latitudinal gradients, and in tropical, temperate, and arctic climatic zones, which differed significantly from the observed large variation in R/T ratios (0.1–0.8). There were substantial inconsistencies between ESM-derived carbon density in root and total vegetation biomass and the R/T ratio at multiple scales, indicating urgent needs for model improvements to carbon allocation algorithms and more intensive field campaigns targeting carbon density in all key vegetation components.
2014-06-12
...interferometry and polarimetry. In the paper, the model was used to simulate SAR data for Mangrove (tropical) and Nezer (temperate) forests at P-band and... Scattering Model Applied to Radiometry, Interferometry, and Polarimetry at P- and L-Band. IEEE Transactions on Geoscience and Remote Sensing 44(4): 849.
PARTIAL INHIBITION OF IN VITRO POLLEN GERMINATION BY SIMULATED SOLAR ULTRAVIOLET-B RADIATION
Pollen from four temperate-latitude taxa were treated with UV radiation in a portion of the UV-B (280-320 nm) waveband during in vitro germination. Inhibition of germination was noted in this pollen compared to samples treated identically except for the exclusion of the UV-B port...
Impact of seasonality on artificial drainage discharge under temperate climate conditions
Ulrike Hirt; Annett Wetzig; Devandra Amatya; Marisa Matranga
2011-01-01
Artificial drainage systems affect all components of the water and matter balance. For the proper simulation of water and solute fluxes, information is needed about artificial drainage discharge rates and their response times. However, there is relatively little information available about the response of artificial drainage systems to precipitation. To address this...
Yang, Fu-lin; Zhou, Guang-sheng; Zhang, Feng; Wang, Feng-yu; Bao, Fang; Ping, Xiao-yan
2009-12-01
Based on the meteorological and biological observation data from the temperate desert steppe ecosystem research station in Sunitezuoqi, Inner Mongolia, during the growing season (May 1 to October 15, 2008), the diurnal and seasonal characteristics of surface albedo in the steppe were analyzed, and a related model was constructed. In the steppe, the diurnal variation of surface albedo was mainly affected by solar altitude, being higher just after sunrise and before sunset and lower at midday. During the growing season, the surface albedo ranged from 0.20 to 0.34, with an average of 0.25; it was higher in May, decreased in June, remained relatively stable from July to September, and increased in October. This seasonal variation was related to the phenology of the canopy leaves and was affected by precipitation events. Soil water content (SWC) and leaf area index (LAI) were the key factors affecting the surface albedo. A model of the response of surface albedo to SWC and LAI was developed, which showed good consistency between simulated and observed surface albedo.
Experimental evidence for beneficial effects of projected climate change on hibernating amphibians.
Üveges, Bálint; Mahr, Katharina; Szederkényi, Márk; Bókony, Veronika; Hoi, Herbert; Hettyey, Attila
2016-05-27
Amphibians are the most threatened vertebrates today, experiencing worldwide declines. In recent years, considerable effort has been invested in identifying the causes of these declines. Climate change has been identified as one such cause; however, the expected effects of the predicted milder, shorter winters on the hibernation success of temperate-zone amphibians have remained controversial, mainly due to a lack of controlled experimental studies. Here we present a laboratory experiment testing the effects of simulated climate change on hibernating juvenile common toads (Bufo bufo). We simulated hibernation conditions by exposing toadlets to current (1.5 °C) or elevated (4.5 °C) hibernation temperatures in combination with current (91 days) or shortened (61 days) hibernation lengths. We found that a shorter winter and a milder hibernation temperature increased the survival of toads during hibernation. Furthermore, the increase in temperature and the shortening of the cold period had a synergistic positive effect on body mass change during hibernation. Consequently, while climate change may pose severe challenges for amphibians of the temperate zone during their activity period, the negative effects may be dampened by the shorter and milder winters experienced during hibernation.
A fast image simulation algorithm for scanning transmission electron microscopy.
Ophus, Colin
2017-01-01
Image simulation for scanning transmission electron microscopy at atomic resolution for samples with realistic dimensions can require very large computation times using existing simulation algorithms. We present a new algorithm named PRISM that combines features of the two most commonly used algorithms, namely the Bloch wave and multislice methods. PRISM uses a Fourier interpolation factor f that has typical values of 4-20 for atomic resolution simulations. We show that in many cases PRISM can provide a speedup that scales with f^4 compared to multislice simulations, with a negligible loss of accuracy. We demonstrate the usefulness of this method with large-scale scanning transmission electron microscopy image simulations of a crystalline nanoparticle on an amorphous carbon substrate.
Yu, Mei; Gao, Qiong
2011-01-01
Background and Aims: The ability to simulate plant competition accurately is essential for plant functional type (PFT)-based models used in climate-change studies, yet gaps and uncertainties remain in our understanding of the details of the competition mechanisms and in ecosystem responses at a landscape level. This study examines secondary succession in a temperate deciduous forest in eastern China with the aim of determining if competition between tree types can be explained by differences in leaf ecophysiological traits and growth allometry, and whether ecophysiological traits and habitat spatial configurations among PFTs differentiate their responses to climate change. Methods: A temperate deciduous broadleaved forest in eastern China was studied, containing two major vegetation types dominated by Quercus liaotungensis (OAK) and by birch/poplar (Betula platyphylla and Populus davidiana; BIP), respectively. The Terrestrial Ecosystem Simulator (TESim) suite of models was used to examine carbon and water dynamics using parameters measured at the site, and the model was evaluated against long-term data collected at the site. Key Results: Simulations indicated that a higher assimilation rate for the BIP vegetation than OAK led to the former's dominance during early successional stages with relatively low competition. In middle/late succession with intensive competition for below-ground resources, BIP, with its lower drought tolerance/resistance and smaller allocation to leaves/roots, gave way to OAK. At landscape scale, predictions with increased temperature extrapolated from existing weather records resulted in increased average net primary productivity (NPP; +19 %), heterotrophic respiration (+23 %) and net ecosystem carbon balance (+17 %). The BIP vegetation in higher and cooler habitats showed 14 % greater sensitivity to increased temperature than the OAK at lower and warmer locations. Conclusions: Drought tolerance/resistance and morphology-related allocation strategy (i.e. more allocation to leaves/roots) played key roles in the competition between the vegetation types. The overall site-average impacts of increased temperature on NPP and carbon stored in plants were found to be positive, despite negative effects of increased respiration and soil water stress, with such impacts being more significant for BIP located in higher and cooler habitats. PMID:21835816
NASA Astrophysics Data System (ADS)
Thomas, R. Q.; Williams, M.
2014-12-01
Carbon (C) and nitrogen (N) cycles are coupled in terrestrial ecosystems through multiple processes, including photosynthesis, tissue allocation, respiration, N fixation, N uptake, and decomposition of litter and soil organic matter. Capturing the constraint of N on terrestrial C uptake and storage has been a focus of the Earth System modelling community. Here we explore the trade-offs and sensitivities of allocating C and N to different tissues in order to optimize the productivity of plants, using a new, simple model of ecosystem C-N cycling and interactions (ACONITE). ACONITE builds on theory related to plant economics in order to predict key ecosystem properties (leaf area index, leaf C:N, N fixation, and plant C use efficiency) based on the optimization of the marginal change in net C or N uptake associated with a change in allocation of C or N to plant tissues. We simulated and evaluated steady-state and transient ecosystem stocks and fluxes in three different forest ecosystem types (tropical evergreen, temperate deciduous, and temperate evergreen). Leaf C:N differed among the three ecosystem types (temperate deciduous < tropical evergreen < temperate evergreen), a result that compared well to observations from a global database describing plant traits. Gross primary productivity (GPP) and net primary productivity (NPP) estimates compared well to observed fluxes at the simulation sites. A sensitivity analysis revealed that the parameterization of the relationship between leaf N and leaf respiration had the largest influence on leaf area index and leaf C:N. Also, a widely used linear leaf N-respiration relationship did not yield a realistic leaf C:N, while a more recently reported non-linear relationship simulated leaf C:N that compared better to the global trait database than the linear relationship. Overall, our ability to constrain leaf area index and allow spatially and temporally variable leaf C:N can help address challenges in simulating these properties in ecosystem and Earth System models. Furthermore, this simple approach with emergent properties based on coupled C-N dynamics has potential for use in research that uses data-assimilation methods to integrate data on both the C and N cycles to improve C flux forecasts.
A Non-Intrusive Algorithm for Sensitivity Analysis of Chaotic Flow Simulations
NASA Technical Reports Server (NTRS)
Blonigan, Patrick J.; Wang, Qiqi; Nielsen, Eric J.; Diskin, Boris
2017-01-01
We demonstrate a novel algorithm for computing the sensitivity of statistics in chaotic flow simulations to parameter perturbations. The algorithm is non-intrusive but requires exposing an interface. Based on the principle of shadowing in dynamical systems, this algorithm is designed to reduce the effect of the sampling error in computing sensitivity of statistics in chaotic simulations. We compare the effectiveness of this method to that of the conventional finite difference method.
Bio-inspired algorithms applied to molecular docking simulations.
Heberlé, G; de Azevedo, W F
2011-01-01
Nature as a source of inspiration has been shown to have a great beneficial impact on the development of new computational methodologies. In this scenario, analyses of the interactions between a protein target and a ligand can be simulated by biologically inspired algorithms (BIAs). These algorithms mimic biological systems to create new paradigms for computation, such as neural networks, evolutionary computing, and swarm intelligence. This review provides a description of the main concepts behind BIAs applied to molecular docking simulations. Special attention is devoted to evolutionary algorithms, guided-directed evolutionary algorithms, and Lamarckian genetic algorithms. Recent applications of these methodologies to protein targets identified in the Mycobacterium tuberculosis genome are described.
Hardness of H13 Tool Steel After Non-isothermal Tempering
NASA Astrophysics Data System (ADS)
Nelson, E.; Kohli, A.; Poirier, D. R.
2018-04-01
A direct method to calculate the tempering response of a tool steel (H13) that exhibits secondary hardening is presented. Based on the traditional method of presenting tempering response in terms of isothermal tempering, we show that the tempering response for a steel undergoing a non-isothermal tempering schedule can be predicted. Experiments comprised (1) isothermal tempering, (2) non-isothermal tempering with relatively slow heating to the process temperature, and (3) fast-heating cycles relevant to tempering by induction heating. After establishing the tempering response of the steel under simple isothermal conditions, the tempering response can be applied to non-isothermal tempering by using a numerical method to calculate the tempering parameter. Calculated results are verified by the experiments.
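A common concrete choice for the tempering parameter mentioned above is the Hollomon-Jaffe form P = T(C + log10 t), with T in kelvin, t in hours, and C roughly 20 for many steels. Assuming additivity, a non-isothermal schedule can be reduced to stepwise equivalent isothermal times, as in this sketch; the constants and the temperature schedule are illustrative, not the paper's data.

    import numpy as np

    def tempering_parameter(times_h, temps_K, C=20.0):
        """Stepwise-equivalent-time evaluation of the Hollomon-Jaffe parameter
        P = T*(C + log10 t) for a non-isothermal schedule (additivity assumed)."""
        t_eq, P = 0.0, None
        for i in range(1, len(times_h)):
            dt = times_h[i] - times_h[i - 1]
            T = temps_K[i]
            if P is not None:
                # Isothermal time at T that reproduces the accumulated P:
                # log10 t = P/T - C
                t_eq = 10.0 ** (P / T - C)
            t_eq += dt
            P = T * (C + np.log10(t_eq))
        return P

    # Hypothetical schedule: ramp 300 K -> 800 K over 1 h, then hold 2 h.
    t = np.linspace(0, 3, 301)
    T = np.where(t < 1.0, 300 + 500 * t, 800.0)
    print(tempering_parameter(t, T))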
NASA Astrophysics Data System (ADS)
Li, Xuejian; Mao, Fangjie; Du, Huaqiang; Zhou, Guomo; Xu, Xiaojun; Han, Ning; Sun, Shaobo; Gao, Guolong; Chen, Liang
2017-04-01
Subtropical forest ecosystems play essential roles in the global carbon cycle and in carbon sequestration functions, which challenges the traditional view that the main functional areas of carbon sequestration lie in the temperate forests of Europe and America. The leaf area index (LAI) is an important biological parameter in the spatiotemporal simulation of the carbon cycle, and it has considerable significance in carbon cycle research. Dynamic retrieval based on remote sensing data is an important method with which to obtain large-scale, high-accuracy assessments of LAI. This study developed an algorithm for assimilating LAI dynamics based on an integrated ensemble Kalman filter using MODIS LAI data, MODIS reflectance data, and canopy reflectance data modeled by PROSAIL, for three typical types of subtropical forest (Moso bamboo forest, Lei bamboo forest, and evergreen and deciduous broadleaf forest) in China during 2014-2015. There were some assimilation errors in winter because of the poor data quality of the MODIS product in that season. Overall, the assimilated LAI matched the observed LAI well, with R2 of 0.82, 0.93, and 0.87; RMSE of 0.73, 0.49, and 0.42; and aBIAS of 0.50, 0.23, and 0.03 for Moso bamboo forest, Lei bamboo forest, and evergreen and deciduous broadleaf forest, respectively. The algorithm greatly decreased the uncertainty of the MODIS LAI in the growing season and improved its accuracy. The advantage of the algorithm is its use of biophysical parameters (e.g., measured LAI) in the LAI assimilation, which makes it possible to assimilate long-term MODIS LAI time series and to provide high-accuracy LAI data for studying carbon cycle characteristics in subtropical forest ecosystems.
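The PROSAIL coupling is model-specific, but the ensemble Kalman analysis step at the core of such assimilation schemes is generic; a minimal perturbed-observation sketch with synthetic LAI numbers and assumed error covariances is:

    import numpy as np

    def enkf_update(X, y, H, R, rng):
        """One EnKF analysis step.
        X: (n_state, n_ens) forecast ensemble; y: (n_obs,) observation;
        H: (n_obs, n_state) observation operator; R: (n_obs, n_obs) obs error cov."""
        n_obs, n_ens = len(y), X.shape[1]
        A = X - X.mean(axis=1, keepdims=True)           # ensemble anomalies
        Pf = A @ A.T / (n_ens - 1)                      # forecast covariance
        K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)  # Kalman gain
        # Perturbed observations keep the analysis ensemble spread consistent.
        Y = y[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, n_ens).T
        return X + K @ (Y - H @ X)

    rng = np.random.default_rng(3)
    X = rng.normal(3.0, 0.8, size=(1, 50))   # LAI ensemble, 50 members
    y = np.array([3.6])                      # e.g., one MODIS LAI observation
    H = np.array([[1.0]]); R = np.array([[0.25]])
    print(enkf_update(X, y, H, R, rng).mean())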
Algorithm for Simulating Atmospheric Turbulence and Aeroelastic Effects on Simulator Motion Systems
NASA Technical Reports Server (NTRS)
Ercole, Anthony V.; Cardullo, Frank M.; Kelly, Lon C.; Houck, Jacob A.
2012-01-01
Atmospheric turbulence produces high frequency accelerations in aircraft, typically greater than the response to pilot input. Motion system equipped flight simulators must present cues representative of the aircraft response to turbulence in order to maintain the integrity of the simulation. Currently, turbulence motion cueing produced by flight simulator motion systems has been less than satisfactory because the turbulence profiles have been attenuated by the motion cueing algorithms. This report presents a new turbulence motion cueing algorithm, referred to as the augmented turbulence channel. Like the previous turbulence algorithms, the output of the channel only augments the vertical degree of freedom of motion. This algorithm employs a parallel aircraft model and an optional high bandwidth cueing filter. Simulation of aeroelastic effects is also an area where frequency content must be preserved by the cueing algorithm. The current aeroelastic implementation uses a similar secondary channel that supplements the primary motion cue. Two studies were conducted using the NASA Langley Visual Motion Simulator and Cockpit Motion Facility to evaluate the effect of the turbulence channel and aeroelastic model on pilot control input. Results indicate that the pilot is better correlated with the aircraft response when the augmented channel is in place.
NASA Astrophysics Data System (ADS)
Suh, Donghyuk; Radak, Brian K.; Chipot, Christophe; Roux, Benoît
2018-01-01
Molecular dynamics (MD) trajectories based on classical equations of motion can be used to sample the configurational space of complex molecular systems. However, brute-force MD often converges slowly due to the ruggedness of the underlying potential energy surface. Several schemes have been proposed to address this problem by effectively smoothing the potential energy surface. However, in order to recover the proper Boltzmann equilibrium probability distribution, these approaches must then rely on statistical reweighting techniques or generate the simulations within a Hamiltonian tempering replica-exchange scheme. The present work puts forth a novel hybrid sampling propagator combining Metropolis-Hastings Monte Carlo (MC) with proposed moves generated by non-equilibrium MD (neMD). This hybrid neMD-MC propagator comprises three elements: (i) an atomic system is dynamically propagated for some period of time using standard equilibrium MD on the correct potential energy surface; (ii) the system is then propagated for a brief period of time during what is referred to as a "boosting phase," via a time-dependent Hamiltonian that is evolved toward the perturbed potential energy surface and then back to the correct potential energy surface; (iii) the resulting configuration at the end of the neMD trajectory is then accepted or rejected according to a Metropolis criterion before returning to step 1. A symmetric two-end momentum reversal prescription is used at the end of the neMD trajectories to guarantee that the hybrid neMD-MC sampling propagator obeys microscopic detailed balance and rigorously yields the equilibrium Boltzmann distribution. The hybrid neMD-MC sampling propagator is designed and implemented to enhance the sampling by relying on the accelerated MD and solute tempering schemes. It can also be combined with the adaptive biased force sampling algorithm. Illustrative tests with specific biomolecular systems indicate that the method can yield a significant speedup.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liang, Faming; Cheng, Yichen; Lin, Guang
2014-06-13
Simulated annealing has been widely used in the solution of optimization problems. As many researchers know, simulated annealing cannot be guaranteed to locate the global optima unless a logarithmic cooling schedule is used. However, the logarithmic cooling schedule is so slow that no one can afford such a long CPU time. This paper proposes a new stochastic optimization algorithm, the so-called simulated stochastic approximation annealing algorithm, which is a combination of simulated annealing and the stochastic approximation Monte Carlo algorithm. Under the framework of stochastic approximation Markov chain Monte Carlo, it is shown that the new algorithm can work with a cooling schedule in which the temperature decreases much faster than in the logarithmic cooling schedule, e.g., a square-root cooling schedule, while still guaranteeing that the global optima are reached as the temperature tends to zero. The new algorithm has been tested on a few benchmark optimization problems, including feed-forward neural network training and protein folding. The numerical results indicate that the new algorithm can significantly outperform simulated annealing and other competitors.
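The practical gap between the two schedules is easy to see numerically: the logarithmic schedule T_k = c/ln(k+1) is still warm after a million steps, while a square-root schedule T_k = T_0/sqrt(k) has essentially frozen. The constants here are arbitrary illustrations.

    import math

    c, T0 = 1.0, 1.0
    for k in (10, 10**3, 10**6):
        t_log = c / math.log(k + 1)    # logarithmic cooling
        t_sqrt = T0 / math.sqrt(k)     # square-root cooling
        print(f"k={k:>8}: T_log={t_log:.4f}  T_sqrt={t_sqrt:.6f}")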
McGuire, A.D.; Melillo, J.M.; Randerson, J.T.; Parton, W.J.; Heimann, Martin; Meier, R.A.; Clein, Joy S.; Kicklighter, D.W.; Sauf, W.
2000-01-01
Simulations by global terrestrial biogeochemical models (TBMs) consistently underestimate the concentration of atmospheric carbon dioxide (CO2) at high latitude monitoring stations during the nongrowing season. We hypothesized that heterotrophic respiration is underestimated during the nongrowing season primarily because TBMs do not generally consider the insulative effects of snowpack on soil temperature. To evaluate this hypothesis, we compared the performance of baseline and modified versions of three TBMs in simulating the seasonal cycle of atmospheric CO2 at high latitude CO2 monitoring stations; the modified version maintained soil temperature at 0 °C when modeled snowpack was present. The three TBMs include the Carnegie-Ames-Stanford Approach (CASA), Century, and the Terrestrial Ecosystem Model (TEM). In comparison with the baseline simulation of each model, the snowpack simulations caused higher releases of CO2 between November and March and greater uptake of CO2 between June and August for latitudes north of 30°N. We coupled the monthly estimates of CO2 exchange, the seasonal carbon dioxide flux fields generated by the HAMOCC3 seasonal ocean carbon cycle model, and fossil fuel source fields derived from standard sources to the three-dimensional atmospheric transport model TM2, forced by observed winds, to simulate the seasonal cycle of atmospheric CO2 at each of seven high latitude monitoring stations. In comparison to the CO2 concentrations simulated with the baseline fluxes of each TBM, concentrations simulated using the snowpack fluxes are generally in better agreement with observed concentrations between August and March at each of the monitoring stations. Thus, representation of the insulative effects of snowpack in TBMs generally improves simulation of atmospheric CO2 concentrations in high latitudes during both the late growing season and the nongrowing season. These simulations highlight the global importance of biogeochemical processes during the nongrowing season in estimating the carbon balance of ecosystems in northern high and temperate latitudes.
An efficient Cellular Potts Model algorithm that forbids cell fragmentation
NASA Astrophysics Data System (ADS)
Durand, Marc; Guesnet, Etienne
2016-11-01
The Cellular Potts Model (CPM) is a lattice based modeling technique which is widely used for simulating cellular patterns such as foams or biological tissues. Despite its realism and generality, the standard Monte Carlo algorithm used in the scientific literature to evolve this model preserves the connectivity of cells over a limited range of simulation temperatures only. We present a new algorithm in which cell fragmentation is forbidden at all simulation temperatures. This significantly enhances the realism of the simulated patterns. It also increases the computational efficiency compared with the standard CPM algorithm, even at the same simulation temperature, thanks to the time spared in not attempting unrealistic moves. Moreover, our algorithm restores the detailed balance equation, ensuring that the long-time behavior is independent of the chosen acceptance rate and of the chosen path in temperature space.
Ma, Jun; Hu, Yuanman; Bu, Rencang; Chang, Yu; Deng, Huawei; Qin, Qin
2014-01-01
The aboveground carbon sequestration rate (ACSR) reflects the influence of climate change on forest dynamics. To reveal the long-term effects of climate change on forest succession and carbon sequestration, a forest landscape succession and disturbance model (LANDIS Pro7.0) was used to simulate the ACSR of a temperate forest at the community and species levels in northeastern China, based on both current and predicted climatic data. At the community level, the ACSR of mixed Korean pine hardwood forests and of mixed larch hardwood forests fluctuated throughout the simulation, while large declines in ACSR emerged midway through the simulation in spruce-fir forests and aspen-white birch forests, respectively. At the species level, the ACSR of all conifers except Korean pine declined greatly around the 2070s. The ACSR of dominant hardwoods in the Lesser Khingan Mountains area, such as Manchurian ash, Amur cork, black elm, and ribbed birch, fluctuated over broad ranges. Pioneer species experienced a sharp decline around the 2080s and finally disappeared from the simulation. Differences in the ACSR among the various climates were mainly identified in mixed Korean pine hardwood forests, in all conifers, and in a few hardwoods in the last quarter of the simulation. These results indicate that climate warming can influence the ACSR in the Lesser Khingan Mountains area, with the largest impact commonly emerging in the A2 scenario. The ACSR of coniferous species was more strongly affected by climate change than that of deciduous species. PMID:24763409
Olson, Mark A
2018-01-22
Intrinsically disordered proteins are characterized by their large manifold of thermally accessible conformations and their related statistical weights, making them an interesting target of simulation studies. To assess the development of a computational framework for modeling this distinct class of proteins, this work examines temperature-based replica-exchange simulations to generate a conformational ensemble of a 28-residue peptide from the Ebola virus protein VP35. Starting from a prefolded helix-β-turn-helix topology observed in a crystallographic assembly, the simulation strategy tested is the recently refined CHARMM36m force field combined with a generalized Born solvent model. A comparison of two replica-exchange methods is provided, where one is a traditional approach with a fixed set of temperatures and the other is an adaptive scheme in which the thermal windows are allowed to move in temperature space. The assessment is further extended to include a comparison with equivalent CHARMM22 simulation data sets. The analysis finds CHARMM36m to shift the minimum in the potential of mean force (PMF) to a lower fractional helicity compared with CHARMM22, while the latter showed greater conformational plasticity along the helix-forming reaction coordinate. Among the simulation models, only the adaptive tempering method with CHARMM36m found an ensemble of conformational heterogeneity consisting of transitions between α-helix-β-hairpin folds and unstructured states that produced a PMF of fractional fold propensity in qualitative agreement with circular dichroism experiments reporting a disordered peptide.
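Both the fixed-ladder and adaptive variants rest on the standard replica-exchange swap criterion, accepting an exchange between neighboring replicas with probability min{1, exp[(beta_i - beta_j)(E_i - E_j)]}; a minimal sketch with toy energies and temperatures is:

    import math, random

    def swap_accepted(beta_i, beta_j, E_i, E_j):
        """Standard replica-exchange acceptance for swapping configurations
        between inverse temperatures beta_i and beta_j."""
        delta = (beta_i - beta_j) * (E_i - E_j)
        return delta >= 0.0 or random.random() < math.exp(delta)

    # Toy usage: neighboring replicas at 300 K and 320 K (kB in kcal/mol/K).
    kB = 0.0019872
    b1, b2 = 1.0 / (kB * 300.0), 1.0 / (kB * 320.0)
    print(swap_accepted(b1, b2, E_i=-120.0, E_j=-118.0))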
Structure of aqueous proline via parallel tempering molecular dynamics and neutron diffraction.
Troitzsch, R Z; Martyna, G J; McLain, S E; Soper, A K; Crain, J
2007-07-19
The structure of aqueous L-proline amino acid has been the subject of much debate centering on the validity of various proposed models, differing widely in the extent to which local and long-range correlations are present. Here, aqueous proline is investigated by atomistic, replica exchange molecular dynamics simulations, and the results are compared to neutron diffraction and small angle neutron scattering (SANS) data, which have been reported recently (McLain, S.; Soper, A.; Terry, A.; Watts, A. J. Phys. Chem. B 2007, 111, 4568). Comparisons between neutron experiments and simulation are made via the static structure factor S(Q) which is measured and computed from several systems with different H/D isotopic compositions at a concentration of 1:20 molar ratio. Several different empirical water models (TIP3P, TIP4P, and SPC/E) in conjunction with the CHARMM22 force field are investigated. Agreement between experiment and simulation is reasonably good across the entire Q range although there are significant model-dependent variations in some cases. In general, agreement is improved slightly upon application of approximate quantum corrections obtained from gas-phase path integral simulations. Dimers and short oligomeric chains formed by hydrogen bonds (frequently bifurcated) coexist with apolar (hydrophobic) contacts. These emerge as the dominant local motifs in the mixture. Evidence for long-range association is more equivocal: No long-range structures form spontaneously in the MD simulations, and no obvious low-Q signature is seen in the SANS data. Moreover, associations introduced artificially to replicate a long-standing proposed mesoscale structure for proline correlations as an initial condition are annealed out by parallel tempering MD simulations. However, some small residual aggregates do remain, implying a greater degree of long-range order than is apparent in the SANS data.
Comparing multiple turbulence restoration algorithms performance on noisy anisoplanatic imagery
NASA Astrophysics Data System (ADS)
Rucci, Michael A.; Hardie, Russell C.; Dapore, Alexander J.
2017-05-01
In this paper, we compare the performance of multiple turbulence mitigation algorithms to restore imagery degraded by atmospheric turbulence and camera noise. In order to quantify and compare algorithm performance, imaging scenes were simulated by applying noise and varying levels of turbulence. For the simulation, a Monte-Carlo wave optics approach is used to simulate the spatially and temporally varying turbulence in an image sequence. A Poisson-Gaussian noise mixture model is then used to add noise to the observed turbulence image set. These degraded image sets are processed with three separate restoration algorithms: Lucky Look imaging, bispectral speckle imaging, and a block matching method with restoration filter. These algorithms were chosen because they incorporate different approaches and processing techniques. The results quantitatively show how well the algorithms are able to restore the simulated degraded imagery.
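The Poisson-Gaussian degradation step in such a pipeline can be reproduced generically; a sketch with assumed gain and read-noise values, not the authors' parameters, is:

    import numpy as np

    def add_poisson_gaussian_noise(img, gain=4.0, read_sigma=2.0, rng=None):
        """Apply a Poisson-Gaussian noise mixture: signal-dependent shot noise
        (photon counts = img/gain) plus additive Gaussian read noise."""
        rng = rng or np.random.default_rng()
        shot = rng.poisson(img / gain) * gain          # signal-dependent term
        read = rng.normal(0.0, read_sigma, img.shape)  # signal-independent term
        return shot + read

    clean = np.full((64, 64), 100.0)   # flat test frame
    noisy = add_poisson_gaussian_noise(clean)
    print(noisy.mean(), noisy.std())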
Discriminating between Light- and Heavy-Tailed Distributions with Limit Theorem.
Burnecki, Krzysztof; Wylomanska, Agnieszka; Chechkin, Aleksei
2015-01-01
In this paper we propose an algorithm to distinguish between light- and heavy-tailed probability laws underlying random datasets. The idea of the algorithm, which is visual and easy to implement, is to check whether the underlying law belongs to the domain of attraction of the Gaussian or non-Gaussian stable distribution by examining its rate of convergence. The method makes it possible to discriminate between stable and various non-stable distributions, and to differentiate between distributions that appear identical under the standard Kolmogorov-Smirnov test. In particular, it helps to distinguish between stable and Student's t probability laws, as well as between stable and tempered stable laws, cases regarded in the literature as very cumbersome. Finally, we illustrate the procedure on plasma data to identify cases with the so-called L-H transition.
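As a rough illustration of the convergence-rate idea (not necessarily the authors' exact statistic), one can watch how quickly normalized block sums approach Gaussian behavior as the block size grows:

```python
import numpy as np

def convergence_diagnostic(x, block_sizes=(10, 30, 100, 300, 1000)):
    """Illustrative convergence-rate diagnostic. For each block size n,
    form normalized block sums S_n / sqrt(n) and report their excess
    kurtosis: it approaches 0 quickly for laws in the Gaussian domain
    of attraction and decays slowly or diverges otherwise."""
    out = {}
    for n in block_sizes:
        m = len(x) // n
        s = x[: m * n].reshape(m, n).sum(axis=1) / np.sqrt(n)
        z = (s - s.mean()) / s.std()
        out[n] = (z ** 4).mean() - 3.0  # excess kurtosis of block sums
    return out
```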
Model Predictive Control Based Motion Drive Algorithm for a Driving Simulator
NASA Astrophysics Data System (ADS)
Rehmatullah, Faizan
In this research, we develop a model predictive control based motion drive algorithm for the driving simulator at Toronto Rehabilitation Institute. Motion drive algorithms exploit the limitations of the human vestibular system to create a perception of motion within the constrained workspace of a simulator. In the absence of visual cues, the human perception system is unable to distinguish between acceleration and the force of gravity. The motion drive algorithm determines control inputs to displace the simulator platform and, by using the resulting inertial forces and angular rates, creates the perception of motion. By using model predictive control, we can optimize the use of simulator workspace for every maneuver while reproducing the perceived motion of the vehicle. With its ability to handle nonlinear constraints, model predictive control allows us to incorporate workspace limitations.
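A receding-horizon step of this kind can be sketched with a generic double-integrator platform model; the weights, limits, horizon, and the use of cvxpy below are assumptions for illustration, not the thesis's actual formulation:

```python
import numpy as np
import cvxpy as cp

dt, N = 0.05, 40
A = np.array([[1.0, dt], [0.0, 1.0]])       # state = [position, velocity]
B = np.array([[0.5 * dt ** 2], [dt]])       # input = platform acceleration

def mpc_step(x0, a_ref):
    """Track a reference perceived acceleration a_ref (length N) while
    keeping the platform inside an assumed +/- 0.5 m workspace."""
    x = cp.Variable((2, N + 1))
    u = cp.Variable((1, N))
    cost = cp.sum_squares(u[0, :] - a_ref) + 1e-2 * cp.sum_squares(x[0, :])
    cons = [x[:, 0] == x0]
    for k in range(N):
        cons += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k]]
    cons += [cp.abs(x[0, :]) <= 0.5, cp.abs(u) <= 5.0]
    cp.Problem(cp.Minimize(cost), cons).solve()
    return u.value[0, 0]   # apply the first input, then re-solve next step
```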
NASA Technical Reports Server (NTRS)
Krosel, S. M.; Milner, E. J.
1982-01-01
The application of predictor-corrector integration algorithms developed for the digital parallel processing environment is investigated. The algorithms are implemented and evaluated through the use of a software simulator which provides an approximate representation of the parallel processing hardware. Test cases which focus on the use of the algorithms are presented, and a specific application using a linear model of a turbofan engine is considered. Results are presented showing the effects of integration step size and the number of processors on simulation accuracy. Real time performance, interprocessor communication, and algorithm startup are also discussed.
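As a generic example of the predictor-corrector class under study (the report's exact scheme is not specified here), a second-order Adams-Bashforth-Moulton pair looks as follows; the function evaluations per step are the natural units to distribute across processors:

```python
def abm2_step(f, t, y, f_prev, h):
    """One step of a second-order Adams-Bashforth-Moulton
    predictor-corrector pair for y' = f(t, y).

    f_prev : f evaluated at the previous step (for the AB2 predictor)
    Returns the corrected state and f(t, y) for the next call.
    """
    f_n = f(t, y)
    y_pred = y + h * (1.5 * f_n - 0.5 * f_prev)        # AB2 predictor
    y_corr = y + 0.5 * h * (f_n + f(t + h, y_pred))    # AM2 (trapezoidal) corrector
    return y_corr, f_n
```

For a linear model such as the turbofan engine example, f(t, y) = A @ y, and the matrix-vector products can be partitioned across processors.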
Efficient Constant-Time Complexity Algorithm for Stochastic Simulation of Large Reaction Networks.
Thanh, Vo Hong; Zunino, Roberto; Priami, Corrado
2017-01-01
Exact stochastic simulation is an indispensable tool for a quantitative study of biochemical reaction networks. The simulation realizes the time evolution of the model by randomly choosing a reaction to fire and updating the system state, with probability proportional to the reaction propensity. Two computationally expensive tasks in simulating large biochemical networks are the selection of next reaction firings and the update of reaction propensities due to state changes. We present in this work a new exact algorithm to optimize both of these simulation bottlenecks. Our algorithm employs composition-rejection on the propensity bounds of reactions to select the next reaction firing. The selection of next reaction firings is independent of the number of reactions, while the update of propensities is skipped and performed only when necessary. It therefore provides favorable scaling of the computational complexity in simulating large reaction networks. We benchmark our new algorithm against the state-of-the-art algorithms available in the literature to demonstrate its applicability and efficiency.
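The composition-rejection selection at the heart of the method can be sketched as follows; the data structures are illustrative, not the authors' implementation:

```python
import random

def select_next_reaction(propensity, groups, group_total, group_bound):
    """Schematic composition-rejection selection.

    propensity  : current propensity a_j of each reaction j
    groups      : group id -> list of member reaction indices
    group_total : group id -> sum of the members' propensity upper bounds
    group_bound : group id -> per-member propensity upper bound
    """
    # composition: pick a group with probability proportional to its total bound
    r = random.random() * sum(group_total.values())
    for g, total in group_total.items():
        if r < total:
            break
        r -= total
    # rejection: accept a member with probability a_j / bound, so the
    # expected selection cost stays constant, independent of network size
    while True:
        j = random.choice(groups[g])
        if random.random() * group_bound[g] <= propensity[j]:
            return j
```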
Exact simulation of max-stable processes.
Dombry, Clément; Engelke, Sebastian; Oesting, Marco
2016-06-01
Max-stable processes play an important role as models for spatial extreme events. Their complex structure as the pointwise maximum over an infinite number of random functions makes their simulation difficult. Algorithms based on finite approximations are often inexact and computationally inefficient. We present a new algorithm for exact simulation of a max-stable process at a finite number of locations. It relies on the idea of simulating only the extremal functions, that is, those functions in the construction of a max-stable process that effectively contribute to the pointwise maximum. We further generalize the algorithm by Dieker & Mikosch (2015) for Brown-Resnick processes and use it for exact simulation via the spectral measure. We study the complexity of both algorithms, prove that our new approach via extremal functions is always more efficient, and provide closed-form expressions for their implementation that cover most popular models for max-stable processes and multivariate extreme value distributions. For simulation on dense grids, an adaptive design of the extremal function algorithm is proposed.
A parallel algorithm for switch-level timing simulation on a hypercube multiprocessor
NASA Technical Reports Server (NTRS)
Rao, Hariprasad Nannapaneni
1989-01-01
The parallel approach to speeding up simulation is studied, specifically the simulation of digital LSI MOS circuitry on the Intel iPSC/2 hypercube. The simulation algorithm is based on RSIM, an event driven switch-level simulator that incorporates a linear transistor model for simulating digital MOS circuits. Parallel processing techniques based on the concepts of Virtual Time and rollback are utilized so that portions of the circuit may be simulated on separate processors, in parallel, for as large a speedup as possible. A partitioning algorithm is also developed in order to subdivide the circuit for parallel processing.
NASA Astrophysics Data System (ADS)
Sabzikar, Farzad; Meerschaert, Mark M.; Chen, Jinghua
2015-07-01
Fractional derivatives and integrals are convolutions with a power law. Multiplying by an exponential factor leads to tempered fractional derivatives and integrals. Tempered fractional diffusion equations, where the usual second derivative in space is replaced by a tempered fractional derivative, govern the limits of random walk models with an exponentially tempered power law jump distribution. The limiting tempered stable probability densities exhibit semi-heavy tails, which are commonly observed in finance. Tempered power law waiting times lead to tempered fractional time derivatives, which have proven useful in geophysics. The tempered fractional derivative or integral of a Brownian motion, called a tempered fractional Brownian motion, can exhibit semi-long range dependence. The increments of this process, called tempered fractional Gaussian noise, provide a useful new stochastic model for wind speed data. A tempered fractional difference forms the basis for numerical methods to solve tempered fractional diffusion equations, and it also provides a useful new correlation model in time series.
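To make the numerical side concrete, the exponentially tempered Grünwald-Letnikov weights that underlie such finite-difference schemes can be sketched as below; this is a schematic only, and production tempered-diffusion schemes add shift and normalization corrections omitted here:

```python
import numpy as np
from scipy.special import binom

def tempered_grunwald_weights(alpha, lam, h, K):
    """Schematic tempered Grunwald-Letnikov weights
    w_k = (-1)^k C(alpha, k) exp(-lam*k*h)."""
    k = np.arange(K + 1)
    return (-1.0) ** k * binom(alpha, k) * np.exp(-lam * k * h)

def tempered_fractional_difference(f, alpha, lam, h):
    """Apply the tempered fractional difference along a 1-D grid:
    out[i] = h^{-alpha} * sum_k w_k f[i-k]."""
    w = tempered_grunwald_weights(alpha, lam, h, len(f) - 1)
    out = np.zeros_like(f)
    for i in range(len(f)):
        out[i] = np.dot(w[: i + 1], f[i::-1]) / h ** alpha
    return out
```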
Anatomy assisted PET image reconstruction incorporating multi-resolution joint entropy
NASA Astrophysics Data System (ADS)
Tang, Jing; Rahmim, Arman
2015-01-01
A promising approach in PET image reconstruction is to incorporate high resolution anatomical information (measured from MR or CT) taking the anato-functional similarity measures such as mutual information or joint entropy (JE) as the prior. These similarity measures only classify voxels based on intensity values, while neglecting structural spatial information. In this work, we developed an anatomy-assisted maximum a posteriori (MAP) reconstruction algorithm wherein the JE measure is supplied by spatial information generated using wavelet multi-resolution analysis. The proposed wavelet-based JE (WJE) MAP algorithm involves calculation of derivatives of the subband JE measures with respect to individual PET image voxel intensities, which we have shown can be computed very similarly to how the inverse wavelet transform is implemented. We performed a simulation study with the BrainWeb phantom creating PET data corresponding to different noise levels. Realistically simulated T1-weighted MR images provided by BrainWeb modeling were applied in the anatomy-assisted reconstruction with the WJE-MAP algorithm and the intensity-only JE-MAP algorithm. Quantitative analysis showed that the WJE-MAP algorithm performed similarly to the JE-MAP algorithm at low noise level in the gray matter (GM) and white matter (WM) regions in terms of noise versus bias tradeoff. When noise increased to medium level in the simulated data, the WJE-MAP algorithm started to surpass the JE-MAP algorithm in the GM region, which is less uniform with smaller isolated structures compared to the WM region. In the high noise level simulation, the WJE-MAP algorithm presented clear improvement over the JE-MAP algorithm in both the GM and WM regions. In addition to the simulation study, we applied the reconstruction algorithms to real patient studies involving DPA-173 PET data and Florbetapir PET data with corresponding T1-MPRAGE MRI images. Compared to the intensity-only JE-MAP algorithm, the WJE-MAP algorithm resulted in comparable regional mean values to those from the maximum likelihood algorithm while reducing noise. Achieving robust performance in various noise-level simulation and patient studies, the WJE-MAP algorithm demonstrates its potential in clinical quantitative PET imaging.
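The intensity-only JE measure that serves as the baseline reduces to the entropy of a joint intensity histogram of the two co-registered images; a minimal sketch (the wavelet-subband extension is not shown):

```python
import numpy as np

def joint_entropy(img_a, img_b, bins=64):
    """Joint entropy (in nats) of two co-registered images, computed
    from their joint intensity histogram."""
    hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins before taking logs
    return -np.sum(p * np.log(p))
```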
A parallel simulated annealing algorithm for standard cell placement on a hypercube computer
NASA Technical Reports Server (NTRS)
Jones, Mark Howard
1987-01-01
A parallel version of a simulated annealing algorithm is presented which is targeted to run on a hypercube computer. A strategy for mapping the cells in a two dimensional area of a chip onto processors in an n-dimensional hypercube is proposed such that both small and large distance moves can be applied. Two types of moves are allowed: cell exchanges and cell displacements. The computation of the cost function in parallel among all the processors in the hypercube is described along with a distributed data structure that needs to be stored in the hypercube to support parallel cost evaluation. A novel tree broadcasting strategy is used extensively in the algorithm for updating cell locations in the parallel environment. Studies on the performance of the algorithm on example industrial circuits show that it is faster and gives better final placement results than the uniprocessor simulated annealing algorithms. An improved uniprocessor algorithm is proposed which is based on the improved results obtained from parallelization of the simulated annealing algorithm.
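The move set and acceptance rule described here follow standard simulated annealing; a minimal uniprocessor sketch with a hypothetical placement/cost interface:

```python
import math
import random

def metropolis_accept(delta_cost, T):
    """Always accept improving moves; accept worsening moves with
    Boltzmann probability exp(-delta_cost / T)."""
    return delta_cost <= 0 or random.random() < math.exp(-delta_cost / T)

def anneal_step(placement, cost, T):
    """One annealing move on a cell placement: a random pairwise
    exchange (cell displacements would be handled analogously).
    `placement` maps cell -> slot; `cost` scores a placement."""
    a, b = random.sample(list(placement), 2)
    old = cost(placement)
    placement[a], placement[b] = placement[b], placement[a]
    if not metropolis_accept(cost(placement) - old, T):
        placement[a], placement[b] = placement[b], placement[a]  # undo
```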
Carbon sequestration in managed temperate coniferous forests under climate change
NASA Astrophysics Data System (ADS)
Dymond, Caren C.; Beukema, Sarah; Nitschke, Craig R.; Coates, K. David; Scheller, Robert M.
2016-03-01
Management of temperate forests has the potential to increase carbon sinks and mitigate climate change. However, those opportunities may be confounded by negative climate change impacts. We therefore need a better understanding of climate change alterations to temperate forest carbon dynamics before developing mitigation strategies. The purpose of this project was to investigate the interactions of species composition, fire, management, and climate change in the Copper-Pine Creek valley, a temperate coniferous forest with a wide range of growing conditions. To do so, we used the LANDIS-II modelling framework, including the new Forest Carbon Succession extension, to simulate forest ecosystems under four different productivity scenarios, with and without climate change effects, until 2050. Significantly, the new extension allowed us to calculate the net sector productivity, a carbon accounting metric that integrates aboveground and belowground carbon dynamics, disturbances, and the eventual fate of forest products. The model output was validated against literature values. The results implied that the species' optimum growing conditions relative to current and future conditions strongly influenced future carbon dynamics. Warmer growing conditions led to increased carbon sinks and storage in the colder and wetter ecoregions but not necessarily in the others. Climate change impacts varied among species and site conditions, indicating that both of these components need to be taken into account when considering climate change mitigation activities and adaptive management. The introduction of a new carbon indicator, net sector productivity, promises to be useful in assessing management effectiveness and mitigation activities.
Vectorized algorithms for spiking neural network simulation.
Brette, Romain; Goodman, Dan F M
2011-06-01
High-level languages (Matlab, Python) are popular in neuroscience because they are flexible and accelerate development. However, for simulating spiking neural networks, the cost of interpretation is a bottleneck. We describe a set of algorithms to simulate large spiking neural networks efficiently with high-level languages using vector-based operations. These algorithms constitute the core of Brian, a spiking neural network simulator written in the Python language. Vectorized simulation makes it possible to combine the flexibility of high-level languages with the computational efficiency usually associated with compiled languages.
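The flavor of such vector-based operations can be conveyed by a leaky integrate-and-fire update written without Python loops; parameters are illustrative and this is not Brian's actual internal code:

```python
import numpy as np

def lif_step(v, spikes_in, W, dt=1e-4, tau=0.02, v_th=1.0, v_reset=0.0):
    """One vectorized update of a leaky integrate-and-fire population.

    v         : membrane potentials, shape (N,)
    spikes_in : boolean spike vector from the previous step, shape (N,)
    W         : synaptic weight matrix, shape (N, N)
    """
    v = v + dt * (-v / tau) + W @ spikes_in  # leak + synaptic input, no loop
    fired = v >= v_th                        # threshold test for the population
    v[fired] = v_reset                       # vectorized reset
    return v, fired
```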
Developing a Learning Algorithm-Generated Empirical Relaxer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mitchell, Wayne; Kallman, Josh; Toreja, Allen
2016-03-30
One of the main difficulties when running Arbitrary Lagrangian-Eulerian (ALE) simulations is determining how much to relax the mesh during the Eulerian step. This determination is currently made by the user on a simulation-by-simulation basis. We present a Learning Algorithm-Generated Empirical Relaxer (LAGER) which uses a random-forest regression algorithm to automate this decision process. We also demonstrate that LAGER successfully relaxes a variety of test problems, maintains simulation accuracy, and has the potential to significantly decrease both the person-hours and computational hours needed to run a successful ALE simulation.
Simulation of ICESat-2 canopy height retrievals for different ecosystems
NASA Astrophysics Data System (ADS)
Neuenschwander, A. L.
2016-12-01
Slated for launch in late 2017 (or early 2018), the ICESat-2 satellite will provide a global distribution of geodetic measurements from a space-based laser altimeter of both the terrain surface and relative canopy heights, offering significant benefits to society through a variety of applications, ranging from improved global digital terrain models to distributions of above-ground vegetation structure. The ATLAS instrument designed for ICESat-2 will utilize a different technology than is found on most laser mapping systems. The photon counting technology of the ATLAS instrument onboard ICESat-2 will record the arrival time associated with a single photon detection. That detection can occur anywhere within the vertical distribution of the reflected signal, that is, anywhere within the vertical extent of the canopy. This uncertainty about where within the vegetation layer the photon will be returned from is referred to as the vertical sampling error. Preliminary simulation studies to estimate vertical sampling error have been conducted for several ecosystems, including woodland savanna, montane conifers, temperate hardwoods, tropical forest, and boreal forest. The results from these simulations indicate that the canopy heights reported on the ATL08 data product will underestimate the top canopy height by 1-4 m. Although simulation results indicate that ICESat-2 will underestimate top canopy height, there is a strong correlation between ICESat-2 heights and relative canopy height metrics (e.g. RH75, RH90). In tropical forest, simulation results indicate that the ICESat-2 height correlates strongly with RH90. Similarly, in temperate broadleaf forest, the simulated ICESat-2 heights were also strongly correlated with RH90. In boreal forest, the simulated ICESat-2 heights are strongly correlated with RH75 heights. It is hypothesized that the correlations between simulated ICESat-2 heights and canopy height metrics are a function of both canopy cover and vegetation physiology (e.g. leaf size/shape), which contribute to the horizontal and vertical structure of the vegetation.
Evaluating long-term cumulative hydrologic effects of forest management: a conceptual approach
Robert R. Ziemer
1992-01-01
It is impractical to address experimentally many aspects of cumulative hydrologic effects, since to do so would require studying large watersheds for a century or more. Monte Carlo simulations were conducted using three hypothetical 10,000-ha fifth-order forested watersheds. Most of the physical processes expressed by the model are transferable from temperate to...
Approximate Algorithms for Computing Spatial Distance Histograms with Accuracy Guarantees
Grupcev, Vladimir; Yuan, Yongke; Tu, Yi-Cheng; Huang, Jin; Chen, Shaoping; Pandit, Sagar; Weng, Michael
2014-01-01
Particle simulation has become an important research tool in many scientific and engineering fields. Data generated by such simulations impose great challenges to database storage and query processing. One of the queries against particle simulation data, the spatial distance histogram (SDH) query, is the building block of many high-level analytics, and requires quadratic time to compute using a straightforward algorithm. Previous work has developed efficient algorithms that compute exact SDHs. While beating the naive solution, such algorithms are still not practical in processing SDH queries against large-scale simulation data. In this paper, we take a different path to tackle this problem by focusing on approximate algorithms with provable error bounds. We first present a solution derived from the aforementioned exact SDH algorithm, and this solution has running time that is unrelated to the system size N. We also develop a mathematical model to analyze the mechanism that leads to errors in the basic approximate algorithm. Our model provides insights on how the algorithm can be improved to achieve higher accuracy and efficiency. Such insights give rise to a new approximate algorithm with improved time/accuracy tradeoff. Experimental results confirm our analysis. PMID:24693210
LAWS simulation: Sampling strategies and wind computation algorithms
NASA Technical Reports Server (NTRS)
Emmitt, G. D. A.; Wood, S. A.; Houston, S. H.
1989-01-01
In general, work has continued on developing and evaluating algorithms designed to manage the Laser Atmospheric Wind Sounder (LAWS) lidar pulses and to compute the horizontal wind vectors from the line-of-sight (LOS) measurements. These efforts fall into three categories: Improvements to the shot management and multi-pair algorithms (SMA/MPA); observing system simulation experiments; and ground-based simulations of LAWS.
On the suitability of the connection machine for direct particle simulation
NASA Technical Reports Server (NTRS)
Dagum, Leonard
1990-01-01
The algorithmic structure of the vectorizable Stanford particle simulation (SPS) method was examined, and the structure was reformulated in data parallel form. Some of the SPS algorithms can be directly translated to data parallel form, but several of the vectorizable algorithms have no direct data parallel equivalent. This requires the development of new, strictly data parallel algorithms. In particular, a new sorting algorithm is developed to identify collision candidates in the simulation, and a master/slave algorithm is developed to minimize communication cost in large table lookups. Validation of the method is undertaken through test calculations for thermal relaxation of a gas, shock wave profiles, and shock reflection from a stationary wall. A qualitative measure is provided of the performance of the Connection Machine for direct particle simulation. The massively parallel architecture of the Connection Machine is found quite suitable for this type of calculation. However, there are difficulties in taking full advantage of this architecture because of the lack of a broad based tradition of data parallel programming. An important outcome of this work has been new data parallel algorithms specifically of use for direct particle simulation but which also expand the data parallel vocabulary.
Simulated parallel annealing within a neighborhood for optimization of biomechanical systems.
Higginson, J S; Neptune, R R; Anderson, F C
2005-09-01
Optimization problems for biomechanical systems have become extremely complex. Simulated annealing (SA) algorithms have performed well in a variety of test problems and biomechanical applications; however, despite advances in computer speed, convergence to optimal solutions for systems of even moderate complexity has remained prohibitively slow. The objective of this study was to develop a portable parallel version of a SA algorithm for solving optimization problems in biomechanics. The algorithm for simulated parallel annealing within a neighborhood (SPAN) was designed to minimize interprocessor communication time and closely retain the heuristics of the serial SA algorithm. The computational speed of the SPAN algorithm scaled linearly with the number of processors on different computer platforms for a simple quadratic test problem and for a more complex forward dynamic simulation of human pedaling.
Development of a Tool for an Efficient Calibration of CORSIM Models
DOT National Transportation Integrated Search
2014-08-01
This project proposes a Memetic Algorithm (MA) for the calibration of microscopic traffic flow simulation models. The proposed MA includes a combination of genetic and simulated annealing algorithms. The genetic algorithm performs the exploration of ...
Estimation of Nitrous Oxide Emissions from US Grasslands.
Mummey; Smith; Bluhm
2000-02-01
Nitrous oxide (N(2)O) emissions from temperate grasslands are poorly quantified and may be an important part of the atmospheric N(2)O budget. In this study N(2)O emissions were simulated for 1052 grassland sites in the United States using the NGAS model of Parton and others (1996) coupled with an organic matter decomposition model. N(2)O flux was calculated for each site using soil and land use data obtained from the National Resource Inventory (NRI) database and weather data obtained from NASA. The estimates were regionalized based upon temperature and moisture isotherms. Annual N(2)O emissions for each region were based on the grassland area of each region and the mean estimated annual N(2)O flux from NRI grassland sites in the region. The regional fluxes ranged from 0.18 to 1.02 kg N(2)O N/ha/yr with the mean flux for all regions being 0.28 kg N(2)O N/ha/yr. Even though fluxes from the western regions were relatively low, these regions made the largest contribution to total emissions due to their large grassland area. Total US grassland N(2)O emissions were estimated to be about 67 Gg N(2)O N/yr. Emissions from the Great Plains states, which contain the largest expanse of natural grassland in the United States, were estimated to average 0.24 kg N(2)O N/ha/yr. Using the annual flux estimate for the temperate Great Plains, we estimate that temperate grasslands worldwide may potentially produce 0.27 Tg N(2)O N/yr. Even though our estimate for global temperate grassland N(2)O emissions is less than published estimates for other major temperate and tropical biomes, our results indicate that temperate grasslands are a significant part of both United States and global atmospheric N(2)O budgets. This study demonstrates the utility of models for regional N(2)O flux estimation although additional data from carefully designed field studies is needed to further validate model results.
Brischoux, François; Dupoué, Andréaz; Lourdais, Olivier; Angelier, Frédéric
2016-02-01
Temperate ectotherms are expected to benefit from climate change (e.g., increased activity time), but the impacts of climate warming during the winter have mostly been overlooked. Milder winters are expected to decrease body condition upon emergence, and thus to affect crucial life-history traits, such as survival and reproduction. Mild winter temperatures could also trigger a state of chronic physiological stress due to inadequate thermal conditions that preclude both dormancy and activity. We tested these hypotheses on a typical temperate ectothermic vertebrate, the aspic viper (Vipera aspis). We simulated different wintering conditions for three groups of aspic vipers (cold: ~6 °C, mild: ~14 °C, and no wintering: ~24 °C) during a one-month period. We found that mild wintering conditions induced a marked decrease in body condition and provoked an alteration of some hormonal mechanisms involved in emergence. Such effects are likely to have ultimate consequences for reproduction, and thus population persistence. We emphasize that future studies should incorporate the critical, albeit neglected, winter season when assessing the potential impacts of global changes on ectotherms. Copyright © 2015 Elsevier Inc. All rights reserved.
Discrete Event-based Performance Prediction for Temperature Accelerated Dynamics
NASA Astrophysics Data System (ADS)
Junghans, Christoph; Mniszewski, Susan; Voter, Arthur; Perez, Danny; Eidenbenz, Stephan
2014-03-01
We present an example of a new class of tools that we call application simulators, parameterized fast-running proxies of large-scale scientific applications using parallel discrete event simulation (PDES). We demonstrate our approach with a TADSim application simulator that models the Temperature Accelerated Dynamics (TAD) method, which is an algorithmically complex member of the Accelerated Molecular Dynamics (AMD) family. The essence of the TAD application is captured without the computational expense and resource usage of the full code. We use TADSim to quickly characterize the runtime performance and algorithmic behavior for the otherwise long-running simulation code. We further extend TADSim to model algorithm extensions to standard TAD, such as speculative spawning of the compute-bound stages of the algorithm, and predict performance improvements without having to implement such a method. Focused parameter scans have allowed us to study algorithm parameter choices over far more scenarios than would be possible with the actual simulation. This has led to interesting performance-related insights into the TAD algorithm behavior and suggested extensions to the TAD method.
NASA Astrophysics Data System (ADS)
Thomas, R. Q.; Williams, M.
2014-04-01
Carbon (C) and nitrogen (N) cycles are coupled in terrestrial ecosystems through multiple processes including photosynthesis, tissue allocation, respiration, N fixation, N uptake, and decomposition of litter and soil organic matter. Capturing the constraint of N on terrestrial C uptake and storage has been a focus of the Earth System modelling community. However there is little understanding of the trade-offs and sensitivities of allocating C and N to different tissues in order to optimize the productivity of plants. Here we describe a new, simple model of ecosystem C-N cycling and interactions (ACONITE), which builds on theory related to plant economics in order to predict key ecosystem properties (leaf area index, leaf C : N, N fixation, and plant C use efficiency) using emergent constraints provided by marginal returns on investment for C and/or N allocation. We simulated and evaluated steady-state ecosystem stocks and fluxes in three different forest ecosystem types (tropical evergreen, temperate deciduous, and temperate evergreen). Leaf C : N differed among the three ecosystem types (temperate deciduous < tropical evergreen < temperate evergreen), a result that compared well to observations from a global database describing plant traits. Gross primary productivity (GPP) and net primary productivity (NPP) estimates compared well to observed fluxes at the simulation sites. Simulated N fixation at steady-state, calculated based on relative demand for N and the marginal return on C investment to acquire N, was an order of magnitude higher in the tropical forest than in the temperate forest, consistent with observations. A sensitivity analysis revealed that parameterization of the relationship between leaf N and leaf respiration had the largest influence on leaf area index and leaf C : N. Also, a widely used linear leaf N-respiration relationship did not yield a realistic leaf C : N, while a more recently reported non-linear relationship performed better. A parameter governing how photosynthesis scales with day length had the largest influence on total vegetation C, GPP, and NPP. Multiple parameters associated with photosynthesis, respiration, and N uptake influenced the rate of N fixation. Overall, our ability to constrain leaf area index and have spatially and temporally variable leaf C : N helps address challenges for ecosystem and Earth System models. Furthermore, the simple approach with emergent properties based on coupled C-N dynamics has potential for use in research that uses data-assimilation methods to integrate data on both the C and N cycles to improve C flux forecasts.
Pronk, Sander; Pouya, Iman; Lundborg, Magnus; Rotskoff, Grant; Wesén, Björn; Kasson, Peter M; Lindahl, Erik
2015-06-09
Computational chemistry and other simulation fields are critically dependent on computing resources, but few problems scale efficiently to the hundreds of thousands of processors available in current supercomputers, particularly for molecular dynamics. This has turned into a bottleneck as new hardware generations primarily provide more processing units rather than making individual units much faster, which simulation applications are addressing by increasingly focusing on sampling with algorithms such as free-energy perturbation, Markov state modeling, metadynamics, or milestoning. All these rely on combining results from multiple simulations into a single observation. They are potentially powerful approaches that aim to predict experimental observables directly, but this comes at the expense of added complexity in selecting sampling strategies and keeping track of dozens to thousands of simulations and their dependencies. Here, we describe how the distributed execution framework Copernicus allows the expression of such algorithms in generic workflows: dataflow programs. Because dataflow algorithms explicitly state dependencies of each constituent part, algorithms only need to be described on a conceptual level, after which the execution is maximally parallel. The fully automated execution facilitates the optimization of these algorithms with adaptive sampling, where undersampled regions are automatically detected and targeted without user intervention. We show how several such algorithms can be formulated for computational chemistry problems, and how they are executed efficiently with many loosely coupled simulations using either distributed or parallel resources with Copernicus.
Optical simulation of quantum algorithms using programmable liquid-crystal displays
DOE Office of Scientific and Technical Information (OSTI.GOV)
Puentes, Graciana; La Mela, Cecilia; Ledesma, Silvia
2004-04-01
We present a scheme to perform an all-optical simulation of quantum algorithms and maps. The main components are lenses to efficiently implement the Fourier transform and programmable liquid-crystal displays to introduce space-dependent phase changes on a classical optical beam. We show how to simulate the Deutsch-Jozsa and Grover quantum algorithms using essentially the same optical array programmed in two different ways.
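For reference, the operators that the optical array encodes have a compact matrix form; below is a standard textbook sketch of Grover's iteration in numpy, shown only to make the simulated algorithm concrete (it does not model the optics):

```python
import numpy as np

n, marked = 3, 5                      # 3 qubits, searching for state |5>
N = 2 ** n
state = np.full(N, 1 / np.sqrt(N))    # uniform superposition

oracle = np.eye(N)
oracle[marked, marked] = -1.0                          # phase-flip the target
diffusion = 2.0 * np.full((N, N), 1.0 / N) - np.eye(N)  # inversion about the mean

for _ in range(int(np.floor(np.pi / 4 * np.sqrt(N)))):
    state = diffusion @ (oracle @ state)

print(np.argmax(state ** 2))          # -> 5 with high probability
```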
A joint equalization algorithm in high speed communication systems
NASA Astrophysics Data System (ADS)
Hao, Xin; Lin, Changxing; Wang, Zhaohui; Cheng, Binbin; Deng, Xianjin
2018-02-01
This paper presents a joint equalization algorithm for high speed communication systems. The algorithm combines the strengths of traditional equalization approaches by cascading pre-equalization and post-equalization stages. The pre-equalization stage uses the constant modulus algorithm (CMA), which is not sensitive to frequency offset. Pre-equalization is located before the carrier recovery loop in order to improve the performance of the carrier recovery loop and remove most of the frequency offset. The post-equalization stage uses the multi-modulus algorithm (MMA) in order to overcome the residual frequency offset. This paper first analyzes the advantages and disadvantages of several equalization algorithms, and then simulates the proposed joint equalization algorithm on the Matlab platform. The simulation results show the constellation diagrams and the bit error rate curve; both show that the proposed joint equalization algorithm is better than the traditional algorithms. The residual frequency offset is shown directly in the constellation diagrams. When the SNR is 14 dB, the bit error rate of the simulated system with the proposed joint equalization algorithm is 103 times better than CMA equalization, 77 times better than MMA equalization, and 9 times better than CMA-MMA equalization.
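The CMA update used in the pre-equalization stage has a compact generic form; the step size and target modulus below are illustrative:

```python
import numpy as np

def cma_update(w, x, mu=1e-3, R2=1.0):
    """One constant modulus algorithm (CMA) tap update. CMA minimizes
    E[(|y|^2 - R2)^2] and needs no carrier phase reference, which is
    why it tolerates frequency offset.

    w : equalizer taps (complex), x : input samples aligned with the taps
    """
    y = np.vdot(w, x)                 # equalizer output y = w^H x
    e = y * (np.abs(y) ** 2 - R2)     # constant-modulus error term
    return w - mu * np.conj(e) * x, y
```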
Vehicle routing problem with time windows using natural inspired algorithms
NASA Astrophysics Data System (ADS)
Pratiwi, A. B.; Pratama, A.; Sa’diyah, I.; Suprajitno, H.
2018-03-01
The process of distributing goods calls for a strategy that minimizes the total cost of operational activities, but several constraints have to be satisfied, namely the capacity of the vehicles and the service time windows of the customers. The resulting Vehicle Routing Problem with Time Windows (VRPTW) is a complex constrained problem. This paper proposes nature-inspired algorithms for dealing with the constraints of VRPTW, namely the Bat Algorithm and Cat Swarm Optimization. The Bat Algorithm is hybridized with Simulated Annealing: the worst solution of the Bat Algorithm is replaced by the solution from Simulated Annealing. Cat Swarm Optimization, which is based on the behavior of cats, is improved using the Crow Search Algorithm for simpler and faster convergence. The computational results show that these algorithms perform well in minimizing the total distance, and that larger populations yield better computational performance. The improved Cat Swarm Optimization with Crow Search gives better performance than the hybridization of the Bat Algorithm and Simulated Annealing in dealing with big data.
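The constraints that make VRPTW hard are simple to state; a schematic feasibility check for a single route, with hypothetical data structures:

```python
def route_feasible(route, demand, capacity, travel, ready, due, service):
    """Check one route against vehicle capacity and customer time windows
    (a schematic of the VRPTW constraints; 0 is the depot, and all data
    structures are illustrative)."""
    if sum(demand[c] for c in route) > capacity:
        return False                           # capacity violated
    t, prev = 0.0, 0
    for c in route:
        t = max(t + travel[prev][c], ready[c])  # wait if arriving early
        if t > due[c]:
            return False                        # time window violated
        t += service[c]
        prev = c
    return True
```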
Two-Photon Excitation STED Microscopy with Time-Gated Detection
Coto Hernández, Iván; Castello, Marco; Lanzanò, Luca; d’Amora, Marta; Bianchini, Paolo; Diaspro, Alberto; Vicidomini, Giuseppe
2016-01-01
We report on a novel two-photon excitation stimulated emission depletion (2PE-STED) microscope based on time-gated detection. The time-gated detection allows for the effective silencing of the fluorophores using moderate stimulated emission beam intensity. This opens the possibility of implementing an efficient 2PE-STED microscope with a stimulated emission beam running in continuous-wave mode. The continuous-wave stimulated emission beam tempers the laser architecture's complexity and cost, but the time-gated detection degrades the signal-to-noise ratio (SNR) and signal-to-background ratio (SBR) of the image. We recover the SNR and the SBR through a multi-image deconvolution algorithm. Indeed, the algorithm simultaneously reassigns early photons (normally discarded by the time-gated detection) to their original positions and removes the background induced by the stimulated emission beam. We exemplify the benefits of this implementation by imaging sub-cellular structures. Finally, we discuss the extension of this algorithm to future all-pulsed 2PE-STED implementations based on time-gated detection and a nanosecond laser source. PMID:26757892
Improved pulse laser ranging algorithm based on high speed sampling
NASA Astrophysics Data System (ADS)
Gao, Xuan-yi; Qian, Rui-hai; Zhang, Yan-mei; Li, Huan; Guo, Hai-chao; He, Shi-jie; Guo, Xiao-kang
2016-10-01
Narrow pulse laser ranging achieves long-range target detection using laser pulses with low-divergence beams. Pulse laser ranging is widely used in the military, industrial, civil, engineering, and transportation fields. In this paper, an improved narrow pulse laser ranging algorithm based on high speed sampling is studied. Firstly, theoretical simulation models, including the laser emission and the pulse laser ranging algorithm, are built and analyzed. An improved pulse ranging algorithm is developed; this new algorithm combines the matched filter algorithm and the constant fraction discrimination (CFD) algorithm. After the algorithm simulation, a laser ranging hardware system is set up to implement the improved algorithm. The laser ranging hardware system includes a laser diode, a laser detector, and a high sample rate data logging circuit. Subsequently, using the Verilog HDL language, the improved algorithm, a fusion of the matched filter algorithm and the CFD algorithm, is implemented in an FPGA chip. Finally, a laser ranging experiment is carried out on the hardware system to compare the ranging performance of the improved algorithm against the matched filter algorithm and the CFD algorithm. The test results demonstrate that the laser ranging hardware system achieves high speed processing and high speed sampling data transmission. The analysis shows that the improved algorithm achieves a ranging precision of 0.3 m, meeting the expected performance and consistent with the theoretical simulation.
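The CFD step at the core of the improved algorithm locates an amplitude-independent zero crossing on the pulse; a minimal sketch with assumed delay and fraction parameters (a matched filter would be applied to the pulse beforehand to raise the SNR):

```python
import numpy as np

def cfd_crossing(pulse, dt, delay_samples=4, fraction=0.4):
    """Constant fraction discrimination timing sketch: subtract an
    attenuated copy from a delayed copy and locate the zero crossing,
    which occurs at a constant fraction of the pulse amplitude."""
    delayed = np.zeros_like(pulse)
    delayed[delay_samples:] = pulse[:-delay_samples]
    bipolar = delayed - fraction * pulse
    sign = np.signbit(bipolar).astype(int)
    i = np.where(np.diff(sign) != 0)[0][0]            # first sign change
    frac = bipolar[i] / (bipolar[i] - bipolar[i + 1])  # linear interpolation
    return (i + frac) * dt
```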
Bernal, Javier; Torres-Jimenez, Jose
2015-01-01
SAGRAD (Simulated Annealing GRADient), a Fortran 77 program for computing neural networks for classification using batch learning, is discussed. Neural network training in SAGRAD is based on a combination of simulated annealing and Møller's scaled conjugate gradient algorithm, the latter a variation of the traditional conjugate gradient method, better suited for the nonquadratic nature of neural networks. Different aspects of the implementation of the training process in SAGRAD are discussed, such as the efficient computation of gradients and multiplication of vectors by Hessian matrices that are required by Møller's algorithm; the (re)initialization of weights with simulated annealing required to (re)start Møller's algorithm the first time and each time thereafter that it shows insufficient progress in reaching a possibly local minimum; and the use of simulated annealing when Møller's algorithm, after possibly making considerable progress, becomes stuck at a local minimum or flat area of weight space. Outlines of the scaled conjugate gradient algorithm, the simulated annealing procedure and the training process used in SAGRAD are presented together with results from running SAGRAD on two examples of training data.
Loading relativistic Maxwell distributions in particle simulations
NASA Astrophysics Data System (ADS)
Zenitani, S.
2015-12-01
In order to study energetic plasma phenomena by using particle-in-cell (PIC) and Monte-Carlo simulations, we need to deal with relativistic velocity distributions in these simulations. However, numerical algorithms to deal with relativistic distributions are not well known. In this contribution, we review basic algorithms to load relativistic Maxwell distributions in PIC and Monte-Carlo simulations. For the stationary relativistic Maxwellian, the inverse transform method and the Sobol algorithm are reviewed. To boost particles to obtain a relativistic shifted-Maxwellian, two rejection methods are newly proposed in a physically transparent manner. Their acceptance efficiencies are 50% for generic cases and 100% for symmetric distributions. They can be combined with arbitrary base algorithms.
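A sketch in the spirit of the flipping-type rejection step for boosting a rest-frame Maxwellian is given below; it follows the physical flux-weighting argument, and the paper's exact prescriptions may differ:

```python
import numpy as np

def boost_maxwellian(ux, uy, uz, beta, rng=None):
    """Boost four-velocities drawn from a rest-frame relativistic
    Maxwellian to obtain a shifted Maxwellian drifting with velocity
    beta (in units of c) along x. Particles moving against the boost
    have their u_x sign flipped with probability -beta*v_x, which
    reweights the particle flux before the Lorentz transform."""
    rng = rng or np.random.default_rng()
    gamma = np.sqrt(1.0 + ux**2 + uy**2 + uz**2)
    flip = rng.random(ux.shape) < -beta * (ux / gamma)
    ux = np.where(flip, -ux, ux)
    Gamma = 1.0 / np.sqrt(1.0 - beta**2)
    return Gamma * (ux + beta * gamma), uy, uz
```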
A Parallel, Finite-Volume Algorithm for Large-Eddy Simulation of Turbulent Flows
NASA Technical Reports Server (NTRS)
Bui, Trong T.
1999-01-01
A parallel, finite-volume algorithm has been developed for large-eddy simulation (LES) of compressible turbulent flows. This algorithm includes piecewise linear least-square reconstruction, trilinear finite-element interpolation, Roe flux-difference splitting, and second-order MacCormack time marching. Parallel implementation is done using the message-passing programming model. In this paper, the numerical algorithm is described. To validate the numerical method for turbulence simulation, LES of fully developed turbulent flow in a square duct is performed for a Reynolds number of 320 based on the average friction velocity and the hydraulic diameter of the duct. Direct numerical simulation (DNS) results are available for this test case, and the accuracy of this algorithm for turbulence simulations can be ascertained by comparing the LES solutions with the DNS results. The effects of grid resolution, upwind numerical dissipation, and subgrid-scale dissipation on the accuracy of the LES are examined. Comparison with DNS results shows that the standard Roe flux-difference splitting dissipation adversely affects the accuracy of the turbulence simulation. For accurate turbulence simulations, only 3-5 percent of the standard Roe flux-difference splitting dissipation is needed.
Impacts of climate change on paddy rice yield in a temperate climate.
Kim, Han-Yong; Ko, Jonghan; Kang, Suchel; Tenhunen, John
2013-02-01
The crop simulation model is a suitable tool for evaluating the potential impacts of climate change on crop production and on the environment. This study investigates the effects of climate change on paddy rice production in temperate climate regions under the East Asian monsoon system using the CERES-Rice 4.0 crop simulation model. The model was first calibrated and validated for crop production under elevated CO2 and various temperature conditions. Data were obtained from experiments performed using a temperature gradient field chamber (TGFC) with a CO2 enrichment system installed at Chonnam National University in Gwangju, Korea in 2009 and 2010. Based on the empirical calibration and validation, the model was applied to deliver a simulated forecast of paddy rice production for the region, as well as for the other Japonica rice growing regions in East Asia, projecting for years 2050 and 2100. In these climate change projection simulations for Gwangju, Korea, the yield increases (+12.6% and +22.0%) due to CO2 elevation were offset by temperature increases, with cultivar-dependent variation, resulting in significant net yield decreases (-22.1% and -35.0%). The projected yields were determined to increase with latitude due to reduced temperature effects, showing the highest increase of any of the study locations (+24%) in Harbin, China. It appears that the potential negative impact on crop production may be mediated by appropriate cultivar selection and cultivation changes such as alteration of the planting date. Results reported in this study using the CERES-Rice 4.0 model demonstrate its promising potential for further application in simulating the impacts of climate change on rice production from a local to a regional scale under the monsoon climate system. © 2012 Blackwell Publishing Ltd.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walker, Anthony P; Hanson, Paul J; DeKauwe, Martin G
2014-01-01
Free Air CO2 Enrichment (FACE) experiments provide a remarkable wealth of data to test the sensitivities of terrestrial ecosystem models (TEMs). In this study, a broad set of 11 TEMs was compared to 22 years of data from two contrasting FACE experiments in temperate forests of the south-eastern US: the evergreen Duke Forest and the deciduous Oak Ridge forest. We evaluated the models' ability to reproduce observed net primary productivity (NPP), transpiration, and leaf area index (LAI) in ambient CO2 treatments. Encouragingly, many models simulated annual NPP and transpiration within observed uncertainty. Daily transpiration model errors were often related to errors in leaf area phenology and peak LAI. Our analysis demonstrates that the simulation of LAI often drives the simulation of transpiration, and hence there is a need to adopt the most appropriate hypothesis-driven methods to simulate and predict LAI. Of the three competing hypotheses determining peak LAI, (1) optimisation to maximise carbon export, (2) increasing SLA with canopy depth, and (3) the pipe model, the pipe model produced LAI closest to the observations. Modelled phenology was either prescribed or based on broader empirical calibrations to climate. In some cases, simulation accuracy was achieved through compensating biases in component variables. For example, NPP accuracy was sometimes achieved with counter-balancing biases in nitrogen use efficiency and nitrogen uptake. Combined analysis of parallel measurements aids the identification of offsetting biases, without which over-confidence in model abilities to predict ecosystem function may emerge, potentially leading to erroneous predictions of change under future climates.
Coslovich, Daniele; Ozawa, Misaki; Kob, Walter
2018-05-17
The physical behavior of glass-forming liquids presents complex features of both dynamic and thermodynamic nature. Some studies indicate the presence of thermodynamic anomalies and of crossovers in the dynamic properties, but their origin and degree of universality is difficult to assess. Moreover, conventional simulations are barely able to cover the range of temperatures at which these crossovers usually occur. To address these issues, we simulate the Kob-Andersen Lennard-Jones mixture using efficient protocols based on multi-CPU and multi-GPU parallel tempering. Our setup enables us to probe the thermodynamics and dynamics of the liquid at equilibrium well below the critical temperature of the mode-coupling theory, [Formula: see text]. We find that below [Formula: see text] the analysis is hampered by partial crystallization of the metastable liquid, which nucleates extended regions populated by large particles arranged in an fcc structure. By filtering out crystalline samples, we reveal that the specific heat grows in a regular manner down to [Formula: see text] . Possible thermodynamic anomalies suggested by previous studies can thus occur only in a region of the phase diagram where the system is highly metastable. Using the equilibrium configurations obtained from the parallel tempering simulations, we perform molecular dynamics and Monte Carlo simulations to probe the equilibrium dynamics down to [Formula: see text]. A temperature-derivative analysis of the relaxation time and diffusion data allows us to assess different dynamic scenarios around [Formula: see text]. Hints of a dynamic crossover come from analysis of the four-point dynamic susceptibility. Finally, we discuss possible future numerical strategies to clarify the nature of crossover phenomena in glass-forming liquids.
Kasper, Joseph M; Williams-Young, David B; Vecharynski, Eugene; Yang, Chao; Li, Xiaosong
2018-04-10
The time-dependent Hartree-Fock (TDHF) and time-dependent density functional theory (TDDFT) equations allow one to probe electronic resonances of a system quickly and inexpensively. However, the iterative solution of the eigenvalue problem can be challenging or impossible to converge using standard methods such as the Davidson algorithm for spectrally dense regions in the interior of the spectrum, as are common in X-ray absorption spectroscopy (XAS). More robust solvers, such as the generalized preconditioned locally harmonic residual (GPLHR) method, can alleviate this problem, but at the expense of higher average computational cost. A hybrid method is proposed which adapts to the problem in order to maximize computational performance while providing the superior convergence of GPLHR. In addition, a modification to the GPLHR algorithm is proposed to adaptively choose the shift parameter to enforce convergence of states above a predefined energy threshold.
NVU dynamics. I. Geodesic motion on the constant-potential-energy hypersurface.
Ingebrigtsen, Trond S; Toxvaerd, Søren; Heilmann, Ole J; Schrøder, Thomas B; Dyre, Jeppe C
2011-09-14
An algorithm is derived for computer simulation of geodesics on the constant-potential-energy hypersurface of a system of N classical particles. First, a basic time-reversible geodesic algorithm is derived by discretizing the geodesic stationarity condition and implementing the constant-potential-energy constraint via standard Lagrangian multipliers. The basic NVU algorithm is tested by single-precision computer simulations of the Lennard-Jones liquid. Excellent numerical stability is obtained if the force cutoff is smoothed and the two initial configurations have identical potential energy within machine precision. Nevertheless, just as for NVE algorithms, stabilizers are needed for very long runs in order to compensate for the accumulation of numerical errors that eventually lead to "entropic drift" of the potential energy towards higher values. A modification of the basic NVU algorithm is introduced that ensures potential-energy and step-length conservation; center-of-mass drift is also eliminated. Analytical arguments confirmed by simulations demonstrate that the modified NVU algorithm is absolutely stable. Finally, we present simulations showing that the NVU algorithm and the standard leap-frog NVE algorithm have identical radial distribution functions for the Lennard-Jones liquid. © 2011 American Institute of Physics
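Eliminating the Lagrange multiplier from the linearized constant-energy constraint gives a compact position update; the sketch below is derived from the conditions summarized in the abstract, and the paper's exact discretization and stabilizers may differ:

```python
import numpy as np

def nvu_step(r, r_prev, force):
    """One basic NVU position update. The Lagrange multiplier lam keeps
    the potential energy constant to first order, U(r_next) = U(r_prev),
    via the condition F . (r_next - r_prev) = 0.

    r, r_prev : current and previous configurations, shape (N, 3)
    force     : forces F(r) at the current configuration, shape (N, 3)
    """
    dr = r - r_prev
    lam = -2.0 * np.vdot(force, dr) / np.vdot(force, force)
    return 2.0 * r - r_prev + lam * force
```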
Hu, Zhongmin; Shi, Hao; Cheng, Kaili; Wang, Ying-Ping; Piao, Shilong; Li, Yue; Zhang, Li; Xia, Jianyang; Zhou, Lei; Yuan, Wenping; Running, Steve; Li, Longhui; Hao, Yanbin; He, Nianpeng; Yu, Qiang; Yu, Guirui
2018-04-17
Given the important contribution of semiarid regions to the global land carbon cycle, accurate modeling of the interannual variability (IAV) of terrestrial gross primary productivity (GPP) is important but remains challenging. By decomposing GPP into leaf area index (LAI) and photosynthesis per leaf area (i.e., GPP_leaf), we investigated the IAV of GPP and the mechanisms responsible in a temperate grassland of northwestern China. We further assessed six ecosystem models for their capability to reproduce the observed IAV of GPP in this temperate grassland from 2004 to 2011. We observed that the responses of LAI and GPP_leaf to soil water significantly contributed to the IAV of GPP at the grassland ecosystem. Two of the six models, with prescribed LAI, simulated the observed IAV of GPP quite well, but still underestimated the variance of GPP_leaf, and therefore the variance of GPP. In comparison, the patterns simulated by the other four models, with prognostic LAI, differed significantly from the observed IAV of GPP. Only some models with prognostic LAI could capture the observed sharp decline of GPP in drought years. Further analysis indicated that accurately representing the responses of GPP_leaf and leaf stomatal conductance to soil moisture is critical for the models to reproduce the observed IAV of GPP_leaf. Our framework also identified that the contributions of LAI and GPP_leaf to the observed IAV of GPP were relatively independent. We conclude that our framework of decomposing GPP into LAI and GPP_leaf has significant potential for facilitating future model intercomparison, benchmarking, and optimization, and should be adopted for future data-model comparisons. © 2018 John Wiley & Sons Ltd.
Synchrotron x-ray microtomography of the interior microstructure of chocolate
NASA Astrophysics Data System (ADS)
Lügger, Svenja K.; Wilde, Fabian; Dülger, Nihan; Reinke, Lennart M.; Kozhar, Sergii; Beckmann, Felix; Greving, Imke; Vieira, Josélio; Heinrich, Stefan; Palzer, Stefan
2016-10-01
The structure of chocolate, a multicomponent food product, was analyzed using microtomography. Chocolate consists of a semi-solid cocoa butter matrix and a dense network of suspended particles. A detailed analysis of the microstructure is needed to understand mass transport phenomena. Transport of lipids from, e.g., a filling or liquid cocoa butter is responsible for major problems in the confectionery industry such as chocolate bloom, the appearance of visible white spots or a grayish haze on the chocolate surface, which leads to consumer rejections and thus large sales losses for the confectionery industry. In this study it was possible to visualize the inner structure of chocolate and clearly distinguish the particles from the continuous phase by taking advantage of the high density contrast provided by synchrotron radiation. Consequently, particle arrangement and cracks within the sample were made visible. The cracks are several micrometers thick and propagate throughout the entire sample. Images of pure cocoa butter, i.e., chocolate without any particles, did not show any cracks and thus confirmed that the cracks are a result of embedded particles. They arise during the manufacturing process. Thus, the solidification process, a critical manufacturing step, was simulated with finite element methods in order to understand crack formation during this step. The simulation showed that cracks arise because of significant contraction of cocoa butter, the matrix phase, without any major change of volume of the suspended particles. Tempering of the chocolate mass prior to solidification is another critical step for good product quality. We found that samples which solidified in an uncontrolled manner are less homogeneous than tempered samples. In summary, our study visualized for the first time the inner microstructure of tempered and untempered cocoa butter as well as chocolate without sample destruction and revealed cracks, which might act as transport pathways.
Kan, Zigui; Zhu, Qiang; Yang, Lijiang; Huang, Zhixiong; Jin, Biaobing; Ma, Jing
2017-05-04
Conformations of cellulose with degrees of polymerization n = 1-12 in the ionic liquid 1,3-dimethylimidazolium chloride ([C 1 mim]Cl), and the intermolecular interactions between them, were studied by means of molecular dynamics (MD) simulations with fixed-charge and charge-variable polarizable force fields, respectively. The integrated tempering enhanced sampling method was also employed in the simulations in order to improve the sampling efficiency. Cellulose undergoes significant conformational changes from a gaseous right-hand helical twist along the long axis to a flexible conformation in the ionic liquid. The intermolecular interactions between cellulose and the ionic liquid were studied by both infrared spectrum measurements and theoretical simulations. Designated by their puckering parameters, the pyranose rings of cellulose oligomers are mainly arranged in a chair conformation. With increasing degree of polymerization of cellulose, boat and skew-boat conformations appear in the MD simulations, especially in the simulations with the polarizable model. The number and population of hydrogen bonds between the cellulose and the chloride anions show that the chloride anion is prone to form hydrogen bonds whenever it approaches the hydroxyl groups of cellulose; thus, each hydroxyl group is fully hydrogen bonded to the chloride anion. MD simulations with the polarizable model presented more abundant conformations than those with the nonpolarizable model. The application of the enhanced sampling method further enlarged the conformational space that could be visited by facilitating the system's escape from local minima. It was found that the electrostatic interactions between the cellulose and the ionic liquid contribute more to the total interaction energies than the van der Waals interactions. Although the interaction energy between the cellulose and the anion is about 2.9 times that between the cellulose and the cation, the role of the cation is non-negligible. In contrast, the interaction energy between the cellulose and water is too weak to dissolve cellulose in water.
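The abstract names the integrated tempering enhanced sampling method; as a point of reference, here is a minimal sketch of the force-rescaling factor in the usual integrated tempering sampling (ITS) formulation, where the effective potential is U_eff = -(1/beta0) ln sum_k n_k exp(-beta_k U). The array names and the stability shift are implementation choices of this sketch, not the authors' code.

```python
import numpy as np

def its_force_scale(U, betas, n_k, beta0):
    """Factor multiplying the physical forces under the ITS effective
    potential U_eff = -(1/beta0) * ln(sum_k n_k * exp(-beta_k * U))."""
    x = -betas * U
    x -= x.max()                 # shift exponents for numerical stability
    e = n_k * np.exp(x)          # unnormalized weight of each beta_k term
    return float((betas * e).sum() / (beta0 * e.sum()))
```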
Wang, Hailong; Sun, Yuqiu; Su, Qinghua; Xia, Xuewen
2018-01-01
The backtracking search optimization algorithm (BSA) is a population-based evolutionary algorithm for numerical optimization problems. BSA has a powerful global exploration capacity while its local exploitation capability is relatively poor. This affects the convergence speed of the algorithm. In this paper, we propose a modified BSA inspired by simulated annealing (BSAISA) to overcome the deficiency of BSA. In the BSAISA, the amplitude control factor (F) is modified based on the Metropolis criterion in simulated annealing. The redesigned F could be adaptively decreased as the number of iterations increases and it does not introduce extra parameters. A self-adaptive ε-constrained method is used to handle the strict constraints. We compared the performance of the proposed BSAISA with BSA and other well-known algorithms when solving thirteen constrained benchmarks and five engineering design problems. The simulation results demonstrated that BSAISA is more effective than BSA and more competitive with other well-known algorithms in terms of convergence speed. PMID:29666635
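The abstract does not give the closed form of the redesigned amplitude control factor, so the following is only a hedged sketch of the stated idea: F shrinks adaptively with the iteration count through a Metropolis-style weight on the fitness change, without extra user parameters beyond the schedule constants assumed here (F0, T0, cooling are illustrative names).

```python
import math
import random

def amplitude_factor(iteration, delta, T0=1.0, cooling=0.99, F0=3.0):
    """Hedged sketch of a Metropolis-inspired amplitude control factor:
    an annealing temperature T = T0 * cooling**iteration decays with the
    iteration count, and a worse trial fitness (delta > 0) damps F via
    the Metropolis weight exp(-delta / T)."""
    T = T0 * cooling ** iteration
    return F0 * math.exp(-max(delta, 0.0) / T) * random.random()
```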
Simulating an underwater vehicle self-correcting guidance system with Simulink
NASA Astrophysics Data System (ADS)
Fan, Hui; Zhang, Yu-Wen; Li, Wen-Zhe
2008-09-01
Underwater vehicles have already adopted self-correcting directional guidance algorithms based on multi-beam self-guidance systems, without waiting for research to determine the most effective algorithms. The main challenges facing research on these guidance systems have been effective modeling of the guidance algorithm and a means of analyzing the simulation results. A simulation structure based on Simulink that dealt with both issues was proposed. Initially, a mathematical model of the relative motion between the vehicle and the target was developed and then encapsulated as a subsystem. Next, the steps for constructing a model of the self-correcting guidance algorithm based on the Stateflow module were examined in detail. Finally, a 3-D model of the vehicle and target was created in VRML, and by processing the mathematical results, the model was shown moving in a visual environment. This process gives more intuitive results for analyzing the simulation. The results showed that the simulation structure performs well. The simulation program makes heavy use of modularization and encapsulation, so it has broad applicability to simulations of other dynamic systems.
A Coulomb collision algorithm for weighted particle simulations
NASA Technical Reports Server (NTRS)
Miller, Ronald H.; Combi, Michael R.
1994-01-01
A binary Coulomb collision algorithm is developed for weighted particle simulations employing Monte Carlo techniques. Charged particles within a given spatial grid cell are pair-wise scattered, explicitly conserving momentum and implicitly conserving energy. A similar algorithm developed by Takizuka and Abe (1977) conserves momentum and energy provided the particles are unweighted (each particle representing equal fractions of the total particle density). If applied as is to simulations incorporating weighted particles, the plasma temperatures equilibrate to an incorrect temperature, as compared to theory. Using the appropriate pairing statistics, a Coulomb collision algorithm is developed for weighted particles. The algorithm conserves energy and momentum and produces the appropriate relaxation time scales as compared to theoretical predictions. Such an algorithm is necessary for future work studying self-consistent multi-species kinetic transport.
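The abstract does not spell out the pairing statistics, so the sketch below uses a common weighted-pair rule as a stand-in: the lighter-weighted particle of a pair always takes the scattered velocity, while the heavier one takes it only with probability w_min/w_max, which conserves momentum on average. The `scatter` callable, assumed given, would implement the equal-weight Takizuka-Abe velocity update.

```python
import random

def collide_weighted(v1, w1, v2, w2, scatter):
    """Pairwise Coulomb scattering for weighted particles.
    scatter(v1, v2) returns post-collision velocities for an
    equal-weight pair; the heavier-weighted particle applies its
    scattered velocity only with probability w_min / w_max."""
    v1p, v2p = scatter(v1, v2)
    p = min(w1, w2) / max(w1, w2)
    if w1 <= w2:
        v1 = v1p                         # lighter particle always scatters
        if random.random() < p:
            v2 = v2p                     # heavier particle scatters sometimes
    else:
        v2 = v2p
        if random.random() < p:
            v1 = v1p
    return v1, v2
```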
Vanadium Microalloyed High Strength Martensitic Steel Sheet for Hot-Dip Coating
NASA Astrophysics Data System (ADS)
Hutchinson, Bevis; Komenda, Jacek; Martin, David
Cold rolled steels with various vanadium and nitrogen levels have been treated to simulate the application of galvanizing and galvannealing to hardened martensitic microstructures. Strength levels were raised by 100-150 MPa by alloying with vanadium, which mitigates the effect of tempering. This opens the way for new ultra-high strength steels with corrosion-resistant coatings produced by hot-dip galvanizing.
Simulated impacts of insect defoliation on forest carbon dynamics
D. Medvigy; K.L. Clark; N.S. Skowronski; K.V.R. Schäfer
2012-01-01
Many temperate and boreal forests are subject to insect epidemics. In the eastern US, over 41 million meters squared of tree basal area are thought to be at risk of gypsy moth defoliation. However, the decadal-to-century scale implications of defoliation events for ecosystem carbon dynamics are not well understood. In this study, the effects of defoliation intensity,...
Wen J. Wang; Hong S. He; Frank R. Thompson; Jacob S. Fraser; Brice B. Hanberry; William D. Dijak
2015-01-01
Most temperate forests in U.S. are recovering from heavy exploitation and are in intermediate successional stages where partial tree harvest is the primary disturbance. Changes in regional forest composition in response to climate change are often predicted for plant functional types using biophysical process models. These models usually simplify the simulation of...
Robert S. Ahl; Scott W. Woods; Hans R. Zuuring
2008-01-01
The Soil and Water Assessment Tool (SWAT) has been applied successfully in temperate environments but little is known about its performance in the snow-dominated, forested, mountainous watersheds that provide much of the water supply in western North America. To address this knowledge gap, we configured SWAT to simulate the streamflow of Tenderfoot Creek (TCSWAT)....
Douglas J. Shinneman; Brian J. Palik; Meredith W. Cornett
2012-01-01
Management strategies to restore forest landscapes are often designed to concurrently reduce fire risk. However, the compatibility of these two objectives is not always clear, and uncoordinated management among landowners may have unintended consequences. We used a forest landscape simulation model to compare the effects of contemporary management and hypothetical...
Zhao, Dongsheng; Wu, Shaohong; Yin, Yunhe
2013-01-01
The impact of regional climate change on net primary productivity (NPP) is an important aspect in the study of ecosystems’ response to global climate change. China’s ecosystems are very sensitive to climate change owing to the influence of the East Asian monsoon. The Lund–Potsdam–Jena Dynamic Global Vegetation Model for China (LPJ-CN), a global dynamical vegetation model developed for China’s terrestrial ecosystems, was applied in this study to simulate the NPP changes affected by future climate change. As the LPJ-CN model is based on natural vegetation, the simulation in this study did not consider the influence of anthropogenic activities. Results suggest that future climate change would have adverse effects on natural ecosystems, with NPP tending to decrease in eastern China, particularly in the temperate and warm temperate regions. NPP would increase in western China, with a concentration in the Tibetan Plateau and the northwest arid regions. The increasing trend in NPP in western China and the decreasing trend in eastern China would be further enhanced by the warming climate. The spatial distribution of NPP, which declines from the southeast coast to the northwest inland, would have minimal variation under scenarios of climate change. PMID:23593325
Henricksen, Jared W; Altenburg, Catherine; Reeder, Ron W
2017-10-01
Despite efforts to prepare a psychologically safe environment, simulation participants are occasionally psychologically distressed. Instructing simulation educators about participant psychological risks and having a participant psychological distress action plan available to simulation educators may assist them as they seek to keep all participants psychologically safe. A Simulation Participant Psychological Safety Algorithm was designed to aid simulation educators as they debrief simulation participants perceived to have psychological distress and categorize these events as mild (level 1), moderate (level 2), or severe (level 3). A prebrief dedicated to creating a psychologically safe learning environment was held constant. The algorithm was used for 18 months in an active pediatric simulation program. Data collected included level of participant psychological distress as perceived and categorized by the simulation team using the algorithm, type of simulation that participants went through, who debriefed, and timing of when psychological distress was perceived to occur during the simulation session. The Kruskal-Wallis test was used to evaluate the relationship between events and simulation type, events and simulation educator team who debriefed, and timing of event during the simulation session. A total of 3900 participants went through 399 simulation sessions between August 1, 2014, and January 26, 2016. Thirty-four (<1%) simulation participants from 27 sessions (7%) were perceived to have an event. One participant was perceived to have a severe (level 3) psychological distress event. Events occurred more commonly in high-intensity simulations, with novice learners and with specific educator teams. Simulation type and simulation educator team were associated with occurrence of events (P < 0.001). There was no association between event timing and event level. Severe psychological distress as categorized by simulation personnel using the Simulation Participant Psychological Safety Algorithm is rare, with mild and moderate events being more common. The algorithm was used to teach simulation educators how to assist a participant who may be psychologically distressed and document perceived event severity.
NASA Astrophysics Data System (ADS)
Telban, Robert J.
While the performance of flight simulator motion system hardware has advanced substantially, the development of the motion cueing algorithm, the software that transforms simulated aircraft dynamics into realizable motion commands, has not kept pace. To address this, new human-centered motion cueing algorithms were developed. A revised "optimal algorithm" uses time-invariant filters developed by optimal control, incorporating human vestibular system models. The "nonlinear algorithm" is a novel approach that is also formulated by optimal control, but can also be updated in real time. It incorporates a new integrated visual-vestibular perception model that includes both visual and vestibular sensation and the interaction between the stimuli. A time-varying control law requires the matrix Riccati equation to be solved in real time by a neurocomputing approach. Preliminary pilot testing resulted in the optimal algorithm incorporating a new otolith model, producing improved motion cues. The nonlinear algorithm vertical mode produced a motion cue with a time-varying washout, sustaining small cues for longer durations and washing out large cues more quickly compared to the optimal algorithm. The inclusion of the integrated perception model improved the responses to longitudinal and lateral cues. False cues observed with the NASA adaptive algorithm were absent. As a result of unsatisfactory sensation, an augmented turbulence cue was added to the vertical mode for both the optimal and nonlinear algorithms. The relative effectiveness of the algorithms, in simulating aircraft maneuvers, was assessed with an eleven-subject piloted performance test conducted on the NASA Langley Visual Motion Simulator (VMS). Two methods, the quasi-objective NASA Task Load Index (TLX), and power spectral density analysis of pilot control, were used to assess pilot workload. TLX analysis reveals, in most cases, less workload and variation among pilots with the nonlinear algorithm. Control input analysis shows pilot-induced oscillations on a straight-in approach are less prevalent compared to the optimal algorithm. The augmented turbulence cues increased workload on an offset approach that the pilots deemed more realistic compared to the NASA adaptive algorithm. The takeoff with engine failure showed the least roll activity for the nonlinear algorithm, with the least rudder pedal activity for the optimal algorithm.
Deep learning algorithms for detecting explosive hazards in ground penetrating radar data
NASA Astrophysics Data System (ADS)
Besaw, Lance E.; Stimac, Philip J.
2014-05-01
Buried explosive hazards (BEHs) have been, and continue to be, one of the most deadly threats in modern conflicts. Current handheld sensors rely on a highly trained operator for them to be effective in detecting BEHs. New algorithms are needed to reduce the burden on the operator and improve the performance of handheld BEH detectors. Traditional anomaly detection and discrimination algorithms use "hand-engineered" feature extraction techniques to characterize and classify threats. In this work we use a Deep Belief Network (DBN) to transcend the traditional approaches of BEH detection (e.g., principal component analysis and real-time novelty detection techniques). DBNs are pretrained using an unsupervised learning algorithm to generate compressed representations of unlabeled input data and form feature detectors. They are then fine-tuned using a supervised learning algorithm to form a predictive model. Using ground penetrating radar (GPR) data collected by a robotic cart swinging a handheld detector, our research demonstrates that relatively small DBNs can learn to model GPR background signals and detect BEHs with an acceptable false alarm rate (FAR). In this work, our DBNs achieved 91% probability of detection (Pd) with 1.4 false alarms per square meter when evaluated on anti-tank and anti-personnel targets at temperate and arid test sites. This research demonstrates that DBNs are a viable approach to detect and classify BEHs.
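The abstract describes unsupervised pretraining followed by supervised fine-tuning; for orientation, here is a minimal sketch of the pretraining step for one DBN layer, a restricted Boltzmann machine trained with one-step contrastive divergence (CD-1). This is a generic textbook routine, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def rbm_cd1(data, n_hidden, lr=0.05, epochs=10):
    """Pretrain one DBN layer with CD-1. `data` is an (n_samples,
    n_visible) array of inputs in [0, 1]."""
    n_visible = data.shape[1]
    W = 0.01 * rng.standard_normal((n_visible, n_hidden))
    b_v = np.zeros(n_visible)
    b_h = np.zeros(n_hidden)
    for _ in range(epochs):
        for v0 in data:
            h_prob = sigmoid(v0 @ W + b_h)                  # up-pass
            h0 = (rng.random(n_hidden) < h_prob).astype(float)
            v_prob = sigmoid(h0 @ W.T + b_v)                # reconstruction
            h1_prob = sigmoid(v_prob @ W + b_h)             # second up-pass
            # positive minus negative phase statistics
            W += lr * (np.outer(v0, h_prob) - np.outer(v_prob, h1_prob))
            b_v += lr * (v0 - v_prob)
            b_h += lr * (h_prob - h1_prob)
    return W, b_v, b_h
```

Stacking such layers (feeding each layer's hidden probabilities to the next) gives the compressed feature detectors the abstract mentions, before supervised fine-tuning of the whole stack.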
NASA Astrophysics Data System (ADS)
Grant, R. F.; Barr, A.; Black, T. A.; Margolis, H. A.; McCaughey, J. H.; Trofymow, J. A.
2010-05-01
Clearcutting strongly affects subsequent forest net ecosystem productivity (NEP). Hypotheses for ecological controls on NEP in the ecosystem model ecosys were tested with CO2 fluxes measured by eddy covariance (EC) in three post-clearcut conifer chronosequences. An algorithm for microbial colonization of fine and woody debris allowed the model to reproduce sigmoidal declines in debris observed after clearcutting. In the model, heterotrophic respiration (Rh) drove debris decomposition, which drove microbial growth, N mineralization, and asymbiotic N2 fixation. These processes controlled root N uptake, and thereby CO2 fixation in regrowing vegetation. Interactions among soil and plant processes allowed the model to simulate hourly CO2 fluxes and annual NEP within the uncertainty of EC measurements from 2003 through 2007 over forest stands from 1 to 80 years of age in all three chronosequences without site- or species-specific parameterization. The model was then used to study the impacts of increasing harvest removals on subsequent C stocks at one of the chronosequence sites. Model results indicated that increasing harvest removals would hasten recovery of NEP during the first 30 years after clearcutting, but would reduce ecosystem C stocks by about 15% of the increased removals at the end of an 80 year harvest cycle.
A hybrid intelligent algorithm for portfolio selection problem with fuzzy returns
NASA Astrophysics Data System (ADS)
Li, Xiang; Zhang, Yang; Wong, Hau-San; Qin, Zhongfeng
2009-11-01
Portfolio selection theory with fuzzy returns has been well developed and widely applied. Within the framework of credibility theory, several fuzzy portfolio selection models have been proposed, such as the mean-variance model, the entropy optimization model, the chance constrained programming model, and so on. In order to solve these nonlinear optimization models, a hybrid intelligent algorithm is designed by integrating a simulated annealing algorithm, a neural network, and fuzzy simulation techniques, where the neural network is used to approximate the expected value and variance of the fuzzy returns and the fuzzy simulation is used to generate the training data for the neural network. Since these models have typically been solved by genetic algorithms, comparisons between the hybrid intelligent algorithm and the genetic algorithm are given on numerical examples; the results imply that the hybrid intelligent algorithm is robust and more effective. In particular, it reduces the running time significantly for large problems.
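As an illustration of the hybrid scheme the abstract describes, here is a hedged sketch of a simulated annealing loop that evaluates candidates through a surrogate (the trained neural network standing in for repeated fuzzy simulation). `perturb` and `surrogate` are assumed callables, and the cooling constants are illustrative.

```python
import math
import random

def hybrid_sa(x0, perturb, surrogate, T0=1.0, cooling=0.995, steps=5000):
    """Simulated annealing minimization where the objective is evaluated
    by a cheap surrogate model instead of a full fuzzy simulation."""
    x, fx = x0, surrogate(x0)
    best, fbest = x, fx
    T = T0
    for _ in range(steps):
        y = perturb(x)
        fy = surrogate(y)
        # accept downhill moves always, uphill moves with Metropolis probability
        if fy < fx or random.random() < math.exp((fx - fy) / T):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        T *= cooling                     # geometric cooling schedule
    return best, fbest
```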
List-Based Simulated Annealing Algorithm for Traveling Salesman Problem.
Zhan, Shi-hua; Lin, Juan; Zhang, Ze-jun; Zhong, Yi-wen
2016-01-01
Simulated annealing (SA) is a popular intelligent optimization algorithm which has been successfully applied in many fields. Parameter setting is a key factor in its performance, but it is also tedious work. To simplify parameter setting, we present a list-based simulated annealing (LBSA) algorithm to solve the traveling salesman problem (TSP). The LBSA algorithm uses a novel list-based cooling schedule to control the decrease of temperature. Specifically, a list of temperatures is created first, and then the maximum temperature in the list is used by the Metropolis acceptance criterion to decide whether to accept a candidate solution. The temperature list is adapted iteratively according to the topology of the solution space of the problem. The effectiveness and the parameter sensitivity of the list-based cooling schedule are illustrated on benchmark TSP instances. The LBSA algorithm, whose performance is robust over a wide range of parameter values, shows competitive performance compared with some other state-of-the-art algorithms.
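A hedged sketch of the list-based acceptance step as commonly described for LBSA: the list maximum drives the Metropolis test, and an accepted worse move replaces that maximum with the temperature at which the move would have been accepted with exactly the drawn probability. Details of the published update rule may differ.

```python
import math
import random

def lbsa_accept(delta, temp_list):
    """List-based acceptance for a candidate with cost change `delta`.
    Better moves (delta <= 0) are always taken; worse moves use the
    maximum listed temperature, and on acceptance that maximum is
    replaced by the temperature implied by the move, adapting the list."""
    if delta <= 0:
        return True
    t_max = max(temp_list)
    r = random.random()
    if r < math.exp(-delta / t_max):
        t_new = -delta / math.log(r)     # temperature implied by this move
        temp_list.remove(t_max)
        temp_list.append(t_new)
        return True
    return False
```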
Wu, Sheng; Li, Hong; Petzold, Linda R.
2015-01-01
The inhomogeneous stochastic simulation algorithm (ISSA) is a fundamental method for spatial stochastic simulation. However, when diffusion events occur more frequently than reaction events, simulating the diffusion events by ISSA is quite costly. To reduce this cost, we propose to use the time dependent propensity function in each step. In this way we can avoid simulating individual diffusion events, and use the time interval between two adjacent reaction events as the simulation stepsize. We demonstrate that the new algorithm can achieve orders of magnitude efficiency gains over widely-used exact algorithms, scales well with increasing grid resolution, and maintains a high level of accuracy. PMID:26609185
Luo, Di; Mu, Yuguang
2016-06-09
G-quadruplex is a noncanonical yet crucial secondary structure of nucleic acids, which has proven its importance in cell aging, anticancer therapies, gene expression, and genome stability. In this study, the stability and folding dynamics of human telomeric DNA G-quadruplexes were investigated via enhanced sampling techniques. First, temperature replica-exchange MD (REMD) simulations were employed to compare the thermal stabilities of the five established folding topologies. The hybrid-2 type adopted by the extended human telomeric sequence is revealed to be the most stable conformation in our simulations. Next, the free energy landscapes and folding intermediates of the hybrid-1 and -2 types were investigated with parallel tempering metadynamics simulations in the well-tempered ensemble. It was observed that the N-glycosidic conformations of guanines can flip over to accommodate the cyclic Hoogsteen H-bonding on G-tetrads in which they were not originally involved. Furthermore, a hairpin and a triplex intermediate were identified for the folding of the hybrid-1 type conformation, whereas for the hybrid-2 type no folding intermediates were observed on its free energy surface. However, the energy barrier from its native topology to the transition structure is found to be extremely high compared to that of the hybrid-1 type, which is consistent with our stability predictions from the REMD simulations. We hope the insights presented in this work can help to complement current understanding of the stability and dynamics of G-quadruplexes, which is necessary not only to stabilize the structures but also to interfere with their formation in the genome.
Pivot method for global optimization: A study of structures and phase changes in water clusters
NASA Astrophysics Data System (ADS)
Nigra, Pablo Fernando
In this thesis, we have carried out a study of water clusters. The research work has been developed in two stages. In the first stage, we have investigated the properties of water clusters at zero temperature by means of global optimization. The clusters were modeled by using two well known pairwise potentials having distinct characteristics. One is the Matsuoka-Clementi-Yoshimine potential (MCY), an ab initio fitted function based on a rigid-molecule model; the other is the Stillinger-Rahman potential (SR), an empirical function based on a flexible-molecule model. The algorithm used for the global optimization of the clusters was the pivot method, which was developed in our group. The results have shown that, under certain conditions, the pivot method may yield optimized structures which are related to one another in such a way that they seem to form structural families. The structures in a family can be thought of as formed from the aggregation of single units. The particular types of structures we have found are quasi-one-dimensional tubes built from stacking cyclic units such as tetramers, pentamers, and hexamers. The binding energies of these tubes form sequences that span smooth curves with clear asymptotic behavior; therefore, we have also studied the sequences by applying the Bulirsch-Stoer (BST) algorithm to accelerate convergence. In the second stage of the research work, we have studied the thermodynamic properties of a typical water cluster at finite temperatures. The selected cluster was the water octamer, which exhibits a definite solid-liquid phase change. The water octamer also has several low lying energy cubic structures with large energetic barriers that cause ergodicity breaking in regular Monte Carlo simulations. For that reason we have simulated the octamer using parallel tempering Monte Carlo combined with the multihistogram method. This has permitted us to calculate the heat capacity from very low temperatures up to T = 230 K. We have found the melting temperature to be 178.5 K. In addition, we have been able to estimate at 12 K the onset temperature of a solid-solid phase change between the two lowest energy lying isomers.
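For reference, the replica-swap step at the heart of parallel tempering Monte Carlo is compact; the following sketch (with an assumed `energy` callable) shows the standard nearest-neighbor exchange with acceptance probability min(1, exp[(beta_i - beta_j)(E_i - E_j)]).

```python
import math
import random

def attempt_swaps(replicas, betas, energy):
    """One parallel-tempering exchange sweep over neighboring
    temperatures; `betas` is sorted and `energy(config)` returns the
    potential energy of a configuration."""
    for i in range(len(replicas) - 1):
        e_i, e_j = energy(replicas[i]), energy(replicas[i + 1])
        delta = (betas[i] - betas[i + 1]) * (e_i - e_j)
        if random.random() < math.exp(min(0.0, delta)):   # Metropolis swap
            replicas[i], replicas[i + 1] = replicas[i + 1], replicas[i]
    return replicas
```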
Enhanced intelligent water drops algorithm for multi-depot vehicle routing problem
Ezugwu, Absalom E.; Akutsah, Francis; Olusanya, Micheal O.; Adewumi, Aderemi O.
2018-01-01
The intelligent water drop algorithm is a swarm-based metaheuristic algorithm, inspired by the characteristics of water drops in the river and the environmental changes resulting from the action of the flowing river. Since its appearance as an alternative stochastic optimization method, the algorithm has found applications in solving a wide range of combinatorial and functional optimization problems. This paper presents an improved intelligent water drop algorithm for solving multi-depot vehicle routing problems. A simulated annealing algorithm was introduced into the proposed algorithm as a local search metaheuristic to prevent the intelligent water drop algorithm from getting trapped in local minima and to improve its solution quality. In addition, some potential problematic issues associated with using simulated annealing, including high computational runtime and the exponential calculation of the acceptance probability, are investigated. The exponential calculation of the acceptance probability in simulated-annealing-based techniques is computationally expensive. Therefore, in order to maximize the performance of the intelligent water drop algorithm with simulated annealing, a cheaper way of calculating the acceptance probability is considered. The performance of the proposed hybrid algorithm is evaluated by using 33 standard test problems, with the results obtained compared with the solutions offered by four well-known techniques from the subject literature. Experimental results and statistical tests show that the new method possesses outstanding performance in terms of solution quality and runtime consumed. In addition, the proposed algorithm is suitable for solving large-scale problems. PMID:29554662
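The abstract says a cheaper acceptance calculation is used but not which one; a common table-lookup alternative to calling exp() on every move is sketched below as an assumption.

```python
import math
import random

# precomputed exp(-x) for x in [0, 10) with step 0.01
EXP_TABLE = [math.exp(-0.01 * i) for i in range(1000)]

def metropolis_accept(delta, T, table=EXP_TABLE, step=0.01):
    """Metropolis test that looks exp(-delta/T) up in a precomputed
    table instead of evaluating the exponential for every candidate."""
    if delta <= 0.0:
        return True
    x = delta / T
    if x >= len(table) * step:           # deep tail: acceptance ~ 0
        return False
    return random.random() < table[int(x / step)]
```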
Simulated bi-SQUID Arrays Performing Direction Finding
2015-09-01
First, we applied the multiple signal classification (MUSIC) algorithm to linearly polarized signals. We included multiple signals in the output... both of the same frequency and of different frequencies. Next, we explored a modified MUSIC algorithm called dimensionality reduction MUSIC (DR-MUSIC)... MUSIC algorithm is able to determine the AoA from the simulated SQUID data for linearly polarized signals. The MUSIC algorithm could accurately find...
Comparison of optimization algorithms in intensity-modulated radiation therapy planning
NASA Astrophysics Data System (ADS)
Kendrick, Rachel
Intensity-modulated radiation therapy is used to better conform the radiation dose to the target, which includes avoiding healthy tissue. Planning programs employ optimization methods to search for the best fluence of each photon beam, and therefore to create the best treatment plan. The Computational Environment for Radiotherapy Research (CERR), a program written in MATLAB, was used to examine some commonly used algorithms for one 5-beam plan. The algorithms include the genetic algorithm, quadratic programming, pattern search, constrained nonlinear optimization, simulated annealing, the optimization method used in Varian Eclipse™, and some hybrids of these. Quadratic programming, simulated annealing, and a quadratic/simulated annealing hybrid were also separately compared using different prescription doses. The results of each dose-volume histogram as well as the visual dose color wash were used to compare the plans. CERR's built-in quadratic programming provided the best overall plan, but avoidance of the organ-at-risk was rivaled by other programs. Hybrids of quadratic programming with some of these algorithms suggest the possibility of better planning programs, as shown by the improved quadratic/simulated annealing plan when compared to the simulated annealing algorithm alone. Further experimentation will be done to improve cost functions and computational time.
Du, Tingsong; Hu, Yang; Ke, Xianting
2015-01-01
An improved quantum artificial fish swarm algorithm (IQAFSA) for solving distributed network programming considering distributed generation is proposed in this work. The IQAFSA, which is based on quantum computing and offers exponential acceleration for heuristic algorithms, uses quantum bits to encode the artificial fish and applies quantum rotation gates together with the preying, following, and variation behaviors of the quantum artificial fish to update the population in the search for the optimal value. We then apply the proposed new algorithm, the quantum artificial fish swarm algorithm (QAFSA), the basic artificial fish swarm algorithm (BAFSA), and the global edition artificial fish swarm algorithm (GAFSA) to simulation experiments on some typical test functions. The simulation results demonstrate that the proposed algorithm can escape from local extrema effectively and has higher convergence speed and better accuracy. Finally, applying IQAFSA to distributed network problems, the simulation results for a 33-bus radial distribution network system show that IQAFSA achieves the minimum power loss when compared with BAFSA, GAFSA, and QAFSA. PMID:26447713
A Toolbox to Improve Algorithms for Insulin-Dosing Decision Support
Donsa, K.; Plank, J.; Schaupp, L.; Mader, J. K.; Truskaller, T.; Tschapeller, B.; Höll, B.; Spat, S.; Pieber, T. R.
2014-01-01
Background: Standardized insulin order sets for subcutaneous basal-bolus insulin therapy are recommended by clinical guidelines for the inpatient management of diabetes. The algorithm-based GlucoTab system electronically assists health care personnel by supporting clinical workflow and providing insulin-dose suggestions. Objective: To develop a toolbox for improving clinical decision-support algorithms. Methods: The toolbox has three main components. 1) Data preparation: data from several heterogeneous sources is extracted, cleaned, and stored in a uniform data format. 2) Simulation: the effects of algorithm modifications are estimated by simulating treatment workflows based on real data from clinical trials. 3) Analysis: algorithm performance is measured, analyzed, and simulated using data from three clinical trials with a total of 166 patients. Results: Use of the toolbox led to algorithm improvements as well as the detection of potential individualized subgroup-specific algorithms. Conclusion: These results are a first step towards individualized algorithm modifications for specific patient subgroups. PMID:25024768
Crashworthiness simulations with DYNA3D
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schauer, D.A.; Hoover, C.G.; Kay, G.J.
1996-04-01
Current progress in parallel algorithm research and applications in vehicle crash simulation is described for the explicit finite element algorithms in DYNA3D. Problem partitioning methods and parallel algorithms for contact at material interfaces are the two challenging algorithm research problems that are addressed. Two prototype parallel contact algorithms have been developed for treating the cases of local and arbitrary contact. Demonstration problems for local contact are crashworthiness simulations with 222 locally defined contact surfaces and a vehicle/barrier collision modeled with arbitrary contact. A simulation of crash tests conducted for a vehicle impacting a U-channel small sign post embedded in soil has been run on both the serial and parallel versions of DYNA3D. A significant reduction in computational time has been observed when running these problems on the parallel version. However, to achieve maximum efficiency, complex problems must be appropriately partitioned, especially when contact dominates the computation.
NASA Astrophysics Data System (ADS)
Zhang, Ruili; Wang, Yulei; He, Yang; Xiao, Jianyuan; Liu, Jian; Qin, Hong; Tang, Yifa
2018-02-01
Relativistic dynamics of a charged particle in time-dependent electromagnetic fields has theoretical significance and a wide range of applications. The numerical simulation of relativistic dynamics is often multi-scale and requires accurate long-term numerical simulations. Therefore, explicit symplectic algorithms are much more preferable than non-symplectic methods and implicit symplectic algorithms. In this paper, we employ the proper time and express the Hamiltonian as the sum of exactly solvable terms and product-separable terms in space-time coordinates. Then, we give explicit symplectic algorithms based on the generating functions of orders 2 and 3 for the relativistic dynamics of a charged particle. The methodology is not new; it has been applied to the non-relativistic dynamics of charged particles, but the algorithm for relativistic dynamics is of great significance in practical simulations, such as the secular simulation of runaway electrons in tokamaks.
Accelerated simulation of stochastic particle removal processes in particle-resolved aerosol models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Curtis, J.H.; Michelotti, M.D.; Riemer, N.
2016-10-01
Stochastic particle-resolved methods have proven useful for simulating multi-dimensional systems such as composition-resolved aerosol size distributions. While particle-resolved methods have substantial benefits for highly detailed simulations, these techniques suffer from high computational cost, motivating efforts to improve their algorithmic efficiency. Here we formulate an algorithm for accelerating particle removal processes by aggregating particles of similar size into bins. We present the Binned Algorithm for particle removal processes and analyze its performance with application to the atmospherically relevant process of aerosol dry deposition. We show that the Binned Algorithm can dramatically improve the efficiency of particle removals, particularly for low removal rates, and that computational cost is reduced without introducing additional error. In simulations of aerosol particle removal by dry deposition in atmospherically relevant conditions, we demonstrate about a 50-times increase in algorithm efficiency.
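The abstract's idea, aggregating similar-size particles so removal can be sampled per bin rather than per particle, admits a short sketch. The data layout and the binomial draw below are assumptions about how such a scheme might look, not the published algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

def binned_removal(particle_bins, rate_per_bin, dt):
    """Sample removals once per size bin instead of once per particle.
    `particle_bins` maps bin index -> list of particle ids; `rate_per_bin`
    gives a representative per-particle removal rate for each bin."""
    removed = []
    for b, particles in particle_bins.items():
        if not particles:
            continue
        p = 1.0 - np.exp(-rate_per_bin[b] * dt)        # removal probability
        n_remove = rng.binomial(len(particles), p)     # one draw per bin
        picks = rng.choice(len(particles), size=n_remove, replace=False)
        for i in sorted(picks, reverse=True):          # pop from the back first
            removed.append(particles.pop(i))
    return removed
```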
Xiao, Li; Cai, Qin; Li, Zhilin; Zhao, Hongkai; Luo, Ray
2014-11-25
A multi-scale framework is proposed for more realistic molecular dynamics simulations in continuum solvent models by coupling a molecular mechanics treatment of solute with a fluid mechanics treatment of solvent. This article reports our initial efforts to formulate the physical concepts necessary for coupling the two mechanics and develop a 3D numerical algorithm to simulate the solvent fluid via the Navier-Stokes equation. The numerical algorithm was validated with multiple test cases. The validation shows that the algorithm is effective and stable, with observed accuracy consistent with our design.
PIC Simulation of Laser Plasma Interactions with Temporal Bandwidths
NASA Astrophysics Data System (ADS)
Tsung, Frank; Weaver, J.; Lehmberg, R.
2015-11-01
We are performing particle-in-cell simulations using the code OSIRIS to study the effects of laser plasma interactions in the presence of temporal bandwidths under conditions relevant to current and future shock ignition experiments on the NIKE laser. Our simulations show that, for sufficiently large bandwidth, the saturation level, and the distribution of hot electrons, can be affected by the addition of temporal bandwidths (which can be accomplished in experiments using smoothing techniques such as SSD or ISI). We will show that temporal bandwidth alone plays an important role in the control of LPIs in these lasers and discuss future directions. This work is conducted under the auspices of NRL.
A fast recursive algorithm for molecular dynamics simulation
NASA Technical Reports Server (NTRS)
Jain, A.; Vaidehi, N.; Rodriguez, G.
1993-01-01
The present recursive algorithm for solving molecular systems' dynamical equations of motion employs internal variable models that reduce such simulations' computation time by an order of magnitude relative to Cartesian models. Extensive use is made of spatial operator methods recently developed for the analysis and simulation of the dynamics of multibody systems. A factor-of-450 speedup over the conventional O(N^3) algorithm is demonstrated for the case of a polypeptide molecule with 400 residues.
Time-ordered product expansions for computational stochastic system biology.
Mjolsness, Eric
2013-06-01
The time-ordered product framework of quantum field theory can also be used to understand salient phenomena in stochastic biochemical networks. It is used here to derive Gillespie's stochastic simulation algorithm (SSA) for chemical reaction networks; consequently, the SSA can be interpreted in terms of Feynman diagrams. It is also used here to derive other, more general simulation and parameter-learning algorithms including simulation algorithms for networks of stochastic reaction-like processes operating on parameterized objects, and also hybrid stochastic reaction/differential equation models in which systems of ordinary differential equations evolve the parameters of objects that can also undergo stochastic reactions. Thus, the time-ordered product expansion can be used systematically to derive simulation and parameter-fitting algorithms for stochastic systems.
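Since the abstract's first result is a rederivation of Gillespie's SSA, a minimal direct-method SSA is sketched here for orientation; the representation of reactions as (propensity, state-change) pairs is an assumption of this sketch.

```python
import math
import random

def gillespie_ssa(x, reactions, t_end):
    """Gillespie's direct-method SSA. `x` maps species -> count;
    `reactions` is a list of (propensity, change) pairs, where
    propensity(x) returns a rate and change maps species -> delta."""
    t = 0.0
    while t < t_end:
        props = [a(x) for a, _ in reactions]
        a0 = sum(props)
        if a0 == 0.0:
            break                                    # no reaction can fire
        # exponential waiting time; 1 - random() avoids log(0)
        t += -math.log(1.0 - random.random()) / a0
        r, cum = random.random() * a0, 0.0
        for a_i, (_, change) in zip(props, reactions):
            cum += a_i
            if r < cum:                              # pick reaction by weight
                for species, delta in change.items():
                    x[species] += delta
                break
    return x
```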
An Atmospheric Guidance Algorithm Testbed for the Mars Surveyor Program 2001 Orbiter and Lander
NASA Technical Reports Server (NTRS)
Striepe, Scott A.; Queen, Eric M.; Powell, Richard W.; Braun, Robert D.; Cheatwood, F. McNeil; Aguirre, John T.; Sachi, Laura A.; Lyons, Daniel T.
1998-01-01
An Atmospheric Flight Team was formed by the Mars Surveyor Program '01 mission office to develop aerocapture and precision landing testbed simulations and candidate guidance algorithms. Three- and six-degree-of-freedom Mars atmospheric flight simulations have been developed for testing, evaluation, and analysis of candidate guidance algorithms for the Mars Surveyor Program 2001 Orbiter and Lander. These simulations are built around the Program to Optimize Simulated Trajectories. Subroutines were supplied by Atmospheric Flight Team members for modeling the Mars atmosphere, spacecraft control system, aeroshell aerodynamic characteristics, and other Mars 2001 mission specific models. This paper describes these models and their perturbations applied during Monte Carlo analyses to develop, test, and characterize candidate guidance algorithms.
A multilevel-skin neighbor list algorithm for molecular dynamics simulation
NASA Astrophysics Data System (ADS)
Zhang, Chenglong; Zhao, Mingcan; Hou, Chaofeng; Ge, Wei
2018-01-01
Searching for interaction pairs and organizing the interaction processes are important steps in molecular dynamics (MD) algorithms and are critical to the overall efficiency of the simulation. Neighbor lists are widely used for these steps; a thicker skin reduces the frequency of list updates, but this saving is discounted by the extra distance checks over the larger set of candidate particle pairs. In this paper, we propose a new neighbor-list-based algorithm with a precisely designed multilevel skin which can reduce unnecessary computation on inter-particle distances. The performance advantages over traditional methods are then analyzed against the main simulation parameters on Intel CPUs and MICs (many integrated cores), and are clearly demonstrated. The algorithm can be generalized for various discrete simulations using neighbor lists.
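For context, here is the single-skin Verlet-list baseline that multilevel-skin schemes refine: pairs within r_cut + skin are cached, and the list is rebuilt once any particle has moved half the skin. This is the textbook construction, not the paper's multilevel variant.

```python
import numpy as np

def build_neighbor_list(pos, r_cut, skin):
    """Verlet neighbor list: store all pairs within r_cut + skin, so the
    list stays valid until some particle moves more than skin / 2."""
    n = len(pos)
    r_list = r_cut + skin
    pairs = []
    for i in range(n - 1):
        d = np.linalg.norm(pos[i + 1:] - pos[i], axis=1)
        pairs.extend((i, i + 1 + j) for j in np.nonzero(d < r_list)[0])
    return pairs

def needs_rebuild(pos, pos_at_build, skin):
    """Rebuild when the fastest particle has used up half the skin."""
    disp = np.linalg.norm(pos - pos_at_build, axis=1)
    return disp.max() > 0.5 * skin
```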
Assessment of volatile organic compound emissions from ecosystems of China
NASA Astrophysics Data System (ADS)
Klinger, L. F.; Li, Q.-J.; Guenther, A. B.; Greenberg, J. P.; Baker, B.; Bai, J.-H.
2002-11-01
Isoprene, monoterpene, and other volatile organic compound (VOC) emissions from grasslands, shrublands, forests, and peatlands in China were characterized to estimate their regional magnitudes and to compare these emissions with those from landscapes of North America, Europe, and Africa. Ecological and VOC emission sampling was conducted at 52 sites centered in and around major research stations located in seven different regions of China: Inner Mongolia (temperate), Changbai Mountain (boreal-temperate), Beijing Mountain (temperate), Dinghu Mountain (subtropical), Ailao Mountain (subtropical), Kunming (subtropical), and Xishuangbanna (tropical). Transects were used to sample plant species and growth form composition, leafy (green) biomass, and leaf area in forests representing nearly all the major forest types of China. Leafy biomass was determined using generic algorithms based on tree diameter, canopy structure, and absolute cover. Measurements of VOC emissions were made on 386 of the 541 recorded species using a portable photo-ionization detector method. For 105 species, VOC emissions were also measured using a flow-through leaf cuvette sampling/gas chromatography analysis method. Results indicate that isoprene and monoterpene emissions, as well as leafy biomass, vary systematically along gradients of ecological succession in the same manner found in previous studies in the United States, Canada, and Africa. Applying these results to a regional VOC emissions model, we arrive at a value of 21 Tg C for total annual biogenic VOC emissions from China, compared to 5 Tg C of VOCs released annually from anthropogenic sources there. The isoprene and monoterpene emissions are nearly the same as those reported for Europe, which is comparable in size to China.
Solar Occultation Retrieval Algorithm Development
NASA Technical Reports Server (NTRS)
Lumpe, Jerry D.
2004-01-01
This effort addresses the comparison and validation of currently operational solar occultation retrieval algorithms, and the development of generalized algorithms for future application to multiple platforms. Work in the first quarter included initial development of generalized forward model algorithms capable of simulating transmission data from the POAM II/III and SAGE II/III instruments. Work in the 2nd quarter will focus on: completion of the forward model algorithms, including accurate spectral characteristics for all instruments, and comparison of simulated transmission data with actual level 1 instrument data for specific occultation events.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Corbet Jr., Thomas F; Beyeler, Walter E; Vanwestrienen, Dirk
NetFlow Dynamics is a web-accessible analysis environment for simulating dynamic flows of materials on model networks. Performing a simulation requires both the NetFlow Dynamics application and a network model, which is a description of the structure of the nodes and edges of a network, including the flow capacity of each edge, the storage capacity of each node, and the sources and sinks of the material flowing on the network. NetFlow Dynamics consists of databases for storing network models, algorithms to calculate flows on networks, and a GIS-based graphical interface for performing simulations and viewing simulation results. Simulated flows are dynamic in the sense that flows on each edge of the network and inventories at each node change with time and can be out of equilibrium with boundary conditions. Any number of network models could be simulated using NetFlow Dynamics. To date, the models simulated have been models of petroleum infrastructure. The main model has been the National Transportation Fuels Model (NTFM), a network of U.S. oil fields, transmission pipelines, rail lines, refineries, tank farms, and distribution terminals. NetFlow Dynamics supports two different flow algorithms, the Gradient Flow algorithm and the Inventory Control algorithm, that were developed specifically for the NetFlow Dynamics application. The intent is to add additional algorithms in the future as needed. The ability to select from multiple algorithms is desirable because a single algorithm never covers all analysis needs. The current algorithms use a demand-driven, capacity-constrained formulation, which means that the algorithms strive to use all available capacity and stored inventory to meet desired flows to sinks, subject to the capacity constraints of each network component. The current flow algorithms are best suited for problems in which a material flows on a capacity-constrained network representing a supply chain in which the material supplied can be stored at each node of the network. In the petroleum models, the flowing materials are crude oil and refined products that can be stored at tank farms, refineries, or terminals (i.e., the nodes of the network). Examples of other network models that could be simulated are currency flowing in a financial network, agricultural products moving to market, or natural gas flowing on a pipeline network.
A Contextual Fire Detection Algorithm for Simulated HJ-1B Imagery.
Qian, Yonggang; Yan, Guangjian; Duan, Sibo; Kong, Xiangsheng
2009-01-01
The HJ-1B satellite, launched on September 6, 2008, is one of the small satellites in the constellation for disaster prediction and monitoring. HJ-1B imagery, containing fires of various sizes and temperatures in a wide range of terrestrial biomes and climates and including RED, NIR, MIR, and TIR channels, was simulated in this paper. Based on the MODIS version 4 contextual algorithm and the characteristics of the HJ-1B sensor, a contextual fire detection algorithm was proposed and tested using the simulated HJ-1B data. It was evaluated by the probabilities of fire detection and false alarm as functions of fire temperature and fire area. The results indicate that when the simulated fire area is larger than 45 m(2) and the simulated fire temperature is larger than 800 K, the algorithm has a high probability of detection. But if the simulated fire area is smaller than 10 m(2), the fire may be detected only when the simulated fire temperature is larger than 900 K. For fire areas of about 100 m(2), the proposed algorithm has a higher detection probability than that of the MODIS product. Finally, the omission and commission errors, which are important factors affecting the performance of this algorithm, were evaluated. It has been demonstrated that HJ-1B satellite data are much more sensitive to smaller and cooler fires than MODIS or AVHRR data, and the improved capabilities of HJ-1B data will offer a fine opportunity for fire detection.
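The abstract builds on the MODIS contextual algorithm; a stripped-down sketch of that family of tests follows. Thresholds, window size, and the use of a uniform background window (real implementations mask candidate fire pixels out of the background statistics) are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def _local_stats(img, win):
    """Mean and standard deviation over a win x win background window."""
    mean = uniform_filter(img, win)
    var = np.maximum(uniform_filter(img ** 2, win) - mean ** 2, 0.0)
    return mean, np.sqrt(var)

def contextual_fire_mask(t_mir, t_tir, win=5, k=3.0, t_abs=360.0):
    """Flag a pixel as fire if it passes an absolute MIR brightness
    temperature threshold, or if both its MIR temperature and its
    MIR-TIR difference exceed the local background mean by k local
    standard deviations (MODIS-style contextual tests)."""
    dt = t_mir - t_tir
    mir_mean, mir_std = _local_stats(t_mir, win)
    dt_mean, dt_std = _local_stats(dt, win)
    contextual = (t_mir > mir_mean + k * mir_std) & (dt > dt_mean + k * dt_std)
    return (t_mir > t_abs) | contextual
```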
Bernal, Javier; Torres-Jimenez, Jose
2015-01-01
SAGRAD (Simulated Annealing GRADient), a Fortran 77 program for computing neural networks for classification using batch learning, is discussed. Neural network training in SAGRAD is based on a combination of simulated annealing and Møller’s scaled conjugate gradient algorithm, the latter a variation of the traditional conjugate gradient method, better suited for the nonquadratic nature of neural networks. Different aspects of the implementation of the training process in SAGRAD are discussed, such as the efficient computation of gradients and multiplication of vectors by Hessian matrices that are required by Møller’s algorithm; the (re)initialization of weights with simulated annealing required to (re)start Møller’s algorithm the first time and each time thereafter that it shows insufficient progress in reaching a possibly local minimum; and the use of simulated annealing when Møller’s algorithm, after possibly making considerable progress, becomes stuck at a local minimum or flat area of weight space. Outlines of the scaled conjugate gradient algorithm, the simulated annealing procedure and the training process used in SAGRAD are presented together with results from running SAGRAD on two examples of training data. PMID:26958442
An adaptive replacement algorithm for paged-memory computer systems.
NASA Technical Reports Server (NTRS)
Thorington, J. M., Jr.; Irwin, J. D.
1972-01-01
A general class of adaptive replacement schemes for use in paged memories is developed. One such algorithm, called SIM, is simulated using a probability model that generates memory traces, and the results of the simulation of this adaptive scheme are compared with those obtained using the best nonlookahead algorithms. A technique for implementing this type of adaptive replacement algorithm with state of the art digital hardware is also presented.
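The abstract does not describe the internals of SIM, so the following is purely an illustration of what an adaptive replacement policy can look like: it switches between LRU and FIFO eviction according to which policy's recently evicted pages ("ghosts") turn out to be re-referenced less often. All names are hypothetical.

```python
from collections import OrderedDict, deque

class AdaptiveReplacer:
    """Illustrative adaptive page replacement (not the paper's SIM
    scheme): a bias counter, updated by hits on ghost lists of recent
    evictions, selects LRU or FIFO as the eviction policy."""

    def __init__(self, frames, ghost=64):
        self.frames = frames
        self.pages = OrderedDict()            # page -> insertion time; order = recency
        self.ghost_lru = deque(maxlen=ghost)  # pages evicted by the LRU choice
        self.ghost_fifo = deque(maxlen=ghost)
        self.bias = 0                         # >= 0 favors LRU eviction
        self.clock = 0

    def access(self, page):
        self.clock += 1
        if page in self.pages:
            self.pages.move_to_end(page)      # refresh recency, keep insertion time
            return True                       # hit
        if page in self.ghost_lru:
            self.bias -= 1                    # evicting by LRU was a mistake
        if page in self.ghost_fifo:
            self.bias += 1                    # evicting by FIFO was a mistake
        if len(self.pages) >= self.frames:
            if self.bias >= 0:
                victim, _ = self.pages.popitem(last=False)    # least recently used
                self.ghost_lru.append(victim)
            else:
                victim = min(self.pages, key=self.pages.get)  # first inserted
                del self.pages[victim]
                self.ghost_fifo.append(victim)
        self.pages[page] = self.clock         # record insertion time
        return False                          # miss
```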
NASA Astrophysics Data System (ADS)
Dyer, Oliver T.; Ball, Robin C.
2017-03-01
We develop a new algorithm for the Brownian dynamics of soft matter systems that evolves time by spatially correlated Monte Carlo moves. The algorithm uses vector wavelets as its basic moves and produces hydrodynamics in the low Reynolds number regime propagated according to the Oseen tensor. When small moves are removed, the correlations closely approximate the Rotne-Prager tensor, itself widely used to correct for deficiencies in Oseen. We also include plane wave moves to provide the longest range correlations, which we detail for both infinite and periodic systems. The computational cost of the algorithm scales competitively with the number of particles simulated, N, scaling as N ln N in homogeneous systems and as N in dilute systems. In comparisons to established lattice Boltzmann and Brownian dynamics algorithms, the wavelet method was found to be only a factor of order one more expensive than the cheaper lattice Boltzmann algorithm in marginally semi-dilute simulations, while it is significantly faster than both algorithms at large N in dilute simulations. We also validate the algorithm by checking that it reproduces the correct dynamics and equilibrium properties of simple single polymer systems, as well as verifying the effect of periodicity on the mobility tensor.
Direct model reference adaptive control of robotic arms
NASA Technical Reports Server (NTRS)
Kaufman, Howard; Swift, David C.; Cummings, Steven T.; Shankey, Jeffrey R.
1993-01-01
The results of controlling a PUMA 560 Robotic Manipulator and the NASA shuttle Remote Manipulator System (RMS) using a Command Generator Tracker (CGT) based Direct Model Reference Adaptive Controller (DMRAC) are presented. Initially, the DMRAC algorithm was run in simulation using a detailed dynamic model of the PUMA 560. The algorithm was tuned on the simulation and then used to control the manipulator using minimum jerk trajectories as the desired reference inputs. The ability to track a trajectory in the presence of load changes was also investigated in the simulation. Satisfactory performance was achieved both in simulation and on the actual robot. The obtained responses showed that the algorithm was robust in the presence of sudden load changes. Because these results indicate that the DMRAC algorithm can indeed be successfully applied to the control of robotic manipulators, additional testing was performed to validate the applicability of DMRAC to simulated dynamics of the shuttle RMS.
Data Synchronization Discrepancies in a Formation Flight Control System
NASA Technical Reports Server (NTRS)
Ryan, Jack; Hanson, Curtis E.; Norlin, Ken A.; Allen, Michael J.; Schkolnik, Gerard (Technical Monitor)
2001-01-01
Aircraft hardware-in-the-loop simulation is an invaluable tool to flight test engineers; it reveals design and implementation flaws while operating in a controlled environment. Engineers, however, must always be skeptical of the results and analyze them within their proper context. Engineers must carefully ascertain whether an anomaly that occurs in the simulation will also occur in flight. This report presents a chronology illustrating how misleading simulation timing problems led to the implementation of an overly complex position data synchronization guidance algorithm in place of a simpler one. The report illustrates problems caused by the complex algorithm and how the simpler algorithm was chosen in the end. Brief descriptions of the project objectives, approach, and simulation are presented. The misleading simulation results and the conclusions then drawn are presented. The complex and simple guidance algorithms are presented with flight data illustrating their relative success.
Simulating large atmospheric phase screens using a woofer-tweeter algorithm.
Buscher, David F
2016-10-03
We describe an algorithm for simulating atmospheric wavefront perturbations over ranges of spatial and temporal scales spanning more than 4 orders of magnitude. An open-source implementation of the algorithm written in Python can simulate the evolution of the perturbations more than an order of magnitude faster than real time. Testing of the implementation using metrics appropriate to adaptive optics systems and long-baseline interferometers shows accuracies at the few-percent level or better.
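Buscher's open-source implementation is not reproduced here; the sketch below is a standard single-scale FFT phase screen with a Kolmogorov spectrum, which a woofer-tweeter scheme would layer at several scales to span the full range quoted above. Grid size, sampling, and the Fried parameter r0 are illustrative.

```python
import numpy as np

def kolmogorov_screen(n=256, dx=0.02, r0=0.1, seed=0):
    """One FFT-synthesized phase screen (radians) with a Kolmogorov
    spectrum; a woofer-tweeter scheme layers screens like this one
    at several spatial scales."""
    rng = np.random.default_rng(seed)
    fx = np.fft.fftfreq(n, d=dx)            # spatial frequencies (1/m)
    f = np.hypot(*np.meshgrid(fx, fx))
    f[0, 0] = np.inf                        # suppress the piston term
    # Kolmogorov phase power spectral density
    psd = 0.023 * r0 ** (-5.0 / 3.0) * f ** (-11.0 / 3.0)
    cn = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    screen = np.fft.ifft2(cn * np.sqrt(psd)) * n / dx
    return screen.real

screen = kolmogorov_screen()
print("rms phase (rad):", screen.std())
```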
A Network Selection Algorithm Considering Power Consumption in Hybrid Wireless Networks
NASA Astrophysics Data System (ADS)
Joe, Inwhee; Kim, Won-Tae; Hong, Seokjoon
In this paper, we propose a novel network selection algorithm considering power consumption in hybrid wireless networks for vertical handover. CDMA, WiBro, and WLAN networks are the candidate networks for this selection algorithm. The algorithm is composed of a power consumption prediction algorithm and a final network selection algorithm. The power consumption prediction algorithm estimates the expected lifetime of the mobile station based on the current battery level, traffic class, and power consumption of each network interface card of the mobile station. If the expected lifetime of the mobile station in a certain network is not long enough compared with the handover delay, this particular network is removed from the candidate network list, thereby preventing unnecessary handovers in the preprocessing procedure. The final network selection algorithm, in turn, consists of AHP (Analytic Hierarchical Process) and GRA (Grey Relational Analysis). The global factors of the network selection structure are QoS, cost, and lifetime. If the user's preference is lifetime, our selection algorithm selects the network that offers the longest service duration due to low power consumption. We also conduct simulations using the OPNET simulation tool. The simulation results show that the proposed algorithm provides longer lifetime in the hybrid wireless network environment.
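A minimal sketch of the two-stage structure just described, with all numbers invented: stage 1 prefilters candidates whose predicted lifetime would not survive the handover delay, and stage 2 ranks the survivors by a weighted score standing in for the AHP/GRA machinery.

```python
# Candidate networks with hypothetical attributes (not from the paper).
candidates = {
    "CDMA":  {"power_w": 0.8, "qos": 0.6, "cost": 0.7},
    "WiBro": {"power_w": 1.2, "qos": 0.8, "cost": 0.5},
    "WLAN":  {"power_w": 0.5, "qos": 0.7, "cost": 0.9},
}
battery_wh = 2.0          # current battery level (assumed)
handover_delay_s = 30.0   # network must outlive this delay

# Stage 1: prefilter by predicted lifetime (the power-consumption
# prediction step).
def lifetime_s(net):
    return battery_wh * 3600.0 / net["power_w"]

viable = {k: v for k, v in candidates.items()
          if lifetime_s(v) > handover_delay_s}

# Stage 2: weighted score; the weights play the role of AHP priorities
# and the normalized attributes stand in for GRA coefficients.
weights = {"qos": 0.3, "cost": 0.2, "lifetime": 0.5}  # user prefers lifetime
max_life = max(lifetime_s(v) for v in viable.values())

def score(net):
    return (weights["qos"] * net["qos"] + weights["cost"] * net["cost"]
            + weights["lifetime"] * lifetime_s(net) / max_life)

best = max(viable, key=lambda k: score(viable[k]))
print("selected network:", best)
```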
Predictive study on the risk of malaria spreading due to global warming
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ono, Masaji
Global warming will bring about a temperature elevation, and the habitat of vectors of infectious diseases, such as malaria and dengue fever, will spread into subtropical and temperate zones. The purpose of this study is to simulate the spreading of these diseases through reexamination of existing data and collection of some additional information by field survey. From these data, the author will establish the relationship between meteorological conditions, vector density, and malaria occurrence, and then simulate and predict malaria epidemics in the case of temperature elevation in Southeast Asia and Japan.
Visualizing staggered fields and analyzing electromagnetic data with PerceptEM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shasharina, Svetlana
This project resulted in VSimSP: software for simulating large photonic devices on high-performance computers. It includes: a GUI for photonics simulations; a high-performance meshing algorithm; a 2nd-order multimaterials algorithm; a mode solver for waveguides; a 2nd-order material dispersion algorithm; S-parameter calculation; a high-performance workflow at NERSC; and simulation setups for large photonic devices. We believe we became the only company in the world that can simulate large photonic devices in 3D on modern supercomputers without the need to split them into subparts or resort to low-fidelity modeling. We started a commercial engagement with a manufacturing company.
W. Wang; J. Xiao; S. V. Ollinger; J. Chen; A. Noormets
2014-01-01
Stand-replacing disturbances including harvests have substantial impacts on forest carbon (C) fluxes and stocks. The quantification and simulation of these effects is essential for better understanding forest C dynamics and informing forest management in the context of global change. We evaluated the process-based forest ecosystem model, PnET-CN, for how well and by...
NASA Technical Reports Server (NTRS)
Zhu, Zhifan; Gridnev, Sergei; Windhorst, Robert D.
2015-01-01
This User Guide describes the SOSS (Surface Operations Simulator and Scheduler) software build and graphical user interface. SOSS is a desktop application that simulates airport surface operations in fast time using traffic management algorithms. It moves aircraft on the airport surface based on information provided by scheduling algorithm prototypes, monitors separation violations and scheduling conformance, and produces scheduling algorithm performance data.
Sensitivity of CO2 Simulation in a GCM to the Convective Transport Algorithms
NASA Technical Reports Server (NTRS)
Zhu, Z.; Pawson, S.; Collatz, G. J.; Gregg, W. W.; Kawa, S. R.; Baker, D.; Ott, L.
2014-01-01
Convection plays an important role in the transport of heat, moisture, and trace gases. In this study, we simulated CO2 concentrations with an atmospheric general circulation model (GCM). Three different convective transport algorithms were used. One is a modified Arakawa-Schubert scheme that was native to the GCM; two others, used in two off-line chemical transport models (CTMs), were added to the GCM here for comparison purposes. Advanced CO2 surface fluxes were used for the simulations. The results were compared to a large quantity of CO2 observation data. We find that the simulation results are sensitive to the convective transport algorithms. Overall, the three simulations are quite realistic and similar to each other in the remote marine regions, but are significantly different in some land regions with strong fluxes, such as the Amazon and Siberia, during the convection seasons. Large biases against CO2 measurements are found in these regions in the control run, which uses the original GCM. The simulation with the simple diffusive algorithm is better. The difference between the two simulations is related to their very different convective transport speeds.
Parameterization of Keeling's network generation algorithm.
Badham, Jennifer; Abbass, Hussein; Stocker, Rob
2008-09-01
Simulation is increasingly being used to examine epidemic behaviour and assess potential management options. The utility of the simulations relies on the ability to replicate those aspects of the social structure that are relevant to epidemic transmission. One approach is to generate networks with desired social properties. Recent research by Keeling and his colleagues has generated simulated networks with a range of properties, and examined the impact of these properties on epidemic processes occurring over the network. However, published work has included only limited analysis of the algorithm itself and the way in which the network properties are related to the algorithm parameters. This paper identifies some relationships between the algorithm parameters and selected network properties (mean degree, degree variation, clustering coefficient and assortativity). Our approach enables users of the algorithm to efficiently generate a network with given properties, thereby allowing realistic social networks to be used as the basis of epidemic simulations. Alternatively, the algorithm could be used to generate social networks with a range of property values, enabling analysis of the impact of these properties on epidemic behaviour.
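Keeling's actual parameterization is not reproduced here, but a distance-kernel random network in the same spirit shows how the properties studied in the paper can be measured; the kernel form and target mean degree are assumptions.

```python
import numpy as np
import networkx as nx

def spatial_network(n=300, kernel_width=0.08, mean_degree=8, seed=0):
    """Connect nearby individuals preferentially, in the spirit of
    Keeling-style spatial network generators (kernel form assumed)."""
    rng = np.random.default_rng(seed)
    pos = rng.random((n, 2))                          # individuals in a unit square
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    w = np.exp(-(d / kernel_width) ** 2)              # distance kernel
    np.fill_diagonal(w, 0.0)
    p = w * (mean_degree * n / w.sum())               # scale toward target mean degree
    edges = rng.random((n, n)) < p                    # p may exceed 1 for close pairs
    G = nx.Graph()
    G.add_nodes_from(range(n))
    G.add_edges_from(zip(*np.where(np.triu(edges, 1))))
    return G

G = spatial_network()
deg = [d for _, d in G.degree()]
print("mean degree:", np.mean(deg))
print("clustering coefficient:", nx.average_clustering(G))
print("assortativity:", nx.degree_assortativity_coefficient(G))
```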
NASA Astrophysics Data System (ADS)
Eilert, Tobias; Beckers, Maximilian; Drechsler, Florian; Michaelis, Jens
2017-10-01
The analysis tool and software package Fast-NPS can be used to analyse smFRET data to obtain quantitative structural information about macromolecules in their natural environment. In the algorithm a Bayesian model gives rise to a multivariate probability distribution describing the uncertainty of the structure determination. Since Fast-NPS aims to be an easy-to-use general-purpose analysis tool for a large variety of smFRET networks, we established an MCMC based sampling engine that approximates the target distribution and requires no parameter specification by the user at all. For an efficient local exploration we automatically adapt the multivariate proposal kernel according to the shape of the target distribution. In order to handle multimodality, the sampler is equipped with a parallel tempering scheme that is fully adaptive with respect to temperature spacing and number of chains. Since the molecular surrounding of a dye molecule affects its spatial mobility and thus the smFRET efficiency, we introduce dye models which can be selected for every dye molecule individually. These models allow the user to represent the smFRET network in great detail leading to an increased localisation precision. Finally, a tool to validate the chosen model combination is provided. Programme Files doi:http://dx.doi.org/10.17632/7ztzj63r68.1 Licencing provisions: Apache-2.0 Programming language: GUI in MATLAB (The MathWorks) and the core sampling engine in C++ Nature of problem: Sampling of highly diverse multivariate probability distributions in order to solve for macromolecular structures from smFRET data. Solution method: MCMC algorithm with fully adaptive proposal kernel and parallel tempering scheme.
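The core sampling engine is C++, but the replica-exchange idea it automates can be sketched in a few lines of Python: Metropolis chains at several temperatures with occasional adjacent-pair swaps, here on a bimodal toy target with a fixed geometric temperature ladder (Fast-NPS adapts the spacing and chain count automatically; everything below is illustrative).

```python
import numpy as np

rng = np.random.default_rng(2)

def log_target(x):
    """Bimodal toy density standing in for a multimodal smFRET posterior."""
    return np.logaddexp(-0.5 * (x - 3.0) ** 2, -0.5 * (x + 3.0) ** 2)

temps = 1.5 ** np.arange(6)   # fixed geometric ladder (Fast-NPS adapts this)
x = np.zeros(len(temps))      # one chain per temperature
samples = []
for step in range(20000):
    # Metropolis update within each tempered chain
    for k, T in enumerate(temps):
        prop = x[k] + rng.normal(scale=1.0)
        if np.log(rng.random()) < (log_target(prop) - log_target(x[k])) / T:
            x[k] = prop
    # attempt one swap between a random adjacent temperature pair
    k = rng.integers(len(temps) - 1)
    a = (1 / temps[k] - 1 / temps[k + 1]) * (log_target(x[k + 1]) - log_target(x[k]))
    if np.log(rng.random()) < a:
        x[k], x[k + 1] = x[k + 1], x[k]
    samples.append(x[0])      # keep only the T = 1 chain

samples = np.array(samples[2000:])  # discard burn-in
print("fraction in right mode:", np.mean(samples > 0))  # ~0.5 if mixing well
```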
Zhou, Yuting; Xiao, Xiangming; Qin, Yuanwei; Dong, Jinwei; Zhang, Geli; Kou, Weili; Jin, Cui; Wang, Jie; Li, Xiangping
2016-01-01
Accurate and up-to-date information on the spatial distribution of paddy rice fields is necessary for the studies of trace gas emissions, water source management, and food security. The phenology-based paddy rice mapping algorithm, which identifies the unique flooding stage of paddy rice, has been widely used. However, identification and mapping of paddy rice in rice-wetland coexistent areas is still a challenging task. In this study, we found that the flooding/transplanting periods of paddy rice and natural wetlands were different. The natural wetlands flood earlier and have a shorter flooding duration than paddy rice in the Panjin Plain, a temperate region in China. We used this asynchronous flooding stage to extract the paddy rice planting area from the rice-wetland coexistent area. MODIS Land Surface Temperature (LST) data were used to derive the temperature-defined plant growing season. Landsat 8 OLI imagery was used to detect the flooding signal, and paddy rice was then extracted using the difference in flooding stages between paddy rice and natural wetlands. The resultant paddy rice map was evaluated with in-situ ground-truth data and Google Earth images. The estimated overall accuracy and Kappa coefficient were 95% and 0.90, respectively. The spatial pattern of the OLI-derived paddy rice map agrees well with the paddy rice layer from the National Land Cover Dataset from 2010 (NLCD-2010). The differences between RiceLandsat and RiceNLCD are within ±20% for most 1-km grid cells. The results of this study demonstrate the potential of the phenology-based paddy rice mapping algorithm, via integrating MODIS and Landsat 8 OLI images, to map paddy rice fields in complex landscapes of paddy rice and natural wetland in the temperate region. PMID:27688742
Zhang, Geli; Xiao, Xiangming; Dong, Jinwei; Kou, Weili; Jin, Cui; Qin, Yuanwei; Zhou, Yuting; Wang, Jie; Menarguez, Michael Angelo; Biradar, Chandrashekhar
2016-01-01
Knowledge of the area and spatial distribution of paddy rice is important for assessment of food security, management of water resources, and estimation of greenhouse gas (methane) emissions. Paddy rice agriculture has expanded rapidly in northeastern China in the last decade, but there are no updated maps of paddy rice fields in the region. Existing algorithms for identifying paddy rice fields are based on the unique physical features of paddy rice during the flooding and transplanting phases and use vegetation indices that are sensitive to the dynamics of the canopy and surface water content. However, flooding phenomena in high-latitude areas can also come from spring snowmelt. We used land surface temperature (LST) data from the Moderate Resolution Imaging Spectroradiometer (MODIS) sensor to determine the temporal window of flooding and rice transplantation over a year to improve the existing phenology-based approach. Other land cover types (e.g., evergreen vegetation, permanent water bodies, and sparse vegetation) with potential influences on paddy rice identification were removed (masked out) due to their different temporal profiles. The accuracy assessment using high-resolution images showed that the resultant MODIS-derived paddy rice map of northeastern China in 2010 had a high accuracy (producer and user accuracies of 92% and 96%, respectively). The MODIS-based map also had a comparable accuracy to the 2010 Landsat-based National Land Cover Dataset (NLCD) of China in terms of both area and spatial pattern. This study demonstrated that our improved algorithm, which uses both thermal and optical MODIS data, provides a robust, simple, and automated approach to identify and map paddy rice fields in temperate and cold temperate zones, the northern frontier of rice planting. PMID:27667901
Quantum Simulation of Tunneling in Small Systems
Sornborger, Andrew T.
2012-01-01
A number of quantum algorithms have been performed on small quantum computers; these include Shor's prime factorization algorithm, error correction, Grover's search algorithm and a number of analog and digital quantum simulations. Because of the number of gates and qubits necessary, however, digital quantum particle simulations remain untested. A contributing factor to the system size required is the number of ancillary qubits needed to implement matrix exponentials of the potential operator. Here, we show that a set of tunneling problems may be investigated with no ancillary qubits and a cost of one single-qubit operator per time step for the potential evolution, eliminating at least half of the quantum gates required for the algorithm and more than that in the general case. Such simulations are within reach of current quantum computer architectures. PMID:22916333
Dib, Alain E; Johnson, Chris E; Driscoll, Charles T; Fahey, Timothy J; Hayhoe, Katharine
2014-05-01
Carbon (C) sequestration in forest biomass and soils may help decrease regional C footprints and mitigate future climate change. The efficacy of these practices must be verified by monitoring and by approved calculation methods (i.e., models) to be credible in C markets. Two widely used soil organic matter models - CENTURY and RothC - were used to project changes in SOC pools after clear-cutting disturbance, as well as under a range of future climate and atmospheric carbon dioxide (CO2) scenarios. Data from the temperate, predominantly deciduous Hubbard Brook Experimental Forest (HBEF) in New Hampshire, USA, were used to parameterize and validate the models. Clear-cutting simulations demonstrated that both models can effectively simulate soil C dynamics in the northern hardwood forest when adequately parameterized. The minimum postharvest SOC predicted by RothC occurred in postharvest year 14 and was within 1.5% of the observed minimum, which occurred in year 8. CENTURY predicted the postharvest minimum SOC to occur in year 45, at a value 6.9% greater than the observed minimum; the slow response of both models to disturbance suggests that they may overestimate the time required to reach new steady-state conditions. Four climate change scenarios were used to simulate future changes in SOC pools. Climate-change simulations predicted increases in SOC by as much as 7% at the end of this century, partially offsetting future CO2 emissions. This sequestration was the product of enhanced forest productivity, and associated litter input to the soil, due to increased temperature, precipitation and CO2. The simulations also suggested that considerable losses of SOC (8-30%) could occur if forest vegetation at HBEF does not respond to changes in climate and CO2 levels. Therefore, the source/sink behavior of temperate forest soils likely depends on the degree to which forest growth is stimulated by new climate and CO2 conditions. © 2013 John Wiley & Sons Ltd.
Persistent random walk of cells involving anomalous effects and random death
NASA Astrophysics Data System (ADS)
Fedotov, Sergei; Tan, Abby; Zubarev, Andrey
2015-04-01
The purpose of this paper is to implement a random death process into a persistent random walk model which produces sub-ballistic superdiffusion (Lévy walk). We develop a stochastic two-velocity jump model of cell motility for which the switching rate depends upon the time which the cell has spent moving in one direction. It is assumed that the switching rate is a decreasing function of residence (running) time. This assumption leads to the power law for the velocity switching time distribution. This describes the anomalous persistence of cell motility: the longer the cell moves in one direction, the smaller the switching probability to another direction becomes. We derive master equations for the cell densities with the generalized switching terms involving the tempered fractional material derivatives. We show that the random death of cells has an important implication for the transport process through tempering of the superdiffusive process. In the long-time limit we write stationary master equations in terms of exponentially truncated fractional derivatives in which the rate of death plays the role of tempering of a Lévy jump distribution. We find the upper and lower bounds for the stationary profiles corresponding to the ballistic transport and diffusion with the death-rate-dependent diffusion coefficient. Monte Carlo simulations confirm these bounds.
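A direct Monte Carlo of this picture is straightforward; the sketch below draws power-law run times directly rather than modeling the residence-time-dependent switching rate, and kills each walker at an exponential death time, so the death rate visibly tempers the width of the final (stationary) profile. All parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def run_time(mu=1.5, t0=0.1):
    """Pareto running time, P(tau > t) = (t0/t)^mu; 1 < mu < 2 gives the
    infinite-variance run lengths behind sub-ballistic superdiffusion."""
    return t0 * (1.0 - rng.random()) ** (-1.0 / mu)

def position_at_death(death_rate, mu=1.5, v=1.0):
    """Two-velocity walk, direction reversed after each completed run,
    killed at an exponential death time; returns the death position."""
    t_death = rng.exponential(1.0 / death_rate)
    t, pos, direction = 0.0, 0.0, rng.choice([-1.0, 1.0])
    while True:
        tau = run_time(mu)
        if t + tau >= t_death:
            return pos + direction * v * (t_death - t)
        pos += direction * v * tau
        t += tau
        direction = -direction  # two-velocity model: reverse after each run

# a larger death rate tempers the Levy runs more strongly,
# narrowing the stationary profile of death positions
for rate in (0.1, 0.02):
    x = np.array([position_at_death(rate) for _ in range(5000)])
    print(f"death rate {rate}: stationary profile std = {x.std():.2f}")
```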
Ozone-induced stomatal sluggishness changes carbon and water balance of temperate deciduous forests.
Hoshika, Yasutomo; Katata, Genki; Deushi, Makoto; Watanabe, Makoto; Koike, Takayoshi; Paoletti, Elena
2015-05-06
Tropospheric ozone concentrations have increased by 60-100% in the Northern Hemisphere since the 19th century. The phytotoxic nature of ozone can impair forest productivity. In addition, ozone affects stomatal functions, by both favoring stomatal closure and impairing stomatal control. Ozone-induced stomatal sluggishness, i.e., a delay in stomatal responses to fluctuating stimuli, has the potential to change the carbon and water balance of forests. This effect has to be included in models for ozone risk assessment. Here we examine the effects of ozone-induced stomatal sluggishness on carbon assimilation and transpiration of temperate deciduous forests in the Northern Hemisphere in 2006-2009 by combining a detailed multi-layer land surface model and a global atmospheric chemistry model. An analysis of results by ozone FACE (Free-Air Controlled Exposure) experiments suggested that ozone-induced stomatal sluggishness can be incorporated into modelling based on a simple parameter (gmin, minimum stomatal conductance) which is used in the coupled photosynthesis-stomatal model. Our simulation showed that ozone can decrease water use efficiency, i.e., the ratio of net CO2 assimilation to transpiration, of temperate deciduous forests up to 20% when ozone-induced stomatal sluggishness is considered, and up to only 5% when the stomatal sluggishness is neglected.
NASA Astrophysics Data System (ADS)
Moon, Joonoh; Lee, Chang-Hoon; Lee, Tae-Ho; Kim, Hyoung Chan
2015-01-01
The phase transformation and mechanical properties in the weld heat-affected zone (HAZ) of a reduced activation ferritic/martensitic steel were explored. The samples for the HAZs were prepared using a Gleeble simulator at different heat inputs. The base steel consisted of tempered martensite and carbides after quenching and tempering treatment, whereas the HAZs consisted of martensite, δ-ferrite, and a small volume of autotempered martensite. The prior austenite grain size, lath width of martensite, and δ-ferrite fraction in the HAZs increased with increasing heat input. The mechanical properties were evaluated using Vickers hardness and Charpy V-notch impact tests. The Vickers hardness in the HAZs was higher than that in the base steel but did not change noticeably with increasing heat input. The HAZs showed poor impact properties compared to the base steel due to the formation of martensite and δ-ferrite. In addition, the impact property of the HAZs deteriorated further as the heat input increased. Post-weld heat treatment contributed to improving the impact property of the HAZs through the formation of tempered martensite, but the impact property of the HAZs remained lower than that of the base steel.
The Change in the area of various land covers on the Tibetan Plateau during 1957-2015
NASA Astrophysics Data System (ADS)
Cuo, Lan; Zhang, Yongxin
2017-04-01
With an average elevation of 4000 m and an area of 2.5×10^6 km², the Tibetan Plateau hosts various fragile ecosystems such as perennial alpine meadow, perennial alpine steppe, temperate evergreen needleleaf trees, temperate deciduous trees, temperate shrub grassland, and barely vegetated desert. Perennial alpine meadow and steppe are the two dominant vegetation types in the heartland of the plateau. MODIS Leaf Area Index (LAI) ranges from 0 to 2 in most parts of the plateau. With climate change, these ecosystems are expected to undergo alteration. This study uses a dynamic vegetation model, Lund-Potsdam-Jena (LPJ), to investigate the change of the barely vegetated area and other vegetation types caused by climate change during 1957-2015 on the Tibetan Plateau. Model-simulated foliage projective coverage (FPC) and plant functional types (PFTs) are selected for the investigation. The model is evaluated first using both a field-surveyed land cover map and MODIS LAI images. Long-term trends of vegetation FPC are examined. Decadal variations of vegetated and barely vegetated land are compared. The impacts of extreme precipitation, air temperature, and CO2 on the expansion and contraction of barely vegetated and vegetated areas are shown. The study will identify the dominant climate factors affecting the desert area in the region.
Xiao, Li; Cai, Qin; Li, Zhilin; Zhao, Hongkai; Luo, Ray
2014-01-01
A multi-scale framework is proposed for more realistic molecular dynamics simulations in continuum solvent models by coupling a molecular mechanics treatment of solute with a fluid mechanics treatment of solvent. This article reports our initial efforts to formulate the physical concepts necessary for coupling the two mechanics and develop a 3D numerical algorithm to simulate the solvent fluid via the Navier-Stokes equation. The numerical algorithm was validated with multiple test cases. The validation shows that the algorithm is effective and stable, with observed accuracy consistent with our design. PMID:25404761
Numerically robust and efficient nonlocal electron transport in 2D DRACO simulations
NASA Astrophysics Data System (ADS)
Cao, Duc; Chenhall, Jeff; Moses, Greg; Delettrez, Jacques; Collins, Tim
2013-10-01
An improved implicit algorithm based on the Schurtz, Nicolai and Busquet (SNB) algorithm for nonlocal electron transport is presented. Validation with direct-drive shock timing experiments and verification with the Goncharov nonlocal model in 1D LILAC simulations demonstrate the viability of this efficient algorithm for producing 2D Lagrangian radiation-hydrodynamics direct-drive simulations. Additionally, simulations provide strong incentive to further modify key parameters within the SNB theory, namely the "mean free path." An example 2D polar-drive simulation to study 2D effects of the nonlocal flux, as well as mean-free-path modifications, will also be presented. This research was supported by the University of Rochester Laboratory for Laser Energetics.
Liquid-liquid transition in the ST2 model of water
NASA Astrophysics Data System (ADS)
Debenedetti, Pablo
2013-03-01
We present clear evidence of the existence of a metastable liquid-liquid phase transition in the ST2 model of water. Using four different techniques (the weighted histogram analysis method with single-particle moves, well-tempered metadynamics with single-particle moves, weighted histograms with parallel tempering and collective particle moves, and conventional molecular dynamics), we calculate the free energy surface over a range of thermodynamic conditions, we perform a finite size scaling analysis for the free energy barrier between the coexisting liquid phases, we demonstrate the attainment of diffusive behavior, and we perform stringent thermodynamic consistency checks. The results provide conclusive evidence of a first-order liquid-liquid transition. We also show that structural equilibration in the sluggish low-density phase is attained over the time scale of our simulations, and that crystallization times are significantly longer than structural equilibration, even under deeply supercooled conditions. We place our results in the context of the theory of metastability.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Faduska, A.; Rau, E.; Alger, J.V.
Data are given on the corrosion properties of type 410 stainless steel tempered at 1150 °F. Control-mechanism drive-motor tubes and some outer housings are constructed of 650 °F tempered type 410 stainless steel. Since the stress corrosion resistance of type 410 in the 1150 °F tempered condition is superior, the utilization of the 1150 °F tempered material is more desirable for this application. The properties of 410 stainless steel hardened and tempered at 1150 °F are given. (W.L.H.)
A hierarchical exact accelerated stochastic simulation algorithm
NASA Astrophysics Data System (ADS)
Orendorff, David; Mjolsness, Eric
2012-12-01
A new algorithm, "HiER-leap" (hierarchical exact reaction-leaping), is derived which improves on the computational properties of the ER-leap algorithm for exact accelerated simulation of stochastic chemical kinetics. Unlike ER-leap, HiER-leap utilizes a hierarchical or divide-and-conquer organization of reaction channels into tightly coupled "blocks" and is thereby able to speed up systems with many reaction channels. Like ER-leap, HiER-leap is based on the use of upper and lower bounds on the reaction propensities to define a rejection sampling algorithm with inexpensive early rejection and acceptance steps. But in HiER-leap, large portions of intra-block sampling may be done in parallel. An accept/reject step is used to synchronize across blocks. This method scales well when many reaction channels are present and has desirable asymptotic properties. The algorithm is exact, parallelizable and achieves a significant speedup over the stochastic simulation algorithm and ER-leap on certain problems. This algorithm offers a potentially important step towards efficient in silico modeling of entire organisms.
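HiER-leap itself (with its propensity bounds, blocks, and rejection steps) is not reproduced here; the sketch below is the baseline exact stochastic simulation algorithm (the Gillespie direct method) that ER-leap and HiER-leap accelerate, applied to a made-up reversible dimerization network.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy network: A + A -> B (rate c1), B -> A + A (rate c2); all made up.
c1, c2 = 0.001, 0.1
state = np.array([1000, 0])          # molecule counts of [A, B]
stoich = np.array([[-2, +1],         # effect of reaction 1 on [A, B]
                   [+2, -1]])        # effect of reaction 2

def propensities(s):
    a, b = s
    return np.array([c1 * a * (a - 1) / 2.0, c2 * b])

t, t_end = 0.0, 50.0
while t < t_end:
    a = propensities(state)
    a0 = a.sum()
    if a0 == 0:                      # no reaction can fire
        break
    t += rng.exponential(1.0 / a0)   # exponential time to the next event
    r = rng.choice(len(a), p=a / a0) # pick which reaction fires
    state += stoich[r]

print("final state [A, B]:", state, "at t =", round(t, 2))
```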
List-Based Simulated Annealing Algorithm for Traveling Salesman Problem
Zhan, Shi-hua; Lin, Juan; Zhang, Ze-jun
2016-01-01
The simulated annealing (SA) algorithm is a popular intelligent optimization algorithm which has been successfully applied in many fields. Parameter setting is a key factor in its performance, but it is also tedious work. To simplify parameter setting, we present a list-based simulated annealing (LBSA) algorithm to solve the traveling salesman problem (TSP). The LBSA algorithm uses a novel list-based cooling schedule to control the decrease of temperature. Specifically, a list of temperatures is created first, and then the maximum temperature in the list is used by the Metropolis acceptance criterion to decide whether to accept a candidate solution. The temperature list is adapted iteratively according to the topology of the solution space of the problem. The effectiveness and the parameter sensitivity of the list-based cooling schedule are illustrated through benchmark TSP instances. The LBSA algorithm, whose performance is robust over a wide range of parameter values, shows competitive performance compared with some other state-of-the-art algorithms. PMID:27034650
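A compact reading of the LBSA idea, with the list-update rule simplified from the paper: keep a list of temperatures, always use its maximum in the Metropolis test, and refresh that maximum from observed uphill moves so the schedule adapts to the landscape. City coordinates and list length are invented.

```python
import numpy as np

rng = np.random.default_rng(5)
pts = rng.random((40, 2))                       # random city coordinates

def tour_len(tour):
    return np.linalg.norm(pts[tour] - pts[np.roll(tour, -1)], axis=1).sum()

def two_opt(tour):
    i, j = sorted(rng.choice(len(tour), size=2, replace=False))
    new = tour.copy()
    new[i:j + 1] = new[i:j + 1][::-1].copy()    # reverse a segment
    return new

tour = rng.permutation(40)
f = tour_len(tour)
# seed the temperature list from the sizes of random uphill moves
temps = sorted((abs(tour_len(two_opt(tour)) - f) + 1e-9 for _ in range(20)),
               reverse=True)
for _ in range(20000):
    T = temps[0]                                # always use the list maximum
    cand = two_opt(tour)
    fc = tour_len(cand)
    if fc <= f:
        tour, f = cand, fc
    elif rng.random() < np.exp((f - fc) / T):
        # an accepted uphill move refreshes the maximum temperature,
        # so the schedule adapts to the local landscape
        temps[0] = (fc - f) / np.log(2.0)
        temps.sort(reverse=True)
        tour, f = cand, fc
print("final tour length:", round(f, 3))
```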
Gui, Zhipeng; Yu, Manzhu; Yang, Chaowei; Jiang, Yunfeng; Chen, Songqing; Xia, Jizhe; Huang, Qunying; Liu, Kai; Li, Zhenlong; Hassan, Mohammed Anowarul; Jin, Baoxuan
2016-01-01
Dust storms have serious disastrous impacts on the environment, human health, and assets. The development and application of dust storm models have contributed significantly to better understanding and predicting the distribution, intensity, and structure of dust storms. However, dust storm simulation is a data- and computing-intensive process. To improve the computing performance, high performance computing has been widely adopted by dividing the entire study area into multiple subdomains and allocating each subdomain to different computing nodes in a parallel fashion. Inappropriate allocation may introduce imbalanced task loads and unnecessary communications among computing nodes. Therefore, allocation is a key factor that may impact the efficiency of the parallel process. An allocation algorithm is expected to consider the computing cost and communication cost for each computing node to minimize total execution time and reduce overall communication cost for the entire simulation. This research introduces three algorithms to optimize the allocation by considering the spatial and communicational constraints: 1) an Integer Linear Programming (ILP) based algorithm from a combinatorial optimization perspective; 2) a K-Means and Kernighan-Lin combined heuristic algorithm (K&K) integrating geometric and coordinate-free methods by merging local and global partitioning; 3) an automatic seeded region growing based geometric and local partitioning algorithm (ASRG). The performance and effectiveness of the three algorithms are compared based on different factors. Further, we adopt the K&K algorithm as the demonstrated algorithm for the experiment of dust model simulation with the non-hydrostatic mesoscale model (NMM-dust) and compare the performance with the MPI default sequential allocation. The results demonstrate that the K&K method significantly improves the simulation performance with better subdomain allocation. This method can also be adopted for other relevant atmospheric and numerical modeling. PMID:27044039
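The sketch below covers only the K-Means half of the K&K algorithm: it clusters grid cells into subdomains and reports the load imbalance and an edge-cut proxy for communication cost; the Kernighan-Lin refinement and the real dust-model cost functions are omitted, and the workload field is made up.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

nx_, ny, k = 60, 40, 8                        # grid size and number of nodes
cells = np.array([(i, j) for i in range(nx_) for j in range(ny)], float)
# hypothetical per-cell workload (e.g., denser dust physics in one corner)
load = 1.0 + 2.0 * np.exp(-((cells[:, 0] - 10) ** 2
                            + (cells[:, 1] - 8) ** 2) / 200.0)

_, labels = kmeans2(cells, k, minit="++")     # geometric partitioning step

# load balance: maximum node load relative to the mean node load
node_load = np.array([load[labels == c].sum() for c in range(k)])
print("load imbalance:", node_load.max() / node_load.mean())

# communication proxy: grid edges whose endpoints sit on different nodes
lab = labels.reshape(nx_, ny)
cut = (np.count_nonzero(lab[1:, :] != lab[:-1, :])
       + np.count_nonzero(lab[:, 1:] != lab[:, :-1]))
print("edge cut:", cut)
```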
Active Control of Wind Tunnel Noise
NASA Technical Reports Server (NTRS)
Hollis, Patrick (Principal Investigator)
1991-01-01
The need for an adaptive active control system was realized, since a wind tunnel is subjected to variations in air velocity, temperature, air turbulence, and some other factors such as nonlinearity. Among many adaptive algorithms, the Least Mean Squares (LMS) algorithm, which is the simplest one, has been used in Active Noise Control (ANC) systems by some researchers. However, Eriksson's results (Eriksson, 1985) showed instability in the ANC system with an IIR filter for random noise input. The Recursive Least Squares (RLS) algorithm, although computationally more complex than the LMS algorithm, has better convergence and stability properties. The ANC system in the present work was simulated by using an FIR filter with an RLS algorithm for different inputs and for a number of plant models. Simulation results for the ANC system with acoustic feedback showed better robustness when used with the RLS algorithm than with the LMS algorithm for all types of inputs. Overall attenuation in the frequency domain was better in the case of the RLS adaptive algorithm. Simulation results with a more realistic plant model and an RLS adaptive algorithm showed a slower convergence rate than the case with the acoustic plant modeled as a pure delay. However, the attenuation properties were satisfactory for the simulated system with the modified plant. The effect of filter length on the rate of convergence and attenuation was studied. It was found that the rate of convergence decreases with increasing filter length, whereas the attenuation increases with it. The final design of the ANC system was simulated and found to have a reasonable convergence rate and good attenuation properties for an input containing discrete frequencies and random noise.
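A minimal FIR-LMS noise canceller on synthetic data illustrates the filter-length effect reported above (the RLS variant, which the report found more robust, is omitted for brevity); the plant response, step size, and signal model are all invented.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 20000
ref = rng.normal(size=n)                     # reference noise input
# "plant": the path from the noise source to the error sensor,
# here a made-up FIR response with a short delay
plant = np.array([0.0, 0.0, 0.5, 0.3, -0.2])
d = np.convolve(ref, plant)[:n]              # noise reaching the sensor

def lms(ref, d, taps, mu=0.005):
    w = np.zeros(taps)                       # adaptive FIR weights
    x = np.zeros(taps)                       # tapped delay line
    err = np.empty(len(d))
    for i in range(len(d)):
        x[1:] = x[:-1]
        x[0] = ref[i]                        # shift in the newest sample
        y = w @ x                            # filter output (anti-noise)
        err[i] = d[i] - y                    # residual at the sensor
        w += 2 * mu * err[i] * x             # LMS weight update
    return err

# too few taps cannot model the plant; more taps attenuate further
for taps in (2, 8, 32):
    e = lms(ref, d, taps)
    print(f"{taps} taps: residual power {np.mean(e[-5000:] ** 2):.2e}")
```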
Texture and Tempered Condition Combined Effects on Fatigue Behavior in an Al-Cu-Li Alloy
NASA Astrophysics Data System (ADS)
Wang, An; Liu, Zhiyi; Liu, Meng; Wu, Wenting; Bai, Song; Yang, Rongxian
2017-05-01
The combined effects of texture and tempered condition on fatigue behavior in an Al-Cu-Li alloy have been investigated using tensile testing, cyclic loading testing, scanning electron microscopy (SEM), transmission electron microscopy (TEM), and texture analysis. Results showed that in the near-threshold region, T4-tempered samples possessed the lowest fatigue crack propagation (FCP) rate. In the Paris regime, the T4-tempered sample had a similar FCP rate to the T6-tempered sample. The T83-tempered sample exhibited the greatest FCP rate among the three tempered conditions. 3% pre-stretching in the T83-tempered sample resulted in a reduced intensity of Goss texture and facilitated T1 precipitation. SEM results showed that less crack deflection was observed in the T83-tempered sample compared to the other two tempered samples. This was attributed to the combined effects of a lower intensity of Goss texture and T1 precipitates retarding reversible dislocation slip in the plastic zone ahead of the crack tip.
A software framework for pipelined arithmetic algorithms in field programmable gate arrays
NASA Astrophysics Data System (ADS)
Kim, J. B.; Won, E.
2018-03-01
Pipelined algorithms implemented in field programmable gate arrays are extensively used for hardware triggers in the modern experimental high energy physics field and the complexity of such algorithms increases rapidly. For development of such hardware triggers, algorithms are developed in C++, ported to hardware description language for synthesizing firmware, and then ported back to C++ for simulating the firmware response down to the single bit level. We present a C++ software framework which automatically simulates and generates hardware description language code for pipelined arithmetic algorithms.
Perrin, Jean-Baptiste; Durand, Benoît; Gay, Emilie; Ducrot, Christian; Hendrikx, Pascal; Calavas, Didier; Hénaux, Viviane
2015-01-01
We performed a simulation study to evaluate the performance of an anomaly detection algorithm considered within the framework of an automated surveillance system for cattle mortality. The method consisted of a combination of temporal regression and spatial cluster detection which identifies, for a given week, clusters of spatial units showing an excess of deaths in comparison with their own historical fluctuations. First, we simulated 1,000 outbreaks of a disease causing extra deaths in the French cattle population (about 200,000 herds and 20 million cattle) according to a model mimicking the spreading patterns of an infectious disease, and injected these disease-related extra deaths into an authentic mortality dataset spanning January 2005 to January 2010. Second, we applied our algorithm to each of the 1,000 semi-synthetic datasets to identify clusters of spatial units showing an excess of deaths given their own historical fluctuations. Third, we verified whether the clusters identified by the algorithm contained simulated extra deaths, in order to evaluate the ability of the algorithm to identify unusual mortality clusters caused by an outbreak. Among the 1,000 simulations, the median duration of simulated outbreaks was 8 weeks, with a median of 5,627 simulated deaths and 441 infected herds. Within the 12-week trial period, 73% of the simulated outbreaks were detected, with a median timeliness of 1 week and a mean of 1.4 weeks. The proportion of outbreak weeks flagged by an alarm was 61% (i.e., sensitivity), whereas one in three alarms was a true alarm (i.e., positive predictive value). The performance of the detection algorithm was also evaluated for alternative combinations of epidemiologic parameters. The results of our study confirmed that under certain conditions automated algorithms could help identify abnormal cattle mortality increases possibly related to unidentified health events. PMID:26536596
Wakschlag, Lauren S.; Choi, Seung W.; Carter, Alice S.; Hullsiek, Heide; Burns, James; McCarthy, Kimberly; Leibenluft, Ellen; Briggs-Gowan, Margaret J.
2013-01-01
Background: Temper modulation problems are both a hallmark of early childhood and a common mental health concern. Thus, characterizing specific behavioral manifestations of temper loss along a dimension from normative misbehaviors to clinically significant problems is an important step toward identifying clinical thresholds. Methods: Parent-reported patterns of temper loss were delineated in a diverse community sample of preschoolers (n = 1,490). A developmentally sensitive questionnaire, the Multidimensional Assessment of Preschool Disruptive Behavior (MAP-DB), was used to assess temper loss in terms of tantrum features and anger regulation. Specific aims were: (a) document the normative distribution of temper loss in preschoolers from normative misbehaviors to clinically concerning temper loss behaviors, and test for sociodemographic differences; (b) use Item Response Theory (IRT) to model a Temper Loss dimension; and (c) examine associations of temper loss and concurrent emotional and behavioral problems. Results: Across sociodemographic subgroups, a unidimensional Temper Loss model fit the data well. Nearly all (83.7%) preschoolers had tantrums sometimes but only 8.6% had daily tantrums. Normative misbehaviors occurred more frequently than clinically concerning temper loss behaviors. Milder behaviors tended to reflect frustration in expectable contexts, whereas clinically concerning problem indicators were unpredictable, prolonged, and/or destructive. In multivariate models, Temper Loss was associated with emotional and behavioral problems. Conclusions: Parent reports on a developmentally informed questionnaire, administered to a large and diverse sample, distinguished normative and problematic manifestations of preschool temper loss. A developmental, dimensional approach shows promise for elucidating the boundaries between normative early childhood temper loss and emergent psychopathology. PMID:22928674
Post-processing interstitialcy diffusion from molecular dynamics simulations
NASA Astrophysics Data System (ADS)
Bhardwaj, U.; Bukkuru, S.; Warrier, M.
2016-01-01
An algorithm to rigorously trace the interstitialcy diffusion trajectory in crystals is developed. The algorithm incorporates unsupervised learning and graph optimization, which obviate the need to input extra domain-specific information depending on the crystal or the temperature of the simulation. The algorithm is implemented in a flexible framework as a post-processor to molecular dynamics (MD) simulations. We describe in detail the reduction of interstitialcy diffusion into the known computational problems of unsupervised clustering and graph optimization. We also discuss the steps, computational efficiency, and key components of the algorithm. Using the algorithm, thermal interstitialcy diffusion from low to near-melting-point temperatures is studied. We encapsulate the algorithms in a modular framework with functionality to calculate diffusion coefficients, migration energies, and other trajectory properties. The study validates the algorithm by establishing the conformity of output parameters with experimental values and provides detailed insights into the interstitialcy diffusion mechanism. The algorithm, along with supporting visualizations and analysis, gives convincing details and a new approach to quantifying diffusion jumps, jump lengths, and times between jumps, and to identifying interstitials among lattice atoms.
Baudracco, J; Lopez-Villalobos, N; Holmes, C W; Comeron, E A; Macdonald, K A; Barry, T N; Friggens, N C
2012-06-01
This animal simulation model, named e-Cow, represents a single dairy cow at grazing. The model integrates algorithms from three previously published models: a model that predicts herbage dry matter (DM) intake by grazing dairy cows, a mammary gland model that predicts potential milk yield and a body lipid model that predicts genetically driven live weight (LW) and body condition score (BCS). Both nutritional and genetic drives are accounted for in the prediction of energy intake and its partitioning. The main inputs are herbage allowance (HA; kg DM offered/cow per day), metabolisable energy and NDF concentrations in herbage and supplements, supplements offered (kg DM/cow per day), type of pasture (ryegrass or lucerne), days in milk, days pregnant, lactation number, BCS and LW at calving, breed or strain of cow and genetic merit, that is, potential yields of milk, fat and protein. Separate equations are used to predict herbage intake, depending on the cutting heights at which HA is expressed. The e-Cow model is written in Visual Basic programming language within Microsoft Excel®. The model predicts whole-lactation performance of dairy cows on a daily basis, and the main outputs are the daily and annual DM intake, milk yield and changes in BCS and LW. In the e-Cow model, neither herbage DM intake nor milk yield or LW change are needed as inputs; instead, they are predicted by the e-Cow model. The e-Cow model was validated against experimental data for Holstein-Friesian cows with both North American (NA) and New Zealand (NZ) genetics grazing ryegrass-based pastures, with or without supplementary feeding and for three complete lactations, divided into weekly periods. The model was able to predict animal performance with satisfactory accuracy, with concordance correlation coefficients of 0.81, 0.76 and 0.62 for herbage DM intake, milk yield and LW change, respectively. Simulations performed with the model showed that it is sensitive to genotype by feeding environment interactions. The e-Cow model tended to overestimate the milk yield of NA genotype cows at low milk yields, while it underestimated the milk yield of NZ genotype cows at high milk yields. The approach used to define the potential milk yield of the cow and equations used to predict herbage DM intake make the model applicable for predictions in countries with temperate pastures.
X-ray simulation algorithms used in ISP
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sullivan, John P.
ISP is a simulation code which is sometimes used in the USNDS program. ISP is maintained by Sandia National Lab. However, the X-ray simulation algorithm used by ISP was written by scientists at LANL, mainly by Ed Fenimore with some contributions from John Sullivan, George Neuschaefer, and probably others. In an email to John Sullivan on July 25, 2016, Jill Rivera, ISP project lead, said "ISP uses the function xdosemeters_sim from the xgen library." This is a Fortran subroutine which is also used to simulate the X-ray response in consim (a descendant of xgen). Therefore, no separate documentation of the X-ray simulation algorithms in ISP has been written; the documentation for the consim simulation can be used.
A Contextual Fire Detection Algorithm for Simulated HJ-1B Imagery
Qian, Yonggang; Yan, Guangjian; Duan, Sibo; Kong, Xiangsheng
2009-01-01
The HJ-1B satellite, which was launched on September 6, 2008, is one of the small satellites placed in the constellation for disaster prediction and monitoring. HJ-1B imagery, containing fires of various sizes and temperatures in a wide range of terrestrial biomes and climates and including RED, NIR, MIR, and TIR channels, was simulated in this paper. Based on the MODIS version 4 contextual algorithm and the characteristics of the HJ-1B sensor, a contextual fire detection algorithm was proposed and tested using the simulated HJ-1B data. It was evaluated by the probability of fire detection and false alarm as functions of fire temperature and fire area. Results indicate that when the simulated fire area is larger than 45 m² and the simulated fire temperature is above 800 K, the algorithm has a high probability of detection. If the simulated fire area is smaller than 10 m², the fire may be detected only when the simulated fire temperature exceeds 900 K. For fire areas of about 100 m², the proposed algorithm has a higher detection probability than the MODIS product. Finally, the omission and commission errors, which are important factors affecting the performance of the algorithm, were evaluated. It has been demonstrated that HJ-1B satellite data are much more sensitive to smaller and cooler fires than MODIS or AVHRR data, and the improved capabilities of HJ-1B data will offer a fine opportunity for fire detection. PMID:22399950
Urbanowicz, Ryan J; Kiralis, Jeff; Sinnott-Armstrong, Nicholas A; Heberling, Tamra; Fisher, Jonathan M; Moore, Jason H
2012-10-01
Geneticists who look beyond single locus disease associations require additional strategies for the detection of complex multi-locus effects. Epistasis, a multi-locus masking effect, presents a particular challenge, and has been the target of bioinformatic development. Thorough evaluation of new algorithms calls for simulation studies in which known disease models are sought. To date, the best methods for generating simulated multi-locus epistatic models rely on genetic algorithms. However, such methods are computationally expensive, difficult to adapt to multiple objectives, and unlikely to yield models with a precise form of epistasis which we refer to as pure and strict. Purely and strictly epistatic models constitute the worst-case in terms of detecting disease associations, since such associations may only be observed if all n-loci are included in the disease model. This makes them an attractive gold standard for simulation studies considering complex multi-locus effects. We introduce GAMETES, a user-friendly software package and algorithm which generates complex biallelic single nucleotide polymorphism (SNP) disease models for simulation studies. GAMETES rapidly and precisely generates random, pure, strict n-locus models with specified genetic constraints. These constraints include heritability, minor allele frequencies of the SNPs, and population prevalence. GAMETES also includes a simple dataset simulation strategy which may be utilized to rapidly generate an archive of simulated datasets for given genetic models. We highlight the utility and limitations of GAMETES with an example simulation study using MDR, an algorithm designed to detect epistasis. GAMETES is a fast, flexible, and precise tool for generating complex n-locus models with random architectures. While GAMETES has a limited ability to generate models with higher heritabilities, it is proficient at generating the lower heritability models typically used in simulation studies evaluating new algorithms. In addition, the GAMETES modeling strategy may be flexibly combined with any dataset simulation strategy. Beyond dataset simulation, GAMETES could be employed to pursue theoretical characterization of genetic models and epistasis.
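GAMETES' search procedure is not reproduced here; the sketch hand-codes the classic two-locus XOR penetrance table, which is purely and strictly epistatic at minor allele frequency 0.5 (GAMETES finds such tables at arbitrary frequencies), and samples genotype/phenotype data from it. The base penetrance and sample size are invented.

```python
import numpy as np

rng = np.random.default_rng(9)

maf = 0.5  # at MAF 0.5 the XOR table below has no marginal effects;
           # GAMETES searches for such tables at arbitrary frequencies
p_geno = np.array([(1 - maf) ** 2, 2 * maf * (1 - maf), maf ** 2])  # HWE

base = 0.05
pen = base * np.array([[0, 1, 0],      # penetrance indexed by the
                       [1, 0, 1],      # genotypes at SNP1 and SNP2
                       [0, 1, 0]], float)

# marginal penetrance at SNP1: identical for all three genotypes,
# so neither locus shows a single-locus (marginal) signal
print("SNP1 marginal penetrances:", pen @ p_geno)

def simulate(n):
    g1 = rng.choice(3, size=n, p=p_geno)
    g2 = rng.choice(3, size=n, p=p_geno)
    case = rng.random(n) < pen[g1, g2]   # disease status from penetrance
    return g1, g2, case

g1, g2, case = simulate(100_000)
print("population prevalence:", round(case.mean(), 4))
```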
Experiences with serial and parallel algorithms for channel routing using simulated annealing
NASA Technical Reports Server (NTRS)
Brouwer, Randall Jay
1988-01-01
Two algorithms for channel routing using simulated annealing are presented. Simulated annealing is an optimization methodology that allows the solution process to back out of local minima that may be entered through inappropriate selections. By properly controlling the annealing process, it is very likely that the optimal solution to an NP-complete problem such as channel routing can be found. The algorithm presented here places very relaxed restrictions on the types of allowable transformations, including overlapping nets. By relaxing that restriction and controlling overlap situations with an appropriate cost function, the algorithm becomes very flexible and can be applied to many extensions of channel routing. The selection of transformations employs a number of heuristics while retaining the pseudorandom nature of simulated annealing. The algorithm was implemented as both a serial program for a workstation and a parallel program designed for a hypercube computer. The details of the serial implementation are presented, including many of the heuristics used and some of the resulting solutions.
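To make the acceptance mechanism concrete, the following minimal Python sketch shows a generic simulated-annealing loop; the toy cost function, move generator, and geometric cooling schedule are illustrative assumptions, not the channel-routing moves or cost function used in the paper.

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=1.0, cooling=0.995, steps=10000):
    """Generic simulated annealing: uphill moves are accepted with probability
    exp(-delta/T), which is what lets the search back out of local minima."""
    x, fx, t = x0, cost(x0), t0
    best, fbest = x, fx
    for _ in range(steps):
        y = neighbor(x)
        fy = cost(y)
        # Metropolis criterion: always accept improvements, sometimes accept worse moves.
        if fy <= fx or random.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling  # geometric cooling schedule
    return best, fbest

# Toy usage: minimize a multimodal 1-D function.
best, fbest = simulated_annealing(
    cost=lambda x: x * x + 10 * math.sin(3 * x),
    neighbor=lambda x: x + random.uniform(-0.5, 0.5),
    x0=5.0,
)
```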
ERIC Educational Resources Information Center
Ceulemans, Eva; Van Mechelen, Iven; Leenen, Iwin
2007-01-01
Hierarchical classes models are quasi-order retaining Boolean decomposition models for N-way N-mode binary data. To fit these models to data, rationally started alternating least squares (or, equivalently, alternating least absolute deviations) algorithms have been proposed. Extensive simulation studies showed that these algorithms succeed quite…
D.J. Nicolsky; V.E. Romanovsky; G.G. Panteleev
2008-01-01
A variational data assimilation algorithm is developed to reconstruct thermal properties, porosity, and the parametrization of the unfrozen water content for fully saturated soils. The algorithm is tested with simulated synthetic temperatures. The simulations are performed to determine the robustness and sensitivity of the algorithm in estimating soil properties from in-situ...
NASA Astrophysics Data System (ADS)
Bylaska, Eric J.; Weare, Jonathan Q.; Weare, John H.
2013-08-01
Parallel in time simulation algorithms are presented and applied to conventional molecular dynamics (MD) and ab initio molecular dynamics (AIMD) models of realistic complexity. Assuming that a forward time integrator f (e.g., the Verlet algorithm) is available to propagate the system from time t_i (trajectory positions and velocities x_i = (r_i, v_i)) to time t_{i+1} by x_{i+1} = f_i(x_i), the dynamics problem spanning an interval from t_0 to t_M can be transformed into a root finding problem, F(X) = [x_i - f(x_{i-1})]_{i=1,...,M} = 0, for the trajectory variables. The root finding problem is solved using a variety of root finding techniques, including quasi-Newton and preconditioned quasi-Newton schemes that are all unconditionally convergent. The algorithms are parallelized by assigning a processor to each time-step entry in the columns of F(X). The relation of this approach to other recently proposed parallel in time methods is discussed, and the effectiveness of various approaches to solving the root finding problem is tested. We demonstrate that more efficient dynamical models based on simplified interactions or coarsened time-steps provide preconditioners for the root finding problem. However, for MD and AIMD simulations, such preconditioners are not required to obtain reasonable convergence, and their cost must be considered in the performance of the algorithm. The parallel in time algorithms developed are tested by applying them to MD and AIMD simulations of size and complexity similar to those encountered in present day applications. These include a 1000 Si atom MD simulation using Stillinger-Weber potentials, and a HCl + 4H2O AIMD simulation at the MP2 level. The maximum speedup (serial execution time/parallel execution time) obtained by parallelizing the Stillinger-Weber MD simulation was nearly 3.0. For the AIMD MP2 simulations, the algorithms achieved speedups of up to 14.3. The parallel in time algorithms can be implemented in a distributed computing environment using very slow TCP/IP networks. Scripts written in Python that make calls to a precompiled quantum chemistry package (NWChem) are demonstrated to provide an actual speedup of 8.2 for a 2.5 ps AIMD simulation of HCl + 4H2O at the MP2/6-31G* level. Implemented in this way, these algorithms can be used for long time high-level AIMD simulations at a modest cost using machines connected by very slow networks such as WiFi, or in different time zones connected by the Internet. The algorithms can also be used with programs that are already parallel. Using these algorithms, we are able to reduce the cost of a MP2/6-311++G(2d,2p) simulation that had reached its maximum possible speedup in the parallelization of the electronic structure calculation from 32 s/time step to 6.9 s/time step.
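As a concrete illustration of this formulation, the sketch below assembles the residual F(X) = [x_i - f(x_{i-1})]_{i=1,...,M} for a velocity-Verlet step of a single harmonic oscillator and solves it with a Newton-Krylov root finder. The oscillator, step size, and use of SciPy's generic solver are illustrative assumptions; the paper's algorithms instead assign each time-step entry to its own processor and build preconditioners from coarsened dynamics.

```python
import numpy as np
from scipy.optimize import root

dt, M = 0.1, 50            # time step and number of propagation steps (illustrative)
x0 = np.array([1.0, 0.0])  # initial position and velocity

def f(x):
    """Velocity-Verlet step for a unit-mass harmonic oscillator (force = -r)."""
    r, v = x
    r_new = r + v * dt - 0.5 * r * dt**2
    v_new = v - 0.5 * (r + r_new) * dt
    return np.array([r_new, v_new])

def F(X):
    """Residual F(X) = [x_i - f(x_{i-1})]_{i=1..M}; a root is a whole trajectory."""
    X = X.reshape(M, 2)
    prev = np.vstack([x0, X[:-1]])
    return (X - np.array([f(p) for p in prev])).ravel()

# Crude initial guess (copies of x0); a coarse integrator would precondition better.
sol = root(F, np.tile(x0, M), method='krylov')
trajectory = sol.x.reshape(M, 2)
```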
NASA Astrophysics Data System (ADS)
Huang, Yin; Chen, Jianhua; Xiong, Shaojun
2009-07-01
Mobile learning (M-learning) gives many learners the advantages of both traditional learning and e-learning. Web-based mobile-learning systems have created many new ways of learning and defined new relationships between educators and learners. Association rule mining is one of the most important fields in data mining and knowledge discovery in databases. Rule explosion is a serious problem of great concern, as conventional mining algorithms often produce too many rules for decision makers to digest. Since a web-based mobile-learning system collects vast amounts of student profile data, data mining and knowledge discovery techniques can be applied to find interesting relationships between attributes of learners, assessments, the solution strategies adopted by learners, and so on. This paper therefore focuses on a new data-mining algorithm that combines the advantages of the genetic algorithm and simulated annealing, called ARGSA (Association Rules based on an improved Genetic Simulated Annealing Algorithm), to mine association rules. The paper first takes advantage of a parallel genetic algorithm and simulated annealing designed specifically for discovering association rules. Analysis and experiments also show that the proposed method is superior to the Apriori algorithm in this mobile-learning system.
Simulation and performance of an artificial retina for 40 MHz track reconstruction
Abba, A.; Bedeschi, F.; Citterio, M.; ...
2015-03-05
We present the results of a detailed simulation of the artificial retina pattern-recognition algorithm, designed to reconstruct events with hundreds of charged-particle tracks in pixel and silicon detectors at LHCb at the LHC crossing frequency of 40 MHz. The performance of the artificial retina algorithm is assessed using the official Monte Carlo samples of the LHCb experiment. We find the performance of the retina pattern-recognition algorithm to be comparable with that of the full LHCb reconstruction algorithm.
Ma, Xiaosu; Chien, Jenny Y; Johnson, Jennal; Malone, James; Sinha, Vikram
2017-08-01
The purpose of this prospective, model-based simulation study was to evaluate the impact of various rapid-acting mealtime insulin dose-titration algorithms on glycemic control (hemoglobin A1c [HbA1c]). Seven stepwise, glucose-driven insulin dose-titration algorithms were evaluated with a model-based simulation approach using insulin lispro. Pre-meal blood glucose readings were used to adjust insulin lispro doses. Two control dosing algorithms were included for comparison: no insulin lispro (basal insulin + metformin only) or insulin lispro at fixed doses without titration. Of the seven dosing algorithms assessed, daily adjustment of the insulin lispro dose when glucose targets were met sequentially at pre-breakfast, pre-lunch, and pre-dinner demonstrated the greatest HbA1c reduction at 24 weeks compared with the other dosing algorithms. Hypoglycemia rates were comparable among the dosing algorithms, except for higher rates with the fixed-dose insulin lispro scenario (no titration), as expected. The inferior HbA1c response in the "basal plus metformin only" arm supports the additional glycemic benefit of prandial insulin lispro. Our model-based simulations support a simplified dosing algorithm that does not include carbohydrate counting but uses glucose targets for daily dose adjustment to maintain glycemic control with a low risk of hypoglycemia.
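A single step of such a stepwise, glucose-driven titration rule might look like the sketch below; the target band and one-unit step are hypothetical placeholders rather than the algorithms evaluated in the study, which additionally gate the adjustment on targets being met sequentially at breakfast, lunch, and dinner.

```python
def titrate_dose(dose, pre_meal_glucose, target_low=70.0, target_high=130.0, step=1.0):
    """One glucose-driven adjustment of a mealtime insulin dose (units).
    Thresholds (mg/dL) and the 1-unit step are hypothetical placeholders."""
    if pre_meal_glucose > target_high:
        return dose + step            # above target: increase the next dose
    if pre_meal_glucose < target_low:
        return max(0.0, dose - step)  # hypoglycemia risk: decrease the dose
    return dose                       # within target: leave the dose unchanged

# Example: a 6-unit lunchtime dose and a 150 mg/dL pre-lunch reading.
print(titrate_dose(6.0, 150.0))  # -> 7.0
```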
Estimating rare events in biochemical systems using conditional sampling.
Sundar, V S
2017-01-28
The paper focuses on the development of variance reduction strategies to estimate rare events in biochemical systems. Obtaining such a probability using brute-force Monte Carlo simulations in conjunction with the stochastic simulation algorithm (Gillespie's method) is computationally prohibitive. To circumvent this, importance sampling tools such as the weighted stochastic simulation algorithm and the doubly weighted stochastic simulation algorithm have been proposed. However, these strategies require an additional step of determining the important region to sample from, which is not straightforward for most problems. In this paper, we apply the subset simulation method, developed as a variance reduction tool in the context of structural engineering, to the problem of rare event estimation in biochemical systems. The main idea is that the rare event probability is expressed as a product of more frequent conditional probabilities. These conditional probabilities are estimated with high accuracy using Monte Carlo simulations, specifically the Markov chain Monte Carlo method with the modified Metropolis-Hastings algorithm. Generating sample realizations of the state vector using the stochastic simulation algorithm is viewed as mapping the discrete-state continuous-time random process to a standard normal random variable vector. This viewpoint opens up the possibility of applying more sophisticated and efficient sampling schemes developed elsewhere to problems in stochastic chemical kinetics. The results obtained using the subset simulation method are compared with existing variance reduction strategies for a few benchmark problems, and a satisfactory improvement in computational time is demonstrated.
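The key idea, expressing a rare-event probability as a product of more frequent conditional probabilities, can be illustrated with a minimal subset-simulation sketch in standard normal space. The level probability p0, the proposal width, and the plain random-walk Metropolis kernel (rather than the component-wise modified Metropolis-Hastings used in the paper) are simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def subset_simulation(g, d, b, n=1000, p0=0.1, max_levels=10):
    """Estimate P(g(U) > b) for U ~ N(0, I_d) as a product of conditional
    probabilities P(g > b_k | g > b_{k-1}) over adaptive levels b_k."""
    U = rng.standard_normal((n, d))
    G = np.array([g(u) for u in U])
    p = 1.0
    for _ in range(max_levels):
        order = np.argsort(G)[::-1]            # samples sorted by g, descending
        n_seed = int(p0 * n)
        b_k = G[order[n_seed - 1]]             # adaptive intermediate threshold
        if b_k >= b:
            return p * np.mean(G > b)          # final level reached
        p *= p0
        seeds, g_seeds = U[order[:n_seed]], G[order[:n_seed]]
        # Markov chains from the seeds, targeting N(0, I_d) restricted to {g > b_k}.
        new_U, new_G = [], []
        for s, gs in zip(seeds, g_seeds):
            for _ in range(n // n_seed):
                cand = s + 0.8 * rng.standard_normal(d)
                if rng.random() < np.exp(0.5 * (s @ s - cand @ cand)):
                    gc = g(cand)
                    if gc > b_k:               # stay inside the conditional level set
                        s, gs = cand, gc
                new_U.append(s)
                new_G.append(gs)
        U, G = np.array(new_U), np.array(new_G)
    return p * np.mean(G > b)

# Toy check: P(U > 3) for a single standard normal is about 1.35e-3.
print(subset_simulation(lambda u: u[0], d=1, b=3.0))
```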
Wu, Tianmin; Yang, Lijiang; Zhang, Ruiting; Shao, Qiang; Zhuang, Wei
2013-07-25
We simulated the equilibrium isotope-edited FTIR and 2DIR spectra of the β-hairpin peptide trpzip2 at a series of temperatures. The simulation was based on configuration distributions generated using the GB(OBC) implicit solvent model and the integrated tempering sampling (ITS) technique. A soaking procedure was adopted to generate peptide-in-explicit-solvent configurations for the spectroscopy calculations. The nonlinear exciton propagation (NEP) method was then used to calculate the spectra. In agreement with the experiments, the intensities and ellipticities of the isotope-shifted peaks in our simulated signals show site-specific temperature dependences, which suggest inhomogeneous local thermal stabilities along the peptide chain. Our simulation thus offers a cost-effective means to understand a peptide's conformational changes and related IR spectra across its thermal unfolding transition.
NASA Technical Reports Server (NTRS)
Chen, CHIEN-C.; Hui, Elliot; Okamoto, Garret
1992-01-01
Spatial acquisition using the sun-lit Earth as a beacon source provides several advantages over active beacon-based systems for deep-space optical communication systems. However, since the angular extent of the Earth image is large compared to the laser beam divergence, the acquisition subsystem must be capable of resolving the image to derive the proper pointing orientation. The algorithms used must be capable of deducing the receiver location given the blurring introduced by the imaging optics and the large Earth albedo fluctuation. Furthermore, because of the complexity of modelling the Earth and the tracking algorithms, an accurate estimate of the algorithm accuracy can only be obtained via simulation using realistic Earth images. An image simulator was constructed for this purpose, and the results of the simulation runs are reported.
NASA Astrophysics Data System (ADS)
Zhang, Chi; Ren, Wei
2017-09-01
Central Asia covers a large land area of 5 × 10^6 km2 and has unique temperate dryland ecosystems, including over 80% of the world's temperate deserts, and it has been experiencing dramatic warming and drought in recent decades. How the temperate dryland responds to complex climate change, however, is still far from clear. This study quantitatively investigates terrestrial net primary productivity (NPP) in response to temperature, precipitation, and atmospheric CO2 during 1980-2014, using the Arid Ecosystem Model, which realistically predicts ecosystems' responses to changes in climate and atmospheric CO2 according to model evaluation against 28 field experiments/observations. The simulation results show that, unlike other middle-/high-latitude regions, NPP in central Asia declined by 10% (0.12 × 10^15 g C) since the 1980s in response to a warmer and drier climate. The dryland's response to warming was weak, while its cropland was sensitive to the CO2 fertilization effect (CFE). However, the CFE was inhibited by the long-term drought from 1998 to 2008, and the positive effect of warming on photosynthesis was largely offset by the enhanced water deficit. The complex interactive effects among climate drivers, the distinct responses of diverse ecosystem types, and intensive and heterogeneous climatic changes led to highly complex NPP changing patterns in central Asia, of which 69% was dominated by precipitation variation and 20% and 9% were dominated by CO2 and temperature, respectively. The Turgay Plateau in northern Kazakhstan and southern Xinjiang in China are hot spots of NPP degradation in response to climate change during the past three decades and into the future.
Schwerdtfeger, Peter; Smits, Odile; Pahl, Elke; Jerabek, Paul
2018-06-12
State-of-the-art relativistic coupled-cluster theory is used to construct many-body potentials for the rare gas element radon in order to determine its bulk properties, including the solid-to-liquid phase transition, from parallel tempering Monte Carlo simulations through either direct sampling of the bulk or a finite cluster approach. The calculated melting temperatures are 201(3) K and 201(6) K from bulk simulations and from extrapolation of finite cluster values, respectively. This is in excellent agreement with the often debated (but widely cited) and only available value of 202 K, dating back to measurements by Gray and Ramsay in 1909. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
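The parallel tempering machinery itself is generic: neighboring replicas at inverse temperatures beta_i and beta_j exchange configurations with Metropolis probability min(1, exp[(beta_i - beta_j)(E_i - E_j)]). The sketch below shows only that swap step, with energies standing in for full configurations; it is independent of the coupled-cluster potentials used in this study, and the temperature ladder is an arbitrary example.

```python
import math
import random

def attempt_swap(betas, energies, i, j):
    """Metropolis acceptance for a replica swap in parallel tempering.
    Swapping the energies here stands in for swapping the configurations."""
    delta = (betas[i] - betas[j]) * (energies[i] - energies[j])
    if delta >= 0 or random.random() < math.exp(delta):
        energies[i], energies[j] = energies[j], energies[i]
        return True
    return False

# Example: four replicas spanning a temperature ladder.
betas = [1.0, 0.8, 0.6, 0.4]
energies = [-105.0, -98.0, -91.0, -80.0]
attempt_swap(betas, energies, 0, 1)
```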
NASA Astrophysics Data System (ADS)
Holm, J. A.; Knox, R. G.; Koven, C.; Riley, W. J.; Bisht, G.; Fisher, R.; Christoffersen, B. O.; Dietze, M.; Chambers, J. Q.
2017-12-01
The inclusion of dynamic vegetation demography in Earth System Models (ESMs) has been identified as a critical step in moving ESMs towards more realistic representations of plant ecology and the processes that govern climatically important fluxes of carbon, energy, and water. Successfully applying dynamic vegetation models, with process-based approaches to simulating plant demography, succession, and response to disturbance without climate envelopes, is a challenging endeavor at the global scale. We integrated demographic processes using the Functionally-Assembled Terrestrial Ecosystem Simulator (FATES) in the newly developed ACME Land Model (ALM). We then use a globally gridded ALM-FATES simulation for the first time to investigate plant functional type (PFT) distributions and dynamic turnover rates. Initial global simulations successfully include six interacting and competing PFTs (ranging from tropical to boreal, evergreen, deciduous, needleleaf and broadleaf); the inclusion of more PFTs is planned. Global maps of net primary productivity, leaf area index, and total vegetation biomass from ALM-FATES matched the patterns and values of CLM4.5-BGC and MODIS estimates. We also present techniques for PFT parameterization based on the Predictive Ecosystem Analyzer (PEcAn), field-based turnover rates, improved PFT groupings based on trait trade-offs, and improved representation of multiple canopy positions. Finally, we applied the improved ALM-FATES model at a central Amazon tropical site and western U.S. temperate sites, demonstrating improvements in predicted PFT size- and age-structure and regional distribution. Results from the Amazon tropical site address the ability and magnitude of a tropical forest to act as a carbon sink by 2100 under a doubling of CO2, while results from the temperate sites address the response of forest mortality to increasing drought.
NASA Astrophysics Data System (ADS)
Mononen, Mika E.; Tanska, Petri; Isaksson, Hanna; Korhonen, Rami K.
2016-02-01
We present a novel algorithm combined with computational modeling to simulate the development of knee osteoarthritis. The degeneration algorithm was based on excessive, cumulatively accumulated stresses within knee joint cartilage during physiological gait loading. In the algorithm, the collagen network stiffness of the cartilage was reduced iteratively wherever excessive maximum principal stresses were observed. The developed algorithm was tested and validated against experimental baseline and 4-year follow-up Kellgren-Lawrence grades, indicating different levels of cartilage degeneration at the tibiofemoral contact region. Test groups consisted of normal-weight and obese subjects of the same gender and similar age and height, without osteoarthritic changes. The algorithm accurately simulated cartilage degeneration in the overweight subject group, as compared with the Kellgren-Lawrence findings, while the healthy subject group's joint remained intact. Furthermore, the developed algorithm followed the experimentally observed trend of cartilage degeneration in the obese group (R2 = 0.95, p < 0.05, experiments vs. model), in which the rapid degeneration immediately after the initiation of osteoarthritis (0-2 years, p < 0.001) was followed by slow or negligible degeneration (2-4 years, p > 0.05). The proposed algorithm thus shows great potential for objectively simulating the progression of knee osteoarthritis.
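The iterative structure of such a degeneration algorithm can be sketched in a few lines; here the finite element gait analysis is replaced by a placeholder callable, and the stress threshold and 5% softening factor are assumptions for illustration, not the calibrated values of the study.

```python
def degenerate(stiffness, compute_stress, threshold, reduction=0.95, max_iters=100):
    """Iteratively soften the collagen stiffness of elements whose maximum
    principal stress exceeds a threshold, then re-run the stress analysis.
    compute_stress is a stand-in for the finite element gait simulation."""
    for _ in range(max_iters):
        stresses = compute_stress(stiffness)
        overloaded = [e for e, s in enumerate(stresses) if s > threshold]
        if not overloaded:
            break  # no excessive stresses remain: degeneration has stabilized
        for e in overloaded:
            stiffness[e] *= reduction  # local collagen network degradation
    return stiffness

# Toy usage: stress proportional to stiffness under a fixed notional load.
final = degenerate([10.0, 12.0, 8.0], lambda k: [2.0 * ki for ki in k], threshold=18.0)
```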
Multiobjective generalized extremal optimization algorithm for simulation of daylight illuminants
NASA Astrophysics Data System (ADS)
Kumar, Srividya Ravindra; Kurian, Ciji Pearl; Gomes-Borges, Marcos Eduardo
2017-10-01
Daylight illuminants are widely used as references for color quality testing and optical vision testing applications. Presently used daylight simulators make use of fluorescent bulbs that are not tunable and occupy more space inside the quality testing chambers. By designing a spectrally tunable LED light source with an optimal number of LEDs, cost, space, and energy can be saved. This paper describes an application of the generalized extremal optimization (GEO) algorithm for selecting the appropriate quantity and quality of the LEDs that compose the light source. The multiobjective approach of this algorithm seeks the best spectral simulation with minimum fitness error relative to the target spectrum, a correlated color temperature (CCT) equal to that of the target spectrum, a high color rendering index (CRI), and the luminous flux required for testing applications. GEO is a global search algorithm based on phenomena of natural evolution and is especially designed for complex optimization problems. Several simulations have been conducted to validate the performance of the algorithm. The methodology applied to model the LEDs, together with the theoretical basis for the CCT and CRI calculations, is presented in this paper. A comparative analysis of the M-GEO evolutionary algorithm against the conventional deterministic Levenberg-Marquardt algorithm is also presented.
A Simulation of Readiness-Based Sparing Policies
2017-06-01
variant of a greedy heuristic algorithm to set stock levels and estimate overall WS availability. Our discrete event simulation is then used to test the...available in the optimization tools.
Parallelization of sequential Gaussian, indicator and direct simulation algorithms
NASA Astrophysics Data System (ADS)
Nunes, Ruben; Almeida, José A.
2010-08-01
Improving the performance and robustness of algorithms on new high-performance parallel computing architectures is a key issue in efficiently performing 2D and 3D studies with large amounts of data. In geostatistics, sequential simulation algorithms are good candidates for parallelization. When compared with other computational applications in the geosciences (such as fluid flow simulators), sequential simulation software is not extremely computationally intensive, but parallelization can make it more efficient and creates alternatives for its integration in inverse modelling approaches. This paper describes the implementation and benchmarking of parallel versions of the three classic sequential simulation algorithms: direct sequential simulation (DSS), sequential indicator simulation (SIS) and sequential Gaussian simulation (SGS). For this purpose, the source used was GSLIB, but the entire code was extensively modified to take into account the parallelization approach and was also rewritten in the C programming language. The paper also explains the parallelization strategy and the main modifications in detail. Regarding the integration of secondary information, the DSS algorithm is able to perform simple kriging with local means, kriging with an external drift and collocated cokriging with both local and global correlations. SIS includes a local correction of probabilities. Finally, a brief comparison is presented of simulation results using one, two and four processors. All performance tests were carried out on 2D soil data samples. The source code is completely open source and easy to read. It should be noted that the code is only fully compatible with Microsoft Visual C and should be adapted for other systems/compilers.
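For orientation, the serial kernel being parallelized can be reduced to a minimal unconditional 1-D sequential Gaussian simulation: visit the nodes along a random path and draw each value from its simple-kriging conditional distribution given everything simulated so far. The exponential covariance and brute-force conditioning on all previous nodes (no search neighborhood) are simplifying assumptions relative to a production GSLIB-style code.

```python
import numpy as np

rng = np.random.default_rng(1)

def sgs_1d(coords, cov, mean=0.0):
    """Unconditional sequential Gaussian simulation along a random path,
    drawing each node from its simple-kriging conditional distribution."""
    n = len(coords)
    z = np.full(n, np.nan)
    done = []
    for idx in rng.permutation(n):
        if not done:
            z[idx] = mean + np.sqrt(cov(0.0)) * rng.standard_normal()
        else:
            pts = np.array(done)
            C = cov(np.abs(coords[pts, None] - coords[None, pts]))  # data-data covariance
            c = cov(np.abs(coords[pts] - coords[idx]))              # data-node covariance
            w = np.linalg.solve(C, c)                               # simple-kriging weights
            mu = mean + w @ (z[pts] - mean)
            var = max(cov(0.0) - w @ c, 0.0)
            z[idx] = mu + np.sqrt(var) * rng.standard_normal()
        done.append(idx)
    return z

coords = np.linspace(0.0, 10.0, 60)
field = sgs_1d(coords, cov=lambda h: np.exp(-h / 2.0))  # exponential covariance
```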
Jung, Jaewoon; Mori, Takaharu; Kobayashi, Chigusa; Matsunaga, Yasuhiro; Yoda, Takao; Feig, Michael; Sugita, Yuji
2015-07-01
GENESIS (Generalized-Ensemble Simulation System) is a new software package for molecular dynamics (MD) simulations of macromolecules. It has two MD simulators, called ATDYN and SPDYN. ATDYN is parallelized based on an atomic decomposition algorithm for the simulations of all-atom force-field models as well as coarse-grained Go-like models. SPDYN is highly parallelized based on a domain decomposition scheme, allowing large-scale MD simulations on supercomputers. Hybrid schemes combining OpenMP and MPI are used in both simulators to target modern multicore computer architectures. Key advantages of GENESIS are (1) the highly parallel performance of SPDYN for very large biological systems consisting of more than one million atoms and (2) the availability of various REMD algorithms (T-REMD, REUS, multi-dimensional REMD for both all-atom and Go-like models under the NVT, NPT, NPAT, and NPγT ensembles). The former is achieved by a combination of the midpoint cell method and the efficient three-dimensional Fast Fourier Transform algorithm, where the domain decomposition space is shared in real-space and reciprocal-space calculations. Other features in SPDYN, such as avoiding concurrent memory access, reducing communication times, and usage of parallel input/output files, also contribute to the performance. We show the REMD simulation results of a mixed (POPC/DMPC) lipid bilayer as a real application using GENESIS. GENESIS is released as free software under the GPLv2 licence and can be easily modified for the development of new algorithms and molecular models. WIREs Comput Mol Sci 2015, 5:310-323. doi: 10.1002/wcms.1220.
NASA Technical Reports Server (NTRS)
Dagum, Leonardo
1989-01-01
The data parallel implementation of a particle simulation for hypersonic rarefied flow described by Dagum associates a single parallel data element with each particle in the simulation. The simulated space is divided into discrete regions called cells containing a variable and constantly changing number of particles. The implementation requires a global sort of the parallel data elements so as to arrange them in an order that allows immediate access to the information associated with cells in the simulation. Described here is a very fast algorithm for performing the necessary ranking of the parallel data elements. The performance of the new algorithm is compared with that of the microcoded instruction for ranking on the Connection Machine.
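The essential idea, ranking particles so that each cell's members end up contiguous in memory, is a counting sort on cell indices. The numpy sketch below shows the serial analogue; the data-parallel version in the paper distributes the histogram, prefix sum, and permutation across processing elements, and the problem sizes here are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
n_particles, n_cells = 12, 4
cell_of = rng.integers(0, n_cells, n_particles)  # cell index of each particle

# Histogram of cell occupancy, then an exclusive prefix sum gives each cell's
# starting offset; a stable sort by cell index yields the per-cell ranking.
counts = np.bincount(cell_of, minlength=n_cells)
offsets = np.concatenate(([0], np.cumsum(counts)[:-1]))
order = np.argsort(cell_of, kind='stable')

# Particles belonging to cell c now occupy one contiguous slice of `order`.
for c in range(n_cells):
    print(c, order[offsets[c]:offsets[c] + counts[c]])
```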
STOCHASTIC INTEGRATION FOR TEMPERED FRACTIONAL BROWNIAN MOTION.
Meerschaert, Mark M; Sabzikar, Farzad
2014-07-01
Tempered fractional Brownian motion is obtained when the power law kernel in the moving average representation of a fractional Brownian motion is multiplied by an exponential tempering factor. This paper develops the theory of stochastic integrals for tempered fractional Brownian motion. Along the way, we develop some basic results on tempered fractional calculus.
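A crude numerical illustration of this construction is to discretize the moving-average representation directly: temper the power-law kernel with an exponential factor and convolve it with Gaussian white noise. The truncation, grid, and normalization below are rough assumptions for visualization only; exact simulation would instead use the process covariance.

```python
import numpy as np

rng = np.random.default_rng(3)

def tempered_fbm(n, dt, hurst, lam):
    """Approximate a tempered fractional Brownian motion path by convolving
    Gaussian white noise with the kernel s**(H - 1/2) * exp(-lam * s).
    Setting lam = 0 recovers an (equally crude) fractional Brownian motion."""
    s = np.arange(1, n + 1) * dt
    kernel = s ** (hurst - 0.5) * np.exp(-lam * s)  # tempered power-law kernel
    noise = rng.standard_normal(2 * n) * np.sqrt(dt)
    path = np.convolve(noise, kernel, mode='valid')[:n]
    return path - path[0]  # pin the path at zero

x = tempered_fbm(n=1000, dt=0.01, hurst=0.7, lam=1.0)
```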
Optimization of Operations Resources via Discrete Event Simulation Modeling
NASA Technical Reports Server (NTRS)
Joshi, B.; Morris, D.; White, N.; Unal, R.
1996-01-01
The resource levels required for operation and support of reusable launch vehicles are typically defined through discrete event simulation modeling. Minimizing these resources constitutes an optimization problem involving discrete variables and simulation. Conventional approaches to solve such optimization problems involving integer valued decision variables are the pattern search and statistical methods. However, in a simulation environment that is characterized by search spaces of unknown topology and stochastic measures, these optimization approaches often prove inadequate. In this paper, we have explored the applicability of genetic algorithms to the simulation domain. Genetic algorithms provide a robust search strategy that does not require continuity and differentiability of the problem domain. The genetic algorithm successfully minimized the operation and support activities for a space vehicle, through a discrete event simulation model. The practical issues associated with simulation optimization, such as stochastic variables and constraints, were also taken into consideration.
Software for Simulating a Complex Robot
NASA Technical Reports Server (NTRS)
Goza, S. Michael
2003-01-01
RoboSim (Robot Simulation) is a computer program that simulates the poses and motions of the Robonaut, a developmental anthropomorphic robot that has a complex system of joints with 43 degrees of freedom and multiple modes of operation and control. RoboSim performs a full kinematic simulation of all degrees of freedom. It also includes interface components that duplicate the functionality of the real Robonaut interface with control software and human operators. Basically, users see no difference between the real Robonaut and the simulation. Consequently, new control algorithms can be tested by computational simulation, without risk to the Robonaut hardware and without using excessive Robonaut-hardware experimental time, which is always at a premium. Previously developed software incorporated into RoboSim includes Enigma (for graphical displays), OSCAR (for kinematical computations), and NDDS (for communication between the Robonaut and external software). In addition, RoboSim incorporates unique inverse-kinematical algorithms for chains of joints that have fewer than six degrees of freedom (e.g., finger joints). In comparison with the algorithms of OSCAR, these algorithms are more readily adaptable and provide better results when using equivalent sets of data.
Computational Thermodynamics Characterization of 7075, 7039, and 7020 Aluminum Alloys Using JMatPro
2011-09-01
parameters of temperature and time may be selected to simulate effects on microstructure during annealing, solution treating, quenching, and tempering...nucleation may be taken into account by use of a wetting angle function. Activation energy may be taken into account for rapidly quenched alloys...the stable forms of precipitates that result from solutionizing, annealing or intermediate heat treatment, and phase formation during nonequilibrium
Estimating the capital recovery costs of alternative patch retention treatments in eastern hardwoods
Chris B. LeDoux; Andrew Whitman
2006-01-01
We used a simulation model to estimate the economic opportunity costs and the density of large stems retained for patch retention in two temperate oak stands representative of the oak/hickory forest type in the eastern United States. Opportunity/retention costs ranged from $321.0 to $760.7/ha [$129.9 to $307.8/acre] depending on the species mix in the stand, the...
Exact and approximate stochastic simulation of intracellular calcium dynamics.
Wieder, Nicolas; Fink, Rainer H A; Wegner, Frederic von
2011-01-01
In simulations of chemical systems, the main task is to find an exact or approximate solution of the chemical master equation (CME) that satisfies certain constraints with respect to computation time and accuracy. While Brownian motion simulations of single molecules are often too time consuming to represent the mesoscopic level, the classical Gillespie algorithm is a stochastically exact algorithm that provides satisfying results in the representation of calcium microdomains. Gillespie's algorithm can be approximated via the tau-leap method and the chemical Langevin equation (CLE). Both methods lead to a substantial acceleration in computation time and a relatively small decrease in accuracy. Elimination of the noise terms leads to the classical, deterministic reaction rate equations (RRE). For complex multiscale systems, hybrid simulations are increasingly proposed to combine the advantages of stochastic and deterministic algorithms. Often used exemplary cell types in this context are striated muscle cells (e.g., cardiac and skeletal muscle cells). The properties of these cells are well described, and they express many common calcium-dependent signaling pathways. The purpose of the present paper is to provide an overview of the aforementioned simulation approaches and their mutual relationships in the spectrum ranging from stochastic to deterministic algorithms.
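For reference, the stochastically exact direct method fits in a few lines; the birth-death example at the end is an arbitrary illustration, not one of the calcium models discussed above.

```python
import math
import random

def gillespie(x, propensities, stoich, t_end):
    """Gillespie's direct method: draw an exponential waiting time from the
    total propensity, then pick the firing reaction proportionally to its rate."""
    t, trace = 0.0, [(0.0, tuple(x))]
    while t < t_end:
        a = propensities(x)
        a0 = sum(a)
        if a0 == 0.0:
            break  # no reaction can fire any more
        t += -math.log(1.0 - random.random()) / a0
        r = random.random() * a0
        j, acc = 0, a[0]
        while acc < r:
            j += 1
            acc += a[j]
        x = [xi + s for xi, s in zip(x, stoich[j])]
        trace.append((t, tuple(x)))
    return trace

# Birth-death toy model: 0 -> X at rate 5, X -> 0 at rate 0.1 per molecule.
trace = gillespie([0], lambda x: [5.0, 0.1 * x[0]], [(1,), (-1,)], t_end=100.0)
```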
An adaptive bias - hybrid MD/kMC algorithm for protein folding and aggregation.
Peter, Emanuel K; Shea, Joan-Emma
2017-07-05
In this paper, we present a novel hybrid Molecular Dynamics/kinetic Monte Carlo (MD/kMC) algorithm and apply it to protein folding and aggregation in explicit solvent. The new algorithm uses a dynamical definition of biases throughout the MD component of the simulation, normalized in relation to the unbiased forces. The algorithm guarantees sampling of the underlying ensemble in dependence on one average linear coupling factor 〈α〉_τ. We test the validity of the kinetics in simulations of dialanine and compare dihedral transition kinetics with long-time MD simulations. We find that for low 〈α〉_τ values, the kinetics are in good quantitative agreement. In folding simulations of TrpCage and TrpZip4 in explicit solvent, we also find good quantitative agreement with experimental results and prior MD/kMC simulations. Finally, we apply our algorithm to study growth of the Alzheimer amyloid Aβ16-22 fibril by monomer addition. We observe two possible binding modes, one at the extremity of the fibril (elongation) and one on the surface of the fibril (lateral growth), on timescales ranging from ns to 8 μs.
NASA Astrophysics Data System (ADS)
Geneva, Nicholas; Wang, Lian-Ping
2015-11-01
In the past 25 years, the mesoscopic lattice Boltzmann method (LBM) has become an increasingly popular approach to simulate incompressible flows, including turbulent flows. While LBM solves for more solution variables than the conventional CFD approach based on the macroscopic Navier-Stokes equation, it also offers opportunities for more efficient parallelization. In this talk we will describe several different algorithms that have been developed over the past 10-plus years to implement the two core steps of LBM, collision and streaming, more effectively than standard approaches. The application of these algorithms spans LBM simulations ranging from basic channel flows to particle-laden flows. We will cover the essential details of the implementation of each algorithm for simple 2D flows, as well as the challenges one faces when using a given algorithm for more complex simulations. The key is to explore the best use of data structures and cache memory. Two basic data structures will be discussed, and the importance of effective data storage in maximizing a CPU's cache will be addressed. The performance of a 3D turbulent channel flow simulation using these different algorithms and data structures will be compared, along with important hardware-related issues.
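As a baseline for the streaming step that these algorithms optimize, here is the naive D2Q9 propagation on a periodic lattice using np.roll; the grid size is arbitrary, and the more sophisticated schemes alluded to in the talk achieve the same result with far less data movement by exploiting data layout and cache.

```python
import numpy as np

# D2Q9 lattice velocities: rest particle plus 4 axis and 4 diagonal directions.
velocities = [(0, 0), (1, 0), (0, 1), (-1, 0), (0, -1),
              (1, 1), (-1, 1), (-1, -1), (1, -1)]

def stream(f):
    """Naive streaming: shift every population along its lattice velocity with
    periodic wrap-around. Optimized algorithms avoid these whole-array copies."""
    return np.stack([np.roll(f[k], shift=v, axis=(0, 1))
                     for k, v in enumerate(velocities)])

f = np.random.rand(9, 64, 64)  # nine populations on a 64 x 64 periodic lattice
f = stream(f)
```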
NASA Astrophysics Data System (ADS)
Arefi, Hadi H.; Yamamoto, Takeshi
2017-12-01
Conventional molecular dynamics (cMD) simulation has a well-known limitation in accessible time and length scales, and thus various enhanced sampling techniques have been proposed to alleviate the problem. In this paper, we explore the utility of replica exchange with solute tempering (REST) (i.e., a variant of Hamiltonian replica exchange methods) to simulate the self-assembly of a supramolecular polymer in explicit solvent, and compare its performance with temperature-based replica exchange MD (T-REMD) as well as cMD. As a test system, we consider a relatively simple all-atom model of supramolecular polymerization (namely, benzene-1,3,5-tricarboxamides in methylcyclohexane solvent). Our results show that both REST and T-REMD are able to predict highly ordered polymer structures with helical H-bonding patterns, in contrast to cMD, which completely fails to obtain such a structure for the present model. At the same time, we have also encountered a technical challenge (i.e., an aggregation-dispersion transition and the resulting bottleneck for replica traversal), which is illustrated numerically. Since the computational cost of REST scales more moderately than that of T-REMD, we expect that REST will be useful for studying the self-assembly of larger systems in solution with enhanced rearrangement of monomers.
Lu, Zhonghua; Arikatla, Venkata S; Han, Zhongqing; Allen, Brian F; De, Suvranu
2014-12-01
High-frequency electricity is used in the majority of surgical interventions. However, modern computer-based training and simulation systems rely on physically unrealistic models that fail to capture the interplay of the electrical, mechanical and thermal properties of biological tissue. We present a real-time and physically realistic simulation of electrosurgery by modelling the electrical, thermal and mechanical properties as three iteratively solved finite element models. To provide subfinite-element graphical rendering of vaporized tissue, a dual-mesh dynamic triangulation algorithm based on isotherms is proposed. The block compressed row storage (BCRS) structure is shown to be critical in allowing computationally efficient changes in the tissue topology due to vaporization. We have demonstrated our physics-based electrosurgery cutting algorithm through various examples. Our matrix manipulation algorithms designed for topology changes have shown low computational cost. Our simulator offers substantially greater physical fidelity compared to previous simulators that use simple geometry-based heat characterization. Copyright © 2013 John Wiley & Sons, Ltd.
Differential evolution-simulated annealing for multiple sequence alignment
NASA Astrophysics Data System (ADS)
Addawe, R. C.; Addawe, J. M.; Sueño, M. R. K.; Magadia, J. C.
2017-10-01
Multiple sequence alignments (MSAs) are used in the analysis of molecular evolution and sequence-structure relationships. In this paper, a hybrid algorithm, Differential Evolution-Simulated Annealing (DESA), is applied to optimizing multiple sequence alignments based on structural information, non-gap percentage and totally conserved columns. DESA is a robust algorithm characterized by self-organization, mutation, crossover, and an SA-like selection scheme for the strategy parameters. Here, the MSA problem is treated as a multi-objective optimization problem for the hybrid evolutionary algorithm; thus, we name the algorithm DESA-MSA. Simulated sequences and alignments were generated to evaluate the accuracy and efficiency of DESA-MSA using different indel sizes, sequence lengths, deletion rates and insertion rates. The proposed hybrid algorithm obtained acceptable solutions, particularly for the MSA problem evaluated on the three objectives.
An efficient hybrid method for stochastic reaction-diffusion biochemical systems with delay
NASA Astrophysics Data System (ADS)
Sayyidmousavi, Alireza; Ilie, Silvana
2017-12-01
Many chemical reactions, such as gene transcription and translation in living cells, need a certain time to finish once they are initiated. Simulating stochastic models of reaction-diffusion systems with delay can be computationally expensive. In the present paper, a novel hybrid algorithm is proposed to accelerate the stochastic simulation of delayed reaction-diffusion systems. The delayed reactions may be of consuming or non-consuming delay type. The algorithm is designed for moderately stiff systems in which the events can be partitioned into slow and fast subsets according to their propensities. The proposed algorithm is applied to three benchmark problems and the results are compared with those of the delayed Inhomogeneous Stochastic Simulation Algorithm. The numerical results show that the new hybrid algorithm achieves considerable speed-up in the run time and very good accuracy.
A strategy for quantum algorithm design assisted by machine learning
NASA Astrophysics Data System (ADS)
Bang, Jeongho; Ryu, Junghee; Yoo, Seokwon; Pawłowski, Marcin; Lee, Jinhyoung
2014-07-01
We propose a method for quantum algorithm design assisted by machine learning. The method uses a quantum-classical hybrid simulator, where a ‘quantum student’ is being taught by a ‘classical teacher’. In other words, in our method, the learning system is supposed to evolve into a quantum algorithm for a given problem, assisted by a classical main-feedback system. Our method is applicable for designing quantum oracle-based algorithms. We chose, as a case study, an oracle decision problem, called a Deutsch-Jozsa problem. We showed by using Monte Carlo simulations that our simulator can faithfully learn a quantum algorithm for solving the problem for a given oracle. Remarkably, the learning time is proportional to the square root of the total number of parameters, rather than showing the exponential dependence found in the classical machine learning-based method.
2015-01-01
Procedure. The simulated annealing (SA) algorithm is a well-known local search metaheuristic used to address discrete, continuous, and multiobjective...design of experiments (DOE) to tune the parameters of the optimization algorithm. Section 5 shows the results of the case study. Finally, concluding...metaheuristic. The proposed method is broken down into two phases. Phase I consists of a Monte Carlo simulation to obtain the simulated percentage of failure
NASA Astrophysics Data System (ADS)
Romano, Paul Kollath
Monte Carlo particle transport methods are being considered as a viable option for high-fidelity simulation of nuclear reactors. While Monte Carlo methods offer several potential advantages over deterministic methods, there are a number of algorithmic shortcomings that would prevent their immediate adoption for full-core analyses. In this thesis, algorithms are proposed both to ameliorate the degradation in parallel efficiency typically observed for large numbers of processors and to offer a means of decomposing the large tally data that will be needed for reactor analysis. A nearest-neighbor fission bank algorithm was proposed and subsequently implemented in the OpenMC Monte Carlo code. A theoretical analysis of the communication pattern shows that the expected cost is O(√N), whereas traditional fission bank algorithms are O(N) at best. The algorithm was tested on two supercomputers, the Intrepid Blue Gene/P and the Titan Cray XK7, and demonstrated nearly linear parallel scaling up to 163,840 processor cores on a full-core benchmark problem. An algorithm for reducing the network communication arising from tally reduction was analyzed and implemented in OpenMC. The proposed algorithm groups only particle histories on a single processor into batches for tally purposes; in doing so, it avoids all network communication for tallies until the very end of the simulation. The algorithm was tested, again on a full-core benchmark, and shown to reduce network communication substantially. A model was developed to predict the impact of load imbalances on the performance of domain decomposed simulations. The analysis demonstrated that load imbalances in domain decomposed simulations arise from two distinct phenomena: non-uniform particle densities and non-uniform spatial leakage. The dominant performance penalty for domain decomposition was shown to come from these physical effects rather than from insufficient network bandwidth or high latency. The model predictions were verified with measured data from simulations in OpenMC on a full-core benchmark problem. Finally, a novel algorithm for decomposing large tally data was proposed, analyzed, and implemented and tested in OpenMC. The algorithm relies on disjoint sets of compute processes and tally servers. The analysis showed that for a range of parameters relevant to LWR analysis, the tally server algorithm should perform with minimal overhead. Tests were performed on Intrepid and Titan and demonstrated that the algorithm did indeed perform well over a wide range of parameters.
Playing by the rules? Phenotypic adaptation to temperate environments in an American marsupial
Harrigan, Ryan J.; Wayne, Robert K.
2018-01-01
Phenotypic variation along environmental gradients can provide evidence suggesting that local adaptation has shaped observed morphological disparities. These differences, in traits such as body and extremity size, as well as skin and coat pigmentation, may affect the overall fitness of individuals in their environments. The Virginia opossum (Didelphis virginiana) is a marsupial that shows phenotypic variation across its range, one that has recently expanded into temperate environments. It is unknown, however, whether the variation observed in the species fits adaptive ecogeographic patterns, or whether phenotypic change is associated with any environmental factors. Using phenotypic measurements of over 300 museum specimens of the Virginia opossum, collected throughout its distribution range, we applied regression analysis to determine whether phenotypes change along a latitudinal gradient. Then, using predictors from remote-sensing databases and a random forest algorithm, we tested environmental models to find the most important variables driving the phenotypic variation. We found that despite the recent expansion into temperate environments, the phenotypic variation in the Virginia opossum follows a latitudinal gradient fitting three adaptive ecogeographic patterns codified under Bergmann’s, Allen’s and Gloger’s rules. Temperature seasonality was an important predictor of body size variation, with larger opossums occurring at high latitudes with more seasonal environments. Annual mean temperature predicted important variation in extremity size, with smaller extremities found in northern populations. Finally, we found that precipitation and temperature seasonality as well as low temperatures were strong environmental predictors of skin and coat pigmentation variation; darker opossums are distributed at low latitudes in warmer environments with higher precipitation seasonality. These results indicate that the adaptive mechanisms underlying the variation in body size, extremity size and pigmentation are related to the resource seasonality, heat conservation, and pathogen-resistance hypotheses, respectively. Our findings suggest that marsupials may be highly susceptible to environmental changes, and in the case of the Virginia opossum, the drastic phenotypic evolution in northern populations may have arisen rapidly, facilitating the colonization of the seasonal and colder habitats of temperate North America.
An algorithm for simulating fracture of cohesive-frictional materials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nukala, Phani K; Sampath, Rahul S; Barai, Pallab
Fracture of disordered frictional granular materials is dominated by an interfacial failure response that is characterized by de-cohesion followed by frictional sliding. To capture such an interfacial failure response, we introduce a cohesive-friction random fuse model (CFRFM), wherein the cohesive response of the interface is represented by a linear stress-strain response until an initial failure threshold, followed by a constant response at a lower threshold to represent the interfacial frictional sliding mechanism. This paper presents an efficient algorithm for simulating fracture of such disordered frictional granular materials using the CFRFM. We note that, when applied to perfectly plastic disordered materials, our algorithm is both theoretically and numerically equivalent to the traditional tangent algorithm (Roux and Hansen 1992 J. Physique II 2 1007) used for such simulations. However, the algorithm is general and is capable of modeling discontinuous interfacial response. Our numerical simulations using the algorithm indicate that the local and global roughness exponents (ζ_loc and ζ, respectively) of the fracture surface are equal to each other, and the two-dimensional crack roughness exponent is estimated to be ζ_loc = ζ = 0.69 ± 0.03.
Simulation System of Car Crash Test in C-NCAP Analysis Based on an Improved Apriori Algorithm
NASA Astrophysics Data System (ADS)
Xiang, LI
In order to analyze car crash tests in C-NCAP, an improved algorithm based on the Apriori algorithm is presented in this paper. The new algorithm is implemented with a vertical data layout, breadth-first searching, and intersecting. It takes advantage of the efficiency of the vertical data layout and intersecting, and prunes candidate frequent item sets as Apriori does. Finally, the new algorithm is applied in a simulation system for car crash test analysis. The results show that the discovered relations affect the C-NCAP test results and can provide a reference for automotive design.
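The vertical-layout-with-intersection strategy reads naturally in code: map each item to the set of transaction ids containing it, and compute the support of a candidate by intersecting tidsets instead of rescanning the database. The sketch below is a generic illustration of that strategy, not the paper's exact improved algorithm.

```python
from itertools import combinations

def vertical_apriori(transactions, min_support):
    """Frequent itemset mining with a vertical data layout: each itemset maps
    to its tidset (set of transaction ids), and candidate support is obtained
    by tidset intersection rather than by rescanning the transactions."""
    tidsets = {}
    for tid, items in enumerate(transactions):
        for it in items:
            tidsets.setdefault(frozenset([it]), set()).add(tid)
    frequent = {k: v for k, v in tidsets.items() if len(v) >= min_support}
    result = dict(frequent)
    k = 1
    while frequent:
        # Candidate generation: join frequent k-itemsets into (k+1)-itemsets.
        candidates = {}
        for a, b in combinations(list(frequent), 2):
            u = a | b
            if len(u) == k + 1:
                candidates[u] = frequent[a] & frequent[b]  # tidset intersection
        frequent = {c: t for c, t in candidates.items() if len(t) >= min_support}
        result.update(frequent)
        k += 1
    return {tuple(sorted(s)): len(t) for s, t in result.items()}

print(vertical_apriori([{'a', 'b'}, {'a', 'c'}, {'a', 'b', 'c'}, {'b'}], 2))
```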
Efficient Parallel Algorithm For Direct Numerical Simulation of Turbulent Flows
NASA Technical Reports Server (NTRS)
Moitra, Stuti; Gatski, Thomas B.
1997-01-01
A distributed algorithm for a high-order-accurate finite-difference approach to the direct numerical simulation (DNS) of transition and turbulence in compressible flows is described. This work has two major objectives. The first objective is to demonstrate that parallel and distributed-memory machines can be successfully and efficiently used to solve computationally intensive and input/output intensive algorithms of the DNS class. The second objective is to show that the computational complexity involved in solving the tridiagonal systems inherent in the DNS algorithm can be reduced by algorithm innovations that obviate the need to use a parallelized tridiagonal solver.
NASA Technical Reports Server (NTRS)
Merrill, W. C.; Delaat, J. C.
1986-01-01
An advanced sensor failure detection, isolation, and accommodation (ADIA) algorithm has been developed for use with an aircraft turbofan engine control system. In a previous paper the authors described the ADIA algorithm and its real-time implementation. Subsequent improvements made to the algorithm and implementation are discussed here, and the results of an evaluation are presented. The evaluation used a real-time, hybrid computer simulation of an F100 turbofan engine.
Physical time scale in kinetic Monte Carlo simulations of continuous-time Markov chains.
Serebrinsky, Santiago A
2011-03-01
We rigorously establish a physical time scale for a general class of kinetic Monte Carlo algorithms for the simulation of continuous-time Markov chains. This class of algorithms encompasses rejection-free (or BKL) and rejection (or "standard") algorithms. For rejection algorithms, it was formerly considered that the availability of a physical time scale (instead of Monte Carlo steps) was empirical, at best. Use of Monte Carlo steps as a time unit now becomes completely unnecessary.
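Concretely, the physical time scale enters through the exponential waiting time of the underlying continuous-time Markov chain: after each event, time advances by dt = -ln(u)/R, where R is the total rate. A minimal rejection-free (BKL-style) step is sketched below with an arbitrary rate list.

```python
import math
import random

def kmc_step(rates):
    """One rejection-free (BKL) kinetic Monte Carlo step: select event j with
    probability rates[j] / R, and advance physical time by an exponentially
    distributed increment with mean 1 / R, where R = sum(rates)."""
    total = sum(rates)
    r = random.random() * total
    j, acc = 0, rates[0]
    while acc < r:
        j += 1
        acc += rates[j]
    dt = -math.log(1.0 - random.random()) / total  # physical time increment
    return j, dt

event, dt = kmc_step([0.5, 1.5, 2.0])
```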
Hanya, Goro; Tsuji, Yamato; Grueter, Cyril C
2013-04-01
In order to understand the ecological adaptations of primates to survive in temperate forests, we need to know the general patterns of plant phenology in temperate and tropical forests. Comparative analyses have been employed to investigate general trends in the seasonality and abundance of fruit and young leaves in tropical and temperate forests. Previous studies have shown that (1) fruit fall biomass in temperate forest is lower than in tropical forest, (2) non-fleshy species, in particular acorns, comprise the majority of the fruit biomass in temperate forest, (3) the duration of the fruiting season is shorter in temperate forest, and (4) the fruiting peak occurs in autumn in most temperate forests. Through our comparative analyses of the fruiting and flushing phenology between Asian temperate and tropical forests, we revealed that (1) fruiting is more annually periodic (the pattern in one year is similar to that seen in the next year) in temperate forest in terms of the number of fruiting species or trees, (2) there is no consistent difference in interannual variations in fruiting between temperate and tropical forests, although some oak-dominated temperate forests exhibit extremely large interannual variations in fruiting, (3) the timing of the flushing peak is predictable (in spring and early summer), and (4) the duration of the flushing season is shorter. The flushing season in temperate forests (17-28 % of that in tropical forests) was quite limited, even compared to the fruiting season (68 %). These results imply that temperate primates need to survive a long period of scarcity of young leaves and fruits, but the timing is predictable. Therefore, a dependence on low-quality foods, such as mature leaves, buds, bark, and lichens, would be indispensable for temperate primates. Due to the high predictability of the timing of fruiting and flushing in temperate forests, fat accumulation during the fruit-abundant period and fat metabolization during the subsequent fruit-scarce period can be an effective strategy to survive the lean period (winter).
Wong, William W L; Feng, Zeny Z; Thein, Hla-Hla
2016-11-01
Agent-based models (ABMs) are computer simulation models that define interactions among agents and simulate emergent behaviors that arise from the ensemble of local decisions. ABMs have been increasingly used to examine trends in infectious disease epidemiology. However, the main limitation of ABMs is the high computational cost of large-scale simulation. To improve the computational efficiency of large-scale ABM simulations, we built a parallelizable sliding region algorithm (SRA) for ABMs and compared it to a nonparallelizable ABM. We developed a complex agent network and performed two simulations to model hepatitis C epidemics based on real demographic data from Saskatchewan, Canada. The first simulation used the SRA, which processed each postal code subregion in turn; the second processed the entire population simultaneously. The parallelizable SRA achieved computational time savings with comparable results in a province-wide simulation. Using the same method, the SRA can be generalized to country-wide simulations. Thus, this parallel algorithm makes it possible to use ABMs for large-scale simulation with limited computational resources.
Estimating solar radiation for plant simulation models
NASA Technical Reports Server (NTRS)
Hodges, T.; French, V.; Leduc, S.
1985-01-01
Five algorithms producing daily solar radiation surrogates from daily temperatures and rainfall were evaluated using measured solar radiation data for seven U.S. locations. The algorithms were compared both in terms of the accuracy of the daily solar radiation estimates and in terms of the response when used in a plant growth simulation model (CERES-wheat). Requirements for the accuracy of solar radiation estimates used in plant growth simulation models are discussed. One algorithm is recommended as best suited for use in these models when neither measured nor satellite-estimated solar radiation values are available.
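The five algorithms themselves are not reproduced in the abstract; as a hedged illustration of the general form such temperature/rainfall surrogates take, here is a Hargreaves-type estimate (the coefficients krs and wet_factor below are illustrative assumptions, not values from the study):

```python
import math

def solar_surrogate(tmax_c, tmin_c, rained, ra, krs=0.16, wet_factor=0.75):
    """Illustrative daily solar-radiation surrogate (Hargreaves-type):
    Rs ~ krs * sqrt(Tmax - Tmin) * Ra, optionally damped on rain days.
    ra is extraterrestrial radiation (MJ m-2 day-1); krs and wet_factor
    are placeholder coefficients, not those of the evaluated algorithms."""
    rs = krs * math.sqrt(max(tmax_c - tmin_c, 0.0)) * ra
    if rained:
        rs *= wet_factor  # cloudier skies on rain days
    return rs
```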
Drawert, Brian; Lawson, Michael J; Petzold, Linda; Khammash, Mustafa
2010-02-21
We have developed a computational framework for accurate and efficient simulation of stochastic spatially inhomogeneous biochemical systems. The new computational method employs a fractional step hybrid strategy. A novel formulation of the finite state projection (FSP) method, called the diffusive FSP method, is introduced for the efficient and accurate simulation of diffusive transport. Reactions are handled by the stochastic simulation algorithm.
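For reference, the stochastic simulation algorithm used for the reaction step is commonly implemented in its direct (Gillespie) form; a minimal generic sketch, not the diffusive FSP hybrid itself:

```python
import numpy as np

def gillespie(x0, stoich, propensity, t_end, rng=None):
    """Direct-method stochastic simulation algorithm (SSA).
    x0: initial copy numbers; stoich: (n_reactions, n_species) update matrix;
    propensity(x): returns an array of reaction propensities for state x."""
    rng = rng or np.random.default_rng()
    t, x = 0.0, np.array(x0, dtype=float)
    times, states = [t], [x.copy()]
    while t < t_end:
        a = propensity(x)
        a0 = a.sum()
        if a0 <= 0:
            break
        t += rng.exponential(1.0 / a0)        # time to next reaction
        j = rng.choice(len(a), p=a / a0)      # which reaction fires
        x += stoich[j]
        times.append(t); states.append(x.copy())
    return np.array(times), np.array(states)
```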
NASA Astrophysics Data System (ADS)
Srivastava, Prashant K., ,, Dr.; O'Neill, Peggy, ,, Dr.
2014-05-01
Soil moisture is an important element for weather and climate prediction, hydrological sciences, and applications. Hence, measurements of this hydrologic variable are required to improve our understanding of hydrological processes, ecosystem functions, and the linkages between the Earth's water, energy, and carbon cycles (Srivastava et al. 2013). The retrieval of soil moisture depends not only on parameterizations in the retrieval algorithm but also on the soil dielectric mixing models used (Behari 2005). Although a number of soil dielectric mixing models have been developed, testing these models for soil moisture retrieval has still not been fully explored, especially with SMAP-like simulators. The main objective of this work is to test different dielectric models for soil moisture retrieval using the Combined Radar/Radiometer (ComRAD) ground-based L-band simulator developed jointly by NASA/GSFC and George Washington University (O'Neill et al. 2006). The ComRAD system was deployed during a field experiment in 2012 to provide long-term active/passive measurements of two crops under controlled conditions during an entire growing season. L-band passive data were acquired at a look angle of 40 degrees from nadir at both horizontal and vertical polarization. Many dielectric models are available for soil moisture retrieval; four (Mironov, Dobson, Wang & Schmugge, and Hallikainen) were tested here and found to be promising, some performing better than others. Each dielectric model was integrated with the Single Channel Algorithm using H (SCA-H) and V (SCA-V) polarizations for the soil moisture retrievals. All ground-based observations were collected at the United States Department of Agriculture (USDA) OPE3 test site, located a few miles from NASA GSFC. Ground truth data were collected using a theta probe and in situ sensors, which were then used for validation. The analysis indicated the highest soil moisture retrieval accuracy for the Mironov dielectric model (RMSE of 0.035 m3/m3), followed by Dobson, Wang & Schmugge, and Hallikainen. This indicates that the Mironov dielectric model is promising for passive-only microwave soil moisture retrieval and could be a useful choice for SMAP satellite soil moisture retrieval. Keywords: dielectric models; Single Channel Algorithm; Combined Radar/Radiometer; soil moisture; L-band. References: Behari, J. (2005). Dielectric Behavior of Soil (pp. 22-40). Springer Netherlands. O'Neill, P. E., Lang, R. H., Kurum, M., Utku, C., & Carver, K. R. (2006), Multi-Sensor Microwave Soil Moisture Remote Sensing: NASA's Combined Radar/Radiometer (ComRAD) System. In IEEE MicroRad, 2006 (pp. 50-54). IEEE. Srivastava, P. K., Han, D., Rico Ramirez, M. A., & Islam, T. (2013), Appraisal of SMOS soil moisture at a catchment scale in a temperate maritime climate. Journal of Hydrology, 498, 292-304. USDA OPE3 web site: http://www.ars.usda.gov/Research/.
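As a heavily simplified illustration of a single-channel retrieval chain (not the SCA-H/SCA-V implementation used in the study, and using the Topp mixing model rather than the four models tested; roughness, vegetation, and incidence-angle corrections are all omitted):

```python
import math

def sca_nadir_sketch(tb, ts):
    """Bare-bones single-channel retrieval chain at nadir over smooth bare
    soil: brightness temperature -> emissivity -> Fresnel reflectivity ->
    dielectric constant -> volumetric soil moisture via Topp's equation."""
    e = tb / ts                                              # emissivity
    r = 1.0 - e                                              # reflectivity
    sqrt_eps = (1.0 + math.sqrt(r)) / (1.0 - math.sqrt(r))   # invert Fresnel
    eps = sqrt_eps ** 2
    # Topp et al. (1980) empirical dielectric mixing model
    theta = -5.3e-2 + 2.92e-2 * eps - 5.5e-4 * eps**2 + 4.3e-6 * eps**3
    return theta
```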
NASA Technical Reports Server (NTRS)
Joiner, J.; Vasilkov, A. P.; Gupta, Pawan; Bhartia, P. K.; Veefkind, Pepijn; Sneep, Maarten; deHaan, Johan; Polonsky, Igor; Spurr, Robert
2011-01-01
We have developed a relatively simple scheme for simulating retrieved cloud optical centroid pressures (OCP) from satellite solar backscatter observations. We have compared simulator results with those from more detailed retrieval simulators that more fully account for the complex radiative transfer in a cloudy atmosphere. We used this fast simulator to conduct a comprehensive evaluation of cloud OCPs from the two OMI algorithms using collocated data from CloudSat and Aqua MODIS, a unique situation afforded by the A-train formation of satellites. We find that both OMI algorithms perform reasonably well and that the two algorithms agree better with each other than either does with the collocated CloudSat data. This indicates that patchy snow/ice, 3D cloud, and aerosol effects not simulated with the CloudSat data affect both algorithms similarly. We note that the collocation with CloudSat occurs mainly on the east side of OMI's swath; therefore, we are not able to address cross-track biases in OMI cloud OCP retrievals. Our fast simulator may also be used to simulate cloud OCP from output generated by general circulation models (GCMs) with appropriate account of cloud overlap. We have implemented such a scheme and plan to compare OMI data with GCM output in the near future.
Analysis of estimation algorithms for CDTI and CAS applications
NASA Technical Reports Server (NTRS)
Goka, T.
1985-01-01
Estimation algorithms for Cockpit Display of Traffic Information (CDTI) and Collision Avoidance System (CAS) applications were analyzed and/or developed. The algorithms are based on actual or projected operational and performance characteristics of an Enhanced TCAS II traffic sensor developed by Bendix and the Federal Aviation Administration. Three algorithm areas are examined and discussed: horizontal x and y, range, and altitude estimation algorithms. Raw estimation errors are quantified using Monte Carlo simulations developed for each application; the raw errors are then used to infer impacts on the CDTI and CAS applications. Applications of smoothing algorithms to CDTI problems are also discussed briefly. Technical conclusions are summarized based on the analysis of simulation results.
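The report's estimator designs are not given in the abstract; a generic alpha-beta tracker of the kind often used for range and altitude estimation illustrates the basic predict/correct structure (the gains below are arbitrary placeholders):

```python
def alpha_beta_track(measurements, dt, alpha=0.5, beta=0.1, x0=0.0, v0=0.0):
    """Generic alpha-beta filter: predict with a constant-velocity model,
    then correct position and rate with gains alpha and beta.
    An illustrative stand-in for the range/altitude estimators discussed."""
    x, v = x0, v0
    estimates = []
    for z in measurements:
        x_pred = x + v * dt          # predict
        resid = z - x_pred           # innovation
        x = x_pred + alpha * resid   # correct position
        v = v + (beta / dt) * resid  # correct rate
        estimates.append((x, v))
    return estimates
```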
Fast algorithms for chiral fermions in 2 dimensions
NASA Astrophysics Data System (ADS)
Hyka (Xhako), Dafina; Osmanaj (Zeqirllari), Rudina
2018-03-01
In lattice QCD simulations, the lattice formulation of the theory should be chiral so that symmetry breaking happens dynamically from interactions. To guarantee this symmetry on the lattice, one uses overlap and domain wall fermions. On the other hand, the high computational cost of lattice QCD simulations with overlap or domain wall fermions remains a major obstacle for research in the field of elementary particles. We have developed the preconditioned GMRESR algorithm as a fast inverting algorithm for chiral fermions in U(1) lattice gauge theory. In this algorithm we used the geometric multigrid idea along the extra dimension. The main result of this work is that the preconditioned GMRESR accelerates convergence by a factor of 2 to 12 relative to the other optimal algorithm (SHUMR) for different coupling constants on a 32x32 lattice. In this paper we also tested it for the larger lattice size 64x64. From the results of the simulations we can see that our algorithm is faster than SHUMR. This is a very promising indication that the algorithm can also be adapted to 4 dimensions.
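The paper's multigrid-preconditioned GMRESR for the overlap operator cannot be reproduced from the abstract; the following sketch only shows the generic structure of a preconditioned GMRES solve, with a toy sparse operator and an ILU preconditioner standing in for the lattice Dirac operator and the geometric multigrid:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Toy problem: a diagonally dominant tridiagonal operator, NOT a Dirac
# operator; spilu stands in for the paper's multigrid preconditioner.
n = 1000
A = sp.diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

ilu = spla.spilu(A)                               # cheap approximate inverse
M = spla.LinearOperator((n, n), ilu.solve)        # preconditioner wrapper

x, info = spla.gmres(A, b, M=M)
assert info == 0  # info == 0 means the iteration converged
```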
User's instructions for the whole-body algorithms
NASA Technical Reports Server (NTRS)
Grounds, D. J.; Fitzjerrell, D. G.; Leonard, J. I.; Marks, V. J.
1975-01-01
The design of an algorithm that provides for the simulation of long- and short-term biological stresses is reported. The physiological responses of models representing circulatory, respiratory, cardiovascular, and thermoregulatory systems during space flight simulation are described.
Wang, Jun; Zhou, Bi-hua; Zhou, Shu-dao; Sheng, Zheng
2015-01-01
The paper proposes a novel function expression method to forecast chaotic time series, using an improved genetic-simulated annealing (IGSA) algorithm to establish the optimum function expression that describes the behavior of the time series. To address the weaknesses of the genetic algorithm, the proposed algorithm incorporates the simulated annealing operation, which has strong local search ability, into the genetic algorithm to enhance the performance of optimization; in addition, the fitness function and genetic operators are also improved. Finally, the method is applied to the chaotic time series of the Quadratic and Rossler maps for validation. The effect of noise in the chaotic time series is also studied numerically. The numerical results verify that the method can forecast chaotic time series with high precision and effectiveness, and that the forecasting precision in the presence of noise is also satisfactory. It can be concluded that the IGSA algorithm is efficient and superior.
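A minimal sketch of a genetic/simulated-annealing hybrid in the spirit of IGSA, assuming a one-dimensional search space and Metropolis acceptance of mutated children (the paper's improved fitness function and operators are not reproduced):

```python
import math, random

def igsa_minimize(f, bounds, pop_size=30, iters=200, t0=1.0, cooling=0.95):
    """Generic genetic / simulated-annealing hybrid: ordinary crossover and
    mutation, but a mutated child that is worse than its parent is still
    accepted with Metropolis probability exp(-dE/T), with T annealed each
    generation. Illustrative only."""
    lo, hi = bounds
    pop = [random.uniform(lo, hi) for _ in range(pop_size)]
    temp = t0
    for _ in range(iters):
        new_pop = []
        for _ in range(pop_size):
            p1, p2 = random.sample(pop, 2)
            child = 0.5 * (p1 + p2)                       # crossover
            trial = min(hi, max(lo, child + random.gauss(0, 0.1 * (hi - lo))))
            d_e = f(trial) - f(child)
            if d_e < 0 or random.random() < math.exp(-d_e / temp):
                child = trial                             # SA-style acceptance
            new_pop.append(child)
        pop = sorted(pop + new_pop, key=f)[:pop_size]     # elitist selection
        temp *= cooling
    return min(pop, key=f)
```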
A fast parallel clustering algorithm for molecular simulation trajectories.
Zhao, Yutong; Sheong, Fu Kit; Sun, Jian; Sander, Pedro; Huang, Xuhui
2013-01-15
We implemented a GPU-powered parallel k-centers algorithm to perform clustering on the conformations of molecular dynamics (MD) simulations. The algorithm is up to two orders of magnitude faster than the CPU implementation. We tested our algorithm on four protein MD simulation datasets ranging from the small Alanine Dipeptide to a 370-residue Maltose Binding Protein (MBP). It is capable of grouping 250,000 conformations of the MBP into 4000 clusters within 40 seconds. To achieve this, we effectively parallelized the code on the GPU and utilized the triangle inequality of metric spaces. Furthermore, the algorithm's running time is linear with respect to the number of cluster centers. In addition, we found the triangle inequality to be less effective in higher dimensions and provide a mathematical rationale. Finally, using Alanine Dipeptide as an example, we show a strong correlation between cluster populations resulting from the k-centers algorithm and the underlying density. © 2012 Wiley Periodicals, Inc.
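For reference, the underlying greedy (Gonzalez) k-centers iteration looks as follows in a serial sketch; the paper's GPU parallelization and triangle-inequality pruning are omitted:

```python
import numpy as np

def k_centers(X, k, rng=None):
    """Greedy (Gonzalez) k-centers: repeatedly pick the point farthest from
    its nearest existing center. Returns center indices and assignments."""
    rng = rng or np.random.default_rng()
    idx = [int(rng.integers(len(X)))]
    dist = np.linalg.norm(X - X[idx[0]], axis=1)
    assign = np.zeros(len(X), dtype=int)
    for j in range(1, k):
        nxt = int(np.argmax(dist))                 # farthest point so far
        idx.append(nxt)
        d_new = np.linalg.norm(X - X[nxt], axis=1)
        closer = d_new < dist
        assign[closer] = j                         # reassign to new center
        dist = np.where(closer, d_new, dist)
    return idx, assign
```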
μ-tempered metadynamics: Artifact independent convergence times for wide hills
NASA Astrophysics Data System (ADS)
Dickson, Bradley M.
2015-12-01
Recent analysis of well-tempered metadynamics (WTmetaD) showed that it converges without mollification artifacts in the bias potential. Here, we explore how metadynamics heals mollification artifacts, how healing impacts convergence time, and whether alternative temperings may be used to improve efficiency. We introduce "μ-tempered" metadynamics as a simple tempering scheme, inspired by a related mollified adaptive biasing potential, that results in artifact independent convergence of the free energy estimate. We use a toy model to examine the role of artifacts in WTmetaD and solvated alanine dipeptide to compare the well-tempered and μ-tempered frameworks demonstrating fast convergence for hill widths as large as 60° for μTmetaD.
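For orientation, the tempering rule that both the well-tempered and μ-tempered schemes modify is the hill-height update; a minimal well-tempered sketch on a one-dimensional collective variable (all parameters illustrative):

```python
import numpy as np

def wtmetad_bias(traj_cv, grid, sigma=0.3, w0=0.1, delta_t=9.0, stride=50):
    """Accumulate a well-tempered metadynamics bias on a 1D CV grid:
    every `stride` steps deposit a Gaussian hill whose height is tempered
    by exp(-V(s)/(kB*deltaT)), so hills shrink where bias has built up.
    delta_t is kB*deltaT in energy units; a sketch of the tempering rule."""
    V = np.zeros_like(grid)
    for step, s in enumerate(traj_cv):
        if step % stride:
            continue
        v_here = np.interp(s, grid, V)          # current bias at hill centre
        w = w0 * np.exp(-v_here / delta_t)      # well-tempered hill height
        V += w * np.exp(-0.5 * ((grid - s) / sigma) ** 2)
    return V
```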
NOVA: A new multi-level logic simulator
NASA Technical Reports Server (NTRS)
Miles, L.; Prins, P.; Cameron, K.; Shovic, J.
1990-01-01
A new logic simulator developed at the NASA Space Engineering Research Center for VLSI Design is described. The simulator is multi-level, able to simulate from the switch level through the functional model level. NOVA is currently in the Beta test phase and has been used to simulate chips designed for the NASA Space Station and the Explorer missions. A new algorithm was devised to simulate bi-directional pass transistors, and a preliminary version of the algorithm is presented. The usage of functional models in NOVA is also described, and performance figures are presented.
Sun, Rui; Sode, Olaseni; Dama, James F; Voth, Gregory A
2017-05-09
The protein mediated hydrolysis of nucleoside triphosphates such as ATP or GTP is one of the most important and challenging biochemical reactions in nature. The chemical environment (water structure, catalytic metal, and amino acid residues) adjacent to the hydrolysis site contains hundreds of atoms, usually greatly limiting the amount of the free energy sampling that one can achieve from computationally demanding electronic structure calculations such as QM/MM simulations. Therefore, the combination of QM/MM molecular dynamics with the recently developed transition-tempered metadynamics (TTMetaD), an enhanced sampling method that can provide a high-quality free energy estimate at an early stage in a simulation, is an ideal approach to address the biomolecular nucleoside triphosphate hydrolysis problem. In this work the ATP hydrolysis process in monomeric and filamentous actin is studied as an example application of the combined methodology. The performance of TTMetaD in these demanding QM/MM simulations is compared with that of the more conventional well-tempered metadynamics (WTMetaD). Our results show that TTMetaD exhibits much better exploration of the hydrolysis reaction free energy surface in two key collective variables (CVs) during the early stages of the QM/MM simulation than does WTMetaD. The TTMetaD simulations also reveal that a key third degree of freedom, the O-H bond-breaking and proton transfer from the lytic water, must be biased for TTMetaD to converge fully. To perturb the NTP hydrolysis dynamics to the least extent and to properly focus the MetaD free energy sampling, we also adopt here the recently developed metabasin metadynamics (MBMetaD) to construct a self-limiting bias potential that only applies to the lytic water after its nucleophilic attack of the phosphate of ATP. With these new, state-of-the-art enhanced sampling metadynamics techniques, we present an effective and accurate computational strategy for combining QM/MM molecular dynamics simulation with free energy sampling methodology, including a means to analyze the convergence of the calculations through robust numerical criteria.
Li, Dejun; Lanigan, Gary; Humphreys, James
2011-01-01
There is uncertainty about the potential reduction of soil nitrous oxide (N2O) emission when fertilizer nitrogen (FN) is partially or completely replaced by biological N fixation (BNF) in temperate grassland. The objectives of this study were to 1) investigate the changes in N2O emissions when BNF is used to replace FN in permanent grassland, and 2) evaluate the applicability of the process-based model DNDC to simulate N2O emissions from Irish grasslands. The three grazing treatments were: (i) ryegrass (Lolium perenne) grasslands receiving 226 kg FN ha⁻¹ yr⁻¹ (GG+FN), (ii) ryegrass/white clover (Trifolium repens) grasslands receiving 58 kg FN ha⁻¹ yr⁻¹ (GWC+FN) applied in spring, and (iii) ryegrass/white clover grasslands receiving no FN (GWC-FN). Two background treatments, un-grazed swards with ryegrass only (G-B) or ryegrass/white clover (WC-B), did not receive slurry or FN, and the herbage was harvested by mowing. There was no significant difference in annual N2O emissions between G-B (2.38±0.12 kg N ha⁻¹ yr⁻¹ (mean±SE)) and WC-B (2.45±0.85 kg N ha⁻¹ yr⁻¹), indicating that N2O emission due to BNF itself and clover residue decomposition from permanent ryegrass/clover grassland was negligible. N2O emissions were 7.82±1.67, 6.35±1.14 and 6.54±1.70 kg N ha⁻¹ yr⁻¹, respectively, from GG+FN, GWC+FN and GWC-FN. N2O fluxes simulated by DNDC agreed well with the measured values, with significant correlation between simulated and measured daily fluxes for the three grazing treatments, but the simulation did not agree as well for the background treatments. DNDC overestimated annual emission by 61% for GG+FN and underestimated it by 45% for GWC-FN, but simulated GWC+FN very well. Both the measured and simulated results support a clear reduction of N2O emissions when FN is replaced by BNF.
Relation of Parallel Discrete Event Simulation algorithms with physical models
NASA Astrophysics Data System (ADS)
Shchur, L. N.; Shchur, L. V.
2015-09-01
We extend the concept of local simulation times in parallel discrete event simulation (PDES) in order to take into account the architecture of current hardware and software in high-performance computing. We briefly review previous research on the mapping of PDES onto physical problems, and emphasise how physical results may help to predict the behaviour of parallel algorithms.
Towards full-Braginskii implicit extended MHD
NASA Astrophysics Data System (ADS)
Chacon, Luis
2009-05-01
Recently, viable algorithms have been proposed for the scalable, fully-implicit temporal integration of 3D resistive MHD and cold-ion extended MHD models. While significant, these achievements must be tempered by the fact that such models lack predictive capabilities in regimes of interest for magnetic fusion. Short of including kinetic closures, a natural evolution path towards predictability starts by considering additional terms as described in Braginskii's fluid closures in the collisional regime. Here, we focus on the inclusion of two fundamental elements of relevance for fusion plasmas: anisotropic parallel electron transport, and warm-ion physics (i.e., ion finite Larmor radius effects, included via gyroviscosity). Both these elements introduce significant numerical difficulties, due to the strong anisotropy in the former, and the presence of dispersive waves in the latter. In this presentation, we will discuss progress in our fully implicit algorithmic formulation towards the inclusion of both these elements. L. Chacón, Phys. Plasmas 15, 056103 (2008); L. Chacón, J. Phys.: Conf. Series 125, 012041 (2008)
Development of an algorithm to plan and simulate a new interventional procedure.
Fujita, Buntaro; Kütting, Maximilian; Scholtz, Smita; Utzenrath, Marc; Hakim-Meibodi, Kavous; Paluszkiewicz, Lech; Schmitz, Christoph; Börgermann, Jochen; Gummert, Jan; Steinseifer, Ulrich; Ensminger, Stephan
2015-07-01
The number of implanted biological valves for treatment of valvular heart disease is growing, and a percentage of these patients will eventually undergo a transcatheter valve-in-valve (ViV) procedure. Some of these patients will represent challenging cases. The aim of this study was to develop a feasible algorithm to plan and simulate in vitro a new interventional procedure to improve patient outcome. In addition to standard diagnostic routine, our algorithm includes 3D printing of the annulus, hydrodynamic measurements and high-speed analysis of leaflet kinematics after simulation of the procedure in different prosthesis positions, as well as X-ray imaging of the most suitable valve position to create a 'blueprint' for the patient procedure. This algorithm was developed for a patient with a degenerated Perceval aortic sutureless prosthesis requiring a ViV procedure. Different ViV procedures were assessed in the algorithm, and based on these results the best option for the patient was chosen. The actual procedure went exactly as planned with the help of this algorithm. Here we have developed a new technically feasible algorithm simulating important aspects of a novel interventional procedure prior to the actual procedure. This algorithm can be applied to virtually all patients requiring a novel interventional procedure to help identify risks and find optimal parameters for prosthesis selection and placement in order to maximize safety for the patient. © The Author 2015. Published by Oxford University Press on behalf of the European Association for Cardio-Thoracic Surgery. All rights reserved.
Parallel computing of physical maps--a comparative study in SIMD and MIMD parallelism.
Bhandarkar, S M; Chirravuri, S; Arnold, J
1996-01-01
Ordering clones from a genomic library into physical maps of whole chromosomes presents a central computational problem in genetics. Chromosome reconstruction via clone ordering is usually isomorphic to the NP-complete Optimal Linear Arrangement problem. Parallel SIMD and MIMD algorithms for simulated annealing based on Markov chain distribution are proposed and applied to the problem of chromosome reconstruction via clone ordering. Perturbation methods and problem-specific annealing heuristics are proposed and described. The SIMD algorithms are implemented on a 2048 processor MasPar MP-2 system which is an SIMD 2-D toroidal mesh architecture whereas the MIMD algorithms are implemented on an 8 processor Intel iPSC/860 which is an MIMD hypercube architecture. A comparative analysis of the various SIMD and MIMD algorithms is presented in which the convergence, speedup, and scalability characteristics of the various algorithms are analyzed and discussed. On a fine-grained, massively parallel SIMD architecture with a low synchronization overhead such as the MasPar MP-2, a parallel simulated annealing algorithm based on multiple periodically interacting searches performs the best. For a coarse-grained MIMD architecture with high synchronization overhead such as the Intel iPSC/860, a parallel simulated annealing algorithm based on multiple independent searches yields the best results. In either case, distribution of clonal data across multiple processors is shown to exacerbate the tendency of the parallel simulated annealing algorithm to get trapped in a local optimum.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bylaska, Eric J., E-mail: Eric.Bylaska@pnnl.gov; Weare, Jonathan Q., E-mail: weare@uchicago.edu; Weare, John H., E-mail: jweare@ucsd.edu
2013-08-21
Parallel in time simulation algorithms are presented and applied to conventional molecular dynamics (MD) and ab initio molecular dynamics (AIMD) models of realistic complexity. Assuming that a forward time integrator, f (e.g., the Verlet algorithm), is available to propagate the system from time t_i (trajectory positions and velocities x_i = (r_i, v_i)) to time t_{i+1} (x_{i+1}) by x_{i+1} = f_i(x_i), the dynamics problem spanning an interval from t_0…t_M can be transformed into a root finding problem, F(X) = [x_i − f(x_{i−1})]_{i=1,M} = 0, for the trajectory variables. The root finding problem is solved using a variety of root finding techniques, including quasi-Newton and preconditioned quasi-Newton schemes that are all unconditionally convergent. The algorithms are parallelized by assigning a processor to each time-step entry in the columns of F(X). The relation of this approach to other recently proposed parallel in time methods is discussed, and the effectiveness of various approaches to solving the root finding problem is tested. We demonstrate that more efficient dynamical models based on simplified interactions or coarsening time-steps provide preconditioners for the root finding problem. However, for MD and AIMD simulations, such preconditioners are not required to obtain reasonable convergence and their cost must be considered in the performance of the algorithm. The parallel in time algorithms developed are tested by applying them to MD and AIMD simulations of size and complexity similar to those encountered in present day applications. These include a 1000 Si atom MD simulation using Stillinger-Weber potentials, and a HCl + 4H2O AIMD simulation at the MP2 level. The maximum speedup ((serial execution time)/(parallel execution time)) obtained by parallelizing the Stillinger-Weber MD simulation was nearly 3.0. For the AIMD MP2 simulations, the algorithms achieved speedups of up to 14.3. The parallel in time algorithms can be implemented in a distributed computing environment using very slow transmission control protocol/Internet protocol networks. Scripts written in Python that make calls to a precompiled quantum chemistry package (NWChem) are demonstrated to provide an actual speedup of 8.2 for a 2.5 ps AIMD simulation of HCl + 4H2O at the MP2/6-31G* level. Implemented in this way, these algorithms can be used for long time high-level AIMD simulations at a modest cost using machines connected by very slow networks such as WiFi, or in different time zones connected by the Internet. The algorithms can also be used with programs that are already parallel. Using these algorithms, we are able to reduce the cost of a MP2/6-311++G(2d,2p) simulation that had reached its maximum possible speedup in the parallelization of the electronic structure calculation from 32 s/time step to 6.9 s/time step.
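A serial toy version of this time-as-root-finding formulation, assuming a unit harmonic oscillator, a velocity-Verlet propagator, and SciPy's newton_krylov as a stand-in for the paper's preconditioned quasi-Newton solvers (in the paper each residual row is assigned to its own processor):

```python
import numpy as np
from scipy.optimize import newton_krylov

dt, M = 0.05, 200
x0 = np.array([1.0, 0.0])            # (position, velocity) at t_0

def verlet_step(x):
    """Velocity-Verlet step for a unit harmonic oscillator (force = -q)."""
    q, v = x
    q_new = q + v * dt - 0.5 * q * dt**2
    v_new = v + 0.5 * (-q - q_new) * dt
    return np.array([q_new, v_new])

def residual(X):
    """F(X) = [x_i - f(x_{i-1})] for i = 1..M; zero on the true trajectory."""
    prev = np.vstack([x0, X[:-1]])
    return X - np.array([verlet_step(x) for x in prev])

# crude initial guess: frozen initial condition at every time step
X_guess = np.tile(x0, (M, 1))
X = newton_krylov(residual, X_guess, f_tol=1e-8)
```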
Multipole Algorithms for Molecular Dynamics Simulation on High Performance Computers.
NASA Astrophysics Data System (ADS)
Elliott, William Dewey
1995-01-01
A fundamental problem in modeling large molecular systems with molecular dynamics (MD) simulations is the underlying N-body problem of computing the interactions between all pairs of N atoms. The simplest algorithm to compute pair-wise atomic interactions scales in runtime O(N^2), making it impractical for interesting biomolecular systems, which can contain millions of atoms. Recently, several algorithms have become available that solve the N-body problem by computing the effects of all pair-wise interactions while scaling in runtime less than O(N^2). One algorithm, which scales O(N) for a uniform distribution of particles, is called the Greengard-Rokhlin Fast Multipole Algorithm (FMA). This work describes an FMA-like algorithm called the Molecular Dynamics Multipole Algorithm (MDMA). The algorithm contains several features that are new to N-body algorithms. MDMA uses new, efficient series expansion equations to compute general 1/r^n potentials to arbitrary accuracy. In particular, the 1/r Coulomb potential and the 1/r^6 portion of the Lennard-Jones potential are implemented. The new equations are based on multivariate Taylor series expansions. In addition, MDMA uses a cell-to-cell interaction region of cells that is closely tied to worst case error bounds. The worst case error bounds for MDMA are derived in this work also. These bounds apply to other multipole algorithms as well. Several implementation enhancements are described which apply to MDMA as well as other N-body algorithms such as FMA and tree codes. The mathematics of the cell-to-cell interactions is converted to the Fourier domain for reduced operation count and faster computation. A relative indexing scheme was devised to locate cells in the interaction region, which allows efficient pre-computation of redundant information and pre-storage of much of the cell-to-cell interaction. Also, MDMA was integrated into the MD program SIgMA to demonstrate the performance of the program over several simulation timesteps. One MD application described here highlights the utility of including long range contributions to the Lennard-Jones potential in constant pressure simulations. Another application shows the time dependence of long range forces in a multiple time step MD simulation.
Predicting Pleistocene climate from vegetation
NASA Astrophysics Data System (ADS)
Loehle, C.
2006-10-01
Climates at the Last Glacial Maximum have been inferred from fossil pollen assemblages, but these inferred climates are colder than those produced by climate simulations. Biogeographic evidence also argues against these inferred cold climates. The recolonization of glaciated zones in eastern North America following the last ice age produced distinct biogeographic patterns. It has been assumed that a wide zone south of the ice was tundra or boreal parkland (Boreal-Parkland Zone or BPZ), which would have been recolonized from southern refugia as the ice melted, but the patterns in this zone differ from those in the glaciated zone, which creates a major biogeographic anomaly. In the glacial zone, there are few endemics but in the BPZ there are many across multiple taxa. In the glacial zone, there are the expected gradients of genetic diversity with distance from the ice-free zone, but no evidence of this is found in the BPZ. Many races and related species exist in the BPZ which would have merged or hybridized if confined to the same refugia. Evidence for distinct southern refugia for most temperate species is lacking. Extinctions of temperate flora were rare. The interpretation of spruce as a boreal climate indicator may be mistaken over much of the region if the spruce was actually an extinct temperate species. All of these anomalies call into question the concept that climates in the zone south of the ice were very cold or that temperate species had to migrate far to the south. Similar anomalies exist in Europe and on tropical mountains. An alternate hypothesis is that low CO2 levels gave an advantage to pine and spruce, which are the dominant trees in the BPZ, and to herbaceous species over trees, which also fits the observed pattern. Most temperate species could have survived across their current ranges at lower abundance by retreating to moist microsites. These would be microrefugia not easily detected by pollen records, especially if most species became rare. These results mean that climate reconstruction based on terrestrial plant indicators will not be valid for periods with markedly different CO2 levels.
NASA Astrophysics Data System (ADS)
Kharchenko, K. S.; Vitkovskii, I. L.
2014-02-01
Performance of the secondary coolant circuit rupture algorithm in different operating modes of Novovoronezh NPP Unit 5 is examined through studies on a full-scale training simulator. Shortcomings of the algorithm that cause excessive actuations of the protection system are identified, and recommendations for eliminating them are outlined.
Fast stochastic algorithm for simulating evolutionary population dynamics
NASA Astrophysics Data System (ADS)
Tsimring, Lev; Hasty, Jeff; Mather, William
2012-02-01
Evolution and co-evolution of ecological communities are stochastic processes often characterized by vastly different rates of reproduction and mutation and a coexistence of very large and very small sub-populations of co-evolving species. This creates serious difficulties for accurate statistical modeling of evolutionary dynamics. In this talk, we introduce a new exact algorithm for fast fully stochastic simulations of birth/death/mutation processes. It produces a significant speedup compared to the direct stochastic simulation algorithm in a typical case when the total population size is large and the mutation rates are much smaller than birth/death rates. We illustrate the performance of the algorithm on several representative examples: evolution on a smooth fitness landscape, NK model, and stochastic predator-prey system.
Splitting algorithm for numerical simulation of Li-ion battery electrochemical processes
NASA Astrophysics Data System (ADS)
Iliev, Oleg; Nikiforova, Marina A.; Semenov, Yuri V.; Zakharov, Petr E.
2017-11-01
In this paper we present a splitting algorithm for the numerical simulation of Li-ion battery electrochemical processes. A Li-ion battery consists of three domains: anode, cathode, and electrolyte. The mathematical model of the electrochemical processes is described on a microscopic scale and contains nonlinear equations for concentration and potential in each domain. At the interfaces between the electrodes and the electrolyte, lithium-ion intercalation and deintercalation processes take place, described by the nonlinear Butler-Volmer equation. For the spatial approximation we use finite element methods with discontinuous Galerkin elements. To simplify the numerical simulations we develop a splitting algorithm, which splits the original problem into three independent subproblems. We investigate the numerical convergence of the algorithm on a 2D model problem.
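For reference, the Butler-Volmer interface condition mentioned above has the standard form sketched below (symmetric transfer coefficients assumed for illustration):

```python
import math

F = 96485.33   # Faraday constant, C/mol
R = 8.314      # gas constant, J/(mol K)

def butler_volmer(i0, eta, T=298.15, alpha_a=0.5, alpha_c=0.5):
    """Butler-Volmer interfacial current density (A/m^2) as a function of
    overpotential eta (V): the nonlinear interface condition coupling the
    electrode and electrolyte subproblems in a splitting scheme."""
    return i0 * (math.exp(alpha_a * F * eta / (R * T))
                 - math.exp(-alpha_c * F * eta / (R * T)))
```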
Classical simulation of infinite-size quantum lattice systems in two spatial dimensions.
Jordan, J; Orús, R; Vidal, G; Verstraete, F; Cirac, J I
2008-12-19
We present an algorithm to simulate two-dimensional quantum lattice systems in the thermodynamic limit. Our approach builds on the projected entangled-pair state algorithm for finite lattice systems [F. Verstraete and J. I. Cirac, arXiv:cond-mat/0407066] and the infinite time-evolving block decimation algorithm for infinite one-dimensional lattice systems [G. Vidal, Phys. Rev. Lett. 98, 070201 (2007), doi:10.1103/PhysRevLett.98.070201]. The present algorithm allows for the computation of the ground state and the simulation of time evolution in infinite two-dimensional systems that are invariant under translations. We demonstrate its performance by obtaining the ground state of the quantum Ising model and analyzing its second order quantum phase transition.
Pernice, W H; Payne, F P; Gallagher, D F
2007-09-03
We present a novel numerical scheme for the simulation of the field enhancement by metal nano-particles in the time domain. The algorithm is based on a combination of the finite-difference time-domain method and the pseudo-spectral time-domain method for dispersive materials. The hybrid solver leads to an efficient subgridding algorithm that does not suffer from spurious field spikes as do FDTD schemes. Simulation of the field enhancement by gold particles shows the expected exponential field profile. The enhancement factors are computed for single particles and particle arrays. Due to the geometry conforming mesh the algorithm is stable for long integration times and thus suitable for the simulation of resonance phenomena in coupled nano-particle structures.
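To show the staggered-grid leapfrog update that the hybrid solver builds on, here is a minimal 1D free-space FDTD loop (normalized units; the dispersive-metal treatment and the pseudo-spectral part are not shown):

```python
import numpy as np

# Minimal 1D FDTD leapfrog update on a staggered (Yee) grid.
nx, nt = 400, 1000
ex = np.zeros(nx)         # electric field at integer grid points
hy = np.zeros(nx - 1)     # magnetic field at half-integer points
c = 0.5                   # Courant number dt*c0/dx

for n in range(nt):
    hy += c * (ex[1:] - ex[:-1])                  # update H from curl E
    ex[1:-1] += c * (hy[1:] - hy[:-1])            # update E from curl H
    ex[nx // 4] += np.exp(-((n - 30) / 10) ** 2)  # soft Gaussian source
```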
Fast Generation of Ensembles of Cosmological N-Body Simulations via Mode-Resampling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schneider, M D; Cole, S; Frenk, C S
2011-02-14
We present an algorithm for quickly generating multiple realizations of N-body simulations to be used, for example, for cosmological parameter estimation from surveys of large-scale structure. Our algorithm uses a new method to resample the large-scale (Gaussian-distributed) Fourier modes in a periodic N-body simulation box in a manner that properly accounts for the nonlinear mode-coupling between large and small scales. We find that our method for adding new large-scale mode realizations recovers the nonlinear power spectrum to sub-percent accuracy on scales larger than about half the Nyquist frequency of the simulation box. Using 20 N-body simulations, we obtain a power spectrum covariance matrix estimate that matches the estimator from Takahashi et al. (from 5000 simulations) with < 20% errors in all matrix elements. Comparing the rates of convergence, we determine that our algorithm requires ≈8 times fewer simulations to achieve a given error tolerance in estimates of the power spectrum covariance matrix. The degree of success of our algorithm indicates that we understand the main physical processes that give rise to the correlations in the matter power spectrum. Namely, the large-scale Fourier modes modulate both the degree of structure growth through the variation in the effective local matter density and also the spatial frequency of small-scale perturbations through large-scale displacements. We expect our algorithm to be useful for noise modeling when constraining cosmological parameters from weak lensing (cosmic shear) and galaxy surveys, rescaling summary statistics of N-body simulations for new cosmological parameter values, and any applications where the influence of Fourier modes larger than the simulation size must be accounted for.
Jung, Jaewoon; Mori, Takaharu; Kobayashi, Chigusa; Matsunaga, Yasuhiro; Yoda, Takao; Feig, Michael; Sugita, Yuji
2015-01-01
GENESIS (Generalized-Ensemble Simulation System) is a new software package for molecular dynamics (MD) simulations of macromolecules. It has two MD simulators, called ATDYN and SPDYN. ATDYN is parallelized based on an atomic decomposition algorithm for the simulations of all-atom force-field models as well as coarse-grained Go-like models. SPDYN is highly parallelized based on a domain decomposition scheme, allowing large-scale MD simulations on supercomputers. Hybrid schemes combining OpenMP and MPI are used in both simulators to target modern multicore computer architectures. Key advantages of GENESIS are (1) the highly parallel performance of SPDYN for very large biological systems consisting of more than one million atoms and (2) the availability of various REMD algorithms (T-REMD, REUS, multi-dimensional REMD for both all-atom and Go-like models under the NVT, NPT, NPAT, and NPγT ensembles). The former is achieved by a combination of the midpoint cell method and the efficient three-dimensional Fast Fourier Transform algorithm, where the domain decomposition space is shared in real-space and reciprocal-space calculations. Other features in SPDYN, such as avoiding concurrent memory access, reducing communication times, and usage of parallel input/output files, also contribute to the performance. We show the REMD simulation results of a mixed (POPC/DMPC) lipid bilayer as a real application using GENESIS. GENESIS is released as free software under the GPLv2 licence and can be easily modified for the development of new algorithms and molecular models. WIREs Comput Mol Sci 2015, 5:310–323. doi: 10.1002/wcms.1220
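As an illustration of the T-REMD move that GENESIS provides (a generic sketch, not code from the package), the replica-swap acceptance test is:

```python
import math, random

def remd_exchange(e_i, e_j, t_i, t_j, k_b=1.0):
    """Metropolis acceptance for swapping replicas at temperatures t_i, t_j
    with potential energies e_i, e_j:
    p = min(1, exp[(1/kT_i - 1/kT_j) * (e_i - e_j)]).
    Returns True if the swap is accepted."""
    delta = (1.0 / (k_b * t_i) - 1.0 / (k_b * t_j)) * (e_i - e_j)
    return delta >= 0 or random.random() < math.exp(delta)
```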
Physical environment virtualization for human activities recognition
NASA Astrophysics Data System (ADS)
Poshtkar, Azin; Elangovan, Vinayak; Shirkhodaie, Amir; Chan, Alex; Hu, Shuowen
2015-05-01
Human activity recognition research relies heavily on extensive datasets to verify and validate the performance of activity recognition algorithms. However, obtaining real datasets is expensive and highly time consuming. A physics-based virtual simulation can accelerate the development of context-based human activity recognition algorithms and techniques by generating relevant training and testing videos simulating diverse operational scenarios. In this paper, we discuss in detail the requisite capabilities of a virtual environment to serve as a test bed for evaluating and enhancing activity recognition algorithms. To demonstrate the numerous advantages of virtual environment development, a newly developed virtual environment simulation modeling (VESM) environment is presented here to generate calibrated multisource imagery datasets suitable for the development and testing of recognition algorithms for context-based human activities. The VESM environment serves as a versatile test bed to generate a vast amount of realistic data for training and testing of sensor processing algorithms. To demonstrate the effectiveness of the VESM environment, we present various simulated scenarios and processed results to infer proper semantic annotations from the high-fidelity imagery data for human-vehicle activity recognition under different operational contexts.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xiao, Y.; Collaborative Innovation Center for Advanced Ship and Deep-Sea Exploration, Shanghai 200240; Li, W., E-mail: weilee@sjtu.edu.cn
Low temperature tempering is important in improving the mechanical properties of steels. In this study, the thermoelectric power method was employed to investigate carbon segregation during low temperature tempering, ranging from 110 °C to 170 °C, of a medium carbon alloyed steel, combined with micro-hardness, transmission electron microscopy and atom probe tomography. The evolution of carbon dissolution from martensite and segregation to grain boundaries/interfaces and dislocations was investigated for different tempering conditions. Carbon concentration variation was quantified from 0.33 wt.% in the quenched sample to 0.15 wt.% after long time tempering. The kinetics of carbon diffusion during the tempering process were discussed through the Johnson-Mehl-Avrami equation. Highlights: • The thermoelectric power (TEP) method was employed to investigate the low temperature tempering of a medium carbon alloyed steel. • Evolution of carbon dissolution was investigated for different tempering conditions. • Carbon concentration variation was quantified from 0.33 wt.% in the quenched sample to 0.15 wt.% after long time tempering.
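A minimal sketch of fitting the Johnson-Mehl-Avrami form to transformation-fraction data (the data points below are hypothetical, not the study's measurements; one common parameterization of the JMA equation is assumed):

```python
import numpy as np
from scipy.optimize import curve_fit

def jma(t, k, n):
    """Johnson-Mehl-Avrami transformed fraction, X(t) = 1 - exp(-(k t)^n)."""
    return 1.0 - np.exp(-(k * t) ** n)

# Hypothetical normalized segregation fractions vs. tempering time (hours).
t = np.array([0.5, 1, 2, 4, 8, 16, 32])
x = np.array([0.08, 0.15, 0.28, 0.45, 0.66, 0.83, 0.93])
(k, n), _ = curve_fit(jma, t, x, p0=(0.1, 1.0))
print(f"rate constant k = {k:.3f} 1/h, Avrami exponent n = {n:.2f}")
```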
NASA Astrophysics Data System (ADS)
Lu, Meng-Chang; Huang, -Chuan, Jr.; Chang, Chung-Te; Shih, Yu-Ting; Lin, Teng-Chiu
2016-04-01
The riverine DIN is a crucial indicator of eutrophication in river networks. The riverine DIN export in Taiwan features an extremely high yield, ~3800 kg N km⁻² yr⁻¹, nearly 20-fold the global average, reflecting terrestrial N processes that are interesting yet rarely documented. In this study we collected DIN samples from rainwater, soil water, and stream water in a mountainous forest watershed, the FuShan experimental forest watershed 1 (WS1), a natural broadleaf forest without human activities. Based on these intensive observations, we applied the INCA-N model to simulate the riverine DIN response and thereby estimate the terrestrial N processes in a global synthesis. The results showed that both discharge and DIN yield were simulated well, with average Nash-Sutcliffe efficiency coefficients of 0.83 and 0.76, respectively. Among all N processes, N uptake, mineralization, nitrification, denitrification, and immobilization are significantly positively correlated with soil moisture (R2>0.99), indicating that soil moisture greatly influences N cycle processes. The average rates of mineralization and nitrification in wet years are consistent with documented values, whereas the rates in dry years are lower than the observations. Despite the high nitrification rate, the secondary forest may take up abundant N, indicating that plant uptake, which is responsible for removing considerable nitrate, is a controlling factor in the forest ecosystem. Our simulated denitrification rate falls between the documented rates for temperate forests and agricultural areas, which may reflect the high N deposition in Taiwan. The simulated in-stream denitrification rate is less than 10% of the rate in soil and slightly lower than that in temperate forests. This preliminary simulation provides an insightful guide for establishing the monitoring programme and improving our understanding of the N cycle in subtropical forests.
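The Nash-Sutcliffe efficiency used above to judge the fit is simple to compute from paired observed and simulated series:

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - sum((obs-sim)^2) / sum((obs-mean)^2).
    1 is a perfect fit; 0 means no better than the observed mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
```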
Recursive Branching Simulated Annealing Algorithm
NASA Technical Reports Server (NTRS)
Bolcar, Matthew; Smith, J. Scott; Aronstein, David
2012-01-01
This innovation is a variation of a simulated-annealing optimization algorithm that uses a recursive-branching structure to parallelize the search of a parameter space for the globally optimal solution to an objective. The algorithm has been demonstrated to be more effective at searching a parameter space than traditional simulated-annealing methods for a particular problem of interest, and it can readily be applied to a wide variety of optimization problems, including those with a parameter space having both discrete-value parameters (combinatorial) and continuous-variable parameters. It can take the place of a conventional simulated-annealing, Monte-Carlo, or random-walk algorithm. In a conventional simulated-annealing (SA) algorithm, a starting configuration is randomly selected within the parameter space. The algorithm randomly selects another configuration from the parameter space and evaluates the objective function for that configuration. If the objective function value is better than the previous value, the new configuration is adopted as the new point of interest in the parameter space. If the objective function value is worse than the previous value, the new configuration may be adopted, with a probability determined by a temperature parameter, used in analogy to annealing in metals. As the optimization continues, the region of the parameter space from which new configurations can be selected shrinks, and in conjunction with lowering the annealing temperature (and thus lowering the probability for adopting configurations in parameter space with worse objective functions), the algorithm can converge on the globally optimal configuration. The Recursive Branching Simulated Annealing (RBSA) algorithm shares some features with the SA algorithm, notably including the basic principles that a starting configuration is randomly selected from within the parameter space, the algorithm tests other configurations with the goal of finding the globally optimal solution, and the region from which new configurations can be selected shrinks as the search continues. The key difference between these algorithms is that in the SA algorithm, a single path, or trajectory, is taken in parameter space, from the starting point to the globally optimal solution, while in the RBSA algorithm, many trajectories are taken; by exploring multiple regions of the parameter space simultaneously, the algorithm has been shown to converge on the globally optimal solution about an order of magnitude faster than when using conventional algorithms. Novel features of the RBSA algorithm include: 1. More efficient searching of the parameter space due to the branching structure, in which multiple random configurations are generated and multiple promising regions of the parameter space are explored; 2. The implementation of a trust region for each parameter in the parameter space, which provides a natural way of enforcing upper- and lower-bound constraints on the parameters; and 3. The optional use of a constrained gradient-search optimization, performed on the continuous variables around each branch's configuration in parameter space to improve search efficiency by allowing for fast fine-tuning of the continuous variables within the trust region at that configuration point.
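A toy sketch of the branching idea in one continuous variable, with a shrinking trust region and cooling temperature (illustrative only, not NASA's implementation; the optional gradient-search refinement is omitted):

```python
import math, random

def branching_sa(f, lo, hi, branches=4, depth=6, samples=40, t0=1.0):
    """Toy recursive-branching annealing: at each level, sample candidates
    inside a trust region around each surviving branch, keep the best
    `branches` points, then shrink the trust region and the temperature."""
    points = [random.uniform(lo, hi) for _ in range(branches)]
    radius, temp = 0.5 * (hi - lo), t0
    for _ in range(depth):
        candidates = []
        for p in points:
            for _ in range(samples):
                q = min(hi, max(lo, p + random.uniform(-radius, radius)))
                # Metropolis-style acceptance against the parent branch
                if f(q) < f(p) or random.random() < math.exp((f(p) - f(q)) / temp):
                    candidates.append(q)
        points = sorted(set(candidates) | set(points), key=f)[:branches]
        radius *= 0.5; temp *= 0.5          # shrink trust region and cool
    return min(points, key=f)
```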
Advanced time integration algorithms for dislocation dynamics simulations of work hardening
Sills, Ryan B.; Aghaei, Amin; Cai, Wei
2016-04-25
Efficient time integration is a necessity for dislocation dynamics simulations of work hardening to achieve experimentally relevant strains. In this work, an efficient time integration scheme using a high order explicit method with time step subcycling and a newly-developed collision detection algorithm are evaluated. First, time integrator performance is examined for an annihilating Frank–Read source, showing the effects of dislocation line collision. The integrator with subcycling is found to significantly out-perform other integration schemes. The performance of the time integration and collision detection algorithms is then tested in a work hardening simulation. The new algorithms show a 100-fold speed-up relative to traditional schemes. As a result, subcycling is shown to improve efficiency significantly while maintaining an accurate solution, and the new collision algorithm allows an arbitrarily large time step size without missing collisions.
Applying FastSLAM to Articulated Rovers
NASA Astrophysics Data System (ADS)
Hewitt, Robert Alexander
This thesis presents the navigation algorithms designed for use on Kapvik, a 30 kg planetary micro-rover built for the Canadian Space Agency; the simulations used to test the algorithm; and novel techniques for terrain classification using Kapvik's LIDAR (Light Detection And Ranging) sensor. Kapvik implements a six-wheeled, skid-steered, rocker-bogie mobility system. This warrants a more complicated kinematic model for navigation than a typical 4-wheel differential drive system. The design of a 3D navigation algorithm is presented that includes nonlinear Kalman filtering and Simultaneous Localization and Mapping (SLAM). A neural network for terrain classification is used to improve navigation performance. Simulation is used to train the neural network and validate the navigation algorithms. Real world tests of the terrain classification algorithm validate the use of simulation for training and the improvement to SLAM through the reduction of extraneous LIDAR measurements in each scan.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Piao, J; PLA 302 Hospital, Beijing; Xu, S
2016-06-15
Purpose: This study uses Monte Carlo methods to simulate the CyberKnife system and aims to develop a third-party tool to verify the dose of specific patient plans computed in the TPS. Methods: The treatment head was simulated using the BEAMnrc and DOSXYZnrc software, and calculated data were compared with measurements to determine the beam parameters. Dose distributions calculated for 30 patient plans (10 each of head, lung, and liver cases) with the Ray-tracing and Monte Carlo algorithms of the TPS (Multiplan Ver. 4.0.2) and with the in-house Monte Carlo simulation method were analyzed. γ analysis with combined 3 mm/3% criteria was introduced to quantitatively evaluate the differences in accuracy between the three algorithms. Results: After determining the mean energy and FWHM, more than 90% of the global error points were less than 2% in the comparison of the PDD and OAR curves, so a reasonably ideal Monte Carlo beam model had been established. Based on the quantitative evaluation of dose accuracy for the three algorithms, the γ analysis shows that the passing rates of the PTV in the 30 plans between the Monte Carlo simulation and the TPS Monte Carlo algorithm were good (84.88±9.67% for head, 98.83±1.05% for liver, 98.26±1.87% for lung). The passing rates of the PTV in head and liver plans between the Monte Carlo simulation and the TPS Ray-tracing algorithm were also good (95.93±3.12% and 99.84±0.33%, respectively). However, the difference in DVHs for lung plans between the Monte Carlo simulation and the Ray-tracing algorithm was obvious, and the γ passing rate (51.263±38.964%) was not good. It is feasible to use Monte Carlo simulation for verifying the dose distributions of patient plans. Conclusion: The Monte Carlo simulation algorithm developed for the CyberKnife system in this study can be used as a third-party reference tool, which plays an important role in dose verification of patient plans. This work was supported in part by a grant from the Chinese Natural Science Foundation (Grant No. 11275105). Thanks for the support from Accuray Corp.
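For reference, a brute-force 1D version of the γ index underlying the 3 mm/3% analysis can be written as follows (the study's analysis is 3D; this sketch is only meant to show the metric):

```python
import numpy as np

def gamma_1d(x_ref, d_ref, x_eval, d_eval, dta=3.0, dd=0.03):
    """Brute-force 1D gamma index with distance-to-agreement dta (mm) and
    dose-difference criterion dd (fraction of the max reference dose).
    gamma <= 1 counts as a passing point."""
    dd_abs = dd * d_ref.max()
    gammas = np.empty_like(d_ref)
    for i, (x0, d0) in enumerate(zip(x_ref, d_ref)):
        g2 = ((x_eval - x0) / dta) ** 2 + ((d_eval - d0) / dd_abs) ** 2
        gammas[i] = np.sqrt(g2.min())   # best agreement over all eval points
    return gammas, (gammas <= 1.0).mean()  # per-point gamma and pass rate
```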
Mononen, Mika E.; Tanska, Petri; Isaksson, Hanna; Korhonen, Rami K.
2016-01-01
We present a novel algorithm combined with computational modeling to simulate the development of knee osteoarthritis. The degeneration algorithm was based on excessive, cumulatively accumulated stresses within knee joint cartilage during physiological gait loading. In the algorithm, the collagen network stiffness of cartilage was reduced iteratively if excessive maximum principal stresses were observed. The developed algorithm was tested and validated against experimental baseline and 4-year follow-up Kellgren-Lawrence grades, indicating different levels of cartilage degeneration at the tibiofemoral contact region. Test groups consisted of normal-weight and obese subjects of the same gender and similar age and height without osteoarthritic changes. The algorithm accurately simulated cartilage degeneration, as compared with the Kellgren-Lawrence findings, in the subject group with excess weight, while the healthy subject group's joint remained intact. Furthermore, the developed algorithm followed the experimentally observed trend of cartilage degeneration in the obese group (R2 = 0.95, p < 0.05; experiments vs. model), in which rapid degeneration immediately after the initiation of osteoarthritis (0-2 years, p < 0.001) was followed by slow or negligible degeneration (2-4 years, p > 0.05). The proposed algorithm shows great potential to objectively simulate the progression of knee osteoarthritis. PMID:26906749
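The degeneration rule the abstract describes (iteratively reducing collagen stiffness wherever the maximum principal stress is excessive) can be sketched as a simple loop. The threshold, reduction factor, and stub solver interface below are illustrative assumptions, not values from the study.

```python
import numpy as np

# Sketch of the iterative degeneration rule: wherever the peak maximum
# principal stress in a cartilage element exceeds a threshold during a
# simulated gait cycle, its collagen network stiffness is reduced for the
# next iteration. Threshold and reduction factor are hypothetical.

STRESS_THRESHOLD = 7.0e6   # Pa, illustrative overload threshold
REDUCTION_FACTOR = 0.95    # 5% stiffness loss per overloaded iteration

def degenerate(stiffness, solve_gait_stresses, n_iterations=50):
    """stiffness: per-element stiffness array; solve_gait_stresses: callable
    returning the peak maximum principal stress per element for one cycle."""
    for _ in range(n_iterations):
        stress = solve_gait_stresses(stiffness)
        overloaded = stress > STRESS_THRESHOLD
        if not overloaded.any():
            break                      # joint remains intact
        stiffness = np.where(overloaded, stiffness * REDUCTION_FACTOR, stiffness)
    return stiffness
```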
NASA Astrophysics Data System (ADS)
Bittner, S.; Priesack, E.
2012-04-01
We apply a functional-structural model of tree water flow to single old-growth trees in a temperate broad-leaved forest stand. Roots, stems and branches are represented by connected porous cylinder elements, each further divided into an inner heartwood cylinder surrounded by xylem and phloem. Xylem water flow is simulated by applying non-linear Darcy flow in porous media driven by the water potential gradient according to the cohesion-tension theory. The flow model is based on physiological input parameters such as the hydraulic conductivity, stomatal response to leaf water potential and root water uptake capability and, thus, can reflect the different properties of tree species. The actual root water uptake is likewise calculated using a non-linear Darcy law, based on the gradient between root xylem water potential and rhizosphere soil water potential, with soil water flow simulated using the Richards equation. A leaf stomatal conductance model is combined with the hydrological tree and soil water flow model and a spatially explicit three-dimensional canopy light model. The structure of the canopy and the tree architectures are derived by applying an automatic tree skeleton extraction algorithm to point clouds obtained with a terrestrial laser scanner, allowing an explicit representation of the water flow path in the stem and branches. The high spatial resolution of the root and branch geometry and their connectivity makes detailed modelling of the water use of single trees possible and allows for the analysis of the interaction between single trees and of the influence of the canopy light regime (including different fractions of direct sunlight and diffuse skylight) on the simulated sap flow and transpiration. The model can be applied at various sites and to different tree species, enabling the up-scaling of the water usage of single trees to the total transpiration of mixed stands. Examples are given to reveal differences between diffuse- and ring-porous tree species and to simulate the diurnal dynamics of transpiration, stem sap flux, and root water uptake observed during the vegetation period in the year 2009.
Flexible Residential Smart Grid Simulation Framework
NASA Astrophysics Data System (ADS)
Xiang, Wang
Different scheduling and coordination algorithms controlling household appliances' operations can potentially lead to energy consumption reduction and/or load balancing in conjunction with different electricity pricing methods used in smart grid programs. In order to easily implement different algorithms and evaluate their efficiency against other ideas, a flexible simulation framework is desirable in both research and business fields. However, such a platform is currently lacking or underdeveloped. In this thesis, we provide a simulation framework focused on demand-side residential energy consumption coordination in response to different pricing methods. This simulation framework, equipped with an appliance consumption library using realistic values, aims to closely represent the average usage of different types of appliances. The simulation results for traditional usage closely match surveyed real-life consumption records. Several sample coordination algorithms, pricing schemes, and communication scenarios are also implemented to illustrate the use of the simulation framework.
Multi-Algorithm Particle Simulations with Spatiocyte.
Arjunan, Satya N V; Takahashi, Koichi
2017-01-01
As quantitative biologists get more measurements of spatially regulated systems such as cell division and polarization, simulation of reaction and diffusion of proteins using the data is becoming increasingly relevant to uncover the mechanisms underlying the systems. Spatiocyte is a lattice-based stochastic particle simulator for biochemical reaction and diffusion processes. Simulations can be performed at single molecule and compartment spatial scales simultaneously. Molecules can diffuse and react in 1D (filament), 2D (membrane), and 3D (cytosol) compartments. The implications of crowded regions in the cell can be investigated because each diffusing molecule has spatial dimensions. Spatiocyte adopts multi-algorithm and multi-timescale frameworks to simulate models that simultaneously employ deterministic, stochastic, and particle reaction-diffusion algorithms. Comparison of light microscopy images to simulation snapshots is supported by Spatiocyte microscopy visualization and molecule tagging features. Spatiocyte is open-source software and is freely available at http://spatiocyte.org.
Maximum wind energy extraction strategies using power electronic converters
NASA Astrophysics Data System (ADS)
Wang, Quincy Qing
2003-10-01
This thesis focuses on maximum wind energy extraction strategies for achieving the highest energy output of variable speed wind turbine power generation systems. Power electronic converters and controls provide the basic platform to accomplish the research of this thesis in both hardware and software aspects. In order to send wind energy to a utility grid, a variable speed wind turbine requires a power electronic converter to convert a variable voltage variable frequency source into a fixed voltage fixed frequency supply. Generic single-phase and three-phase converter topologies, converter control methods for wind power generation, as well as the developed direct drive generator, are introduced in the thesis for establishing variable-speed wind energy conversion systems. Variable speed wind power generation system modeling and simulation are essential methods both for understanding the system behavior and for developing advanced system control strategies. Wind generation system components, including wind turbine, 1-phase IGBT inverter, 3-phase IGBT inverter, synchronous generator, and rectifier, are modeled in this thesis using MATLAB/SIMULINK. The simulation results have been verified by a commercial simulation software package, PSIM, and confirmed by field test results. Since the dynamic time constants for these individual models are much different, a creative approach has also been developed in this thesis to combine these models for entire wind power generation system simulation. An advanced maximum wind energy extraction strategy relies not only on proper system hardware design, but also on sophisticated software control algorithms. Based on literature review and computer simulation of wind turbine control algorithms, an intelligent maximum wind energy extraction control algorithm is proposed in this thesis. This algorithm has a unique on-line adaptation and optimization capability, which is able to achieve maximum wind energy conversion efficiency through continuously improving the performance of wind power generation systems. This algorithm is independent of wind power generation system characteristics, and does not need wind speed and turbine speed measurements. Therefore, it can be easily implemented into various wind energy generation systems with different turbine inertia and diverse system hardware environments. In addition to the detailed description of the proposed algorithm, computer simulation results are presented in the thesis to demonstrate the advantage of this algorithm. As a final confirmation of the algorithm feasibility, the algorithm has been implemented inside a single-phase IGBT inverter, and tested with a wind simulator system in a research laboratory. Test results were found to be consistent with the simulation results. (Abstract shortened by UMI.)
Surgical motion characterization in simulated needle insertion procedures
NASA Astrophysics Data System (ADS)
Holden, Matthew S.; Ungi, Tamas; Sargent, Derek; McGraw, Robert C.; Fichtinger, Gabor
2012-02-01
PURPOSE: Evaluation of surgical performance in image-guided needle insertions is of emerging interest, both to promote patient safety and to improve the efficiency and effectiveness of training. The purpose of this study was to determine whether a Markov model-based algorithm can more accurately segment a needle-based surgical procedure into its five constituent tasks than a simple threshold-based algorithm. METHODS: Simulated needle trajectories were generated with known ground-truth segmentation by a synthetic procedural data generator, with random noise added to each degree of freedom of motion. The respective learning algorithms were trained, and then tested on different procedures to determine task segmentation accuracy. In the threshold-based algorithm, a change in tasks was detected when the needle crossed a position/velocity threshold. In the Markov model-based algorithm, task segmentation was performed by identifying the sequence of Markov models most likely to have produced the series of observations. RESULTS: For amplitudes of translational noise greater than 0.01 mm, the Markov model-based algorithm was significantly more accurate in task segmentation than the threshold-based algorithm (82.3% vs. 49.9%, p < 0.001 for amplitude 10.0 mm). For amplitudes less than 0.01 mm, the two algorithms produced results that did not differ significantly. CONCLUSION: Task segmentation of simulated needle insertion procedures was improved by using a Markov model-based algorithm rather than a threshold-based algorithm for procedures involving translational noise.
An assessment of 'shuffle algorithm' collision mechanics for particle simulations
NASA Technical Reports Server (NTRS)
Feiereisen, William J.; Boyd, Iain D.
1991-01-01
Among the algorithms for collision mechanics used at present, the 'shuffle algorithm' of Baganoff (McDonald and Baganoff, 1988; Baganoff and McDonald, 1990) not only allows efficient vectorization, but also discretizes the possible outcomes of a collision. To assess the applicability of the shuffle algorithm, a simulation was performed of flows in monoatomic gases, and the calculated characteristics of shock waves were compared with those obtained using a commonly employed isotropic scattering law. It is shown that, in general, the shuffle algorithm adequately represents the collision mechanics when the goal of the calculation is mean profiles of density and temperature.
Chemistry of the Burning Surface
1993-10-12
simulated combustion and explosion events ... the temperature is nonuniform along the filament length ... the filament is slightly altered by the sample, but the power dissipation is essentially ... AMMO, a sample explosive and propellant material, was chosen because it is present on the filament. Liquefaction of AMMO illustrates the large amount of chemical ...
A maximum power point tracking algorithm for buoy-rope-drum wave energy converters
NASA Astrophysics Data System (ADS)
Wang, J. Q.; Zhang, X. C.; Zhou, Y.; Cui, Z. C.; Zhu, L. S.
2016-08-01
Maximum power point tracking control is the key to improving the energy conversion efficiency of wave energy converters (WEC). This paper presents a novel variable-step-size Perturb and Observe maximum power point tracking algorithm with a power classification standard for control of a buoy-rope-drum WEC. The algorithm and the simulation model of the buoy-rope-drum WEC are presented in detail, along with simulation experiment results. The results show that the algorithm tracks the maximum power point of the WEC quickly and accurately.
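A variable-step perturb-and-observe loop of the kind named in the abstract can be sketched as follows; the step-adaptation rule, gain, and control interface are illustrative assumptions, not the paper's exact power-classification standard.

```python
# Sketch of a variable-step perturb-and-observe MPPT loop. read_power and
# apply_control stand in for the WEC's power measurement and actuator
# command; gain and step bounds are hypothetical.

def perturb_and_observe(read_power, apply_control, u0=0.5, step0=0.02,
                        gain=0.05, n_steps=1000):
    u, step = u0, step0
    p_prev = read_power()
    direction = +1.0
    for _ in range(n_steps):
        u += direction * step
        apply_control(u)
        p = read_power()
        if p < p_prev:
            direction = -direction    # power fell: reverse the perturbation
        # variable step: large when the power change is large (far from the
        # peak), small near the peak, reducing steady-state oscillation
        step = min(step0, gain * abs(p - p_prev) / max(abs(p), 1e-9))
        p_prev = p
    return u
```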
Simulation of an enhanced TCAS 2 system in operation
NASA Technical Reports Server (NTRS)
Rojas, R. G.; Law, P.; Burnside, W. D.
1987-01-01
Described is a computer simulation of a Boeing 737 aircraft equipped with an enhanced Traffic and Collision Avoidance System (TCAS II). In particular, an algorithm is developed which permits the computer simulation of the tracking of a target airplane by a Boeing 737 which has a TCAS II array mounted on top of its fuselage. This algorithm has four main components, namely the target path, the noise source, the alpha-beta filter, and threat detection. The implementation of each of these four components is described. Furthermore, the areas where the present algorithm needs to be improved are also mentioned.
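The alpha-beta filter component named above is a fixed-gain tracker; a minimal sketch for one tracked coordinate follows. The gains are illustrative, since the simulation's actual gains and noise model are not given in the abstract.

```python
# Minimal alpha-beta tracking filter applied to one coordinate of the
# target track. alpha corrects position, beta corrects velocity.

def alpha_beta_track(measurements, dt, alpha=0.85, beta=0.005):
    x, v = measurements[0], 0.0          # position and velocity estimates
    track = []
    for z in measurements[1:]:
        x_pred = x + v * dt              # predict forward one sample
        r = z - x_pred                   # residual against the new measurement
        x = x_pred + alpha * r           # position correction
        v = v + (beta / dt) * r          # velocity correction
        track.append((x, v))
    return track
```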
Simulation optimization of PSA-threshold based prostate cancer screening policies
Zhang, Jingyu; Denton, Brian T.; Shah, Nilay D.; Inman, Brant A.
2013-01-01
We describe a simulation optimization method to design PSA screening policies based on expected quality adjusted life years (QALYs). Our method integrates a simulation model in a genetic algorithm which uses a probabilistic method for selection of the best policy. We present computational results about the efficiency of our algorithm. The best policy generated by our algorithm is compared to previously recommended screening policies. Using the policies determined by our model, we present evidence that patients should be screened more aggressively but for a shorter length of time than previously published guidelines recommend. PMID:22302420
A sweep algorithm for massively parallel simulation of circuit-switched networks
NASA Technical Reports Server (NTRS)
Gaujal, Bruno; Greenberg, Albert G.; Nicol, David M.
1992-01-01
A new massively parallel algorithm is presented for simulating large asymmetric circuit-switched networks, controlled by a randomized-routing policy that includes trunk-reservation. A single instruction multiple data (SIMD) implementation is described, and corresponding experiments on a 16384-processor MasPar parallel computer are reported. A multiple instruction multiple data (MIMD) implementation is also described, and corresponding experiments on an Intel iPSC/860 parallel computer, using 16 processors, are reported. By exploiting parallelism, our algorithm increases the possible execution rate of such complex simulations by as much as an order of magnitude.
Real-time failure control (SAFD)
NASA Technical Reports Server (NTRS)
Panossian, Hagop V.; Kemp, Victoria R.; Eckerling, Sherry J.
1990-01-01
The Real Time Failure Control program involves development of a failure detection algorithm, referred to as the System for Failure and Anomaly Detection (SAFD), for the Space Shuttle Main Engine (SSME). This failure detection approach is signal-based: it entails monitoring SSME measurement signals based on predetermined and computed mean values and standard deviations. Twenty-four engine measurements are included in the algorithm, and provisions are made to add more parameters if needed. Six major sections of research are presented: (1) SAFD algorithm development; (2) SAFD simulations; (3) Digital Transient Model failure simulation; (4) closed-loop simulation; (5) current SAFD limitations; and (6) planned enhancements.
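The signal-based monitoring scheme described above (comparing each measurement against predetermined means and standard deviations) can be sketched as a simple sigma-band check; the 3-sigma band and the persistence count below are illustrative assumptions, not SAFD's actual red-line logic.

```python
import numpy as np

# Sketch of a signal-based anomaly monitor: each channel is compared
# against a predetermined mean and standard deviation, and a failure flag
# is raised when samples stray too many sigmas from nominal for several
# consecutive samples (to avoid single-sample noise trips).

def safd_monitor(samples, mean, sigma, n_sigma=3.0, persistence=3):
    """samples: (time, channels) array; mean/sigma: per-channel nominals."""
    out_of_band = np.abs(samples - mean) > n_sigma * sigma
    run = np.zeros(samples.shape[1], dtype=int)
    for t in range(samples.shape[0]):
        run = np.where(out_of_band[t], run + 1, 0)
        if (run >= persistence).any():
            return t, np.nonzero(run >= persistence)[0]  # time, channel list
    return None                                          # no anomaly detected
```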
Jian Yang; Hong S. He; Brian R. Sturtevant; Brian R. Miranda; Eric J. Gustafson
2008-01-01
We compared four fire spread simulation methods (completely random, dynamic percolation, size-based minimum travel time algorithm, and duration-based minimum travel time algorithm) and two fire occurrence simulation methods (Poisson fire frequency model and hierarchical fire frequency model) using a two-way factorial design. We examined these treatment effects on...
Expanded Processing Techniques for EMI Systems
2012-07-01
possible to perform better target detection using physics-based algorithms and the entire data set, rather than simulating a simpler data set and mapping... Figure 4.25: Plots of simulated MetalMapper data for two oblate spheroidal targets
Memetic algorithms for de novo motif-finding in biomedical sequences.
Bi, Chengpeng
2012-09-01
The objectives of this study are to design and implement a new memetic algorithm for de novo motif discovery and to apply it to detect important signals hidden in various biomedical molecular sequences. In this paper, memetic algorithms are developed and tested on de novo motif-finding problems. Several strategies are employed in the algorithm design not only to efficiently explore the multiple sequence local alignment space, but also to effectively uncover the molecular signals. As a result, there are a number of key features in the implementation of the memetic motif-finding algorithm (MaMotif), including a chromosome replacement operator, a chromosome alteration-aware local search operator, a truncated local search strategy, and a stochastic operation of local search imposed on individual learning. To test the new algorithm, we compare MaMotif with several similar algorithms using simulated and experimental data including genomic DNA, primary microRNA sequences (let-7 family), and transmembrane protein sequences. The new memetic motif-finding algorithm is successfully implemented in C++ and exhaustively tested with various simulated and real biological sequences. In the simulations, MaMotif is the most time-efficient of the compared algorithms: it runs 2 times faster than the expectation maximization (EM) method and 16 times faster than the genetic algorithm-based EM hybrid. In both simulated and experimental testing, results show that the new algorithm compares favorably with or is superior to the other algorithms. Notably, MaMotif is able to successfully discover transcription factors' binding sites in chromatin immunoprecipitation followed by massively parallel sequencing (ChIP-Seq) data, correctly uncover the RNA splicing signals in gene expression, precisely find the highly conserved helix motif in the transmembrane protein sequences, and correctly detect the palindromic segments in the primary microRNA sequences. The memetic motif-finding algorithm is effectively designed and implemented, and its applications demonstrate that it is not only time-efficient but also exhibits excellent performance compared with other popular algorithms. Copyright © 2012 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Kolb, Kimberly E.; Choi, Hee-sue S.; Kaur, Balvinder; Olson, Jeffrey T.; Hill, Clayton F.; Hutchinson, James A.
2016-05-01
The US Army's Communications Electronics Research, Development and Engineering Center (CERDEC) Night Vision and Electronic Sensors Directorate (referred to as NVESD) is developing a virtual detection, recognition, and identification (DRI) testing methodology using simulated imagery as a means of augmenting the field testing component of sensor performance evaluation, which is expensive, resource intensive, time consuming, and limited to the available target(s) and existing atmospheric visibility and environmental conditions at the time of testing. Existing simulation capabilities such as the Digital Imaging Remote Sensing Image Generator (DIRSIG) and NVESD's Integrated Performance Model Image Generator (NVIPM-IG) can be combined with existing detection algorithms to reduce cost/time, minimize testing risk, and allow virtual/simulated testing using full spectral and thermal object signatures, as well as those collected in the field. NVESD has developed an end-to-end capability to demonstrate the feasibility of this approach. Simple detection algorithms have been used on the degraded images generated by NVIPM-IG to determine the relative performance of the algorithms on both DIRSIG-simulated and collected images. Evaluating the degree to which the algorithm performance agrees between simulated versus field collected imagery is the first step in validating the simulated imagery procedure.
Hodgkins, Richard; Cooper, Richard; Tranter, Martyn; Wadham, Jemma
2013-07-26
The drainage systems of polythermal glaciers play an important role in high-latitude hydrology, and are determinants of ice flow rate. Flow-recession analysis and linear-reservoir simulation of runoff time series are here used to evaluate seasonal and inter-annual variability in the drainage system of the polythermal Finsterwalderbreen, Svalbard, in 1999 and 2000. Linear-flow recessions are pervasive, with mean coefficients of a fast reservoir varying from 16 h (1999) to 41 h (2000), and mean coefficients of an intermittent, slow reservoir varying from 54 h (1999) to 114 h (2000). Drainage-system efficiency is greater overall in the first of the two seasons, the simplest explanation of which is more rapid depletion of the snow cover. Reservoir coefficients generally decline during each season (at 0.22 h d^-1 in 1999 and 0.52 h d^-1 in 2000), denoting an increase in drainage efficiency. However, coefficients do not exhibit a consistent relationship with discharge. Finsterwalderbreen therefore appears to behave as an intermediate case between temperate glaciers and other polythermal glaciers with smaller proportions of temperate ice. Linear-reservoir runoff simulations exhibit limited sensitivity to a relatively wide range of reservoir coefficients, although the use of fixed coefficients in a spatially lumped model can generate significant subseasonal error. At Finsterwalderbreen, an ice-marginal channel with the characteristics of a fast reservoir, and a subglacial upwelling with the characteristics of a slow reservoir, both route meltwater to the terminus. This suggests that drainage-system components of significantly contrasting efficiencies can coexist spatially and temporally at polythermal glaciers.
Hybrid algorithms for fuzzy reverse supply chain network design.
Che, Z H; Chiang, Tzu-An; Kuo, Y C; Cui, Zhihua
2014-01-01
In consideration of capacity constraints, fuzzy defect ratio, and fuzzy transport loss ratio, this paper attempted to establish an optimized decision model for production planning and distribution of a multiphase, multiproduct reverse supply chain, which addresses defects returned to original manufacturers, and in addition, develops hybrid algorithms such as Particle Swarm Optimization-Genetic Algorithm (PSO-GA), Genetic Algorithm-Simulated Annealing (GA-SA), and Particle Swarm Optimization-Simulated Annealing (PSO-SA) for solving the optimized model. During a case study of a multi-phase, multi-product reverse supply chain network, this paper explained the suitability of the optimized decision model and the applicability of the algorithms. Finally, the hybrid algorithms showed excellent solving capability when compared with original GA and PSO methods.
PID controller tuning using metaheuristic optimization algorithms for benchmark problems
NASA Astrophysics Data System (ADS)
Gholap, Vishal; Naik Dessai, Chaitali; Bagyaveereswaran, V.
2017-11-01
This paper contributes to finding the optimal PID controller parameters using particle swarm optimization (PSO), the genetic algorithm (GA), and the simulated annealing (SA) algorithm. The algorithms were developed through simulation of a chemical process and an electrical system, and the PID controller is tuned. Here, two different fitness functions, the integral time absolute error and time-domain specifications, were chosen and applied to PSO, GA, and SA while tuning the controller. The proposed algorithms are implemented on two benchmark problems: a coupled tank system and a DC motor. Finally, a comparative study has been done across the algorithms based on best cost, number of iterations, and the different objective functions. The closed-loop process response for each set of tuned parameters is plotted for each system with each fitness function.
NASA Astrophysics Data System (ADS)
Thieberger, P.; Gassner, D.; Hulsart, R.; Michnoff, R.; Miller, T.; Minty, M.; Sorrell, Z.; Bartnik, A.
2018-04-01
A simple, analytically correct algorithm is developed for calculating "pencil" relativistic beam coordinates using the signals from an ideal cylindrical particle beam position monitor (BPM) with four pickup electrodes (PUEs) of infinitesimal widths. The algorithm is then applied to simulations of realistic BPMs with finite width PUEs. Surprisingly small deviations are found. Simple empirically determined correction terms reduce the deviations even further. The algorithm is then tested with simulations for non-relativistic beams. As an example of the data acquisition speed advantage, a Field Programmable Gate Array-based BPM readout implementation of the new algorithm has been developed and characterized. Finally, the algorithm is tested with BPM data from the Cornell Preinjector.
A Parametric k-Means Algorithm
Tarpey, Thaddeus
2007-01-01
The k points that optimally represent a distribution (usually in terms of a squared error loss) are called the k principal points. This paper presents a computationally intensive method that automatically determines the principal points of a parametric distribution. Cluster means from the k-means algorithm are nonparametric estimators of principal points. A parametric k-means approach is introduced for estimating principal points by running the k-means algorithm on a very large simulated data set from a distribution whose parameters are estimated using maximum likelihood. Theoretical and simulation results are presented comparing the parametric k-means algorithm to the usual k-means algorithm, and an example on determining sizes of gas masks is used to illustrate the parametric k-means algorithm. PMID:17917692
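A compact sketch of the parametric k-means recipe summarized above, using a univariate normal for brevity: fit the distribution by maximum likelihood, simulate a very large sample from the fit, and run k-means on that sample. The sample size and the use of scipy's k-means routine are incidental choices, not the paper's setup.

```python
import numpy as np
from scipy.cluster.vq import kmeans

# Parametric k-means sketch: the cluster means of a huge sample drawn from
# the fitted parametric model estimate the k principal points.

def parametric_k_means(data, k, n_sim=200_000, seed=0):
    mu, sigma = data.mean(), data.std(ddof=1)   # simple fit of N(mu, sigma)
    rng = np.random.default_rng(seed)
    big_sample = rng.normal(mu, sigma, n_sim)   # large simulated data set
    centers, _ = kmeans(big_sample[:, None], k) # k-means on simulated data
    return np.sort(centers.ravel())             # estimated principal points
```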
System Design under Uncertainty: Evolutionary Optimization of the Gravity Probe-B Spacecraft
NASA Technical Reports Server (NTRS)
Pullen, Samuel P.; Parkinson, Bradford W.
1994-01-01
This paper discusses the application of evolutionary random-search algorithms (Simulated Annealing and Genetic Algorithms) to the problem of spacecraft design under performance uncertainty. Traditionally, spacecraft performance uncertainty has been measured by reliability. Published algorithms for reliability optimization are seldom used in practice because they oversimplify reality. The algorithm developed here uses random-search optimization to allow us to model the problem more realistically. Monte Carlo simulations are used to evaluate the objective function for each trial design solution. These methods have been applied to the Gravity Probe-B (GP-B) spacecraft being developed at Stanford University for launch in 1999. Results of the algorithm developed here for GP-B are shown, and their implications for design optimization by evolutionary algorithms are discussed.
Marsili, Simone; Signorini, Giorgio Federico; Chelli, Riccardo; Marchi, Massimo; Procacci, Piero
2010-04-15
We present the new release of the ORAC engine (Procacci et al., Comput Chem 1997, 18, 1834), a FORTRAN suite to simulate complex biosystems at the atomistic level. The previous release of the ORAC code included multiple time steps integration, smooth particle mesh Ewald method, constant pressure and constant temperature simulations. The present release has been supplemented with the most advanced techniques for enhanced sampling in atomistic systems including replica exchange with solute tempering, metadynamics and steered molecular dynamics. All these computational technologies have been implemented for parallel architectures using the standard MPI communication protocol. ORAC is an open-source program distributed free of charge under the GNU general public license (GPL) at http://www.chim.unifi.it/orac. 2009 Wiley Periodicals, Inc.
UAV Mission Planning under Uncertainty
2006-06-01
algorithm, adapted from [13]. Robust Optimization considers only a subset of the feasible region. Overview of simulation with parameter... incorporates the robust optimization method suggested by Bertsimas and Sim [12], and is solved with a standard Branch-and-Cut algorithm. The chapter... algorithms, and the heuristic methods of Local Search and Simulated Annealing. With each method, we attempt to give a review of research that has...
Scalable High-order Methods for Multi-Scale Problems: Analysis, Algorithms and Application
2016-02-26
Karniadakis, “Resilient algorithms for reconstructing and simulating gappy flow fields in CFD”, Fluid Dynamic Research, vol. 47, 051402, 2015. 2. Y. Yu, H... Subject terms: simulation, domain decomposition, CFD, gappy data, estimation theory, and gap-tooth algorithm. ... The objective of this project was to develop a general CFD framework for multifidelity simulations to target multiscale problems but also resilience in...
NASA Astrophysics Data System (ADS)
Zhou, Pu; Wang, Xiaolin; Li, Xiao; Chen, Zilum; Xu, Xiaojun; Liu, Zejin
2009-10-01
Coherent summation of fibre laser beams, which can be scaled to a relatively large number of elements, is simulated by using the stochastic parallel gradient descent (SPGD) algorithm. The applicability of this algorithm for coherent summation is analysed and its optimisation parameters and bandwidth limitations are studied.
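SPGD itself is compact: all channel phases are dithered simultaneously, and the update follows the correlation between the dither and the change in a scalar quality metric. A sketch with illustrative gain and dither amplitude (measure_J stands in for, e.g., power-in-the-bucket):

```python
import numpy as np

# Stochastic parallel gradient descent for coherent beam combining: perturb
# all phases at once, measure the metric J twice, and step along the
# estimated gradient dJ * du. Gains are hypothetical.

def spgd(measure_J, n_channels, gain=0.5, delta=0.1, n_iter=2000, seed=0):
    rng = np.random.default_rng(seed)
    phases = np.zeros(n_channels)
    for _ in range(n_iter):
        du = delta * rng.choice([-1.0, 1.0], n_channels)   # parallel dither
        dJ = measure_J(phases + du) - measure_J(phases - du)
        phases += gain * dJ * du          # ascend the estimated gradient of J
    return phases
```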
Loading relativistic Maxwell distributions in particle simulations
NASA Astrophysics Data System (ADS)
Zenitani, Seiji
2015-04-01
Numerical algorithms to load relativistic Maxwell distributions in particle-in-cell (PIC) and Monte-Carlo simulations are presented. For the stationary relativistic Maxwellian, the inverse transform method and the Sobol algorithm are reviewed. To boost particles to obtain a relativistic shifted Maxwellian, two rejection methods are proposed in a physically transparent manner. Their acceptance efficiencies are ≈50% for generic cases and 100% for symmetric distributions. They can be combined with arbitrary base algorithms.
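A sketch of the two ingredients named above, the Sobol algorithm for the stationary Maxwell-Jüttner distribution and a flipping-type rejection for the shifted distribution, follows. It is written from a reading of those methods (temperature in units of mc^2), so the details should be treated as assumptions rather than the paper's exact prescription.

```python
import numpy as np

rng = np.random.default_rng(1)

def sobol_maxwell_juttner(T):
    """Draw one four-velocity magnitude u = gamma*beta from a stationary
    Maxwell-Juttner distribution with temperature T (units of mc^2)."""
    while True:
        x1, x2, x3, x4 = rng.random(4)
        u = -T * np.log(x1 * x2 * x3)
        eta = -T * np.log(x1 * x2 * x3 * x4)
        if eta * eta - u * u > 1.0:       # rejection condition
            return u

def shifted_maxwellian(T, boost_beta):
    """Boost a stationary sample with a flipping-style rejection: flip the
    sign of u_x with probability -beta*v_x to build up the flux factor,
    then apply the Lorentz boost. Returns the boosted x four-velocity."""
    u = sobol_maxwell_juttner(T)
    mu = 2.0 * rng.random() - 1.0         # isotropic direction cosine
    ux = u * mu
    gamma = np.sqrt(1.0 + u * u)
    if -boost_beta * ux / gamma > rng.random():
        ux = -ux
    big_gamma = 1.0 / np.sqrt(1.0 - boost_beta ** 2)
    return big_gamma * (ux + boost_beta * gamma)
```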
Holmes, T J; Liu, Y H
1989-11-15
A maximum likelihood based iterative algorithm adapted from nuclear medicine imaging for noncoherent optical imaging was presented in a previous publication with some initial computer-simulation testing. This algorithm is identical in form to that previously derived in a different way by W. H. Richardson, "Bayesian-Based Iterative Method of Image Restoration," J. Opt. Soc. Am. 62, 55-59 (1972), and L. B. Lucy, "An Iterative Technique for the Rectification of Observed Distributions," Astron. J. 79, 745-765 (1974). Foreseen applications include superresolution and 3-D fluorescence microscopy. This paper presents further simulation testing of this algorithm and a preliminary experiment with a defocused camera. The simulations show quantified resolution improvement as a function of iteration number, and they show qualitatively the trend in limitations on restored resolution when noise is present in the data. Also shown are results of a simulation in restoring missing-cone information for 3-D imaging. The conclusions support the feasibility of using these methods with real systems, while computational cost and timing estimates indicate that it should be realistic to implement them. It is suggested in the Appendix that future extensions to the maximum likelihood based derivation of this algorithm will address some of the limitations experienced with the nonextended form of the algorithm presented here.
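The iteration referred to above is the Richardson-Lucy / maximum-likelihood update: the current estimate is multiplied by the back-projected ratio of the data to the current blurred estimate. A 1D sketch:

```python
import numpy as np
from scipy.signal import fftconvolve

# Richardson-Lucy iteration in 1D. psf must be nonnegative and sum to 1;
# eps guards against division by zero in dark regions.

def richardson_lucy(observed, psf, n_iter=50, eps=1e-12):
    estimate = np.full_like(observed, observed.mean())
    psf_mirror = psf[::-1]                      # adjoint of the blur
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(blurred, eps)
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate
```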
A new algorithm for attitude-independent magnetometer calibration
NASA Technical Reports Server (NTRS)
Alonso, Roberto; Shuster, Malcolm D.
1994-01-01
A new algorithm is developed for inflight magnetometer bias determination without knowledge of the attitude. This algorithm combines the fast convergence of a heuristic algorithm currently in use with the correct treatment of the statistics and without discarding data. The algorithm performance is examined using simulated data and compared with previous algorithms.
Empirical study of parallel LRU simulation algorithms
NASA Technical Reports Server (NTRS)
Carr, Eric; Nicol, David M.
1994-01-01
This paper reports on the performance of five parallel algorithms for simulating a fully associative cache operating under the LRU (Least-Recently-Used) replacement policy. Three of the algorithms are SIMD, and are implemented on the MasPar MP-2 architecture. The other two are parallelizations of an efficient serial algorithm on the Intel Paragon. One SIMD algorithm is quite simple, but its cost is linear in the cache size. The other two SIMD algorithms are more complex, but have costs that are independent of the cache size. Both the second and third SIMD algorithms compute all stack distances; the second is completely general, whereas the third presumes and takes advantage of bounds on the range of reference tags. Both MIMD algorithms implemented on the Paragon are general and compute all stack distances; they differ in one step that may affect their respective scalability. We assess the strengths and weaknesses of these algorithms as a function of problem size and characteristics, and compare their performance on traces derived from execution of three SPEC benchmark programs.
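The quantity all of these algorithms compute, the LRU stack distance, is easy to state serially: the distance of a reference is its depth in the LRU stack at the time of access, and a cache of size C hits exactly when the distance is at most C. A simple serial baseline:

```python
# Serial LRU stack-distance computation. The distance of a reference is its
# depth in the LRU stack (1 = most recently used); a first touch has
# infinite distance (a cold miss at any cache size).

def stack_distances(trace):
    stack, dists = [], []
    for addr in trace:
        if addr in stack:
            depth = len(stack) - stack.index(addr)
            stack.remove(addr)
        else:
            depth = float("inf")                 # cold miss
        stack.append(addr)                       # becomes most recently used
        dists.append(depth)
    return dists

# e.g. stack_distances("abcab") -> [inf, inf, inf, 3, 3]:
# with cache size 2, the reuses of 'a' and 'b' are both misses.
```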
A clustering method of Chinese medicine prescriptions based on modified firefly algorithm.
Yuan, Feng; Liu, Hong; Chen, Shou-Qiang; Xu, Liang
2016-12-01
This paper studies a clustering method for Chinese medicine (CM) medical cases. The traditional K-means clustering algorithm has shortcomings, such as the dependence of results on the selection of initial values and trapping in local optima, when processing prescriptions from CM medical cases. Therefore, a new clustering method based on the collaboration of the firefly algorithm and the simulated annealing algorithm is proposed. This algorithm dynamically determines the iterations of the firefly algorithm and the sampling of the simulated annealing algorithm according to fitness changes, and increases the diversity of the swarm by expanding the scope of the sudden jump, thereby effectively avoiding premature convergence. The results of confirmatory experiments on CM medical cases suggest that, compared with traditional K-means clustering algorithms, this method greatly improves individual diversity and the obtained clustering results, and the computed results have a certain reference value for cluster analysis of CM prescriptions.
TSaT-MUSIC: a novel algorithm for rapid and accurate ultrasonic 3D localization
NASA Astrophysics Data System (ADS)
Mizutani, Kyohei; Ito, Toshio; Sugimoto, Masanori; Hashizume, Hiromichi
2011-12-01
We describe a fast and accurate indoor localization technique using the multiple signal classification (MUSIC) algorithm. The MUSIC algorithm is known as a high-resolution method for estimating directions of arrival (DOAs) or propagation delays. A critical problem in using the MUSIC algorithm for localization is its computational complexity. Therefore, we devised a novel algorithm called Time Space additional Temporal-MUSIC, which can rapidly and simultaneously identify DOAs and delays of multicarrier ultrasonic waves from transmitters. Computer simulations have proved that the computation time of the proposed algorithm is almost constant in spite of increasing numbers of incoming waves and is faster than that of existing methods based on the MUSIC algorithm. The robustness of the proposed algorithm is discussed through simulations. Experiments in real environments showed that the standard deviation of position estimations in 3D space is less than 10 mm, which is satisfactory for indoor localization.
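The core of any MUSIC variant is an eigendecomposition of the sample covariance followed by a scan of the noise-subspace projection. A minimal narrowband sketch for a uniform linear array follows (the paper's ultrasonic, multicarrier formulation adds a temporal dimension to this basic step):

```python
import numpy as np

# Minimal MUSIC DOA sketch for a uniform linear array with element spacing
# d in wavelengths. Peaks of the returned pseudospectrum mark source DOAs.

def music_spectrum(snapshots, n_sources, d=0.5,
                   angles=np.linspace(-90, 90, 361)):
    """snapshots: (n_sensors, n_samples) complex array of sensor data."""
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]  # sample covariance
    eigval, eigvec = np.linalg.eigh(R)                 # ascending eigenvalues
    En = eigvec[:, : snapshots.shape[0] - n_sources]   # noise subspace
    n = np.arange(snapshots.shape[0])
    p = []
    for theta in np.deg2rad(angles):
        a = np.exp(2j * np.pi * d * n * np.sin(theta))  # steering vector
        p.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    return angles, np.array(p)
```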
Bravyi-Kitaev Superfast simulation of electronic structure on a quantum computer.
Setia, Kanav; Whitfield, James D
2018-04-28
Present quantum computers often work with distinguishable qubits as their computational units. In order to simulate indistinguishable fermionic particles, it is first required to map the fermionic state to the state of the qubits. The Bravyi-Kitaev Superfast (BKSF) algorithm can be used to accomplish this mapping. The BKSF mapping has connections to quantum error correction and opens the door to new ways of understanding fermionic simulation in a topological context. Here, we present the first detailed exposition of the BKSF algorithm for molecular simulation. We provide the BKSF-transformed qubit operators and report on our implementation of the BKSF fermion-to-qubit transform in OpenFermion. In this initial study of the hydrogen molecule, we compared the BKSF, Jordan-Wigner, and Bravyi-Kitaev transforms under the Trotter approximation. The gate count to implement BKSF is lower than Jordan-Wigner but higher than Bravyi-Kitaev. We considered different orderings of the exponentiated terms and found lower Trotter errors than those previously reported for the Jordan-Wigner and Bravyi-Kitaev algorithms. These results open the door to further study of the BKSF algorithm for quantum simulation.
NASA Astrophysics Data System (ADS)
Byun, Hye Suk; El-Naggar, Mohamed Y.; Kalia, Rajiv K.; Nakano, Aiichiro; Vashishta, Priya
2017-10-01
Kinetic Monte Carlo (KMC) simulations are used to study long-time dynamics of a wide variety of systems. Unfortunately, the conventional KMC algorithm is not scalable to larger systems, since its time scale is inversely proportional to the simulated system size. A promising approach to resolving this issue is the synchronous parallel KMC (SPKMC) algorithm, which makes the time scale size-independent. This paper introduces a formal derivation of the SPKMC algorithm based on local transition-state and time-dependent Hartree approximations, as well as its scalable parallel implementation based on a dual linked-list cell method. The resulting algorithm has achieved a weak-scaling parallel efficiency of 0.935 on 1024 Intel Xeon processors for simulating biological electron transfer dynamics in a 4.2 billion-heme system, as well as decent strong-scaling parallel efficiency. The parallel code has been used to simulate a lattice of cytochrome complexes on a bacterial-membrane nanowire, and it is broadly applicable to other problems such as computational synthesis of new materials.
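For contrast with SPKMC, the conventional serial KMC step whose scaling the abstract criticizes looks like this: one event is drawn with probability proportional to its rate, and time advances by an exponential increment set by the total rate, so the time step shrinks as the system (and total rate) grows.

```python
import numpy as np

rng = np.random.default_rng(0)

# One conventional (serial) KMC step over a catalog of event rates. Because
# dt ~ 1/sum(rates), the accessible time scale is inversely proportional to
# system size, which is the bottleneck SPKMC removes.

def kmc_step(rates):
    total = rates.sum()
    event = np.searchsorted(np.cumsum(rates), rng.random() * total)
    dt = -np.log(rng.random()) / total    # exponential waiting time
    return event, dt
```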
Numerical heating in Particle-In-Cell simulations with Monte Carlo binary collisions
NASA Astrophysics Data System (ADS)
Alves, E. Paulo; Mori, Warren; Fiuza, Frederico
2017-10-01
The binary Monte Carlo collision (BMCC) algorithm is a robust and popular method to include Coulomb collision effects in Particle-in-Cell (PIC) simulations of plasmas. While a number of works have focused on extending the validity of the model to different physical regimes of temperature and density, little attention has been given to the fundamental coupling between PIC and BMCC algorithms. Here, we show that the coupling between PIC and BMCC algorithms can give rise to (nonphysical) numerical heating of the system, that can be far greater than that observed when these algorithms operate independently. This deleterious numerical heating effect can significantly impact the evolution of the simulated system particularly for long simulation times. In this work, we describe the source of this numerical heating, and derive scaling laws for the numerical heating rates based on the numerical parameters of PIC-BMCC simulations. We compare our theoretical scalings with PIC-BMCC numerical experiments, and discuss strategies to minimize this parasitic effect. This work is supported by DOE FES under FWP 100237 and 100182.
40 CFR 426.60 - Applicability; description of the automotive glass tempering subcategory.
Code of Federal Regulations, 2010 CFR
2010-07-01
... automotive glass tempering subcategory. 426.60 Section 426.60 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS GLASS MANUFACTURING POINT SOURCE CATEGORY Automotive Glass Tempering Subcategory § 426.60 Applicability; description of the automotive glass tempering...
40 CFR 426.60 - Applicability; description of the automotive glass tempering subcategory.
Code of Federal Regulations, 2011 CFR
2011-07-01
... automotive glass tempering subcategory. 426.60 Section 426.60 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS GLASS MANUFACTURING POINT SOURCE CATEGORY Automotive Glass Tempering Subcategory § 426.60 Applicability; description of the automotive glass tempering...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Borowik, Piotr, E-mail: pborow@poczta.onet.pl; Thobel, Jean-Luc, E-mail: jean-luc.thobel@iemn.univ-lille1.fr; Adamowicz, Leszek, E-mail: adamo@if.pw.edu.pl
Standard computational methods used to take the Pauli exclusion principle into account in Monte Carlo (MC) simulations of electron transport in semiconductors may give unphysical results in the low-field regime, where the obtained electron distribution function takes values exceeding unity. Modified algorithms have already been proposed that correctly account for electron scattering on phonons or impurities. The present paper extends this approach and proposes an improved simulation scheme that includes the Pauli exclusion principle for electron–electron (e–e) scattering in MC simulations. Simulations with significantly reduced computational cost recreate correct values of the electron distribution function. The proposed algorithm is applied to study the transport properties of degenerate electrons in graphene with e–e interactions. This required adapting the treatment of e–e scattering to the case of a linear band dispersion relation. Hence, this part of the simulation algorithm is described in detail.
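The standard way the Pauli exclusion principle enters such ensemble Monte Carlo schemes is a final-state rejection: a tentative scattering to state k' is accepted with probability 1 - f(k'). A sketch follows; the occupation() lookup is a stand-in for the distribution function estimated on a k-space grid during the simulation, and the paper's improved e-e scheme is more involved than this.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pauli-blocking rejection: accept a tentative scattering event to final
# state k_final with probability equal to the availability 1 - f(k_final).

def pauli_blocked_scatter(k_final, occupation):
    f_final = np.clip(occupation(k_final), 0.0, 1.0)
    return rng.random() < (1.0 - f_final)   # True: perform the scattering
```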
1991-06-01
algorithms (for the analysis of mechanisms), traditional numerical simulation methods, and algorithms that examine the simulation results and reinterpret them in qualitative terms. Moreover, the Workbench can use symbolic procedures to help guide or simplify the task...
A Fast Synthetic Aperture Radar Raw Data Simulation Using Cloud Computing.
Li, Zhixin; Su, Dandan; Zhu, Haijiang; Li, Wei; Zhang, Fan; Li, Ruirui
2017-01-08
Synthetic Aperture Radar (SAR) raw data simulation is a fundamental problem in radar system design and imaging algorithm research. The growth of surveying swath and resolution results in a significant increase in data volume and simulation period, which can be considered a comprehensively data-intensive and computing-intensive issue. Although several high performance computing (HPC) methods have demonstrated their potential for accelerating simulation, the input/output (I/O) bottleneck of huge raw data has not been eased. In this paper, we propose a cloud computing based SAR raw data simulation algorithm, which employs the MapReduce model to accelerate the raw data computing and the Hadoop distributed file system (HDFS) for fast I/O access. The MapReduce model is designed for the irregular parallel accumulation of raw data simulation, which greatly reduces the parallel efficiency of graphics processing unit (GPU) based simulation methods. In addition, three kinds of optimization strategies are put forward from the aspects of programming model, HDFS configuration, and scheduling. The experimental results show that the cloud computing based algorithm achieves a 4× speedup over the baseline serial approach in an 8-node cloud environment, and each optimization strategy improves performance by about 20%. This work proves that the proposed cloud algorithm is capable of solving the computing-intensive and data-intensive issues in SAR raw data simulation, and is easily extended to large-scale computing to achieve higher acceleration.
Weakly supervised classification in high energy physics
Dery, Lucio Mwinmaarong; Nachman, Benjamin; Rubbo, Francesco; ...
2017-05-01
As machine learning algorithms become increasingly sophisticated to exploit subtle features of the data, they often become more dependent on simulations. Here, this paper presents a new approach called weakly supervised classification in which class proportions are the only input into the machine learning algorithm. Using one of the most challenging binary classification tasks in high energy physics, quark versus gluon tagging, we show that weakly supervised classification can match the performance of fully supervised algorithms. Furthermore, by design, the new algorithm is insensitive to any mis-modeling of discriminating features in the data by the simulation. Weakly supervised classification is a general procedure that can be applied to a wide variety of learning problems to boost performance and robustness when detailed simulations are not reliable or not available.
NASA Astrophysics Data System (ADS)
Zhang, Ling; Cai, Yunlong; Li, Chunguang; de Lamare, Rodrigo C.
2017-12-01
In this work, we present low-complexity variable forgetting factor (VFF) techniques for diffusion recursive least squares (DRLS) algorithms. In particular, we propose low-complexity VFF-DRLS algorithms for distributed parameter and spectrum estimation in sensor networks. The proposed algorithms adjust the forgetting factor automatically according to the a posteriori error signal. We develop detailed analyses of the mean and mean-square performance of the proposed algorithms and derive mathematical expressions for the mean square deviation (MSD) and the excess mean square error (EMSE). The simulation results show that the proposed low-complexity VFF-DRLS algorithms achieve superior performance to the existing DRLS algorithm with fixed forgetting factor when applied to scenarios of distributed parameter and spectrum estimation. Besides, the simulation results also demonstrate a good match with our derived analytical expressions.
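A single-node sketch of an RLS update with a variable forgetting factor driven by the a posteriori error conveys the idea; the diffusion/combination step across neighboring sensor nodes is omitted, and the adaptation rule and bounds below are illustrative assumptions, not the paper's VFF rule.

```python
import numpy as np

# RLS update with a variable forgetting factor lam. A small a posteriori
# error lets lam drift toward lam_max (long memory); a large error pushes
# it down toward lam_min (fast tracking).

def vff_rls_update(w, P, x, d, lam, lam_min=0.95, lam_max=0.9999, mu=0.1):
    """w: weights; P: inverse correlation matrix; x: regressor; d: desired."""
    k = (P @ x) / (lam + x @ P @ x)          # gain vector
    e_prior = d - w @ x                      # a priori error
    w = w + k * e_prior
    P = (P - np.outer(k, x @ P)) / lam
    e_post = d - w @ x                       # a posteriori error
    lam = np.clip(lam_max - mu * e_post ** 2, lam_min, lam_max)
    return w, P, lam
```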
Motion Cueing Algorithm Development: Initial Investigation and Redesign of the Algorithms
NASA Technical Reports Server (NTRS)
Telban, Robert J.; Wu, Weimin; Cardullo, Frank M.; Houck, Jacob A. (Technical Monitor)
2000-01-01
In this project, four motion cueing algorithms were initially investigated. The classical algorithm generated results with large distortion and delay and low magnitude. The NASA adaptive algorithm proved to be well tuned with satisfactory performance, while the UTIAS adaptive algorithm produced less desirable results. Modifications were made to the adaptive algorithms to reduce the magnitude of undesirable spikes. The optimal algorithm was found to have the potential for improved performance with further redesign. The center of simulator rotation was redefined. More terms were added to the cost function to enable more tuning flexibility. A new design approach using a Fortran/Matlab/Simulink setup was employed. A new semicircular canals model was incorporated in the algorithm. With these changes, results show the optimal algorithm has some advantages over the NASA adaptive algorithm. Two general problems observed in the initial investigation required solutions. A nonlinear gain algorithm was developed that scales the aircraft inputs by a third-order polynomial, maximizing the motion cues while remaining within the operational limits of the motion system. A braking algorithm was developed to bring the simulator to a full stop at its motion limit and later release the brake to follow the cueing algorithm output.
NASA Astrophysics Data System (ADS)
Rizvi, Syed S.; Shah, Dipali; Riasat, Aasia
The Time Warp algorithm [3] offers a run-time recovery mechanism that deals with causality errors. This run-time recovery mechanism consists of rollback, anti-message, and Global Virtual Time (GVT) techniques. For rollback, there is a need to compute GVT, which is used in discrete-event simulation to reclaim memory, commit output, detect termination, and handle errors. However, the computation of GVT requires dealing with the transient message problem and the simultaneous reporting problem. These problems can be dealt with in an efficient manner by Samadi's algorithm [8], which works well in the presence of causality errors. However, the performance of both the Time Warp and Samadi's algorithms depends on the latency involved in GVT computation. Both algorithms give poor latency for large simulation systems, especially in the presence of causality errors. To improve the latency and reduce processor idle time, we implement tree and butterfly barriers with the optimistic algorithm. Our analysis shows that the use of synchronous barriers such as tree and butterfly with the optimistic algorithm not only minimizes the GVT latency but also minimizes the processor idle time.
Updates to Multi-Dimensional Flux Reconstruction for Hypersonic Simulations on Tetrahedral Grids
NASA Technical Reports Server (NTRS)
Gnoffo, Peter A.
2010-01-01
The quality of simulated hypersonic stagnation region heating with tetrahedral meshes is investigated by using an updated three-dimensional, upwind reconstruction algorithm for the inviscid flux vector. An earlier implementation of this algorithm provided improved symmetry characteristics on tetrahedral grids compared to conventional reconstruction methods. The original formulation however displayed quantitative differences in heating and shear that were as large as 25% compared to a benchmark, structured-grid solution. The primary cause of this discrepancy is found to be an inherent inconsistency in the formulation of the flux limiter. The inconsistency is removed by employing a Green-Gauss formulation of primitive gradients at nodes to replace the previous Gram-Schmidt algorithm. Current results are now in good agreement with benchmark solutions for two challenge problems: (1) hypersonic flow over a three-dimensional cylindrical section with special attention to the uniformity of the solution in the spanwise direction and (2) hypersonic flow over a three-dimensional sphere. The tetrahedral cells used in the simulation are derived from a structured grid where cell faces are bisected across the diagonal resulting in a consistent pattern of diagonals running in a biased direction across the otherwise symmetric domain. This grid is known to accentuate problems in both shock capturing and stagnation region heating encountered with conventional, quasi-one-dimensional inviscid flux reconstruction algorithms. Therefore the test problems provide a sensitive indicator for algorithmic effects on heating. Additional simulations on a sharp, double cone and the shuttle orbiter are then presented to demonstrate the capabilities of the new algorithm on more geometrically complex flows with tetrahedral grids. These results provide the first indication that pure tetrahedral elements utilizing the updated, three-dimensional, upwind reconstruction algorithm may be used for the simulation of heating and shear in hypersonic flows in upwind, finite volume formulations.
Data Processing Algorithm for Diagnostics of Combustion Using Diode Laser Absorption Spectrometry.
Mironenko, Vladimir R; Kuritsyn, Yuril A; Liger, Vladimir V; Bolshov, Mikhail A
2018-02-01
A new algorithm is proposed for evaluating the integral line intensity, and hence inferring the correct temperature of a hot zone, in combustion diagnostics by absorption spectroscopy with diode lasers. The algorithm is based not on fitting the baseline (BL) but on expanding the experimental and simulated spectra in a series of orthogonal polynomials, subtracting the first three components of the expansion from both the experimental and simulated spectra, and fitting the spectra thus modified. The algorithm is tested in a numerical experiment by simulating the absorption spectra using a spectroscopic database and adding white noise and a parabolic BL. The spectra so constructed are treated as experimental in further calculations. The theoretical absorption spectra were simulated with parameters (temperature, total pressure, concentration of water vapor) close to those used for simulating the experimental data. Then, both spectra were expanded in the series of orthogonal polynomials and the first components were subtracted from both. The correct integral line intensities, and hence the correct temperature evaluation, were obtained by fitting the thus modified experimental and simulated spectra. The dependence of the mean and standard deviation of the evaluated integral line intensity on the linewidth and on the number of subtracted components (first two or three) was examined. The proposed algorithm provides a correct estimation of temperature with a standard deviation better than 60 K (for T = 1000 K) for line half-widths up to 0.6 cm⁻¹. The proposed algorithm allows the parameters of a hot zone to be obtained without fitting the usually unknown BL.
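For intuition, a minimal numpy sketch of the polynomial-subtraction step follows. The Gaussian line shape, grid, and noise level are hypothetical stand-ins (the paper works with real spectroscopic line profiles); only the idea of projecting out the first three orthogonal components from both spectra and then fitting a scale factor is reproduced. Because the same components are subtracted from both spectra, the unknown baseline largely cancels out of the fit.

```python
import numpy as np

def strip_low_order(x, y, n_comp=3):
    # Remove the first n_comp Legendre components (a slowly varying
    # baseline proxy) from spectrum y sampled on grid x.
    t = 2 * (x - x.min()) / (x.max() - x.min()) - 1   # map grid to [-1, 1]
    coeffs = np.polynomial.legendre.legfit(t, y, n_comp - 1)
    return y - np.polynomial.legendre.legval(t, coeffs)

rng = np.random.default_rng(0)
x = np.linspace(7185.0, 7187.0, 400)                  # wavenumber grid, cm^-1
line = 0.05 * np.exp(-((x - 7186.0) / 0.2) ** 2)      # stand-in line profile
baseline = 1e-3 * (x - 7186.0) ** 2 + 0.01            # parabolic BL
y_exp = line + baseline + rng.normal(0.0, 1e-3, x.size)
y_sim = np.exp(-((x - 7186.0) / 0.2) ** 2)            # unit-intensity model

e = strip_low_order(x, y_exp)
s = strip_low_order(x, y_sim)
intensity = np.dot(s, e) / np.dot(s, s)               # least-squares scale
```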
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shang, Yu; Lin, Yu; Yu, Guoqiang, E-mail: guoqiang.yu@uky.edu
2014-05-12
The conventional semi-infinite solution for extracting the blood flow index (BFI) from diffuse correlation spectroscopy (DCS) measurements may cause errors in the estimation of BFI (αD_B) in tissues with small volume and large curvature. We propose an algorithm integrating an Nth-order linear model of the autocorrelation function with Monte Carlo simulation of photon migration in tissue for the extraction of αD_B. The volume and geometry of the measured tissue were incorporated in the Monte Carlo simulation, which overcomes the semi-infinite restrictions. The algorithm was tested using computer simulations on four tissue models with varied volumes/geometries and applied to an in vivo stroke model in mice. Computer simulations show that the high-order (N ≥ 5) linear algorithm was more accurate in extracting αD_B (errors < ±2%) from the noise-free DCS data than the semi-infinite solution (errors: −5.3% to −18.0%) for different tissue models. Although adding random noise to the DCS data resulted in αD_B variations, the mean errors in extracting αD_B were similar to those reconstructed from the noise-free DCS data. In addition, the errors in extracting the relative changes of αD_B using both the linear algorithm and the semi-infinite solution were fairly small (errors < ±2.0%) and did not depend on the tissue volume/geometry. The experimental results from the in vivo stroke mice agreed with those in simulations, demonstrating the robustness of the linear algorithm. DCS with the high-order linear algorithm shows potential for inter-subject comparison and longitudinal monitoring of absolute BFI in a variety of tissues/organs with different volumes/geometries.
An adaptive multi-level simulation algorithm for stochastic biological systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lester, C., E-mail: lesterc@maths.ox.ac.uk; Giles, M. B.; Baker, R. E.
2015-01-14
Discrete-state, continuous-time Markov models are widely used in the modeling of biochemical reaction networks. Their complexity often precludes analytic solution, and we rely on stochastic simulation algorithms (SSA) to estimate system statistics. The Gillespie algorithm is exact, but computationally costly as it simulates every single reaction. As such, approximate stochastic simulation algorithms such as the tau-leap algorithm are often used. Potentially computationally more efficient, the system statistics generated suffer from significant bias unless tau is relatively small, in which case the computational time can be comparable to that of the Gillespie algorithm. The multi-level method [Anderson and Higham, “Multi-level Monte Carlo for continuous time Markov chains, with applications in biochemical kinetics,” SIAM Multiscale Model. Simul. 10(1), 146–179 (2012)] tackles this problem. A base estimator is computed using many (cheap) sample paths at low accuracy. The bias inherent in this estimator is then reduced using a number of corrections. Each correction term is estimated using a collection of paired sample paths where one path of each pair is generated at a higher accuracy compared to the other (and so more expensive). By sharing random variables between these paired paths, the variance of each correction estimator can be reduced. This renders the multi-level method very efficient as only a relatively small number of paired paths are required to calculate each correction term. In the original multi-level method, each sample path is simulated using the tau-leap algorithm with a fixed value of τ. This approach can result in poor performance when the reaction activity of a system changes substantially over the timescale of interest. By introducing a novel adaptive time-stepping approach where τ is chosen according to the stochastic behaviour of each sample path, we extend the applicability of the multi-level method to such cases. We demonstrate the efficiency of our method using a number of examples.
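The coupling trick at the heart of the multi-level method can be sketched for a single decay reaction X -> 0 with propensity a(X) = c*X. The parameters below are arbitrary, and the paper's adaptive time-stepping is not reproduced; this is only the fixed-τ coupled-pair construction of Anderson and Higham.

```python
import numpy as np

rng = np.random.default_rng(1)

def coupled_pair(x0, c, T, tau_c):
    # One coupled (coarse, fine) tau-leap pair for X -> 0 with a(X) = c*X.
    # The two paths share a common Poisson stream, so their difference has
    # small variance, which is what makes the correction terms cheap.
    tau_f = tau_c / 2.0
    xc, xf = x0, x0
    t = 0.0
    while t < T:
        ac = c * xc                      # coarse propensity, frozen for tau_c
        for _ in range(2):               # two fine substeps per coarse step
            af = c * xf
            m = min(ac, af)              # shared part of both propensities
            n_common = rng.poisson(m * tau_f)
            xf = max(xf - n_common - rng.poisson((af - m) * tau_f), 0)
            xc = max(xc - n_common - rng.poisson((ac - m) * tau_f), 0)
        t += tau_c
    return xc, xf

# Two-level estimate of E[X(T)]: many cheap coarse paths + few corrections.
coarse = [coupled_pair(100, 0.5, 1.0, 0.1)[0] for _ in range(4000)]
pairs = [coupled_pair(100, 0.5, 1.0, 0.1) for _ in range(500)]
estimate = np.mean(coarse) + np.mean([xf - xc for xc, xf in pairs])
```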
Model reference adaptive control of robots
NASA Technical Reports Server (NTRS)
Steinvorth, Rodrigo
1991-01-01
This project presents the results of controlling two types of robots using new Command Generator Tracker (CGT) based Direct Model Reference Adaptive Control (MRAC) algorithms. Two mathematical models were used to represent a single-link, flexible joint arm and a Unimation PUMA 560 arm; these were then controlled in simulation using different MRAC algorithms. Special attention was given to the performance of the algorithms in the presence of sudden changes in the robot load. Previously used CGT based MRAC algorithms had several problems. The original algorithm that was developed guaranteed asymptotic stability only for almost strictly positive real (ASPR) plants. This condition is very restrictive, since most systems do not satisfy this assumption. Further developments to the algorithm led to an expansion of the number of plants that could be controlled; however, a steady state error was introduced in the response. These problems led to the introduction of some modifications to the algorithms so that they would be able to control a wider class of plants and at the same time would asymptotically track the reference model. This project presents the development of two algorithms that achieve the desired results and simulates the control of the two robots mentioned before. The results of the simulations are satisfactory and show that the problems stated above have been corrected in the new algorithms. In addition, the responses obtained show that the adaptively controlled processes are resistant to sudden changes in the load.
An Efficient Next Hop Selection Algorithm for Multi-Hop Body Area Networks
Ayatollahitafti, Vahid; Ngadi, Md Asri; Mohamad Sharif, Johan bin; Abdullahi, Mohammed
2016-01-01
Body Area Networks (BANs) consist of various sensors which gather patients' vital signs and deliver them to doctors. One of the most significant challenges faced is the design of an energy-efficient next hop selection algorithm that satisfies Quality of Service (QoS) requirements for different healthcare applications. In this paper, a novel efficient next hop selection algorithm is proposed for multi-hop BANs. This algorithm uses the minimum hop count and a link cost function jointly in each node to choose the best next hop node. The link cost function includes the residual energy, free buffer size, and link reliability of the neighboring nodes, and is used to balance the energy consumption and to satisfy QoS requirements in terms of end-to-end delay and reliability. Extensive simulation experiments were performed to evaluate the efficiency of the proposed algorithm using the NS-2 simulator. Simulation results show that our proposed algorithm provides significant improvement in terms of energy consumption, number of packets forwarded, end-to-end delay, and packet delivery ratio compared to the existing routing protocol. PMID:26771586
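A toy sketch of the selection rule described above, with hypothetical field names and weights (the paper's exact cost function is not reproduced): among neighbors with the minimum hop count, the node with the cheapest link cost, combining residual energy, free buffer, and link reliability, is chosen.

```python
def link_cost(n, w_e=0.4, w_b=0.3, w_r=0.3):
    # Lower is better: penalize low residual energy, small free buffer,
    # and unreliable links. Weights are hypothetical.
    return (w_e / max(n["energy"], 1e-9)
            + w_b / max(n["buffer"], 1e-9)
            + w_r / max(n["reliability"], 1e-9))

def next_hop(neighbors):
    # Restrict to the neighbors with the minimum hop count to the sink,
    # then pick the one with the cheapest link cost.
    h = min(n["hops"] for n in neighbors)
    return min((n for n in neighbors if n["hops"] == h), key=link_cost)

candidates = [
    {"id": "A", "hops": 2, "energy": 0.8, "buffer": 0.5, "reliability": 0.9},
    {"id": "B", "hops": 2, "energy": 0.4, "buffer": 0.9, "reliability": 0.7},
    {"id": "C", "hops": 3, "energy": 0.9, "buffer": 0.9, "reliability": 0.9},
]
print(next_hop(candidates)["id"])   # -> "A"
```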
Stress-state effects on the stress-induced martensitic transformation of carburized 4320 steels
NASA Astrophysics Data System (ADS)
Karaman, I.; Balzer, M.; Sehitoglu, Huseyin; Maier, H. J.
1998-02-01
The effect of different stress states on the stress-induced martensitic transformation of retained austenite was investigated in carburized 4320 steels with an initial retained austenite content of 15 pct. Experiments were conducted utilizing a specialized pressure rig and comparison between stress-strain behaviors of specimens with different austenitization and tempering histories was performed under these stress states. Experimental results indicated considerable asymmetry between tension and compression, with triaxial stress states resulting in the highest strength levels for the untempered material. Fine carbide precipitates due to low-temperature tempering increased the strength and ductility of the specimens and also changed the austenite-to-martensite transformation behavior. Numerical simulations of stress-strain behaviors under different stress states were obtained, with an existing micromechanical self-consistent framework utilizing the crystallographic theory of austenite/martensite transformation and the minimum complementary free-energy principle. The model was modified for carburized steels upon microstructural investigation and predicted the same trends in effective stress-effective strain behavior as observed experimentally.
Free Energy Landscape of GAGA and UUCG RNA Tetraloops.
Bottaro, Sandro; Banáš, Pavel; Šponer, Jiří; Bussi, Giovanni
2016-10-20
We report the folding thermodynamics of ccUUCGgg and ccGAGAgg RNA tetraloops using atomistic molecular dynamics simulations. We obtain a previously unreported estimation of the folding free energy using parallel tempering in combination with well-tempered metadynamics. A key ingredient is the use of a recently developed metric distance, eRMSD, as a biased collective variable. We find that the native fold of both tetraloops is not the global free energy minimum using the Amber χOL3 force field. The estimated folding free energies are 30.2 ± 0.5 kJ/mol for UUCG and 7.5 ± 0.6 kJ/mol for GAGA, in striking disagreement with experimental data. We evaluate the viability of all possible one-dimensional backbone force field corrections. We find that disfavoring the gauche+ region of the α and ζ angles consistently improves the existing force field. The level of accuracy achieved with these corrections, however, cannot be considered sufficient judging on the basis of available thermodynamic data and solution experiments.
Climate-driven extinctions shape the phylogenetic structure of temperate tree floras.
Eiserhardt, Wolf L; Borchsenius, Finn; Plum, Christoffer M; Ordonez, Alejandro; Svenning, Jens-Christian
2015-03-01
When taxa go extinct, unique evolutionary history is lost. If extinction is selective, and the intrinsic vulnerabilities of taxa show phylogenetic signal, more evolutionary history may be lost than expected under random extinction. Under what conditions this occurs is insufficiently known. We show that late Cenozoic climate change induced phylogenetically selective regional extinction of northern temperate trees because of phylogenetic signal in cold tolerance, leading to significantly and substantially larger than random losses of phylogenetic diversity (PD). The surviving floras in regions that experienced stronger extinction are phylogenetically more clustered, indicating that non-random losses of PD are of increasing concern with increasing extinction severity. Using simulations, we show that a simple threshold model of survival given a physiological trait with phylogenetic signal reproduces our findings. Our results send a strong warning that we may expect future assemblages to be phylogenetically and possibly functionally depauperate if anthropogenic climate change affects taxa similarly. © 2015 John Wiley & Sons Ltd/CNRS.
High Load Ratio Fatigue Strength and Mean Stress Evolution of Quenched and Tempered 42CrMo4 Steel
NASA Astrophysics Data System (ADS)
Bertini, Leonardo; Le Bone, Luca; Santus, Ciro; Chiesi, Francesco; Tognarelli, Leonardo
2017-08-01
The fatigue strength at a high number of cycles with initial elastic-plastic behavior was experimentally investigated on quenched and tempered 42CrMo4 steel. Fatigue tests on unnotched specimens were performed both under load and strain controls, by imposing various levels of amplitude and with several high load ratios. Different ratcheting and relaxation trends, with significant effects on fatigue, are observed and discussed, and then reported in the Haigh diagram, highlighting a clear correlation with the Smith-Watson-Topper model. High load ratio tests were also conducted on notched specimens with C (blunt) and V (sharp) geometries. A Chaboche model with three parameter couples was proposed by fitting plain specimen cyclic and relaxation tests, and then finite element analyses were performed to simulate the notched specimen test results. A significant stress relaxation at the notch root became clearly evident by reporting the numerical results in the Haigh diagram, thus explaining the low mean stress sensitivity of the notched specimens.
Carbon Pools in a Temperate Heathland Resist Changes in a Future Climate
NASA Astrophysics Data System (ADS)
Ambus, P.; Reinsch, S.; Nielsen, P. L.; Michelsen, A.; Schmidt, I. K.; Mikkelsen, T. N.
2014-12-01
The fate of recently plant assimilated carbon was followed into ecosystem carbon pools and fluxes in a temperate heathland after a 13CO2 pulse in the early growing season in a 6-year long multi-factorial climate change experiment. Eight days after the pulse, recently assimilated carbon was significantly higher in storage organs (rhizomes) of the grass Deschampsia flexuosa under elevated atmospheric CO2 concentration. Experimental drought induced a pronounced utilization of recently assimilated carbon belowground (roots, microbes, dissolved organic carbon) potentially counterbalancing limited nutrient availability. The fate of recently assimilated carbon was not affected by moderate warming. The full factorial combination of elevated CO2, warming and drought simulating future climatic conditions as expected for Denmark in 2075 did not change short-term carbon turnover significantly compared to ambient conditions. Overall, climate factors interacted in an unexpected way resulting in strong resilience of the heathland in terms of short-term carbon turnover in a future climate.
Forward-looking Assimilation of MODIS-derived Snow Covered Area into a Land Surface Model
NASA Technical Reports Server (NTRS)
Zaitchik, Benjamin F.; Rodell, Matthew
2008-01-01
Snow cover over land has a significant impact on the surface radiation budget, turbulent energy fluxes to the atmosphere, and local hydrological fluxes. For this reason, inaccuracies in the representation of snow covered area (SCA) within a land surface model (LSM) can lead to substantial errors in both offline and coupled simulations. Data assimilation algorithms have the potential to address this problem. However, the assimilation of SCA observations is complicated by an information deficit in the observation (SCA indicates only the presence or absence of snow, not snow volume) and by the fact that assimilated SCA observations can introduce inconsistencies with atmospheric forcing data, leading to non-physical artifacts in the local water balance. In this paper we present a novel assimilation algorithm that introduces MODIS SCA observations to the Noah LSM in global, uncoupled simulations. The algorithm utilizes observations from up to 72 hours ahead of the model simulation in order to correct against emerging errors in the simulation of snow cover while preserving the local hydrologic balance. This is accomplished by using future snow observations to adjust air temperature and, when necessary, precipitation within the LSM. In global, offline integrations, this new assimilation algorithm provided improved simulation of SCA and snow water equivalent relative to open loop integrations and integrations that used an earlier SCA assimilation algorithm. These improvements, in turn, influenced the simulation of surface water and energy fluxes both during the snow season and, in some regions, on into the following spring.
Merritt, M.L.
1993-01-01
The simulation of the transport of injected freshwater in a thin brackish aquifer, overlain and underlain by confining layers containing more saline water, is shown to be influenced by the choice of the finite-difference approximation method, the algorithm for representing vertical advective and dispersive fluxes, and the values assigned to parametric coefficients that specify the degree of vertical dispersion and molecular diffusion that occurs. Computed potable water recovery efficiencies will differ depending upon the choice of algorithm and approximation method, as will dispersion coefficients estimated based on the calibration of simulations to match measured data. A comparison of centered and backward finite-difference approximation methods shows that substantially different transition zones between injected and native waters are depicted by the different methods, and computed recovery efficiencies vary greatly. Standard and experimental algorithms and a variety of values for molecular diffusivity, transverse dispersivity, and vertical scaling factor were compared in simulations of freshwater storage in a thin brackish aquifer. Computed recovery efficiencies vary considerably, and appreciable differences are observed in the distribution of injected freshwater in the various cases tested. The results demonstrate both a qualitatively different description of transport using the experimental algorithms and the interrelated influences of molecular diffusion and transverse dispersion on simulated recovery efficiency. When simulating natural aquifer flow in cross-section, flushing of the aquifer occurred for all tested coefficient choices using both standard and experimental algorithms. © 1993.
Angular Superresolution for a Scanning Antenna with Simulated Complex Scatterer-Type Targets
2002-05-01
Approved for public release; distribution unlimited. The Scan-MUSIC (MUltiple SIgnal Classification), or SMUSIC, algorithm was developed by the Millimeter... with the use of a single rotatable sensor scanning in an angular region of interest. This algorithm has been adapted and extended from the MUSIC... simulation.
An EEG blind source separation algorithm based on a weak exclusion principle.
Lan Ma; Blu, Thierry; Wang, William S-Y
2016-08-01
The question of how to separate individual brain and non-brain signals, mixed by volume conduction in electroencephalographic (EEG) and other electrophysiological recordings, is a significant problem in contemporary neuroscience. This study proposes and evaluates a novel EEG Blind Source Separation (BSS) algorithm based on a weak exclusion principle (WEP). The chief point in which it differs from most previous EEG BSS algorithms is that the proposed algorithm is not based upon the hypothesis that the sources are statistically independent. Our first step was to investigate algorithm performance on simulated signals which have ground truth. The purpose of this simulation is to illustrate the proposed algorithm's efficacy. The results show that the proposed algorithm has good separation performance. Then, we used the proposed algorithm to separate real EEG signals from a memory study using a revised version of the Sternberg task. The results show that the proposed algorithm can effectively separate the non-brain and brain sources.
Airport Flight Departure Delay Model on Improved BN Structure Learning
NASA Astrophysics Data System (ADS)
Cao, Weidong; Fang, Xiangnong
A high-score-prior genetic simulated annealing Bayesian network structure learning algorithm (HSPGSA), combining a genetic algorithm (GA) with a simulated annealing algorithm (SAA), is developed. The new algorithm provides not only the strong global search capability of GA but also the strong local hill-climbing capability of SAA. The structure with the highest score is preferentially selected, while structures with lower scores can also be chosen; this efficiently avoids the premature convergence that arises when a high-scoring individual drives population growth in the wrong direction. The algorithm is applied to flight departure delay analysis at a large hub airport. A BN model is created based on the flight data. Experiments show that parameter learning can reflect departure delays.
Data decomposition of Monte Carlo particle transport simulations via tally servers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romano, Paul K.; Siegel, Andrew R.; Forget, Benoit
An algorithm for decomposing large tally data in Monte Carlo particle transport simulations is developed, analyzed, and implemented in a continuous-energy Monte Carlo code, OpenMC. The algorithm is based on a non-overlapping decomposition of compute nodes into tracking processors and tally servers. The former are used to simulate the movement of particles through the domain while the latter continuously receive and update tally data. A performance model for this approach is developed, suggesting that, for a range of parameters relevant to LWR analysis, the tally server algorithm should perform with minimal overhead on contemporary supercomputers. An implementation of the algorithm in OpenMC is then tested on the Intrepid and Titan supercomputers, supporting the key predictions of the model over a wide range of parameters. We thus conclude that the tally server algorithm is a successful approach to circumventing classical on-node memory constraints en route to unprecedentedly detailed Monte Carlo reactor simulations.
Efficiency in nonequilibrium molecular dynamics Monte Carlo simulations
Radak, Brian K.; Roux, Benoît
2016-10-07
Hybrid algorithms combining nonequilibrium molecular dynamics and Monte Carlo (neMD/MC) offer a powerful avenue for improving the sampling efficiency of computer simulations of complex systems. These neMD/MC algorithms are also increasingly finding use in applications where conventional approaches are impractical, such as constant-pH simulations with explicit solvent. However, selecting an optimal nonequilibrium protocol for maximum efficiency often represents a non-trivial challenge. This work evaluates the efficiency of a broad class of neMD/MC algorithms and protocols within the theoretical framework of linear response theory. The approximations are validated against constant-pH MD simulations and shown to provide accurate predictions of neMD/MC performance. An assessment of a large set of protocols confirms (both theoretically and empirically) that a linear work protocol gives the best neMD/MC performance. Lastly, a well-defined criterion for optimizing the time parameters of the protocol is proposed and demonstrated with an adaptive algorithm that improves the performance on-the-fly with minimal cost.
Monte Carlo Simulations of Radiative and Neutrino Transport under Astrophysical Conditions
NASA Astrophysics Data System (ADS)
Krivosheyev, Yu. M.; Bisnovatyi-Kogan, G. S.
2018-05-01
Monte Carlo simulations are utilized to model radiative and neutrino transfer in astrophysics. An algorithm that can be used to study radiative transport in astrophysical plasma based on simulations of photon trajectories in a medium is described. Formation of the hard X-ray spectrum of the Galactic microquasar SS 433 is considered in detail as an example. Specific requirements for applying such simulations to neutrino transport in a dense medium and algorithmic differences compared to its application to photon transport are discussed.
HEURISTIC OPTIMIZATION AND ALGORITHM TUNING APPLIED TO SORPTIVE BARRIER DESIGN
While heuristic optimization is applied in environmental applications, ad-hoc algorithm configuration is typical. We use a multi-layer sorptive barrier design problem as a benchmark for an algorithm-tuning procedure, as applied to three heuristics (genetic algorithms, simulated ...
Stochastic reaction-diffusion algorithms for macromolecular crowding
NASA Astrophysics Data System (ADS)
Sturrock, Marc
2016-06-01
Compartment-based (lattice-based) reaction-diffusion algorithms are often used for studying complex stochastic spatio-temporal processes inside cells. In this paper the influence of macromolecular crowding on stochastic reaction-diffusion simulations is investigated. Reaction-diffusion processes are considered on two different kinds of compartmental lattice, a cubic lattice and a hexagonal close packed lattice, and solved using two different algorithms, the stochastic simulation algorithm and the spatiocyte algorithm (Arjunan and Tomita 2010 Syst. Synth. Biol. 4, 35-53). Obstacles (modelling macromolecular crowding) are shown to have substantial effects on the mean squared displacement and average number of molecules in the domain but the nature of these effects is dependent on the choice of lattice, with the cubic lattice being more susceptible to the effects of the obstacles. Finally, improvements for both algorithms are presented.
Improved simulation of regional CO2 surface concentrations using GEOS-Chem and fluxes from VEGAS
NASA Astrophysics Data System (ADS)
Chen, Z. H.; Zhu, J.; Zeng, N.
2013-08-01
CO2 measurements have been combined with simulated CO2 distributions from a transport model in order to produce optimal estimates of CO2 surface fluxes in inverse modeling. However, one persistent problem in using model-observation comparisons for this goal relates to the issue of compatibility. Observations at a single station reflect all underlying processes of various scales. These processes usually cannot be fully resolved by model simulations at the grid points nearest the station due to a lack of spatial or temporal resolution or missing processes in the model. In this study the stations in one region were grouped based on the amplitude and phase of the seasonal cycle at each station. The regionally averaged CO2 at all stations in one region represents the regional CO2 concentration of this region. The regional CO2 concentrations from model simulations and observations were used to evaluate the regional model results. The difference in regional CO2 concentration between observations and modeled results reflects the uncertainty of the large-scale flux in the region where the grouped stations are. We compared the regional CO2 concentrations between model results with biospheric fluxes from the Carnegie-Ames-Stanford Approach (CASA) and VEgetation-Global-Atmosphere-Soil (VEGAS) models, and used observations from GLOBALVIEW-CO2 to evaluate the regional model results. The results show that the largest difference of the regionally averaged values between simulations with fluxes from VEGAS and observations is less than 5 ppm for the North American boreal, North American temperate, Eurasian boreal, Eurasian temperate, and European regions, which is smaller than the largest difference between CASA simulations and observations (more than 5 ppm). There is still a large difference between the two models' results and observations for the regional CO2 concentration in the North Atlantic, Indian Ocean, and South Pacific tropics. The regionally averaged CO2 concentrations will be helpful for comparing CO2 concentrations from modeled results and observations and for evaluating regional surface fluxes from different methods.
Koh, Wonryull; Blackwell, Kim T
2011-04-21
Stochastic simulation of reaction-diffusion systems enables the investigation of stochastic events arising from the small numbers and heterogeneous distribution of molecular species in biological cells. Stochastic variations in intracellular microdomains and in diffusional gradients play a significant part in the spatiotemporal activity and behavior of cells. Although an exact stochastic simulation that simulates every individual reaction and diffusion event gives a most accurate trajectory of the system's state over time, it can be too slow for many practical applications. We present an accelerated algorithm for discrete stochastic simulation of reaction-diffusion systems designed to improve the speed of simulation by reducing the number of time-steps required to complete a simulation run. This method is unique in that it employs two strategies that have not been incorporated in existing spatial stochastic simulation algorithms. First, diffusive transfers between neighboring subvolumes are based on concentration gradients. This treatment necessitates sampling of only the net or observed diffusion events from higher to lower concentration gradients rather than sampling all diffusion events regardless of local concentration gradients. Second, we extend the non-negative Poisson tau-leaping method that was originally developed for speeding up nonspatial or homogeneous stochastic simulation algorithms. This method calculates each leap time in a unified step for both reaction and diffusion processes while satisfying the leap condition that the propensities do not change appreciably during the leap and ensuring that leaping does not cause molecular populations to become negative. Numerical results are presented that illustrate the improvement in simulation speed achieved by incorporating these two new strategies.
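The first strategy, sampling only net diffusion events down concentration gradients, can be illustrated on a 1-D chain of subvolumes. This is a toy sketch with arbitrary parameters; the published algorithm additionally folds reactions and the leap condition into a single unified time step.

```python
import numpy as np

rng = np.random.default_rng(3)

def net_diffusion_step(n, d, dt):
    # Sample only the observed (net) transfers from higher- to lower-count
    # neighbors, instead of sampling both directions independently.
    # Transfers are capped by the gradient so counts stay non-negative.
    n = n.copy()
    for i in range(len(n) - 1):
        diff = n[i] - n[i + 1]
        if diff == 0:
            continue
        moved = min(rng.poisson(d * abs(diff) * dt), abs(diff))
        if diff > 0:
            n[i] -= moved
            n[i + 1] += moved
        else:
            n[i] += moved
            n[i + 1] -= moved
    return n

counts = np.array([100, 40, 10, 0])
for _ in range(50):
    counts = net_diffusion_step(counts, d=0.1, dt=0.1)
```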
Equilibrium Sampling in Biomolecular Simulation
2015-01-01
Equilibrium sampling of biomolecules remains an unmet challenge after more than 30 years of atomistic simulation. Efforts to enhance sampling capability, which are reviewed here, range from the development of new algorithms to parallelization to novel uses of hardware. Special focus is placed on classifying algorithms — most of which are underpinned by a few key ideas — in order to understand their fundamental strengths and limitations. Although algorithms have proliferated, progress resulting from novel hardware use appears to be more clear-cut than from algorithms alone, partly due to the lack of widely used sampling measures. PMID:21370970
Cross counter-based adaptive assembly scheme in optical burst switching networks
NASA Astrophysics Data System (ADS)
Zhu, Zhi-jun; Dong, Wen; Le, Zi-chun; Chen, Wan-jun; Sun, Xingshu
2009-11-01
A novel adaptive assembly algorithm called Cross-counter Balance Adaptive Assembly Period (CBAAP) is proposed in this paper. The major difference between CBAAP and other adaptive assembly algorithms is that the threshold of CBAAP can be dynamically adjusted according to the cross counter and step length value. In terms of assembly period and the burst loss probability, we compare the performance of CBAAP with those of three typical algorithms FAP (Fixed Assembly Period), FBL (Fixed Burst Length) and MBMAP (Min-Burst length-Max-Assembly-Period) in the simulation part. The simulation results demonstrate the effectiveness of our algorithm.
Efficient rejection-based simulation of biochemical reactions with stochastic noise and delays
NASA Astrophysics Data System (ADS)
Thanh, Vo Hong; Priami, Corrado; Zunino, Roberto
2014-10-01
We propose a new exact stochastic rejection-based simulation algorithm for biochemical reactions and extend it to systems with delays. Our algorithm accelerates the simulation by pre-computing reaction propensity bounds to select the next reaction to perform. Exploiting such bounds, we are able to avoid recomputing propensities every time a (delayed) reaction is initiated or finished, as is typically necessary in standard approaches. Propensity updates in our approach are still performed, but only infrequently and limited for a small number of reactions, saving computation time and without sacrificing exactness. We evaluate the performance improvement of our algorithm by experimenting with concrete biological models.
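A minimal sketch of the rejection-based selection follows. It assumes precomputed lower/upper propensity bounds (lo, hi) that stay valid for the current state interval; the refresh of bounds when the state drifts outside that interval, which the full algorithm performs, is omitted here.

```python
import numpy as np

rng = np.random.default_rng(2)

def rssa_step(x, t, stoich, propensity, lo, hi):
    # One firing: candidates are drawn from the upper bounds hi; the exact
    # propensity is evaluated lazily, only when the cheap lower-bound
    # (squeeze) test fails. Time advances on every trial, accepted or not,
    # as in thinning of a Poisson process with rate hi.sum().
    a0 = hi.sum()
    while True:
        t += rng.exponential(1.0 / a0)
        j = rng.choice(len(hi), p=hi / a0)       # candidate reaction index
        u = rng.random() * hi[j]
        if u <= lo[j] or u <= propensity(x, j):  # squeeze, then exact test
            return x + stoich[j], t

# Toy dimerization 2A -> B tracked by A only; bounds for x[0] in [900, 1000].
stoich = np.array([[-2]])
prop = lambda x, j: 0.001 * x[0] * (x[0] - 1) / 2.0
x = np.array([1000])
lo = np.array([prop(np.array([900]), 0)])
hi = np.array([prop(np.array([1000]), 0)])
x, t = rssa_step(x, 0.0, stoich, prop, lo, hi)
```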
NASA Astrophysics Data System (ADS)
Borowik, Piotr; Thobel, Jean-Luc; Adamowicz, Leszek
2017-07-01
Standard computational methods used to account for the Pauli exclusion principle in Monte Carlo (MC) simulations of electron transport in semiconductors may give unphysical results in the low-field regime, where the obtained electron distribution function takes values exceeding unity. Modified algorithms have already been proposed that correctly account for electron scattering on phonons or impurities. The present paper extends this approach and proposes an improved simulation scheme that includes the Pauli exclusion principle for electron-electron (e-e) scattering in MC simulations. Simulations with significantly reduced computational cost recreate correct values of the electron distribution function. The proposed algorithm is applied to study transport properties of degenerate electrons in graphene with e-e interactions. This required adapting the treatment of e-e scattering to the case of a linear band dispersion relation. Hence, this part of the simulation algorithm is described in detail.
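The standard remedy, which the paper refines for e-e scattering, is a rejection step on the final state: a transition into momentum-space cell k is accepted with probability 1 - f(k). A toy version for an ensemble-MC occupation histogram (hypothetical data layout) is sketched below; the paper's e-e scheme, with two electrons and two final states per event, is more involved.

```python
import numpy as np

rng = np.random.default_rng(4)

def try_scatter(f, k_from, k_to, df):
    # Pauli blocking: accept with probability 1 - f[k_to], then move the
    # occupation df carried by one simulated particle between histogram
    # cells. Rejecting transfers into nearly full cells keeps f <= 1.
    if rng.random() < 1.0 - f[k_to]:
        f[k_from] -= df
        f[k_to] += df
        return True
    return False

f = np.full(16, 0.5)    # toy occupation histogram over momentum cells
try_scatter(f, 2, 3, 0.01)
```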
NASA Technical Reports Server (NTRS)
Fijany, A.; Roberts, J. A.; Jain, A.; Man, G. K.
1993-01-01
Part 1 of this paper presented the requirements for the real-time simulation of the Cassini spacecraft along with some discussion of the DARTS algorithm. Here, in Part 2, we discuss the development and implementation of the parallel/vectorized DARTS algorithm and architecture for real-time simulation. Development of fast algorithms and architectures for real-time hardware-in-the-loop simulation of spacecraft dynamics is motivated by the fact that it represents a hard real-time problem, in the sense that the correctness of the simulation depends on both the numerical accuracy and the exact timing of the computation. For a given model fidelity, the computation must be completed within a predefined time period. Further reduction in computation time allows increasing the fidelity of the model (i.e., inclusion of more flexible modes) and of the integration routine.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Renke; Jin, Shuangshuang; Chen, Yousu
This paper presents a faster-than-real-time dynamic simulation software package that is designed for large-size power system dynamic simulation. It was developed on the GridPACK™ high-performance computing (HPC) framework. The key features of the developed software package include (1) faster-than-real-time dynamic simulation for a WECC system (17,000 buses) with different types of detailed generator, controller, and relay dynamic models, (2) a decoupled parallel dynamic simulation algorithm with optimized computation architecture to better leverage HPC resources and technologies, (3) options for HPC-based linear and iterative solvers, (4) hidden HPC details, such as data communication and distribution, to enable development centered on mathematical models and algorithms rather than on computational details for power system researchers, and (5) easy integration of new dynamic models and related algorithms into the software package.
Data parallel sorting for particle simulation
NASA Technical Reports Server (NTRS)
Dagum, Leonardo
1992-01-01
Sorting on a parallel architecture is a communications intensive event which can incur a high penalty in applications where it is required. In the case of particle simulation, only integer sorting is necessary, and sequential implementations easily attain the minimum performance bound of O(N) for N particles. Parallel implementations, however, have to cope with the parallel sorting problem which, in addition to incurring a heavy communications cost, can make the minimum performance bound difficult to attain. This paper demonstrates how the sorting problem in a particle simulation can be reduced to a merging problem, and describes an efficient data parallel algorithm to solve this merging problem in a particle simulation. The new algorithm is shown to be optimal under conditions usual for particle simulation, and its fieldwise implementation on the Connection Machine is analyzed in detail. The new algorithm is about four times faster than a fieldwise implementation of radix sort on the Connection Machine.
Requirements and Techniques for Developing and Measuring Simulant Materials
NASA Technical Reports Server (NTRS)
Rickman, Doug; Owens, Charles; Howard, Rick
2006-01-01
The 1989 workshop report entitled Workshop on Production and Uses of Simulated Lunar Materials and the NASA Technical Publication Lunar Regolith Simulant Materials: Recommendations for Standardization, Production, and Usage identified and reinforced a need for a set of standards and requirements for the production and usage of lunar simulant materials. As NASA prepares to return to the Moon, a set of requirements has been developed for simulant materials, and methods to produce and measure those simulants have been defined. Addressed in the requirements document are: 1) a method for evaluating the quality of any simulant of a regolith, 2) the minimum characteristics for simulants of lunar regolith, and 3) a method to produce the lunar regolith simulants needed for NASA's exploration mission. A method to evaluate new and current simulants has also been rigorously defined through the mathematics of Figures of Merit (FoM), a concept new to simulant development. A single FoM is conceptually an algorithm defining a single characteristic of a simulant and provides a clear comparison of that characteristic between the simulant and a reference material. Included as an intrinsic part of the algorithm is a minimum acceptable performance for the characteristic of interest. The algorithms for the FoM for Standard Lunar Regolith Simulants are also explicitly keyed to a recommended method to make lunar simulants.
A simple algorithm for beam profile diagnostics using a thermographic camera
DOE Office of Scientific and Technical Information (OSTI.GOV)
Katagiri, Ken; Hojo, Satoru; Honma, Toshihiro
2014-03-15
A new algorithm for digital image processing apparatuses is developed to evaluate profiles of high-intensity DC beams from temperature images of irradiated thin foils. Numerical analyses are performed to examine the reliability of the algorithm. To simulate the temperature images acquired by a thermographic camera, temperature distributions are numerically calculated for 20 MeV proton beams with different parameters. Noise in the temperature images which is added by the camera sensor is also simulated to account for its effect. Using the algorithm, beam profiles are evaluated from the simulated temperature images and compared with exact solutions. We find that niobium is an appropriate material for the thin foil used in the diagnostic system. We also confirm that the algorithm is adaptable over a wide beam current range of 0.11–214 μA, even when employing a general-purpose thermographic camera with rather high noise (ΔT_NETD ≃ 0.3 K; NETD: noise equivalent temperature difference).
Hybrid-optimization strategy for the communication of large-scale Kinetic Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Wu, Baodong; Li, Shigang; Zhang, Yunquan; Nie, Ningming
2017-02-01
The parallel Kinetic Monte Carlo (KMC) algorithm based on domain decomposition has been widely used in large-scale physical simulations. However, the communication overhead of the parallel KMC algorithm is critical, and severely degrades the overall performance and scalability. In this paper, we present a hybrid optimization strategy to reduce the communication overhead of parallel KMC simulations. We first propose a communication aggregation algorithm to reduce the total number of messages and eliminate communication redundancy. Then, we utilize shared memory to reduce the memory copy overhead of intra-node communication. Finally, we optimize the communication scheduling using neighborhood collective operations. We demonstrate the scalability and high performance of our hybrid optimization strategy by both theoretical and experimental analysis. Results show that the optimized KMC algorithm exhibits better performance and scalability than the well-known open-source library SPPARKS. On a 32-node Xeon E5-2680 cluster (640 cores in total), the optimized algorithm reduces the communication time by 24.8% compared with SPPARKS.
SIMULATION OF AEROSOL DYNAMICS: A COMPARATIVE REVIEW OF ALGORITHMS USED IN AIR QUALITY MODELS
A comparative review of algorithms currently used in air quality models to simulate aerosol dynamics is presented. This review addresses coagulation, condensational growth, nucleation, and gas/particle mass transfer. Two major approaches are used in air quality models to repres...
FERN - a Java framework for stochastic simulation and evaluation of reaction networks.
Erhard, Florian; Friedel, Caroline C; Zimmer, Ralf
2008-08-29
Stochastic simulation can be used to illustrate the development of biological systems over time and the stochastic nature of these processes. Currently available programs for stochastic simulation, however, are limited in that they either a) do not provide the most efficient simulation algorithms and are difficult to extend, b) cannot be easily integrated into other applications, or c) do not allow the user to monitor and intervene during the simulation process in an easy and intuitive way. Thus, in order to use stochastic simulation in innovative high-level modeling and analysis approaches, more flexible tools are necessary. In this article, we present FERN (Framework for Evaluation of Reaction Networks), a Java framework for the efficient simulation of chemical reaction networks. FERN is subdivided into three layers for network representation, simulation, and visualization of the simulation results, each of which can be easily extended. It provides efficient and accurate state-of-the-art stochastic simulation algorithms for well-mixed chemical systems and a powerful observer system, which makes it possible to track and control the simulation progress on every level. To illustrate how FERN can be easily integrated into other systems biology applications, plugins to Cytoscape and CellDesigner are included. These plugins make it possible to run simulations and to observe the simulation progress in a reaction network in real-time from within the Cytoscape or CellDesigner environment. FERN addresses shortcomings of currently available stochastic simulation programs in several ways. First, it provides a broad range of efficient and accurate algorithms both for exact and approximate stochastic simulation and a simple interface for extending to new algorithms. FERN's implementations are considerably faster than the C implementations of gillespie2 or the Java implementations of ISBJava. Second, it can be used in a straightforward way both as a stand-alone program and within new systems biology applications. Finally, complex scenarios requiring intervention during the simulation progress can be modelled easily with FERN.
Weighted Global Artificial Bee Colony Algorithm Makes Gas Sensor Deployment Efficient
Jiang, Ye; He, Ziqing; Li, Yanhai; Xu, Zhengyi; Wei, Jianming
2016-01-01
This paper proposes an improved artificial bee colony algorithm named the Weighted Global ABC (WGABC) algorithm, which is designed to improve the convergence speed in the search stage of the solution search equation. The new method not only considers the effect of global factors on the convergence speed in the search phase, but also provides an expression for the global factor weights. Experiments on benchmark functions prove that the algorithm can greatly improve the convergence speed. We compute the gas diffusion concentration based on CFD theory and then simulate the gas diffusion model, including the influence of buildings, using the algorithm. Simulations verified the effectiveness of the WGABC algorithm in improving the convergence speed in the optimal deployment of gas sensors. Finally, it is verified that the optimal deployment method based on the WGABC algorithm can greatly improve the monitoring efficiency of sensors as compared with conventional deployment methods. PMID:27322262
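The abstract does not spell out the weighted solution search equation, so the sketch below assumes a gbest-guided form, v = x + φ(x − x_k) + wψ(gbest − x), with a hypothetical fixed weight w; it shows only where a global-factor weight would enter the employed-bee move, not the paper's own weight expression.

```python
import numpy as np

rng = np.random.default_rng(5)

def wgabc_move(X, gbest, w=1.5):
    # One employed-bee move per food source: the usual ABC perturbation
    # plus a weighted pull toward the global best (the weight w is a
    # guess; the paper derives its own expression for this weight).
    n, d = X.shape
    V = X.copy()
    for i in range(n):
        k = rng.choice([j for j in range(n) if j != i])
        jdim = rng.integers(d)
        phi = rng.uniform(-1.0, 1.0)
        psi = rng.uniform(0.0, 1.0)
        V[i, jdim] = (X[i, jdim]
                      + phi * (X[i, jdim] - X[k, jdim])
                      + w * psi * (gbest[jdim] - X[i, jdim]))
    return V

# Toy use on the sphere function: candidate moves replace worse solutions.
X = rng.uniform(-5.0, 5.0, size=(20, 4))
f = lambda x: np.sum(x * x, axis=-1)
gbest = X[np.argmin(f(X))]
V = wgabc_move(X, gbest)
improved = f(V) < f(X)
X[improved] = V[improved]
```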
NASA Astrophysics Data System (ADS)
Herbirowo, Satrio; Adjiantoro, Bintang; Romijarso, Toni Bambang; Pramono, Andika Widya
2018-05-01
High demand for armor materials has led to the use of lateritic steel as an alternative armor material; an increase in its mechanical properties is therefore necessary. Quenching and tempering can be used to increase the mechanical properties of lateritic steel. The variables used in this research are the quenching medium (water, oil, and air) and the tempering temperature (0, 100, and 200 °C). The results show that the specimen quenched in water and tempered at 100 °C has the highest average hardness (59.1 HRC) and tensile strength. The specimen quenched in oil and tempered at 100 °C has the highest impact toughness (52 J). Secondary hardening and tempered martensite embrittlement phenomena are found in some specimens, whose hardness increased and impact toughness decreased after tempering. The microstructures formed in this process are martensite and retained austenite, and the fracture type is brittle.
NASA Astrophysics Data System (ADS)
Cao, X. Y.; Zhu, P.; Yong, Q.; Liu, T. G.; Lu, Y. H.; Zhao, J. C.; Jiang, Y.; Shoji, T.
2018-02-01
The effect of tempering on the low cycle fatigue (LCF) behavior of nuclear-grade deposited weld metal was investigated. The LCF tests were performed at 350 °C with strain amplitudes ranging from 0.2% to 0.6%. The results showed that at a low strain amplitude, deposited weld metal tempered for 1 h had a high fatigue resistance due to high yield strength, while at a high strain amplitude, the metal tempered for 24 h had a superior fatigue resistance due to high ductility. Deposited weld metal tempered for 1 h exhibited cyclic hardening at the tested strain amplitudes. Deposited weld metal tempered for 24 h exhibited cyclic hardening at a low strain amplitude but cyclic softening at a high strain amplitude. The existence and decomposition of martensite-austenite (M-A) islands as well as dislocation activities contributed to the fatigue property discrepancy between the two tempered deposited weld metals.
Calculating Higher-Order Moments of Phylogenetic Stochastic Mapping Summaries in Linear Time.
Dhar, Amrit; Minin, Vladimir N
2017-05-01
Stochastic mapping is a simulation-based method for probabilistically mapping substitution histories onto phylogenies according to continuous-time Markov models of evolution. This technique can be used to infer properties of the evolutionary process on the phylogeny and, unlike parsimony-based mapping, conditions on the observed data to randomly draw substitution mappings that do not necessarily require the minimum number of events on a tree. Most stochastic mapping applications simulate substitution mappings only to estimate the mean and/or variance of two commonly used mapping summaries: the number of particular types of substitutions (labeled substitution counts) and the time spent in a particular group of states (labeled dwelling times) on the tree. Fast, simulation-free algorithms for calculating the mean of stochastic mapping summaries exist. Importantly, these algorithms scale linearly in the number of tips/leaves of the phylogenetic tree. However, to our knowledge, no such algorithm exists for calculating higher-order moments of stochastic mapping summaries. We present one such simulation-free dynamic programming algorithm that calculates prior and posterior mapping variances and scales linearly in the number of phylogeny tips. Our procedure suggests a general framework that can be used to efficiently compute higher-order moments of stochastic mapping summaries without simulations. We demonstrate the usefulness of our algorithm by extending previously developed statistical tests for rate variation across sites and for detecting evolutionarily conserved regions in genomic sequences.
PROCESS SIMULATION OF COLD PRESSING OF ARMSTRONG CP-Ti POWDERS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sabau, Adrian S; Gorti, Sarma B; Peter, William H
A computational methodology is presented for the process simulation of cold pressing of Armstrong CP-Ti powders. The computational model was implemented in the commercial finite element program ABAQUS™. Since the powder deformation and consolidation is governed by specific pressure-dependent constitutive equations, several solution algorithms were developed for the ABAQUS user material subroutine, UMAT. The solution algorithms were developed for computing the plastic strain increments based on an implicit integration of the nonlinear yield function, flow rule, and hardening equations that describe the evolution of the state variables. Since ABAQUS requires the use of a full Newton-Raphson algorithm for the stress-strain equations, an algorithm for obtaining the tangent/linearization moduli, which is consistent with the return-mapping algorithm, was also developed. Numerical simulation results are presented for the cold compaction of the Ti powders. Several simulations were conducted for cylindrical samples with different aspect ratios. The numerical simulation results showed that for the disk samples, the minimum von Mises stress was approximately half its maximum value. The hydrostatic stress distribution exhibits a variation smaller than that of the von Mises stress. It was found that for the disk and cylinder samples the minimum hydrostatic stresses were approximately 23% and 50% less than the maximum value, respectively. It was also found that the minimum density was noticeably affected by the sample height.
Load Balancing Scientific Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pearce, Olga Tkachyshyn
2014-12-01
The largest supercomputers have millions of independent processors, and concurrency levels are rapidly increasing. For ideal efficiency, developers of the simulations that run on these machines must ensure that computational work is evenly balanced among processors. Assigning work evenly is challenging because many large modern parallel codes simulate behavior of physical systems that evolve over time, and their workloads change over time. Furthermore, the cost of imbalanced load increases with scale because most large-scale scientific simulations today use a Single Program Multiple Data (SPMD) parallel programming model, and an increasing number of processors will wait for the slowest one at the synchronization points. To address load imbalance, many large-scale parallel applications use dynamic load balance algorithms to redistribute work evenly. The research objective of this dissertation is to develop methods to decide when and how to load balance the application, and to balance it effectively and affordably. We measure and evaluate the computational load of the application, and develop strategies to decide when and how to correct the imbalance. Depending on the simulation, a fast, local load balance algorithm may be suitable, or a more sophisticated and expensive algorithm may be required. We developed a model for comparison of load balance algorithms for a specific state of the simulation that enables the selection of a balancing algorithm that will minimize overall runtime.
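A minimal cost-benefit test in the spirit of this model (the names and the linear cost model are assumptions, not the dissertation's actual criterion): rebalance only when the predicted waiting time saved over the next several SPMD steps exceeds the cost of the balancing step itself.

```python
def should_rebalance(loads, t_balance, steps_ahead):
    # loads: per-processor compute time of one SPMD step. The slowest
    # processor sets the step time, so imbalance costs (max - mean) per
    # step; amortize the one-off balancing cost over steps_ahead steps.
    t_max = max(loads)
    t_ideal = sum(loads) / len(loads)
    return steps_ahead * (t_max - t_ideal) > t_balance

# e.g., a modest imbalance amortized over 100 steps justifies a costly
# rebalance, while the same imbalance over 10 steps would not.
print(should_rebalance([1.0, 1.1, 0.9, 1.2], t_balance=5.0, steps_ahead=100))
```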
NASA Technical Reports Server (NTRS)
Fitzjerrell, D. G.; Grounds, D. J.; Leonard, J. I.
1975-01-01
Using a whole body algorithm simulation model, a wide variety and large number of stresses as well as different stress levels were simulated including environmental disturbances, metabolic changes, and special experimental situations. Simulation of short term stresses resulted in simultaneous and integrated responses from the cardiovascular, respiratory, and thermoregulatory subsystems and the accuracy of a large number of responding variables was verified. The capability of simulating significantly longer responses was demonstrated by validating a four week bed rest study. In this case, the long term subsystem model was found to reproduce many experimentally observed changes in circulatory dynamics, body fluid-electrolyte regulation, and renal function. The value of systems analysis and the selected design approach for developing a whole body algorithm was demonstrated.
A piloted simulator evaluation of a ground-based 4-D descent advisor algorithm
NASA Technical Reports Server (NTRS)
Davis, Thomas J.; Green, Steven M.; Erzberger, Heinz
1990-01-01
A ground-based, four-dimensional (4D) descent-advisor algorithm is under development at NASA-Ames. The algorithm combines detailed aerodynamic, propulsive, and atmospheric models with an efficient numerical integration scheme to generate 4D descent advisories. The ability of the 4D descent advisor algorithm to provide adequate control of arrival time for aircraft not equipped with on-board 4D guidance systems is investigated. A piloted simulation was conducted to determine the precision with which the descent advisor could predict the 4D trajectories of typical straight-in descents flown by airline pilots under different wind conditions. The effects of errors in the estimation of wind and initial aircraft weight were also studied. A description of the descent advisor as well as the results of the simulation studies are presented.
Simulation Study on Missile Penetration Based on LS - DYNA
NASA Astrophysics Data System (ADS)
Tang, Jue; Sun, Xinli
2017-12-01
Penetrating shell armor is an effective means of destroying hard targets with multiple layers of protection. The penetration process falls within the study of high-speed impact dynamics, involving high pressure, high temperature, high velocity, and internal material damage in complex forms including plugging, penetration, spalling, caving, and splashing; its analysis is therefore one of the difficulties in the study of impact dynamics. In this paper, the Lagrangian algorithm and the SPH algorithm are used to analyze a projectile penetrating a steel plate. The penetration model of the rocket projectile and the steel plate, the failure modes of the steel plate and the missile, and the respective advantages and disadvantages of the Lagrangian and SPH algorithms for simulating high-speed collision problems are analyzed and compared, providing a reference for the study of simulated collision problems.
Simulation of biochemical reactions with time-dependent rates by the rejection-based algorithm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thanh, Vo Hong, E-mail: vo@cosbi.eu; Priami, Corrado, E-mail: priami@cosbi.eu; Department of Mathematics, University of Trento, Trento
We address the problem of simulating biochemical reaction networks with time-dependent rates and propose a new algorithm based on our rejection-based stochastic simulation algorithm (RSSA) [Thanh et al., J. Chem. Phys. 141(13), 134116 (2014)]. The computation for selecting next reaction firings by our time-dependent RSSA (tRSSA) is computationally efficient. Furthermore, the generated trajectory is exact by exploiting the rejection-based mechanism. We benchmark tRSSA on different biological systems with varying forms of reaction rates to demonstrate its applicability and efficiency. We reveal that for nontrivial cases, the selection of reaction firings in existing algorithms introduces approximations because the integration of reaction rates is very computationally demanding and simplifying assumptions are introduced. The selection of the next reaction firing by our approach is easier while preserving the exactness.
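The key point, selecting firings without integrating a(t), is the classic thinning construction. The sketch below assumes a single reaction with a known constant upper bound on its rate; tRSSA generalizes this to networks, with bounds refreshed per state/time interval.

```python
import math
import random

def next_firing(rate, rate_max, t):
    # Thinning of a nonhomogeneous Poisson process: propose candidate
    # times from the homogeneous bound rate_max, accept each candidate
    # with probability rate(t)/rate_max. No integral of rate(t) is ever
    # evaluated, which is the point the abstract emphasizes.
    while True:
        t += random.expovariate(rate_max)
        if random.random() < rate(t) / rate_max:
            return t

# Example: sinusoidally modulated rate, bounded above by 1.5.
t_fire = next_firing(lambda t: 1.0 + 0.5 * math.sin(t), 1.5, 0.0)
```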