Science.gov

Sample records for algorithms simulated annealing

  1. Recursive Branching Simulated Annealing Algorithm

    NASA Technical Reports Server (NTRS)

    Bolcar, Matthew; Smith, J. Scott; Aronstein, David

    2012-01-01

    This innovation is a variation of a simulated-annealing optimization algorithm that uses a recursive-branching structure to parallelize the search of a parameter space for the globally optimal solution to an objective. The algorithm has been demonstrated to be more effective at searching a parameter space than traditional simulated-annealing methods for a particular problem of interest, and it can readily be applied to a wide variety of optimization problems, including those with a parameter space having both discrete-value parameters (combinatorial) and continuous-variable parameters. It can take the place of a conventional simulated-annealing, Monte-Carlo, or random-walk algorithm. In a conventional simulated-annealing (SA) algorithm, a starting configuration is randomly selected within the parameter space. The algorithm randomly selects another configuration from the parameter space and evaluates the objective function for that configuration. If the objective function value is better than the previous value, the new configuration is adopted as the new point of interest in the parameter space. If the objective function value is worse than the previous value, the new configuration may be adopted, with a probability determined by a temperature parameter, used in analogy to annealing in metals. As the optimization continues, the region of the parameter space from which new configurations can be selected shrinks, and in conjunction with lowering the annealing temperature (and thus lowering the probability for adopting configurations in parameter space with worse objective functions), the algorithm can converge on the globally optimal configuration. The Recursive Branching Simulated Annealing (RBSA) algorithm shares some features with the SA algorithm, notably including the basic principles that a starting configuration is randomly selected from within the parameter space, the algorithm tests other configurations with the goal of finding the globally optimal
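
    To make the conventional SA procedure described above concrete, the following is a minimal Python sketch of that loop (a shrinking search region plus temperature-controlled acceptance of worse configurations). It illustrates the generic algorithm only, not NASA's RBSA implementation; the objective function, cooling rate, and shrink rate are arbitrary illustrative choices.

      import math
      import random

      def simulated_annealing(objective, lower, upper, n_iter=10000,
                              t0=1.0, cooling=0.999):
          """Conventional SA sketch: shrinking search region plus
          temperature-controlled acceptance of worse configurations."""
          dim = len(lower)
          # Randomly selected starting configuration in the parameter space.
          current = [random.uniform(lower[i], upper[i]) for i in range(dim)]
          best, f_cur = current[:], objective(current)
          f_best, temp, radius = f_cur, t0, 1.0
          for _ in range(n_iter):
              # Propose a configuration inside the (shrinking) neighbourhood.
              cand = [min(upper[i], max(lower[i], current[i]
                          + radius * (upper[i] - lower[i]) * random.uniform(-0.5, 0.5)))
                      for i in range(dim)]
              f_cand = objective(cand)
              delta = f_cand - f_cur
              # Accept better moves always; worse moves with Boltzmann probability.
              if delta < 0 or random.random() < math.exp(-delta / temp):
                  current, f_cur = cand, f_cand
                  if f_cur < f_best:
                      best, f_best = current[:], f_cur
              temp *= cooling      # lower the annealing temperature
              radius *= cooling    # shrink the region new configurations come from
          return best, f_best

      # Example: minimise a simple two-dimensional bowl.
      print(simulated_annealing(lambda x: x[0]**2 + x[1]**2, [-5, -5], [5, 5]))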

  2. Kriging-approximation simulated annealing algorithm for groundwater modeling

    NASA Astrophysics Data System (ADS)

    Shen, C. H.

    2015-12-01

    Optimization algorithms are often applied to search for the best parameters of complex groundwater models. Running complex groundwater models to evaluate the objective function can be time-consuming. This research proposes a Kriging-approximation simulated annealing algorithm. Kriging is a spatial statistics method used to interpolate unknown variables based on surrounding given data. In the algorithm, the Kriging method is used to approximate the complicated objective function and is incorporated with simulated annealing. The contribution of the Kriging-approximation simulated annealing algorithm is to reduce calculation time and increase efficiency.
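
    As a rough illustration of the surrogate idea summarized above, the sketch below wraps a stand-in "expensive model" in a Gaussian-process (Kriging-type) surrogate and lets a plain SA loop search on the surrogate instead of the true model. The use of scikit-learn's GaussianProcessRegressor, the stand-in model, and all parameter values are assumptions for illustration; the paper's actual coupling of Kriging and SA (e.g., refreshing the surrogate during the search) may differ.

      import math
      import random
      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor

      def expensive_model(x):
          # Stand-in for a time-consuming groundwater model run (illustrative only).
          return (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2

      # Build a Kriging (Gaussian-process) surrogate from a small set of true runs.
      train_x = np.random.uniform(-5, 5, size=(30, 2))
      train_y = np.array([expensive_model(x) for x in train_x])
      surrogate = GaussianProcessRegressor().fit(train_x, train_y)

      def surrogate_objective(x):
          # Cheap approximation of the objective used inside the SA search.
          return float(surrogate.predict(np.asarray(x).reshape(1, -1))[0])

      # Plain SA driven by the surrogate instead of the expensive model.
      current = [random.uniform(-5, 5) for _ in range(2)]
      f_cur, temp = surrogate_objective(current), 1.0
      for _ in range(2000):
          cand = [c + random.gauss(0, 0.5) for c in current]
          f_cand = surrogate_objective(cand)
          if f_cand < f_cur or random.random() < math.exp(-(f_cand - f_cur) / temp):
              current, f_cur = cand, f_cand
          temp *= 0.995
      print(current, expensive_model(current))  # verify with one true model run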

  3. An improved simulated annealing algorithm for standard cell placement

    NASA Technical Reports Server (NTRS)

    Jones, Mark; Banerjee, Prithviraj

    1988-01-01

    Simulated annealing is a general purpose Monte Carlo optimization technique that was applied to the problem of placing standard logic cells in a VLSI chip so that the total interconnection wire length is minimized. An improved standard cell placement algorithm that takes advantage of the performance enhancements that appear to come from parallelizing the uniprocessor simulated annealing algorithm is presented. An outline of this algorithm is given.

  4. A theoretical comparison of evolutionary algorithms and simulated annealing

    SciTech Connect

    Hart, W.E.

    1995-08-28

    This paper theoretically compares the performance of simulated annealing and evolutionary algorithms. Our main result is that under mild conditions a wide variety of evolutionary algorithms can be shown to have greater performance than simulated annealing after a sufficiently large number of function evaluations. This class of EAs includes variants of evolution strategies and evolutionary programming, the canonical genetic algorithm, as well as a variety of genetic algorithms that have been applied to combinatorial optimization problems. The proof of this result is based on a performance analysis of a very general class of stochastic optimization algorithms, which has implications for the performance of a variety of other optimization algorithms.

  5. Selection of views to materialize using simulated annealing algorithms

    NASA Astrophysics Data System (ADS)

    Zhou, Lijuan; Liu, Chi; Wang, Hongfeng; Liu, Daixin

    2002-03-01

    A data warehouse contains many materialized views over the data provided by distributed heterogeneous databases for the purpose of efficiently implementing decision-support or OLAP queries. It is important to select the right views to materialize so that a given set of queries can be answered. The goal is the minimization of the combined query evaluation and view maintenance costs. In this paper, we address and design algorithms for selecting a set of views to be materialized so that the sum of the cost of processing a set of queries and of maintaining the materialized views is minimized. We develop an approach using simulated annealing algorithms to solve it. First, we explore simulated annealing algorithms to optimize the selection of materialized views. Then we use experiments to demonstrate our approach; the results show that our algorithm performs well. We implemented our algorithms, and a performance study shows that the proposed algorithm gives an optimal solution.
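
    A toy sketch of how such a view-selection search can be set up: the state is a 0/1 vector over candidate views, the cost sums query-processing and view-maintenance costs, and SA flips one view per move. The cost numbers and the simple cost model below are invented for illustration and are not from the paper.

      import math
      import random

      # Toy cost model: query_cost[q][v] is the cost of answering query q from
      # view v if it is materialized; maintenance[v] is the upkeep cost of view v.
      query_cost = [[10, 4, 7], [8, 9, 3], [6, 5, 12]]
      maintenance = [2, 3, 1]

      def total_cost(selected):
          if not any(selected):
              return float("inf")
          query = sum(min(query_cost[q][v] for v in range(3) if selected[v])
                      for q in range(3))
          upkeep = sum(maintenance[v] for v in range(3) if selected[v])
          return query + upkeep

      # SA over the 0/1 vector of materialized views: flip one view per move.
      state = [random.random() < 0.5 for _ in range(3)]
      cost, temp = total_cost(state), 5.0
      for _ in range(500):
          cand = state[:]
          i = random.randrange(3)
          cand[i] = not cand[i]
          c = total_cost(cand)
          if c < cost or random.random() < math.exp(-(c - cost) / temp):
              state, cost = cand, c
          temp *= 0.99
      print(state, cost)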

  6. Variable neighbourhood simulated annealing algorithm for capacitated vehicle routing problems

    NASA Astrophysics Data System (ADS)

    Xiao, Yiyong; Zhao, Qiuhong; Kaku, Ikou; Mladenovic, Nenad

    2014-04-01

    This article presents the variable neighbourhood simulated annealing (VNSA) algorithm, a variant of the variable neighbourhood search (VNS) combined with simulated annealing (SA), for efficiently solving capacitated vehicle routing problems (CVRPs). In the new algorithm, the deterministic 'Move or not' criterion of the original VNS algorithm regarding the incumbent replacement is replaced by an SA probability, and the neighbourhood shifting of the original VNS (from near to far by k← k+1) is replaced by a neighbourhood shaking procedure following a specified rule. The geographical neighbourhood structure is introduced in constructing the neighbourhood structures for the CVRP of the string model. The proposed algorithm is tested against 39 well-known benchmark CVRP instances of different scales (small/middle, large, very large). The results show that the VNSA algorithm outperforms most existing algorithms in terms of computational effectiveness and efficiency, showing good performance in solving large and very large CVRPs.
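
    The structural change described above (an SA acceptance probability in place of VNS's deterministic 'Move or not' rule, plus a neighbourhood-shaking step) can be sketched generically as follows. The shake and local-search operators are user-supplied stand-ins, not the paper's CVRP-specific string-model procedures, and the parameters are illustrative.

      import math
      import random

      def vnsa(initial, cost, shake, local_search, k_max=3,
               t0=10.0, cooling=0.95, iters=200):
          """Generic VNS skeleton with an SA acceptance rule (illustrative)."""
          x, temp = initial, t0
          best, best_c = x, cost(x)
          for _ in range(iters):
              k = random.randint(1, k_max)      # neighbourhood shaking
              cand = local_search(shake(x, k))  # perturb, then locally improve
              delta = cost(cand) - cost(x)
              # SA probability replaces VNS's deterministic "move or not" rule.
              if delta < 0 or random.random() < math.exp(-delta / temp):
                  x = cand
                  if cost(x) < best_c:
                      best, best_c = x, cost(x)
              temp *= cooling
          return best, best_c

      # Tiny usage example on a toy numeric problem (stand-in operators).
      print(vnsa(0.0, lambda v: (v - 3.2) ** 2,
                 shake=lambda v, k: v + random.uniform(-k, k),
                 local_search=lambda v: v))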

  7. Rayleigh wave inversion using heat-bath simulated annealing algorithm

    NASA Astrophysics Data System (ADS)

    Lu, Yongxu; Peng, Suping; Du, Wenfeng; Zhang, Xiaoyang; Ma, Zhenyuan; Lin, Peng

    2016-11-01

    The dispersion of Rayleigh waves can be used to obtain near-surface shear (S)-wave velocity profiles. This is performed mainly by inversion of the phase velocity dispersion curves, which has been proven to be a highly nonlinear and multimodal problem, and it is unsuitable to use local search methods (LSMs) as the inversion algorithm. In this study, a new strategy is proposed based on a variant of simulated annealing (SA) algorithm. SA, which simulates the annealing procedure of crystalline solids in nature, is one of the global search methods (GSMs). There are many variants of SA, most of which contain two steps: the perturbation of model and the Metropolis-criterion-based acceptance of the new model. In this paper we propose a one-step SA variant known as heat-bath SA. To test the performance of the heat-bath SA, two models are created. Both noise-free and noisy synthetic data are generated. Levenberg-Marquardt (LM) algorithm and a variant of SA, known as the fast simulated annealing (FSA) algorithm, are also adopted for comparison. The inverted results of the synthetic data show that the heat-bath SA algorithm is a reasonable choice for Rayleigh wave dispersion curve inversion. Finally, a real-world inversion example from a coal mine in northwestern China is shown, which proves that the scheme we propose is applicable.

  8. Combined simulated annealing algorithm for the discrete facility location problem.

    PubMed

    Qin, Jin; Ni, Ling-Lin; Shi, Feng

    2012-01-01

    The combined simulated annealing (CSA) algorithm was developed for the discrete facility location problem (DFLP) in this paper. The method is a two-layer algorithm, in which the external subalgorithm optimizes the facility location decision while the internal subalgorithm optimizes the allocation of customer demand under the determined location decision. The performance of the CSA is tested on 30 instances of different sizes. The computational results show that CSA works much better than the previous algorithm on the DFLP and offers a reasonable new alternative solution method for it.

  9. Simulated annealing algorithm applied in adaptive near field beam shaping

    NASA Astrophysics Data System (ADS)

    Yu, Zhan; Ma, Hao-tong; Du, Shao-jun

    2010-11-01

    Laser beam shaping is required in many applications to improve the efficiency of laser systems. In this paper, near-field beam shaping based on the combination of a simulated annealing algorithm and Zernike polynomials is demonstrated. Since the phase distribution can be represented by an expansion in Zernike polynomials, the problem of searching for an appropriate phase distribution can be recast as optimizing a vector of Zernike coefficients. The feasibility of this method is validated theoretically by transforming a Gaussian beam into a square quasi-flattop beam in the near field. Finally, a closed control loop system constituted by a phase-only liquid crystal spatial light modulator and the simulated annealing algorithm is used to prove the validity of the technique. The experimental results show that the system can generate laser beams with desired intensity distributions.

  10. List-Based Simulated Annealing Algorithm for Traveling Salesman Problem

    PubMed Central

    Zhan, Shi-hua; Lin, Juan; Zhang, Ze-jun

    2016-01-01

    The simulated annealing (SA) algorithm is a popular intelligent optimization algorithm which has been successfully applied in many fields. Parameter setting is a key factor in its performance, but it is also tedious work. To simplify parameter setting, we present a list-based simulated annealing (LBSA) algorithm to solve the traveling salesman problem (TSP). The LBSA algorithm uses a novel list-based cooling schedule to control the decrease of temperature. Specifically, a list of temperatures is created first, and then the maximum temperature in the list is used by the Metropolis acceptance criterion to decide whether to accept a candidate solution. The temperature list is adapted iteratively according to the topology of the solution space of the problem. The effectiveness and the parameter sensitivity of the list-based cooling schedule are illustrated through benchmark TSP instances. The LBSA algorithm, whose performance is robust over a wide range of parameter values, shows competitive performance compared with some other state-of-the-art algorithms. PMID:27034650
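
    Based on the description above, a list-based cooling schedule can be sketched roughly as follows: the current maximum of a temperature list drives the Metropolis test, and accepted uphill moves feed a smaller implied temperature back into the list. This is a plausible reading of the abstract for illustration, not necessarily the paper's exact update rule; the toy objective and neighbour move are invented.

      import math
      import random

      def list_based_sa(objective, neighbour, x0, temp_list, iters=5000):
          """Sketch of a list-based cooling schedule (illustrative)."""
          temps = sorted(temp_list, reverse=True)
          x, fx = x0, objective(x0)
          for _ in range(iters):
              t = temps[0]                      # maximum temperature in the list
              cand = neighbour(x)
              f_cand = objective(cand)
              delta = f_cand - fx
              if delta <= 0:
                  x, fx = cand, f_cand
              else:
                  r = max(random.random(), 1e-12)
                  if r < math.exp(-delta / t):
                      x, fx = cand, f_cand
                      # Adapt the list: replace the max with the (smaller)
                      # temperature implied by this accepted uphill move.
                      temps[0] = -delta / math.log(r)
                      temps.sort(reverse=True)
          return x, fx

      # Example: walk an integer variable towards the minimum of (x - 7)^2.
      print(list_based_sa(lambda x: (x - 7) ** 2,
                          lambda x: x + random.choice([-1, 1]),
                          0, temp_list=[100.0] * 20))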

  11. Parallel simulated annealing algorithms for cell placement on hypercube multiprocessors

    NASA Technical Reports Server (NTRS)

    Banerjee, Prithviraj; Jones, Mark Howard; Sargent, Jeff S.

    1990-01-01

    Two parallel algorithms for standard cell placement using simulated annealing are developed to run on distributed-memory message-passing hypercube multiprocessors. The cells can be mapped in a two-dimensional area of a chip onto processors in an n-dimensional hypercube in two ways, such that both small and large cell exchange and displacement moves can be applied. The computation of the cost function in parallel among all the processors in the hypercube is described, along with a distributed data structure that needs to be stored in the hypercube to support the parallel cost evaluation. A novel tree broadcasting strategy is used extensively for updating cell locations in the parallel environment. A dynamic parallel annealing schedule estimates the errors due to interacting parallel moves and adapts the rate of synchronization automatically. Two novel approaches in controlling error in parallel algorithms are described: heuristic cell coloring and adaptive sequence control.

  12. Application of Simulated Annealing and Related Algorithms to TWTA Design

    NASA Technical Reports Server (NTRS)

    Radke, Eric M.

    2004-01-01

    Simulated Annealing (SA) is a stochastic optimization algorithm used to search for global minima in complex design surfaces where exhaustive searches are not computationally feasible. The algorithm is derived by simulating the annealing process, whereby a solid is heated to a liquid state and then cooled slowly to reach thermodynamic equilibrium at each temperature. The idea is that atoms in the solid continually bond and re-bond at various quantum energy levels, and with sufficient cooling time they will rearrange at the minimum energy state to form a perfect crystal. The distribution of energy levels is given by the Boltzmann distribution: as temperature drops, the probability of the presence of high-energy bonds decreases. In searching for an optimal design, local minima and discontinuities are often present in a design surface. SA presents a distinct advantage over other optimization algorithms in its ability to escape from these local minima. Just as high-energy atomic configurations are visited in the actual annealing process in order to eventually reach the minimum energy state, in SA highly non-optimal configurations are visited in order to find otherwise inaccessible global minima. The SA algorithm produces a Markov chain of points in the design space at each temperature, with a monotonically decreasing temperature. A random point is started upon, and the objective function is evaluated at that point. A stochastic perturbation is then made to the parameters of the point to arrive at a proposed new point in the design space, at which the objective function is evaluated as well. If the change in objective function values ΔE is negative, the proposed new point is accepted. If ΔE is positive, the proposed new point is accepted according to the Metropolis criterion: P(ΔE) = exp(-ΔE/T), where T is the temperature for the current Markov chain. The process then repeats for the remainder of the Markov chain, after which the temperature is
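
    For a concrete feel of the Metropolis criterion quoted above, the short snippet below wraps it in a function and evaluates the acceptance probability exp(-ΔE/T) at two temperatures; the numbers are purely illustrative, not from the TWTA study.

      import math
      import random

      def metropolis_accept(delta_e, temperature):
          """Always accept improvements; accept a worse point with
          probability exp(-dE/T), as in the Metropolis criterion above."""
          if delta_e <= 0:
              return True
          return random.random() < math.exp(-delta_e / temperature)

      # Worked example: a move with dE = 2.0 is accepted with probability
      # exp(-2) ~= 0.135 at T = 1.0 but exp(-0.2) ~= 0.819 at T = 10.0.
      print(math.exp(-2.0 / 1.0), math.exp(-2.0 / 10.0))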

  13. Simulated annealing versus quantum annealing

    NASA Astrophysics Data System (ADS)

    Troyer, Matthias

    Based on simulated classical annealing and simulated quantum annealing using quantum Monte Carlo (QMC) simulations, I will explore the question of where physical or simulated quantum annealers may outperform classical optimization algorithms. Although the stochastic dynamics of QMC simulations is not the same as the unitary dynamics of a quantum system, I will first show that for the problem of quantum tunneling between two local minima both QMC simulations and a physical system exhibit the same scaling of tunneling times with barrier height. The scaling in both cases is O(Δ²), where Δ is the tunneling splitting. An important consequence is that QMC simulations can be used to predict the performance of a quantum annealer for tunneling through a barrier. Furthermore, by using open instead of periodic boundary conditions in imaginary time, equivalent to a projector QMC algorithm, one obtains a quadratic speedup for QMC and achieves linear scaling in Δ. I will then address the apparent contradiction between experiments on a D-Wave 2 system that failed to see evidence of quantum speedup and previous QMC results that indicated an advantage of quantum annealing over classical annealing for spin glasses. We find that this contradiction is resolved by taking the continuous time limit in the QMC simulations, which then agree with the experimentally observed behavior and show no speedup for 2D spin glasses. However, QMC simulations with large time steps gain a further advantage: they "cheat" by ignoring what happens during a (large) time step, and can thus outperform both simulated quantum annealers and classical annealers. I will then address the question of how to optimally run a simulated or physical quantum annealer. Investigating the behavior of the tails of the distribution of runtimes for very hard instances, we find that adiabatically slow annealing is far from optimal. On the contrary, many repeated relatively fast annealing runs can be orders of magnitude faster for

  14. Quantum simulated annealing

    NASA Astrophysics Data System (ADS)

    Boixo, Sergio; Somma, Rolando; Barnum, Howard

    2008-03-01

    We develop a quantum algorithm to solve combinatorial optimization problems through quantum simulation of a classical annealing process. Our algorithm combines techniques from quantum walks and quantum phase estimation, and can be viewed as the quantum analogue of the discrete-time Markov Chain Monte Carlo implementation of classical simulated annealing.

  15. Parameter estimation for chaotic systems using a hybrid adaptive cuckoo search with simulated annealing algorithm.

    PubMed

    Sheng, Zheng; Wang, Jun; Zhou, Shudao; Zhou, Bihua

    2014-03-01

    This paper introduces a novel hybrid optimization algorithm to establish the parameters of chaotic systems. To deal with the weaknesses of the traditional cuckoo search algorithm, the proposed adaptive cuckoo search with simulated annealing algorithm is presented, which incorporates an adaptive parameter-adjusting operation and a simulated annealing operation into the cuckoo search algorithm. Normally, the parameters of the cuckoo search algorithm are kept constant, which may decrease the efficiency of the algorithm. To balance and enhance the accuracy and convergence rate of the cuckoo search algorithm, the adaptive operation is presented to tune the parameters properly. Besides, the local search capability of the cuckoo search algorithm is relatively weak, which may decrease the quality of optimization. So the simulated annealing operation is merged into the cuckoo search algorithm to enhance the local search ability and improve the accuracy and reliability of the results. The functionality of the proposed hybrid algorithm is investigated on the Lorenz chaotic system under noise-free and noisy conditions, respectively. The numerical results demonstrate that the method can estimate parameters efficiently and accurately under both conditions. Finally, the results are compared with the traditional cuckoo search algorithm, genetic algorithm, and particle swarm optimization algorithm. Simulation results demonstrate the effectiveness and superior performance of the proposed algorithm.
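
    As one way to make the parameter-estimation setup above concrete, the sketch below estimates the Lorenz parameters by minimising the mismatch between an observed trajectory and a simulated one. It uses SciPy's dual_annealing as a stand-in global optimizer rather than the paper's adaptive cuckoo-search/simulated-annealing hybrid; the time span, initial state, and bounds are illustrative assumptions.

      import numpy as np
      from scipy.integrate import odeint
      from scipy.optimize import dual_annealing  # stand-in optimizer, not the CS-SA hybrid

      def lorenz(state, t, sigma, rho, beta):
          x, y, z = state
          return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

      t = np.linspace(0.0, 2.0, 200)
      true_params = (10.0, 28.0, 8.0 / 3.0)
      observed = odeint(lorenz, [1.0, 1.0, 1.0], t, args=true_params)

      def objective(params):
          # Mismatch between observed and simulated trajectories -- the
          # quantity a hybrid parameter estimator would minimise.
          sim = odeint(lorenz, [1.0, 1.0, 1.0], t, args=tuple(params))
          return float(np.mean((sim - observed) ** 2))

      result = dual_annealing(objective, bounds=[(5, 15), (20, 35), (1, 4)],
                              maxiter=200)
      print(result.x)  # should approach (10, 28, 8/3)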

  16. Parameter estimation for chaotic systems using a hybrid adaptive cuckoo search with simulated annealing algorithm

    NASA Astrophysics Data System (ADS)

    Sheng, Zheng; Wang, Jun; Zhou, Shudao; Zhou, Bihua

    2014-03-01

    This paper introduces a novel hybrid optimization algorithm to establish the parameters of chaotic systems. To deal with the weaknesses of the traditional cuckoo search algorithm, the proposed adaptive cuckoo search with simulated annealing algorithm is presented, which incorporates an adaptive parameter-adjusting operation and a simulated annealing operation into the cuckoo search algorithm. Normally, the parameters of the cuckoo search algorithm are kept constant, which may decrease the efficiency of the algorithm. To balance and enhance the accuracy and convergence rate of the cuckoo search algorithm, the adaptive operation is presented to tune the parameters properly. Besides, the local search capability of the cuckoo search algorithm is relatively weak, which may decrease the quality of optimization. So the simulated annealing operation is merged into the cuckoo search algorithm to enhance the local search ability and improve the accuracy and reliability of the results. The functionality of the proposed hybrid algorithm is investigated on the Lorenz chaotic system under noise-free and noisy conditions, respectively. The numerical results demonstrate that the method can estimate parameters efficiently and accurately under both conditions. Finally, the results are compared with the traditional cuckoo search algorithm, genetic algorithm, and particle swarm optimization algorithm. Simulation results demonstrate the effectiveness and superior performance of the proposed algorithm.

  17. Parameter estimation for chaotic systems using a hybrid adaptive cuckoo search with simulated annealing algorithm

    SciTech Connect

    Sheng, Zheng; Wang, Jun; Zhou, Bihua; Zhou, Shudao

    2014-03-15

    This paper introduces a novel hybrid optimization algorithm to establish the parameters of chaotic systems. To deal with the weaknesses of the traditional cuckoo search algorithm, the proposed adaptive cuckoo search with simulated annealing algorithm is presented, which incorporates an adaptive parameter-adjusting operation and a simulated annealing operation into the cuckoo search algorithm. Normally, the parameters of the cuckoo search algorithm are kept constant, which may decrease the efficiency of the algorithm. To balance and enhance the accuracy and convergence rate of the cuckoo search algorithm, the adaptive operation is presented to tune the parameters properly. Besides, the local search capability of the cuckoo search algorithm is relatively weak, which may decrease the quality of optimization. So the simulated annealing operation is merged into the cuckoo search algorithm to enhance the local search ability and improve the accuracy and reliability of the results. The functionality of the proposed hybrid algorithm is investigated on the Lorenz chaotic system under noise-free and noisy conditions, respectively. The numerical results demonstrate that the method can estimate parameters efficiently and accurately under both conditions. Finally, the results are compared with the traditional cuckoo search algorithm, genetic algorithm, and particle swarm optimization algorithm. Simulation results demonstrate the effectiveness and superior performance of the proposed algorithm.

  18. An Improved SoC Test Scheduling Method Based on Simulated Annealing Algorithm

    NASA Astrophysics Data System (ADS)

    Zheng, Jingjing; Shen, Zhihang; Gao, Huaien; Chen, Bianna; Zheng, Weida; Xiong, Xiaoming

    2017-02-01

    In this paper, we propose an improved SoC test scheduling method based on the simulated annealing algorithm (SA). We first perturb the IP core assignment for each TAM to produce a new solution for SA, allocate the TAM width for each TAM using a greedy algorithm, and calculate the corresponding testing time; the core assignment is then accepted according to the principle of the simulated annealing algorithm, and finally the optimum solution is attained. We run the test scheduling experiment with the international reference circuits provided by the International Test Conference 2002 (ITC'02), and the results show that our algorithm is superior to the conventional integer linear programming algorithm (ILP), simulated annealing algorithm (SA), and genetic algorithm (GA). When the TAM width reaches 48, 56, and 64, the testing time based on our algorithm is less than that of the classic methods, and the optimization rates are 30.74%, 3.32%, and 16.13%, respectively. Moreover, the testing time based on our algorithm is very close to that of the improved genetic algorithm (IGA), which is state-of-the-art at present.

  19. Research on coal-mine gas monitoring system controlled by annealing simulating algorithm

    NASA Astrophysics Data System (ADS)

    Zhou, Mengran; Li, Zhenbi

    2007-12-01

    This paper introduces the principle and schematic diagram of a gas monitoring system based on the infrared method. A simulated annealing algorithm is adopted to find the global optimum solution, and the Metropolis criterion is used for iterative combinatorial optimization with a decreasing control parameter, aiming at solving a large-scale combinatorial optimization problem. Experimental results obtained with the algorithm training scheme and training flow indicate that the simulated annealing algorithm applied to gas identification performs better than the traditional linear local search method. It makes the algorithm iterate to the optimum value rapidly, so the quality of the solution is improved efficiently, the CPU time is shortened, and the gas identification rate is increased. For mines with a high risk of gas outbursts, advance forecasting of regional danger and disasters can be realized, improving the reliability of coal-mine safety.

  20. Multiobjective optimization with a modified simulated annealing algorithm for external beam radiotherapy treatment planning

    SciTech Connect

    Aubry, Jean-Francois; Beaulieu, Frederic; Sevigny, Caroline; Beaulieu, Luc; Tremblay, Daniel

    2006-12-15

    Inverse planning in external beam radiotherapy often requires a scalar objective function that incorporates importance factors to mimic the planner's preferences between conflicting objectives. Defining those importance factors is not straightforward, and frequently leads to an iterative process in which the importance factors become variables of the optimization problem. In order to avoid this drawback of inverse planning, optimization using algorithms more suited to multiobjective optimization, such as evolutionary algorithms, has been suggested. However, much inverse planning software, including one based on simulated annealing developed at our institution, does not include multiobjective-oriented algorithms. This work investigates the performance of a modified simulated annealing algorithm used to drive aperture-based intensity-modulated radiotherapy inverse planning software in a multiobjective optimization framework. For a few test cases involving gastric cancer patients, the use of this new algorithm leads to an increase in optimization speed of a little more than a factor of 2 over a conventional simulated annealing algorithm, while giving a close approximation of the solutions produced by a standard simulated annealing. A simple graphical user interface designed to facilitate the decision-making process that follows an optimization is also presented.

  1. Experiences with serial and parallel algorithms for channel routing using simulated annealing

    NASA Technical Reports Server (NTRS)

    Brouwer, Randall Jay

    1988-01-01

    Two algorithms for channel routing using simulated annealing are presented. Simulated annealing is an optimization methodology which allows the solution process to back up out of local minima that may be encountered by inappropriate selections. By properly controlling the annealing process, it is very likely that the optimal solution to an NP-complete problem such as channel routing may be found. The algorithm presented proposes very relaxed restrictions on the types of allowable transformations, including overlapping nets. By freeing that restriction and controlling overlap situations with an appropriate cost function, the algorithm becomes very flexible and can be applied to many extensions of channel routing. The selection of the transformation utilizes a number of heuristics, still retaining the pseudorandom nature of simulated annealing. The algorithm was implemented as a serial program for a workstation, and a parallel program designed for a hypercube computer. The details of the serial implementation are presented, including many of the heuristics used and some of the resulting solutions.

  2. A parallel simulated annealing algorithm for standard cell placement on a hypercube computer

    NASA Technical Reports Server (NTRS)

    Jones, Mark Howard

    1987-01-01

    A parallel version of a simulated annealing algorithm is presented which is targeted to run on a hypercube computer. A strategy for mapping the cells in a two dimensional area of a chip onto processors in an n-dimensional hypercube is proposed such that both small and large distance moves can be applied. Two types of moves are allowed: cell exchanges and cell displacements. The computation of the cost function in parallel among all the processors in the hypercube is described along with a distributed data structure that needs to be stored in the hypercube to support parallel cost evaluation. A novel tree broadcasting strategy is used extensively in the algorithm for updating cell locations in the parallel environment. Studies on the performance of the algorithm on example industrial circuits show that it is faster and gives better final placement results than the uniprocessor simulated annealing algorithms. An improved uniprocessor algorithm is proposed which is based on the improved results obtained from parallelization of the simulated annealing algorithm.

  3. Hybrid Simulated Annealing and Genetic Algorithms for Industrial Production Management Problems

    NASA Astrophysics Data System (ADS)

    Vasant, Pandian; Barsoum, Nader

    2009-08-01

    This paper describes the origin of, and significant contributions to, the development of the Hybrid Simulated Annealing and Genetic Algorithms (HSAGA) approach to global optimization. HSAGA provides an insightful approach to solving complex optimization problems. The method combines the meta-heuristic approaches of Simulated Annealing and novel Genetic Algorithms to solve a non-linear objective function with uncertain technical coefficients in industrial production management problems. The proposed novel hybrid method is designed to search for the global optimum of the non-linear objective function and for the best feasible solutions of the decision variables. Simulated experiments were carried out rigorously to reflect the advantages of the proposed method. A description of the developed method and the computational experiments with the MATLAB technical tool is presented. An industrial production management optimization problem is solved using the HSAGA technique. The results are very promising.

  4. A Simulated Annealing Algorithm for the Optimization of Multistage Depressed Collector Efficiency

    NASA Technical Reports Server (NTRS)

    Vaden, Karl R.; Wilson, Jeffrey D.; Bulson, Brian A.

    2002-01-01

    The microwave traveling wave tube amplifier (TWTA) is widely used as a high-power transmitting source for space and airborne communications. One critical factor in designing a TWTA is the overall efficiency. However, overall efficiency is highly dependent upon collector efficiency; so collector design is critical to the performance of a TWTA. Therefore, NASA Glenn Research Center has developed an optimization algorithm based on Simulated Annealing to quickly design highly efficient multi-stage depressed collectors (MDC).

  5. The Research on Web-Based Testing Environment Using Simulated Annealing Algorithm

    PubMed Central

    2014-01-01

    Computerized evaluation is now one of the most important methods of diagnosing learning; with the application of artificial intelligence techniques in the field of evaluation, computerized adaptive testing has gradually become one of the most important evaluation methods. In this kind of test, the computer dynamically updates the learner's ability level and selects tailored items from the item pool. To meet the needs of the test, the system must be implemented with relatively high efficiency. To solve this problem, we propose a novel web-based testing environment based on the simulated annealing algorithm. In developing the system, we compared the efficiency and efficacy of the simulated annealing method and other methods through a series of experiments. The experimental results show that this method ensures that nearly optimal items are chosen from the item bank for learners, meets a variety of assessment needs, is reliable, and judges learners' abilities validly. In addition, using the simulated annealing algorithm to handle the computational complexity of the system greatly improves the efficiency of item selection and yields near-optimal solutions. PMID:24959600

  6. Broadband diffusion metasurface based on a single anisotropic element and optimized by the Simulated Annealing algorithm

    PubMed Central

    Zhao, Yi; Cao, Xiangyu; Gao, Jun; Sun, Yu; Yang, Huanhuan; Liu, Xiao; Zhou, Yulong; Han, Tong; Chen, Wei

    2016-01-01

    We propose a new strategy to design broadband and wide-angle diffusion metasurfaces. An anisotropic structure which has opposite phases under x- and y-polarized incidence is employed as the “0” and “1” elements, based on the concept of coding metamaterial. To obtain uniform backward scattering under normal incidence, the Simulated Annealing algorithm is utilized in this paper to calculate the optimal layout. The proposed method provides an efficient way to design a diffusion metasurface with a simple structure, which has been verified by both simulations and measurements. PMID:27034110

  7. Broadband diffusion metasurface based on a single anisotropic element and optimized by the Simulated Annealing algorithm.

    PubMed

    Zhao, Yi; Cao, Xiangyu; Gao, Jun; Sun, Yu; Yang, Huanhuan; Liu, Xiao; Zhou, Yulong; Han, Tong; Chen, Wei

    2016-04-01

    We propose a new strategy to design broadband and wide-angle diffusion metasurfaces. An anisotropic structure which has opposite phases under x- and y-polarized incidence is employed as the "0" and "1" elements, based on the concept of coding metamaterial. To obtain uniform backward scattering under normal incidence, the Simulated Annealing algorithm is utilized in this paper to calculate the optimal layout. The proposed method provides an efficient way to design a diffusion metasurface with a simple structure, which has been verified by both simulations and measurements.

  8. Simulated annealing algorithm for solving chambering student-case assignment problem

    NASA Astrophysics Data System (ADS)

    Ghazali, Saadiah; Abdul-Rahman, Syariza

    2015-12-01

    The project assignment problem is a popular practical problem that arises nowadays. The challenge of solving it increases with the complexity of preferences, the existence of real-world constraints, and problem size. This study focuses on solving a chambering student-case assignment problem, which is classified as a project assignment problem, using a simulated annealing algorithm. The project assignment problem is a hard combinatorial optimization problem, and solving it with a metaheuristic approach is advantageous because a good solution can be returned in a reasonable time. The problem of assigning chambering students to cases has never been addressed in the literature before. In the proposed problem, law graduates must complete chambering before they are qualified to become legal counsel; thus, assigning chambering students to cases is critically needed, especially when many preferences are involved. Hence, this study presents a preliminary study of the proposed project assignment problem. The objective is to minimize the total completion time for all students in solving the given cases. The study employs a minimum-cost greedy heuristic to construct a feasible initial solution. The search then proceeds with a simulated annealing algorithm to further improve solution quality. Analysis of the obtained results shows that the proposed simulated annealing algorithm greatly improves the solution constructed by the minimum-cost greedy heuristic. Hence, this research demonstrates the advantages of solving the project assignment problem using metaheuristic techniques.

  9. Use of a simulated annealing algorithm to fit compartmental models with an application to fractal pharmacokinetics.

    PubMed

    Marsh, Rebeccah E; Riauka, Terence A; McQuarrie, Steve A

    2007-01-01

    Increasingly, fractals are being incorporated into pharmacokinetic models to describe transport and chemical kinetic processes occurring in confined and heterogeneous spaces. However, fractal compartmental models lead to differential equations with power-law time-dependent kinetic rate coefficients that currently are not accommodated by common commercial software programs. This paper describes a parameter optimization method for fitting individual pharmacokinetic curves based on a simulated annealing (SA) algorithm, which always converged towards the global minimum and was independent of the initial parameter values and parameter bounds. In a comparison using a classical compartmental model, similar fits by the Gauss-Newton and Nelder-Mead simplex algorithms required stringent initial estimates and ranges for the model parameters. The SA algorithm is ideal for fitting a wide variety of pharmacokinetic models to clinical data, especially those for which there is weak prior knowledge of the parameter values, such as the fractal models.
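
    To illustrate the kind of model the SA fitting targets, the sketch below integrates a one-compartment model whose elimination rate coefficient decays as a power law in time, k(t) = k0 * t^(-h). The symbols and values are illustrative (h = 0 recovers classical first-order kinetics), and this is not the paper's specific model or data.

      import numpy as np
      from scipy.integrate import odeint

      def fractal_compartment(c, t, k0, h):
          # Power-law time-dependent rate coefficient k(t) = k0 * t**(-h);
          # the small floor on t avoids the singularity at t = 0.
          return -k0 * max(t, 1e-6) ** (-h) * c

      t = np.linspace(0.0, 24.0, 100)
      classical = odeint(fractal_compartment, 100.0, t, args=(0.3, 0.0))
      fractal = odeint(fractal_compartment, 100.0, t, args=(0.3, 0.4))
      # At late times the fractal model eliminates more slowly than the
      # classical one, which is what power-law kinetics capture.
      print(classical[-1], fractal[-1])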

  10. Optimal design of minimum mean-square error noise reduction algorithms using the simulated annealing technique.

    PubMed

    Bai, Mingsian R; Hsieh, Ping-Ju; Hur, Kur-Nan

    2009-02-01

    The performance of the minimum mean-square error noise reduction (MMSE-NR) algorithm in conjunction with time-recursive averaging (TRA) for noise estimation is found to be very sensitive to the choice of two recursion parameters. To address this problem in a more systematic manner, this paper proposes an optimization method to efficiently search for the optimal parameters of the MMSE-TRA-NR algorithms. The objective function is based on a regression model, whereas the optimization process is carried out with the simulated annealing algorithm, which is well suited for problems with many local optima. Another NR algorithm proposed in the paper employs linear prediction coding as a preprocessor for extracting the correlated portion of human speech. Objective and subjective tests were undertaken to compare the optimized MMSE-TRA-NR algorithm with several conventional NR algorithms. The results of the subjective tests were processed using analysis of variance to assess statistical significance. A post hoc test, Tukey's Honestly Significant Difference, was conducted to further assess the pairwise differences between the NR algorithms.

  11. Reconstruction of the vertical electron density profile based on vertical TEC using the simulated annealing algorithm

    NASA Astrophysics Data System (ADS)

    Jiang, Chunhua; Yang, Guobin; Zhu, Peng; Nishioka, Michi; Yokoyama, Tatsuhiro; Zhou, Chen; Song, Huan; Lan, Ting; Zhao, Zhengyu; Zhang, Yuannong

    2016-05-01

    This paper presents a new method to reconstruct the vertical electron density profile based on vertical Total Electron Content (TEC) using the simulated annealing algorithm. The present technique uses quasi-parabolic segments (QPS) to model the bottomside ionosphere. The initial parameters of the ionosphere model were determined from both the International Reference Ionosphere (IRI) (Bilitza et al., 2014) and vertical TEC (vTEC). Then, the simulated annealing algorithm was used to search for the best-fit parameters of the ionosphere model by comparing with the GPS-TEC. The performance and robustness of this technique were verified with ionosonde data. The critical frequency (foF2) and peak height (hmF2) of the F2 layer obtained from ionograms recorded at different locations and on different days were compared with those calculated by the proposed method. The analysis of the results shows that the present method is promising for obtaining foF2 from vTEC. However, the accuracy of hmF2 needs to be improved in future work.

  12. [The utility boiler low NOx combustion optimization based on ANN and simulated annealing algorithm].

    PubMed

    Zhou, Hao; Qian, Xinping; Zheng, Ligang; Weng, Anxin; Cen, Kefa

    2003-11-01

    With increasingly strict environmental protection requirements, more attention has been paid to low-NOx combustion optimization technology because it is cheap and easy to implement. In this work, field experiments on the NOx emission characteristics of a 600 MW coal-fired boiler were carried out. On the basis of artificial neural network (ANN) modeling, the simulated annealing (SA) algorithm was employed to optimize the boiler combustion to achieve a low NOx emission concentration, and the corresponding combustion scheme was obtained. Two sets of SA parameters were adopted to find a better SA scheme; the results show that the parameters T0 = 50 K, alpha = 0.6 lead to a better optimizing process. This work lays the foundation for on-line low-NOx combustion control technology for utility boilers.

  13. An infrared achromatic quarter-wave plate designed based on simulated annealing algorithm

    NASA Astrophysics Data System (ADS)

    Pang, Yajun; Zhang, Yinxin; Huang, Zhanhua; Yang, Huaidong

    2017-03-01

    Quarter-wave plates are primarily used to change the polarization state of light. Their retardation usually varies with the wavelength of the incident light. In this paper, the design and characteristics of an achromatic quarter-wave plate, formed by a cascaded system of birefringent plates, are studied. For the analysis of the combination, we use the Jones matrix method to derive general expressions for the equivalent retardation and the equivalent azimuth. The infrared achromatic quarter-wave plate is designed based on the simulated annealing (SA) algorithm. The maximum retardation variation and the maximum azimuth variation of this achromatic waveplate are only about 1.8° and 0.5°, respectively, over the entire wavelength range of 1250-1650 nm. This waveplate can change linearly polarized light into circularly polarized light with a less than 3.2% degree of linear polarization (DOLP) over that wide wavelength range.

  14. Forecasting nonlinear chaotic time series with function expression method based on an improved genetic-simulated annealing algorithm.

    PubMed

    Wang, Jun; Zhou, Bi-hua; Zhou, Shu-dao; Sheng, Zheng

    2015-01-01

    The paper proposes a novel function expression method to forecast chaotic time series, using an improved genetic-simulated annealing (IGSA) algorithm to establish the optimum function expression that describes the behavior of time series. In order to deal with the weakness associated with the genetic algorithm, the proposed algorithm incorporates the simulated annealing operation which has the strong local search ability into the genetic algorithm to enhance the performance of optimization; besides, the fitness function and genetic operators are also improved. Finally, the method is applied to the chaotic time series of Quadratic and Rossler maps for validation. The effect of noise in the chaotic time series is also studied numerically. The numerical results verify that the method can forecast chaotic time series with high precision and effectiveness, and the forecasting precision with certain noise is also satisfactory. It can be concluded that the IGSA algorithm is energy-efficient and superior.

  15. A memory structure adapted simulated annealing algorithm for a green vehicle routing problem.

    PubMed

    Küçükoğlu, İlker; Ene, Seval; Aksoy, Aslı; Öztürk, Nursel

    2015-03-01

    Currently, reduction of carbon dioxide (CO2) emissions and fuel consumption has become a critical environmental problem and has attracted the attention of both academia and the industrial sector. Government regulations and customer demands are making environmental responsibility an increasingly important factor in overall supply chain operations. Within these operations, transportation has the most hazardous effects on the environment, i.e., CO2 emissions, fuel consumption, noise and toxic effects on the ecosystem. This study aims to construct vehicle routes with time windows that minimize the total fuel consumption and CO2 emissions. The green vehicle routing problem with time windows (G-VRPTW) is formulated using a mixed integer linear programming model. A memory structure adapted simulated annealing (MSA-SA) meta-heuristic algorithm is constructed due to the high complexity of the proposed problem and long solution times for practical applications. The proposed models are integrated with a fuel consumption and CO2 emissions calculation algorithm that considers the vehicle technical specifications, vehicle load, and transportation distance in a green supply chain environment. The proposed models are validated using well-known instances with different numbers of customers. The computational results indicate that the MSA-SA heuristic is capable of obtaining good G-VRPTW solutions within a reasonable amount of time by providing reductions in fuel consumption and CO2 emissions.

  16. Adaptive MANET Multipath Routing Algorithm Based on the Simulated Annealing Approach

    PubMed Central

    Kim, Sungwook

    2014-01-01

    Mobile ad hoc network represents a system of wireless mobile nodes that can freely and dynamically self-organize network topologies without any preexisting communication infrastructure. Due to characteristics like temporary topology and absence of centralized authority, routing is one of the major issues in ad hoc networks. In this paper, a new multipath routing scheme is proposed by employing simulated annealing approach. The proposed metaheuristic approach can achieve greater and reciprocal advantages in a hostile dynamic real world network situation. Therefore, the proposed routing scheme is a powerful method for finding an effective solution into the conflict mobile ad hoc network routing problem. Simulation results indicate that the proposed paradigm adapts best to the variation of dynamic network situations. The average remaining energy, network throughput, packet loss probability, and traffic load distribution are improved by about 10%, 10%, 5%, and 10%, respectively, more than the existing schemes. PMID:25032241

  17. GenAnneal: Genetically modified Simulated Annealing

    NASA Astrophysics Data System (ADS)

    Tsoulos, Ioannis G.; Lagaris, Isaac E.

    2006-05-01

    A modification of the standard Simulated Annealing (SA) algorithm is presented for finding the global minimum of a continuous multidimensional, multimodal function. We report results of computational experiments with a set of test functions and we compare to methods of similar structure. The accompanying software accepts objective functions coded both in Fortran 77 and C++. Program summary: Title of program: GenAnneal. Catalogue identifier: ADXI_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADXI_v1_0. Program available from: CPC Program Library, Queen's University of Belfast, N. Ireland. Computer for which the program is designed and others on which it has been tested: the tool is designed to be portable to all systems running the GNU C++ compiler. Installation: University of Ioannina, Greece, on Linux based machines. Programming language used: GNU-C++, GNU-C, GNU Fortran 77. Memory required to execute with typical data: 200 KB. No. of bits in a word: 32. No. of processors used: 1. Has the code been vectorized or parallelized?: No. No. of bytes in distributed program, including test data, etc.: 84 885. No. of lines in distributed program, including test data, etc.: 14 896. Distribution format: tar.gz. Nature of physical problem: A multitude of problems in science and engineering are often reduced to minimizing a function of many variables. There are instances where a local optimum does not correspond to the desired physical solution and hence the search for a better solution is required. Local optimization techniques are frequently trapped in local minima. Global optimization is hence the appropriate tool. For example, solving a non-linear system of equations via optimization, employing a "least squares" type of objective, one may encounter many local minima that do not correspond to solutions (i.e. they are far from zero). Typical running time: Depending on the objective function. Method of solution: We modified the process of step selection that the traditional Simulated

  18. A Simulated Annealing based Optimization Algorithm for Automatic Variogram Model Fitting

    NASA Astrophysics Data System (ADS)

    Soltani-Mohammadi, Saeed; Safa, Mohammad

    2016-09-01

    Fitting a theoretical model to an experimental variogram is an important issue in geostatistical studies, because if the variogram model parameters are tainted with uncertainty, that uncertainty will spread into the results of estimations and simulations. Although the most popular fitting method is fitting by eye, in some cases the automatic fitting method is used, which combines geostatistical principles and optimization techniques to: 1) provide a basic model to improve fitting by eye, 2) fit a model to a large number of experimental variograms in a short time, and 3) incorporate the variogram-related uncertainty in the model fitting. Effort has been made in this paper to improve the quality of the fitted model by improving the popular objective function (weighted least squares) used in automatic fitting. Also, since the variogram model function and the number of structures (m) also affect model quality, a program has been provided in the MATLAB software that can present optimum nested variogram models using the simulated annealing method. Finally, to select the most desirable model from among the single/multi-structured fitted models, the cross-validation method is used, and the best model is presented to the user as the output. In order to check the capability of the proposed objective function and the procedure, 3 case studies are presented.
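
    A minimal sketch of the fitting setup described above: a spherical variogram model is fit to an experimental variogram by minimising a pair-count-weighted least-squares objective with an annealing-based optimizer. The experimental values are invented, the weighting is the common pair-count scheme rather than the paper's improved objective, and SciPy's dual_annealing stands in for the authors' MATLAB implementation.

      import numpy as np
      from scipy.optimize import dual_annealing  # annealing-based optimizer stand-in

      # Experimental variogram: lag distances, semivariances, and pair counts
      # (numbers invented for illustration).
      lags = np.array([10.0, 20.0, 30.0, 40.0, 60.0, 80.0])
      gamma_exp = np.array([0.9, 1.6, 2.1, 2.4, 2.6, 2.7])
      n_pairs = np.array([120, 110, 95, 80, 60, 40])

      def spherical(h, nugget, sill, a):
          h = np.asarray(h, dtype=float)
          inside = nugget + sill * (1.5 * h / a - 0.5 * (h / a) ** 3)
          return np.where(h <= a, inside, nugget + sill)

      def wls(params):
          # Weighted least squares between model and experimental variogram,
          # weighted by the number of pairs at each lag.
          nugget, sill, a = params
          return float(np.sum(n_pairs * (spherical(lags, nugget, sill, a) - gamma_exp) ** 2))

      fit = dual_annealing(wls, bounds=[(0.0, 1.0), (0.1, 5.0), (10.0, 150.0)])
      print(fit.x)  # fitted nugget, partial sill, and range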

  19. Quantum Simulations of Classical Annealing Processes

    NASA Astrophysics Data System (ADS)

    Somma, R. D.; Boixo, S.; Barnum, H.; Knill, E.

    2008-09-01

    We describe a quantum algorithm that solves combinatorial optimization problems by quantum simulation of a classical simulated annealing process. Our algorithm exploits quantum walks and the quantum Zeno effect induced by evolution randomization. It requires order 1/√δ steps to find an optimal solution with bounded error probability, where δ is the minimum spectral gap of the stochastic matrices used in the classical annealing process. This is a quadratic improvement over the order 1/δ steps required by the latter.

  20. Optimization of seasonal ARIMA models using differential evolution - simulated annealing (DESA) algorithm in forecasting dengue cases in Baguio City

    NASA Astrophysics Data System (ADS)

    Addawe, Rizavel C.; Addawe, Joel M.; Magadia, Joselito C.

    2016-10-01

    Accurate forecasting of dengue cases would significantly improve epidemic prevention and control capabilities. This paper attempts to provide useful models in forecasting dengue epidemic specific to the young and adult population of Baguio City. To capture the seasonal variations in dengue incidence, this paper develops a robust modeling approach to identify and estimate seasonal autoregressive integrated moving average (SARIMA) models in the presence of additive outliers. Since the least squares estimators are not robust in the presence of outliers, we suggest a robust estimation based on winsorized and reweighted least squares estimators. A hybrid algorithm, Differential Evolution - Simulated Annealing (DESA), is used to identify and estimate the parameters of the optimal SARIMA model. The method is applied to the monthly reported dengue cases in Baguio City, Philippines.

  1. Permanent prostate implant using high activity seeds and inverse planning with fast simulated annealing algorithm: A 12-year Canadian experience

    SciTech Connect

    Martin, Andre-Guy; Roy, Jean; Beaulieu, Luc; Pouliot, Jean; Harel, Francois; Vigneault, Eric . E-mail: Eric.Vigneault@chuq.qc.ca

    2007-02-01

    Purpose: To report outcomes and toxicity of the first Canadian permanent prostate implant program. Methods and Materials: 396 consecutive patients (Gleason ≤6, initial prostate specific antigen (PSA) ≤10, and stage T1-T2a disease) were implanted between June 1994 and December 2001. The median follow-up is 60 months (maximum, 136 months). All patients were planned with a fast-simulated annealing inverse planning algorithm with high activity seeds (>0.76 U). Acute and late toxicity is reported for the first 213 patients using a modified RTOG toxicity scale. The Kaplan-Meier biochemical failure-free survival (bFFS) is reported according to the ASTRO and Houston definitions. Results: The bFFS at 60 months was 88.5% (90.5%) according to the ASTRO (Houston) definition and 91.4% (94.6%) in the low risk group (initial PSA ≤10 and Gleason ≤6 and Stage ≤T2a). Risk factors statistically associated with bFFS were: initial PSA >10, a Gleason score of 7-8, and stage T2b-T3. The mean D90 was 151 ± 36.1 Gy. The mean V100 was 85.4 ± 8.5% with a mean V150 of 60.1 ± 12.3%. Overall, the implants were well tolerated. In the first 6 months, 31.5% of the patients were free of genitourinary symptoms (GUs), 12.7% had Grade 3 GUs; 91.6% were free of gastrointestinal symptoms (GIs). After 6 months, 54.0% were GUs free, 1.4% had Grade 3 GUs; 95.8% were GIs free. Conclusion: The inverse planning with fast simulated annealing and high activity seeds gives a 5-year bFFS which is comparable with the best published series, with a low toxicity profile.

  2. Design of optimal pump-and-treat strategies for contaminated groundwater remediation using the simulated annealing algorithm

    NASA Astrophysics Data System (ADS)

    Kuo, Chin-Hwa; Michel, Anthony N.; Gray, William G.

    The placement of pumps and the selection of pumping rates are the most important issues in designing contaminated groundwater remediation systems using a pump-and-treat strategy. Three nonlinear optimization formulations are proposed to address these problems. The first problem formulation considers hydraulic constraints and reduces the plume concentration to a specified regulation standard value within a given planning time while minimizing capital cost. The second formulation minimizes residual contaminant in a fixed period under hydraulic constraints only. The third formulation is similar to the second formulation; however, in this formulation the number of pumps is prespecified by using the results from the first formulation. The inclusion of well installation costs in the first problem formulation results in a nonsmooth objective function. For such problems, only local optimum solutions can be expected by the use of conventional nonlinear optimization techniques. In the present paper, the simulated annealing algorithm is used to overcome these difficulties. Specific simulation studies indicate that the method advanced herein is promising and involves acceptable computation times.

  3. Genetic Algorithm Based Simulated Annealing Method for Solving Unit Commitment Problem in Utility System

    NASA Astrophysics Data System (ADS)

    Rajan, C. Christober Asir

    2010-10-01

    The objective of this paper is to find a generation schedule such that the total operating cost is minimized, subject to a variety of constraints. This also means that it is desirable to find the optimal generating unit commitment in the power system for the next H hours. Genetic Algorithms (GAs) are general-purpose optimization techniques based on principles inspired by biological evolution, using metaphors of mechanisms such as natural selection, genetic recombination, and survival of the fittest. Here, the unit commitment schedule is coded as a string of symbols. An initial population of parent solutions is generated at random. Each schedule is formed by committing all the units according to their initial status ("flat start"). The parents are obtained from a pre-defined set of solutions, i.e., each solution is adjusted to meet the requirements. Then, a random recommitment is carried out with respect to the units' minimum down times, and SA improves the resulting status. A 66-bus utility power system with twelve generating units in India demonstrates the effectiveness of the proposed approach. Numerical results compare the cost solutions and computation time obtained using the Genetic Algorithm method and other conventional methods.

  4. An Introduction to Simulated Annealing

    ERIC Educational Resources Information Center

    Albright, Brian

    2007-01-01

    An attempt to model the physical process of annealing led to the development of a type of combinatorial optimization algorithm that addresses the problem of getting trapped in a local minimum. The author presents a Microsoft Excel spreadsheet that illustrates how this works.
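
    As a companion to the spreadsheet illustration described above, the following minimal Python sketch (not from the cited article; the test function and parameters are illustrative) shows the core mechanism: a Metropolis acceptance rule under a geometrically decreasing temperature, which lets the search escape the local minimum in which it is started.

```python
import math
import random

def objective(x):
    # Illustrative multimodal test function; its global minimum is near x = -0.52,
    # with a further local minimum near x = 1.57 that a greedy search can get stuck in.
    return 0.2 * x ** 2 + math.sin(3 * x)

def simulated_annealing(x0, t0=2.0, alpha=0.95, steps_per_temp=50, t_min=1e-3):
    x, fx = x0, objective(x0)
    best_x, best_f = x, fx
    t = t0
    while t > t_min:
        for _ in range(steps_per_temp):
            cand = x + random.gauss(0.0, 0.5)           # random neighbour of the current point
            fc = objective(cand)
            # Metropolis rule: always accept improvements, sometimes accept worse moves.
            if fc < fx or random.random() < math.exp((fx - fc) / t):
                x, fx = cand, fc
                if fx < best_f:
                    best_x, best_f = x, fx
        t *= alpha                                       # geometric cooling
    return best_x, best_f

if __name__ == "__main__":
    random.seed(0)
    # Start near the local minimum at x ~ 1.57; annealing should escape towards x ~ -0.52.
    print(simulated_annealing(x0=1.5))
```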

  5. The Local Minima Problem in Hierarchical Classes Analysis: An Evaluation of a Simulated Annealing Algorithm and Various Multistart Procedures

    ERIC Educational Resources Information Center

    Ceulemans, Eva; Van Mechelen, Iven; Leenen, Iwin

    2007-01-01

    Hierarchical classes models are quasi-order retaining Boolean decomposition models for N-way N-mode binary data. To fit these models to data, rationally started alternating least squares (or, equivalently, alternating least absolute deviations) algorithms have been proposed. Extensive simulation studies showed that these algorithms succeed quite…

  6. Optimal Groundwater Management: 1. Simulated Annealing

    NASA Astrophysics Data System (ADS)

    Dougherty, David E.; Marryott, Robert A.

    1991-10-01

    Simulated annealing is introduced and applied to the optimization of groundwater management problems cast in combinatorial form. This heuristic, probabilistic optimization method seeks minima in analogy with the annealing of solids and is effective on large-scale problems. No continuity requirements are imposed on objective (cost) functions. Constraints may be added to the cost function via penalties, imposed by designation of the solution domain, or embedded in submodels (e.g., mass balance in aquifer flow simulators) used to evaluate costs. The location of global optima may be theoretically guaranteed, but computational limitations lead to searches for nearly optimal solutions in practice. As with other optimization methods, most of the computational effort is expended in the flow and transport simulators. Practical algorithmic guidance is provided that leads to enormous computational savings and sometimes makes simulated annealing competitive with gradient-type optimization methods. The method is illustrated by example applications to idealized problems of groundwater flow and selection of remediation strategy, including optimization with multiple groundwater control technologies. They demonstrate the flexibility of the method and indicate its potential for solving groundwater management problems. The application of simulated annealing to water resources problems is new and its development is immature, so further performance improvements can be expected.
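
    The record above notes that constraints may be folded into the cost function via penalties. The hedged Python sketch below illustrates that penalty approach on a toy combinatorial well-selection problem; all well costs, yields, and the capture requirement are made-up surrogate numbers standing in for a flow-and-transport submodel.

```python
import math
import random

# Hypothetical stand-in for a flow/transport submodel: a cheap surrogate cost so the
# sketch runs; a real study would call a groundwater simulator instead.
COST = [3.0, 5.0, 2.5, 4.0, 6.0, 1.5, 3.5, 2.0]    # operating cost per candidate well
YIELD = [10, 18, 8, 15, 22, 5, 12, 7]              # capture contribution per well
REQUIRED_CAPTURE = 45                              # constraint handled via a penalty

def cost(x, penalty_weight=50.0):
    operating = sum(c for c, on in zip(COST, x) if on)
    capture = sum(y for y, on in zip(YIELD, x) if on)
    shortfall = max(0, REQUIRED_CAPTURE - capture)
    return operating + penalty_weight * shortfall  # constraint enters as a penalty term

def neighbour(x):
    y = list(x)
    i = random.randrange(len(y))
    y[i] = 1 - y[i]                                # toggle one well on/off
    return y

def anneal(t0=10.0, alpha=0.98, iters=3000):
    x = [random.randint(0, 1) for _ in COST]
    fx = cost(x)
    best, best_f = list(x), fx
    t = t0
    for _ in range(iters):
        y = neighbour(x)
        fy = cost(y)
        if fy < fx or random.random() < math.exp((fx - fy) / t):
            x, fx = y, fy
            if fx < best_f:
                best, best_f = list(x), fx
        t *= alpha
    return best, best_f

if __name__ == "__main__":
    random.seed(1)
    print(anneal())
```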

  7. Simulated annealing model of acupuncture

    NASA Astrophysics Data System (ADS)

    Shang, Charles; Szu, Harold

    2015-05-01

    The growth control singularity model suggests that acupuncture points (acupoints) originate from organizers in embryogenesis; organizers are singular points in growth control. Acupuncture can perturb a system with effects similar to simulated annealing. In a clinical trial, the goal of a treatment is to relieve a certain disorder, which corresponds to reaching a certain local optimum in simulated annealing. The self-organizing effect of the system is limited and related to the person's general health and age. Perturbation at acupoints can lead to a stronger local excitation (analogous to a higher annealing temperature) compared with perturbation at non-singular points (placebo control points). This difference diminishes as the number of perturbed points increases, owing to the wider distribution of the limited self-organizing activity. The model explains the following facts from systematic reviews of acupuncture trials: 1. A properly chosen single-acupoint treatment for a certain disorder can lead to highly repeatable efficacy above placebo. 2. When multiple acupoints are used, the result can be highly repeatable if the patients are relatively healthy and young, but is usually mixed if the patients are old, frail, and have multiple simultaneous disorders, as the number of local optima or comorbidities increases. 3. As the number of acupoints used increases, the efficacy difference between sham and real acupuncture often diminishes. The model predicts that the efficacy of acupuncture is negatively correlated with disease chronicity, severity, and the patient's age. This is the first biological-physical model of acupuncture which can predict and guide clinical acupuncture research.

  8. Seismic traveltime tomography: a simulated annealing approach

    NASA Astrophysics Data System (ADS)

    Wéber, Zoltán

    2000-04-01

    Seismic traveltime tomography involves finding a velocity model that minimizes the error energy between the measured and the theoretical traveltimes. When solving this nonlinear inverse problem, a local optimization technique can easily produce a solution for which the gradient of the error energy function vanishes, but the energy function itself does not take its global minimum. Other methods such as simulated annealing can be applied to such global optimization problems. The simulated annealing approach to seismic traveltime tomography described in this paper has been tested on synthetic as well as real seismic data. It is shown that unlike local methods, the convergence of the simulated annealing algorithm is independent of the initial model: even in cases of virtually no prior information, it is capable of producing reliable results. The method can provide a number of acceptable solutions. When prior information is sparse, the solution of the global optimization can be used as an input to a local optimization procedure, such as the simultaneous iterative reconstruction technique (SIRT), producing an even more accurate result.

  9. A hybrid genetic-simulated annealing algorithm for the location-inventory-routing problem considering returns under e-supply chain environment.

    PubMed

    Li, Yanhui; Guo, Hao; Wang, Lin; Fu, Jing

    2013-01-01

    Facility location, inventory control, and vehicle route scheduling are critical and highly related problems in the design of logistics systems for e-business. Meanwhile, the return ratio in Internet sales is significantly higher than in traditional business. Much of the returned merchandise has no quality defects and can reenter sales channels after a simple repackaging process. Focusing on this problem in e-commerce logistics systems, we formulate a location-inventory-routing problem model with returns that have no quality defects. To solve this NP-hard problem, an effective hybrid genetic simulated annealing algorithm (HGSAA) is proposed. Results of numerical examples show that HGSAA outperforms a GA in computing time, solution quality, and computing stability. The proposed model is very useful in helping managers make the right decisions in an e-supply chain environment.
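
    The following Python sketch is a generic illustration of how a genetic algorithm can be hybridized with a simulated annealing acceptance step; it is not the authors' HGSAA, and the bit-string objective is purely illustrative.

```python
import math
import random

def fitness(x):
    # Illustrative objective (to be minimised): distance of a bit-string from a target;
    # a real HGSAA would evaluate location, inventory and routing costs instead.
    target = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
    return sum((xi - ti) ** 2 for xi, ti in zip(x, target))

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(x, rate=0.1):
    return [1 - xi if random.random() < rate else xi for xi in x]

def hybrid_ga_sa(pop_size=20, n_bits=10, generations=60, t0=2.0, alpha=0.9):
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    t = t0
    for _ in range(generations):
        new_pop = []
        for parent in pop:
            mate = random.choice(pop)
            child = mutate(crossover(parent, mate))
            # SA-style acceptance: the child replaces the parent if better,
            # or with Metropolis probability if worse (temperature-controlled).
            d = fitness(child) - fitness(parent)
            if d < 0 or random.random() < math.exp(-d / t):
                new_pop.append(child)
            else:
                new_pop.append(parent)
        pop = new_pop
        t *= alpha                      # cool between generations
    return min(pop, key=fitness)

if __name__ == "__main__":
    random.seed(2)
    best = hybrid_ga_sa()
    print(best, fitness(best))
```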

  10. A Hybrid Genetic-Simulated Annealing Algorithm for the Location-Inventory-Routing Problem Considering Returns under E-Supply Chain Environment

    PubMed Central

    Guo, Hao; Fu, Jing

    2013-01-01

    Facility location, inventory control, and vehicle route scheduling are critical and highly related problems in the design of logistics systems for e-business. Meanwhile, the return ratio in Internet sales is significantly higher than in traditional business. Much of the returned merchandise has no quality defects and can reenter sales channels after a simple repackaging process. Focusing on this problem in e-commerce logistics systems, we formulate a location-inventory-routing problem model with returns that have no quality defects. To solve this NP-hard problem, an effective hybrid genetic simulated annealing algorithm (HGSAA) is proposed. Results of numerical examples show that HGSAA outperforms a GA in computing time, solution quality, and computing stability. The proposed model is very useful in helping managers make the right decisions in an e-supply chain environment. PMID:24489489

  11. Optimised simulated annealing for Ising spin glasses

    NASA Astrophysics Data System (ADS)

    Isakov, S. V.; Zintchenko, I. N.; Rønnow, T. F.; Troyer, M.

    2015-07-01

    We present several efficient implementations of the simulated annealing algorithm for Ising spin glasses on sparse graphs. In particular, we provide a generic code for any choice of couplings, an optimised code for bipartite graphs, and highly optimised implementations using multi-spin coding for graphs with small maximum degree and discrete couplings with a finite range. The latter codes achieve up to 50 spin flips per nanosecond on modern Intel CPUs. We also compare the performance of the codes to that of the special purpose D-Wave devices built for solving such Ising spin glass problems.
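
    For reference, a plain single-spin-flip simulated annealing routine for a sparse Ising instance can be written as below; this Python sketch does not use the multi-spin coding or other optimisations described in the record, and the toy ring instance is illustrative.

```python
import math
import random

def sa_ising(neighbours, couplings, betas, sweeps_per_beta=10, seed=0):
    """Single-spin-flip simulated annealing for H = -sum_{<ij>} J_ij * s_i * s_j."""
    rng = random.Random(seed)
    n = len(neighbours)
    spins = [rng.choice((-1, 1)) for _ in range(n)]
    for beta in betas:                               # increasing beta = decreasing temperature
        for _ in range(sweeps_per_beta):
            for i in range(n):
                # Local field h_i = sum_j J_ij s_j; flipping s_i changes the energy by 2 s_i h_i.
                h = sum(couplings[(min(i, j), max(i, j))] * spins[j] for j in neighbours[i])
                dE = 2.0 * spins[i] * h
                if dE <= 0 or rng.random() < math.exp(-beta * dE):
                    spins[i] = -spins[i]
    energy = -sum(j_ij * spins[i] * spins[j] for (i, j), j_ij in couplings.items())
    return spins, energy

if __name__ == "__main__":
    # Toy sparse instance: a ring of 8 spins with random +/-1 couplings.
    rng = random.Random(42)
    n = 8
    couplings = {}
    neighbours = [[] for _ in range(n)]
    for i in range(n):
        j = (i + 1) % n
        couplings[(min(i, j), max(i, j))] = rng.choice((-1, 1))
        neighbours[i].append(j)
        neighbours[j].append(i)
    betas = [0.1 * k for k in range(1, 31)]          # linear schedule in inverse temperature
    print(sa_ising(neighbours, couplings, betas))
```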

  12. a New Multimodal Multi-Criteria Route Planning Model by Integrating a Fuzzy-Ahp Weighting Method and a Simulated Annealing Algorithm

    NASA Astrophysics Data System (ADS)

    Ghaderi, F.; Pahlavani, P.

    2015-12-01

    A multimodal multi-criteria route planning (MMRP) system provides an optimal multimodal route from an origin point to a destination point considering two or more criteria, where the route can combine public and private transportation modes. In this paper, simulated annealing (SA) and the fuzzy analytical hierarchy process (fuzzy AHP) were combined in order to find this route. First, the criteria that are significant for users on their trip were determined. Then the weight of each criterion was calculated using the fuzzy AHP weighting method. The most important characteristic of this weighting method is the use of fuzzy numbers, which helps users express their uncertainty in the pairwise comparison of criteria. After determining the criteria weights, the proposed SA algorithm was used to determine an optimal route from an origin to a destination; one of the most important problems for a meta-heuristic algorithm is becoming trapped in local minima. In this study, five transportation modes, including subway, bus rapid transit (BRT), taxi, walking, and bus, were considered for moving between nodes. The fare, the time, the user's inconvenience, and the length of the path were considered as the criteria for solving the problem. The proposed model was implemented for an area in the centre of Tehran in MATLAB with a graphical user interface. The results showed the high efficiency and speed of the proposed algorithm, which supports our analyses.

  13. A Monte Carlo/simulated annealing algorithm for sequential resonance assignment in solid state NMR of uniformly labeled proteins with magic-angle spinning

    NASA Astrophysics Data System (ADS)

    Tycko, Robert; Hu, Kan-Nian

    2010-08-01

    We describe a computational approach to sequential resonance assignment in solid state NMR studies of uniformly 15N, 13C-labeled proteins with magic-angle spinning. As input, the algorithm uses only the protein sequence and lists of 15N/13Cα crosspeaks from 2D NCACX and NCOCX spectra that include possible residue-type assignments of each crosspeak. Assignment of crosspeaks to specific residues is carried out by a Monte Carlo/simulated annealing algorithm, implemented in the program MC_ASSIGN1. The algorithm tolerates substantial ambiguity in residue-type assignments and coexistence of visible and invisible segments in the protein sequence. We use MC_ASSIGN1 and our own 2D spectra to replicate and extend the sequential assignments for uniformly-labeled HET-s(218-289) fibrils previously determined manually by Siemer et al. (J. Biomol. NMR, 34 (2006) 75-87) from a more extensive set of 2D and 3D spectra. Accurate assignments by MC_ASSIGN1 do not require data that are of exceptionally high quality. Use of MC_ASSIGN1 (and its extensions to other types of 2D and 3D data) is likely to alleviate many of the difficulties and uncertainties associated with manual resonance assignments in solid state NMR studies of uniformly labeled proteins, where spectral resolution and signal-to-noise are often sub-optimal.

  14. Hybridisations Of Simulated Annealing And Modified Simplex Algorithms On A Path Of Steepest Ascent With Multi-Response For Optimal Parameter Settings Of ACO

    NASA Astrophysics Data System (ADS)

    Luangpaiboon, P.

    2009-10-01

    Many enterprises face extreme conditions in, for instance, costs, quality, sales, and services. Moreover, technology has always been intertwined with our demands, so most manufacturers and assembly lines adopt it and inevitably end up with more complicated processes. At this stage, product and service improvement must set a firm apart from its competitors in a sustainable way, and simulated process optimisation is one way to solve such huge and complex problems. Metaheuristics are sequential processes that perform exploration and exploitation in the solution space, aiming to efficiently find near-optimal solutions with natural intelligence as a source of inspiration. One of the most well-known metaheuristics is Ant Colony Optimisation (ACO). This paper aims to ease the difficulty of using ACO by tuning its parameters: the number of iterations, ants, and moves. Proper levels of these parameters are analysed on eight noisy non-linear continuous response surfaces. Considering the solution space in a specified region, some surfaces contain a global optimum and multiple local optima, and some have a curved ridge. The ACO parameters are determined through hybridisations of the Modified Simplex and Simulated Annealing methods on the path of Steepest Ascent (SAM). SAM was introduced to recommend preferable levels of the ACO parameters via statistically significant regression analysis and Taguchi's signal-to-noise ratio. Other performance measures include minimax and mean squared error. A series of computational experiments using each algorithm was conducted, and the results were analysed in terms of mean, design points, and best-so-far solutions. It was found that results obtained from a hybridisation with the stochastic procedures of the Simulated Annealing method were better than those using the Modified Simplex algorithm. However, the average execution time of experimental runs and number of design points using hybridisations were

  15. Comparing Monte Carlo methods for finding ground states of Ising spin glasses: Population annealing, simulated annealing, and parallel tempering

    NASA Astrophysics Data System (ADS)

    Wang, Wenlong; Machta, Jonathan; Katzgraber, Helmut G.

    2015-07-01

    Population annealing is a Monte Carlo algorithm that marries features from simulated-annealing and parallel-tempering Monte Carlo. As such, it is ideal to overcome large energy barriers in the free-energy landscape while minimizing a Hamiltonian. Thus, population-annealing Monte Carlo can be used as a heuristic to solve combinatorial optimization problems. We illustrate the capabilities of population-annealing Monte Carlo by computing ground states of the three-dimensional Ising spin glass with Gaussian disorder, while comparing to simulated-annealing and parallel-tempering Monte Carlo. Our results suggest that population annealing Monte Carlo is significantly more efficient than simulated annealing but comparable to parallel-tempering Monte Carlo for finding spin-glass ground states.
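
    A hedged Python sketch of the population annealing idea is shown below: replicas are resampled with weights proportional to exp(-Δβ E) and then updated with Metropolis moves at each temperature step. It uses an illustrative one-dimensional energy landscape rather than the three-dimensional spin glasses studied in the record.

```python
import math
import random

def energy(x):
    # Illustrative rugged 1D landscape; the cited work uses 3D Ising spin glasses instead.
    return 0.1 * x * x + math.sin(5 * x)

def population_annealing(n_replicas=200, betas=None, sweeps=20, seed=0):
    rng = random.Random(seed)
    if betas is None:
        betas = [0.1 * k for k in range(1, 51)]       # beta_1 < ... < beta_K
    pop = [rng.uniform(-4, 4) for _ in range(n_replicas)]
    beta_prev = 0.0
    for beta in betas:
        # Differential reproduction: resample replicas with weights
        # proportional to exp(-(beta - beta_prev) * E_i).
        weights = [math.exp(-(beta - beta_prev) * energy(x)) for x in pop]
        pop = rng.choices(pop, weights=weights, k=n_replicas)
        # Equilibrate each replica at the new temperature with Metropolis moves.
        new_pop = []
        for x in pop:
            e = energy(x)
            for _ in range(sweeps):
                cand = x + rng.gauss(0.0, 0.3)
                ec = energy(cand)
                if ec <= e or rng.random() < math.exp(-beta * (ec - e)):
                    x, e = cand, ec
            new_pop.append(x)
        pop = new_pop
        beta_prev = beta
    return min(pop, key=energy)

if __name__ == "__main__":
    x_best = population_annealing()
    print(x_best, energy(x_best))
```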

  16. Comparing Monte Carlo methods for finding ground states of Ising spin glasses: Population annealing, simulated annealing, and parallel tempering.

    PubMed

    Wang, Wenlong; Machta, Jonathan; Katzgraber, Helmut G

    2015-07-01

    Population annealing is a Monte Carlo algorithm that marries features from simulated-annealing and parallel-tempering Monte Carlo. As such, it is ideal to overcome large energy barriers in the free-energy landscape while minimizing a Hamiltonian. Thus, population-annealing Monte Carlo can be used as a heuristic to solve combinatorial optimization problems. We illustrate the capabilities of population-annealing Monte Carlo by computing ground states of the three-dimensional Ising spin glass with Gaussian disorder, while comparing to simulated-annealing and parallel-tempering Monte Carlo. Our results suggest that population annealing Monte Carlo is significantly more efficient than simulated annealing but comparable to parallel-tempering Monte Carlo for finding spin-glass ground states.

  17. Applications of an MPI Enhanced Simulated Annealing Algorithm on nuSTORM and 6D Muon Cooling

    SciTech Connect

    Liu, A.

    2015-06-01

    The nuSTORM decay ring is a compact racetrack storage ring with a circumference ~480 m using large aperture (φ = 60 cm) magnets. The design goal of the ring is to achieve a momentum acceptance of 3.8 GeV/c ±10% and a phase space acceptance of 2000 μm·rad. The design has many challenges because the acceptance will be affected by many nonlinearity terms with large particle emittance and/or large momentum offset. In this paper, we present the application of a meta-heuristic optimization algorithm to the sextupole correction in the ring. The algorithm is capable of finding a balanced compromise among corrections of the nonlinearity terms, and finding the largest acceptance. This technique can be applied to the design of similar storage rings that store beams with wide transverse phase space and momentum spectra. We also present the recent study on the application of this algorithm to a part of the 6D muon cooling channel. The technique and the cooling concept will be applied to design a cooling channel for the extracted muon beam at nuSTORM in the future study.

  18. Classical Simulated Annealing Using Quantum Analogues

    NASA Astrophysics Data System (ADS)

    La Cour, Brian R.; Troupe, James E.; Mark, Hans M.

    2016-08-01

    In this paper we consider the use of certain classical analogues to quantum tunneling behavior to improve the performance of simulated annealing on a discrete spin system of the general Ising form. Specifically, we consider the use of multiple simultaneous spin flips at each annealing step as an analogue to quantum spin coherence as well as modifications of the Boltzmann acceptance probability to mimic quantum tunneling. We find that the use of multiple spin flips can indeed be advantageous under certain annealing schedules, but only for long anneal times.
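
    The multiple-simultaneous-spin-flip move described above can be sketched as follows in Python (illustrative chain instance and parameters, not the authors' implementation): a joint flip of up to k spins is proposed and accepted or rejected as a whole by the Metropolis rule.

```python
import math
import random

def ising_energy(spins, J):
    # 1D Ising chain with couplings J[i] between spins i and i+1 (open chain).
    return -sum(J[i] * spins[i] * spins[i + 1] for i in range(len(J)))

def anneal_multiflip(J, max_flips=3, t0=3.0, alpha=0.97, iters=4000, seed=0):
    rng = random.Random(seed)
    n = len(J) + 1
    spins = [rng.choice((-1, 1)) for _ in range(n)]
    e = ising_energy(spins, J)
    t = t0
    for _ in range(iters):
        # Propose flipping between 1 and max_flips randomly chosen spins at once,
        # a classical stand-in for a multi-spin "coherent" move.
        k = rng.randint(1, max_flips)
        idx = rng.sample(range(n), k)
        for i in idx:
            spins[i] = -spins[i]
        e_new = ising_energy(spins, J)
        if e_new <= e or rng.random() < math.exp(-(e_new - e) / t):
            e = e_new                     # accept the joint move
        else:
            for i in idx:                 # reject: undo all flips
                spins[i] = -spins[i]
        t *= alpha
    return spins, e

if __name__ == "__main__":
    rng = random.Random(7)
    J = [rng.choice((-1, 1)) for _ in range(15)]    # random +/-1 couplings
    print(anneal_multiflip(J))
```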

  19. Application of artificial neural network coupled with genetic algorithm and simulated annealing to solve groundwater inflow problem to an advancing open pit mine

    NASA Astrophysics Data System (ADS)

    Bahrami, Saeed; Doulati Ardejani, Faramarz; Baafi, Ernest

    2016-05-01

    In this study, hybrid models are designed to predict groundwater inflow to an advancing open pit mine and the hydraulic head (HH) in observation wells at different distances from the centre of the pit during its advance. Hybrid methods coupling artificial neural network (ANN) with genetic algorithm (GA) methods (ANN-GA), and simulated annealing (SA) methods (ANN-SA), were utilised. Ratios of depth of pit penetration in aquifer to aquifer thickness, pit bottom radius to its top radius, inverse of pit advance time and the HH in the observation wells to the distance of observation wells from the centre of the pit were used as inputs to the networks. To achieve the objective two hybrid models consisting of ANN-GA and ANN-SA with 4-5-3-1 arrangement were designed. In addition, by switching the last argument of the input layer with the argument of the output layer of two earlier models, two new models were developed to predict the HH in the observation wells for the period of the mining process. The accuracy and reliability of models are verified by field data, results of a numerical finite element model using SEEP/W, outputs of simple ANNs and some well-known analytical solutions. Predicted results obtained by the hybrid methods are closer to the field data compared to the outputs of analytical and simple ANN models. Results show that despite the use of fewer and simpler parameters by the hybrid models, the ANN-GA and to some extent the ANN-SA have the ability to compete with the numerical models.

  20. Stochastic annealing simulation of cascades in metals

    SciTech Connect

    Heinisch, H.L.

    1996-04-01

    The stochastic annealing simulation code ALSOME is used to investigate quantitatively the differential production of mobile vacancy and SIA defects as a function of temperature for isolated 25 keV cascades in copper generated by MD simulations. The ALSOME code and cascade annealing simulations are described. The annealing simulations indicate that above Stage V, where the cascade vacancy clusters are unstable, nearly 80% of the post-quench vacancies escape the cascade volume, while about half of the post-quench SIAs remain in clusters. The results are sensitive to the relative fractions of SIAs that occur in small, highly mobile clusters and large stable clusters, respectively, which may be dependent on the cascade energy.

  1. Constructing circular phylogenetic networks from weighted quartets using simulated annealing.

    PubMed

    Eslahchi, Changiz; Hassanzadeh, Reza; Mottaghi, Ehsan; Habibi, Mahnaz; Pezeshk, Hamid; Sadeghi, Mehdi

    2012-02-01

    In this paper, we present a heuristic algorithm based on simulated annealing, SAQ-Net, as a method for constructing phylogenetic networks from weighted quartets. Similar to the QNet algorithm, SAQ-Net constructs a collection of circular weighted splits of the taxa set. This collection is represented by a split network. In order to show that SAQ-Net performs better than QNet, we apply these algorithms to both simulated and actual data sets, including the Salmonella, Bees, Primates, and Rubber data sets. We then draw the phylogenetic networks corresponding to the outputs of these algorithms using SplitsTree4 and compare the results. We find that SAQ-Net produces a better circular ordering and better phylogenetic networks than QNet in most cases. SAQ-Net has been implemented in Matlab and is available for download at http://bioinf.cs.ipm.ac.ir/softwares/saq.net.

  2. Fast Object Recognition in Noisy Images Using Simulated Annealing.

    DTIC Science & Technology

    1994-12-01

    correlation coefficient is used as a measure of the match between a hypothesized object and an image. Templates are generated on-line during the search by transforming model images. Simulated annealing reduces the search time by orders of magnitude with respect to an exhaustive search. The algorithm is applied to the problem of how landmarks, for example, traffic signs, can be recognized by an autonomous vehicle or a navigating robot. The algorithm works well in noisy, real-world images of complicated scenes for model images with high information

  3. Estimation of the parameters of ETAS models by Simulated Annealing.

    PubMed

    Lombardi, Anna Maria

    2015-02-12

    This paper proposes a new algorithm to estimate the maximum likelihood parameters of an Epidemic Type Aftershock Sequences (ETAS) model. It is based on Simulated Annealing, a versatile method that solves problems of global optimization and ensures convergence to a global optimum. The procedure is tested on both simulated and real catalogs. The main conclusion is that the method performs poorly as the size of the catalog decreases because the effect of the correlation of the ETAS parameters is more significant. These results give new insights into the ETAS model and the efficiency of the maximum-likelihood method within this context.

  4. Estimation of the parameters of ETAS models by Simulated Annealing

    NASA Astrophysics Data System (ADS)

    Lombardi, Anna Maria

    2015-02-01

    This paper proposes a new algorithm to estimate the maximum likelihood parameters of an Epidemic Type Aftershock Sequences (ETAS) model. It is based on Simulated Annealing, a versatile method that solves problems of global optimization and ensures convergence to a global optimum. The procedure is tested on both simulated and real catalogs. The main conclusion is that the method performs poorly as the size of the catalog decreases because the effect of the correlation of the ETAS parameters is more significant. These results give new insights into the ETAS model and the efficiency of the maximum-likelihood method within this context.

  5. Estimation of the parameters of ETAS models by Simulated Annealing

    PubMed Central

    Lombardi, Anna Maria

    2015-01-01

    This paper proposes a new algorithm to estimate the maximum likelihood parameters of an Epidemic Type Aftershock Sequences (ETAS) model. It is based on Simulated Annealing, a versatile method that solves problems of global optimization and ensures convergence to a global optimum. The procedure is tested on both simulated and real catalogs. The main conclusion is that the method performs poorly as the size of the catalog decreases because the effect of the correlation of the ETAS parameters is more significant. These results give new insights into the ETAS model and the efficiency of the maximum-likelihood method within this context. PMID:25673036

  6. Comparative study of the performance of quantum annealing and simulated annealing.

    PubMed

    Nishimori, Hidetoshi; Tsuda, Junichi; Knysh, Sergey

    2015-01-01

    Relations between simulated annealing and quantum annealing are studied by a mapping from the transition matrix of classical Markovian dynamics of the Ising model to a quantum Hamiltonian, and vice versa. It is shown that these two operators, the transition matrix and the Hamiltonian, share the eigenvalue spectrum. Thus, if simulated annealing with slow temperature change does not encounter a difficulty caused by an exponentially long relaxation time at a first-order phase transition, the same is true for the corresponding process of quantum annealing in the adiabatic limit. One of the important differences between the classical-to-quantum mapping and the converse quantum-to-classical mapping is that the Markovian dynamics of a short-range Ising model is mapped to a short-range quantum system, but the converse mapping from a short-range quantum system to a classical one results in long-range interactions. This leads to a difference in efficiency: simulated annealing can be efficiently simulated by quantum annealing, but the converse is not necessarily true. We conclude that quantum annealing is easier to implement and is more flexible than simulated annealing. We also point out that the present mapping can be extended to accommodate explicit time dependence of temperature, which is used to justify the quantum-mechanical analysis of simulated annealing by Somma, Batista, and Ortiz. Additionally, an alternative method to solve the nonequilibrium dynamics of the one-dimensional Ising model is provided through the classical-to-quantum mapping.

  7. Mean field annealing: a formalism for constructing GNC-like algorithms.

    PubMed

    Bilbro, G L; Snyder, W E; Garnier, S J; Gault, J W

    1992-01-01

    Optimization problems are approached using mean field annealing (MFA), a deterministic approximation to simulated annealing based on mean field theory and Peierls's inequality. The MFA mathematics are applied to three different objective function examples. In each case, MFA produces a minimization algorithm that is a type of graduated nonconvexity. When applied to the 'weak-membrane' objective, MFA results in an algorithm qualitatively identical to the published GNC algorithm. One of the examples, MFA applied to a piecewise-constant objective function, is then compared experimentally with the corresponding GNC weak-membrane algorithm. The mathematics of MFA are shown to provide a powerful and general tool for deriving optimization algorithms.

  8. Simulated Stochastic Approximation Annealing for Global Optimization with a Square-Root Cooling Schedule

    SciTech Connect

    Liang, Faming; Cheng, Yichen; Lin, Guang

    2014-06-13

    Simulated annealing has been widely used in the solution of optimization problems. As many researchers know, simulated annealing cannot be guaranteed to locate the global optima unless a logarithmic cooling schedule is used; however, the logarithmic cooling schedule is so slow that the required CPU time is unaffordable. This paper proposes a new stochastic optimization algorithm, the so-called simulated stochastic approximation annealing algorithm, which is a combination of simulated annealing and the stochastic approximation Monte Carlo algorithm. Under the framework of stochastic approximation Markov chain Monte Carlo, it is shown that the new algorithm can work with a cooling schedule in which the temperature can decrease much faster than in the logarithmic cooling schedule, e.g., a square-root cooling schedule, while guaranteeing that the global optima are reached when the temperature tends to zero. The new algorithm has been tested on a few benchmark optimization problems, including feed-forward neural network training and protein folding. The numerical results indicate that the new algorithm can significantly outperform simulated annealing and other competitors.
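
    The practical difference between a logarithmic and a square-root cooling schedule can be seen with the small Python sketch below; the exact schedule formulas and the test function are illustrative stand-ins, not the parameterization used in the cited paper.

```python
import math
import random

def objective(x):
    # Illustrative multimodal test function (not from the cited paper).
    return 0.1 * x * x + math.cos(4 * x)

def anneal(schedule, steps=5000, seed=0):
    rng = random.Random(seed)
    x = rng.uniform(-5, 5)
    fx = objective(x)
    best = fx
    for k in range(1, steps + 1):
        t = schedule(k)
        cand = x + rng.gauss(0.0, 0.5)
        fc = objective(cand)
        if fc <= fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            best = min(best, fx)
    return best

# Two illustrative cooling schedules: the classical logarithmic schedule (theoretically
# safe but extremely slow to cool) and a much faster square-root schedule in the spirit
# of the record above.
def log_schedule(k, t0=2.0):
    return t0 / math.log(k + math.e)

def sqrt_schedule(k, t0=2.0):
    return t0 / math.sqrt(k)

if __name__ == "__main__":
    print("logarithmic cooling :", anneal(log_schedule))
    print("square-root cooling :", anneal(sqrt_schedule))
```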

  9. Application of Chaotic Simulated Annealing in the Optimization of Task Allocation in a Multiprocessing System

    NASA Astrophysics Data System (ADS)

    Cook, Darcy; Ferens, Ken; Kinsner, Witold

    Simulated Annealing (SA) has been shown to be a successful technique in optimization problems. It has been applied to both continuous function optimization problems and combinatorial optimization problems. There has been some work on modifying the SA algorithm to exploit properties of chaotic processes, with the goal of reducing the time to converge to an optimal or a good solution. There are several variations of these chaotic simulated annealing (CSA) algorithms. In this paper a new variation of chaotic simulated annealing is proposed and applied to solving a combinatorial optimization problem in multiprocessor task allocation. The experiments show that the CSA algorithms reach a good solution faster than traditional SA algorithms in many cases because of a wider initial solution search.
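
    There are several chaotic simulated annealing variants; the Python sketch below shows one simple possibility in the spirit of the record, where a logistic-map sequence drives the choice of which task to move in a toy multiprocessor task-allocation problem (instance data and parameters are illustrative).

```python
import math
import random

# Toy task-allocation instance: assign tasks (with given costs) to processors
# so that the maximum processor load (makespan) is minimised.
TASK_COST = [4, 7, 2, 9, 5, 3, 8, 6, 1, 5]
N_PROCS = 3

def makespan(assign):
    loads = [0] * N_PROCS
    for task, proc in enumerate(assign):
        loads[proc] += TASK_COST[task]
    return max(loads)

def chaotic_sa(t0=8.0, alpha=0.995, iters=4000, seed=0):
    rng = random.Random(seed)
    assign = [rng.randrange(N_PROCS) for _ in TASK_COST]
    e = makespan(assign)
    best, best_e = list(assign), e
    z = 0.37                                     # logistic-map state in (0, 1)
    t = t0
    for _ in range(iters):
        z = 4.0 * z * (1.0 - z)                  # chaotic sequence drives the move choice
        task = int(z * len(TASK_COST)) % len(TASK_COST)
        cand = list(assign)
        cand[task] = rng.randrange(N_PROCS)      # reassign the chosen task
        e_new = makespan(cand)
        if e_new <= e or rng.random() < math.exp(-(e_new - e) / t):
            assign, e = cand, e_new
            if e < best_e:
                best, best_e = list(assign), e
        t *= alpha
    return best, best_e

if __name__ == "__main__":
    print(chaotic_sa())
```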

  10. SAGRAD: A Program for Neural Network Training with Simulated Annealing and the Conjugate Gradient Method

    PubMed Central

    Bernal, Javier; Torres-Jimenez, Jose

    2015-01-01

    SAGRAD (Simulated Annealing GRADient), a Fortran 77 program for computing neural networks for classification using batch learning, is discussed. Neural network training in SAGRAD is based on a combination of simulated annealing and Møller’s scaled conjugate gradient algorithm, the latter a variation of the traditional conjugate gradient method, better suited for the nonquadratic nature of neural networks. Different aspects of the implementation of the training process in SAGRAD are discussed, such as the efficient computation of gradients and multiplication of vectors by Hessian matrices that are required by Møller’s algorithm; the (re)initialization of weights with simulated annealing required to (re)start Møller’s algorithm the first time and each time thereafter that it shows insufficient progress in reaching a (possibly local) minimum; and the use of simulated annealing when Møller’s algorithm, after possibly making considerable progress, becomes stuck at a local minimum or flat area of weight space. Outlines of the scaled conjugate gradient algorithm, the simulated annealing procedure and the training process used in SAGRAD are presented together with results from running SAGRAD on two examples of training data. PMID:26958442

  11. A Parallel Simulated Annealing Approach to Solve for Earthquake Rupture Rates

    NASA Astrophysics Data System (ADS)

    Milner, K.; Page, M. T.; Field, E. H.

    2011-12-01

    We present a parallel approach to the classic simulated annealing algorithm (Kirkpatrick 1983) in order to solve for the rates of earthquake ruptures in California's complex fault system, being developed for the 3rd Uniform California Earthquake Rupture Forecast (UCERF3). Through the use of distributed computing, we have achieved substantial speedup when compared to serial simulated annealing. We will describe the parallel simulated annealing algorithm in detail, as well as the parallelization parameters used and their effect on speedup (time to convergence, or alternatively a specified energy level) and communications efficiency. Additionally we will discuss the correlation between performance of the parallel algorithm and the degree of constraints on the solution. We will present scaling results to thousands of processors, and experiences with the MPJ Express Java Message Passing Library (Baker 2006) on the University of Southern California's High Performance Computing and Communications cluster.
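
    A minimal way to parallelize simulated annealing is to run independent chains on separate processes and keep the best result, as in the Python sketch below; the cited UCERF3 implementation is considerably more elaborate, and the objective here is only an illustrative stand-in for the rupture-rate misfit.

```python
import math
import random
from multiprocessing import Pool

def objective(x):
    # Illustrative rugged objective; the cited work instead minimises the misfit of
    # fault-rupture rates against many data and modelling constraints.
    return sum(0.1 * xi * xi + math.sin(3 * xi) for xi in x)

def sa_chain(seed, dim=5, t0=3.0, alpha=0.999, iters=20000):
    rng = random.Random(seed)
    x = [rng.uniform(-3, 3) for _ in range(dim)]
    fx = objective(x)
    t = t0
    for _ in range(iters):
        i = rng.randrange(dim)
        cand = list(x)
        cand[i] += rng.gauss(0.0, 0.3)
        fc = objective(cand)
        if fc <= fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
        t *= alpha
    return fx, x

if __name__ == "__main__":
    # Run several independent annealing chains in parallel and keep the best one.
    with Pool(4) as pool:
        results = pool.map(sa_chain, range(4))
    print(min(results))
```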

  12. GPU-Accelerated Population Annealing Algorithm: Frustrated Ising Antiferromagnet on the Stacked Triangular Lattice

    NASA Astrophysics Data System (ADS)

    Borovský, Michal; Weigel, Martin; Barash, Lev Yu.; Žukovič, Milan

    2016-02-01

    The population annealing algorithm is a novel approach to studying systems with rough free-energy landscapes, such as spin glasses. It combines the power of simulated annealing, Boltzmann-weighted differential reproduction, and a sequential Monte Carlo process to bring the population of replicas to equilibrium even in the low-temperature region. Moreover, it provides a very good estimate of the free energy. The fact that the population annealing algorithm is performed over a large number of replicas with many spin updates makes it a good candidate for massive parallelism. We chose GPU programming with a CUDA implementation to create a highly optimized simulation. It has been previously shown for the frustrated Ising antiferromagnet on the stacked triangular lattice with a ferromagnetic interlayer coupling that standard Markov chain Monte Carlo simulations fail to equilibrate at low temperatures due to kinetic freezing of the ferromagnetically ordered chains. We applied population annealing to study the case with isotropic intra- and interlayer antiferromagnetic coupling (J2/|J1| = -1). The ground states reached correspond to non-magnetic degenerate states in which chains are antiferromagnetically ordered but there is no long-range ordering between them, which is analogous to the Wannier phase of the 2D triangular Ising antiferromagnet.

  13. Simulated parallel annealing within a neighborhood for optimization of biomechanical systems.

    PubMed

    Higginson, J S; Neptune, R R; Anderson, F C

    2005-09-01

    Optimization problems for biomechanical systems have become extremely complex. Simulated annealing (SA) algorithms have performed well in a variety of test problems and biomechanical applications; however, despite advances in computer speed, the time to converge to optimal solutions for systems of even moderate complexity has remained prohibitive. The objective of this study was to develop a portable parallel version of an SA algorithm for solving optimization problems in biomechanics. The algorithm for simulated parallel annealing within a neighborhood (SPAN) was designed to minimize interprocessor communication time and closely retain the heuristics of the serial SA algorithm. The computational speed of the SPAN algorithm scaled linearly with the number of processors on different computer platforms for a simple quadratic test problem and for a more complex forward dynamic simulation of human pedaling.

  14. Optimal placement of excitations and sensors by simulated annealing

    NASA Technical Reports Server (NTRS)

    Salama, Moktar; Bruno, R.; Chen, G.-S.; Garba, J.

    1989-01-01

    The optimal placement of discrete actuators and sensors is posed as a combinatorial optimization problem. Two examples for truss structures were used for illustration; the first dealt with the optimal placement of passive dampers along existing truss members, and the second dealt with the optimal placement of a combination of a set of actuators and a set of sensors. Except for the simplest problems, an exact solution by enumeration involves a very large number of function evaluations, and is therefore computationally intractable. By contrast, the simulated annealing heuristic involves far fewer evaluations and is best suited for the class of problems considered. As an optimization tool, the effectiveness of the algorithm is enhanced by introducing a number of rules that incorporate knowledge about the physical behavior of the problem. Some of the suggested rules are necessarily problem dependent.

  15. An Improved Simulated Annealing Technique for Enhanced Mobility in Smart Cities.

    PubMed

    Amer, Hayder; Salman, Naveed; Hawes, Matthew; Chaqfeh, Moumena; Mihaylova, Lyudmila; Mayfield, Martin

    2016-06-30

    Vehicular traffic congestion is a significant problem that arises in many cities. This is due to the increasing number of vehicles that are driving on city roads of limited capacity. The vehicular congestion significantly impacts travel distance, travel time, fuel consumption and air pollution. Avoidance of traffic congestion and providing drivers with optimal paths are not trivial tasks. The key contribution of this work consists of the developed approach for dynamic calculation of optimal traffic routes. Two attributes (the average travel speed of the traffic and the roads' length) are utilized by the proposed method to find the optimal paths. The average travel speed values can be obtained from the sensors deployed in smart cities and communicated to vehicles via the Internet of Vehicles and roadside communication units. The performance of the proposed algorithm is compared to three other algorithms: the simulated annealing weighted sum, the simulated annealing technique for order preference by similarity to the ideal solution and the Dijkstra algorithm. The weighted sum and technique for order preference by similarity to the ideal solution methods are used to formulate different attributes in the simulated annealing cost function. According to the Sheffield scenario, simulation results show that the improved simulated annealing technique for order preference by similarity to the ideal solution method improves the traffic performance in the presence of congestion by an overall average of 19.22% in terms of travel time, fuel consumption and CO₂ emissions as compared to other algorithms; also, similar performance patterns were achieved for the Birmingham test scenario.

  16. Improving Simulated Annealing by Replacing Its Variables with Game-Theoretic Utility Maximizers

    NASA Technical Reports Server (NTRS)

    Wolpert, David H.; Bandari, Esfandiar; Tumer, Kagan

    2001-01-01

    The game-theory field of Collective INtelligence (COIN) concerns the design of computer-based players engaged in a non-cooperative game so that as those players pursue their self-interests, a pre-specified global goal for the collective computational system is achieved as a side-effect. Previous implementations of COIN algorithms have outperformed conventional techniques by up to several orders of magnitude, on domains ranging from telecommunications control to optimization in congestion problems. Recent mathematical developments have revealed that these previously developed algorithms were based on only two of the three factors determining performance. Consideration of only the third factor would instead lead to conventional optimization techniques like simulated annealing that have little to do with non-cooperative games. In this paper we present an algorithm based on all three terms at once. This algorithm can be viewed as a way to modify simulated annealing by recasting it as a non-cooperative game, with each variable replaced by a player. This recasting allows us to leverage the intelligent behavior of the individual players to substantially improve the exploration step of the simulated annealing. Experiments are presented demonstrating that this recasting significantly improves simulated annealing for a model of an economic process run over an underlying small-worlds topology. Furthermore, these experiments reveal novel small-worlds phenomena, and highlight the shortcomings of conventional mechanism design in bounded rationality domains.

  17. Improving Simulated Annealing by Recasting it as a Non-Cooperative Game

    NASA Technical Reports Server (NTRS)

    Wolpert, David; Bandari, Esfandiar; Tumer, Kagan

    2001-01-01

    The game-theoretic field of COllective INtelligence (COIN) concerns the design of computer-based players engaged in a non-cooperative game so that as those players pursue their self-interests, a pre-specified global goal for the collective computational system is achieved "as a side-effect". Previous implementations of COIN algorithms have outperformed conventional techniques by up to several orders of magnitude, on domains ranging from telecommunications control to optimization in congestion problems. Recent mathematical developments have revealed that these previously developed game-theory-motivated algorithms were based on only two of the three factors determining performance. Consideration of only the third factor would instead lead to conventional optimization techniques like simulated annealing that have little to do with non-cooperative games. In this paper we present an algorithm based on all three terms at once. This algorithm can be viewed as a way to modify simulated annealing by recasting it as a non-cooperative game, with each variable replaced by a player. This recasting allows us to leverage the intelligent behavior of the individual players to substantially improve the exploration step of the simulated annealing. Experiments are presented demonstrating that this recasting improves simulated annealing by several orders of magnitude for spin glass relaxation and bin-packing.

  18. spsann - optimization of sample patterns using spatial simulated annealing

    NASA Astrophysics Data System (ADS)

    Samuel-Rosa, Alessandro; Heuvelink, Gerard; Vasques, Gustavo; Anjos, Lúcia

    2015-04-01

    There are many algorithms and computer programs to optimize sample patterns, some private and others publicly available, and a few have only been presented in scientific articles and textbooks. This dispersion and somewhat poor availability holds back their wider adoption and further development. We introduce spsann, a new R package for the optimization of sample patterns using spatial simulated annealing. R is the most popular environment for data processing and analysis. Spatial simulated annealing is a well-known method in widespread use for solving optimization problems in the soil and geo-sciences, mainly because of its robustness against local optima and its ease of implementation. spsann offers many optimization criteria for sampling for variogram estimation (number of points or point-pairs per lag distance class - PPL), trend estimation (association/correlation and marginal distribution of the covariates - ACDC), and spatial interpolation (mean squared shortest distance - MSSD). spsann also includes the mean or maximum universal kriging variance (MUKV) as an optimizing criterion, which is used when the model of spatial variation is known. PPL, ACDC and MSSD were combined (PAN) for sampling when we are ignorant about the model of spatial variation. spsann solves this multi-objective optimization problem by scaling the objective function values using their maximum absolute value or the mean value computed over 1000 random samples; scaled values are aggregated using the weighted-sum method. A graphical display allows the user to follow how the sample pattern is perturbed during the optimization, as well as the evolution of its energy state. It is possible to start by perturbing many points and exponentially reduce the number of perturbed points. The maximum perturbation distance reduces linearly with the number of iterations. The acceptance probability also reduces exponentially with the number of iterations. R is memory hungry and spatial simulated annealing is a
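
    spsann itself is an R package; the Python sketch below only mirrors the general spatial simulated annealing recipe described above (MSSD criterion, linearly shrinking perturbation distance, exponentially decaying acceptance probability) on an illustrative unit-square sampling problem.

```python
import math
import random

def mssd(samples, grid):
    # Mean squared shortest distance from every grid node to its nearest sample point.
    total = 0.0
    for gx, gy in grid:
        total += min((gx - sx) ** 2 + (gy - sy) ** 2 for sx, sy in samples)
    return total / len(grid)

def spatial_sa(n_samples=10, grid_step=0.1, iters=2000, t0=0.05, seed=0):
    rng = random.Random(seed)
    grid = [(i * grid_step, j * grid_step)
            for i in range(int(1 / grid_step) + 1)
            for j in range(int(1 / grid_step) + 1)]
    samples = [(rng.random(), rng.random()) for _ in range(n_samples)]
    e = mssd(samples, grid)
    for k in range(iters):
        frac = 1.0 - k / iters
        max_shift = 0.3 * frac                  # perturbation distance shrinks linearly
        t = t0 * math.exp(-5.0 * k / iters)     # acceptance temperature decays exponentially
        i = rng.randrange(n_samples)
        cand = list(samples)
        cand[i] = (min(1.0, max(0.0, samples[i][0] + rng.uniform(-max_shift, max_shift))),
                   min(1.0, max(0.0, samples[i][1] + rng.uniform(-max_shift, max_shift))))
        e_new = mssd(cand, grid)
        if e_new <= e or rng.random() < math.exp(-(e_new - e) / t):
            samples, e = cand, e_new
    return samples, e

if __name__ == "__main__":
    pts, score = spatial_sa()
    print(round(score, 5), pts[:3])
```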

  19. Instantons in Quantum Annealing: Thermally Assisted Tunneling Vs Quantum Monte Carlo Simulations

    NASA Technical Reports Server (NTRS)

    Jiang, Zhang; Smelyanskiy, Vadim N.; Boixo, Sergio; Isakov, Sergei V.; Neven, Hartmut; Mazzola, Guglielmo; Troyer, Matthias

    2015-01-01

    A recent numerical result (arXiv:1512.02206) from Google suggested that the D-Wave quantum annealer may have an asymptotic speed-up over simulated annealing; however, the asymptotic advantage disappears when it is compared to quantum Monte Carlo (a classical algorithm despite its name). We show analytically that the asymptotic scaling of quantum tunneling is exactly the same as the escape rate in quantum Monte Carlo for a class of problems. Thus, the Google result might be explained in our framework. We also found that the transition state in quantum Monte Carlo corresponds to the instanton solution in quantum tunneling problems, which is observed in numerical simulations.

  20. Distributed Particle Swarm Optimization and Simulated Annealing for Energy-efficient Coverage in Wireless Sensor Networks

    PubMed Central

    Wang, Xue; Ma, Jun-Jie; Wang, Sheng; Bi, Dao-Wei

    2007-01-01

    The limited energy supply of wireless sensor networks poses a great challenge for the deployment of wireless sensor nodes. In this paper, we focus on energy-efficient coverage with distributed particle swarm optimization and simulated annealing. First, the energy-efficient coverage problem is formulated with sensing coverage and energy consumption models. We consider the network composed of stationary and mobile nodes. Second, coverage and energy metrics are presented to evaluate the coverage rate and energy consumption of a wireless sensor network, where a grid exclusion algorithm extracts the coverage state and Dijkstra's algorithm calculates the lowest cost path for communication. Then, a hybrid algorithm optimizes the energy consumption, in which particle swarm optimization and simulated annealing are combined to find the optimal deployment solution in a distributed manner. Simulated annealing is performed on multiple wireless sensor nodes, results of which are employed to correct the local and global best solution of particle swarm optimization. Simulations of wireless sensor node deployment verify that coverage performance can be guaranteed, energy consumption of communication is conserved after deployment optimization and the optimization performance is boosted by the distributed algorithm. Moreover, it is demonstrated that energy efficiency of wireless sensor networks is enhanced by the proposed optimization algorithm in target tracking applications.

  1. Reconstruction of bremsstrahlung spectra from attenuation data using generalized simulated annealing.

    PubMed

    Menin, O H; Martinez, A S; Costa, A M

    2016-05-01

    A generalized simulated annealing algorithm, combined with a suitable smoothing regularization function, is used to solve the inverse problem of X-ray spectrum reconstruction from attenuation data. The approach is to set the initial acceptance and visitation temperatures and to standardize the terms of the objective function so as to automate the algorithm for different spectral ranges. Experiments with both numerical and measured attenuation data are presented. Results show that the algorithm reconstructs spectrum shapes accurately. It should be noted that in this algorithm the regularization function was formulated to guarantee a smooth spectrum; thus, the presented technique does not apply to X-ray spectra where characteristic radiation is present.
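
    The sketch below illustrates the flavour of this approach with ordinary simulated annealing (not the generalized, Tsallis-based variant of the paper): a smoothness-regularized misfit between synthetic attenuation data and a candidate spectrum is minimized. The attenuation model and all numbers are illustrative.

```python
import math
import random

# Synthetic forward model: transmission through an absorber of thickness d is
# T(d) = sum_i s_i * exp(-mu_i * d) for a normalised spectrum s over energy bins.
MU = [0.9, 0.6, 0.4, 0.25, 0.15]                  # attenuation coefficient per bin (1/cm)
THICKNESS = [0.0, 0.5, 1.0, 2.0, 4.0, 8.0]        # absorber thicknesses (cm)
TRUE_SPECTRUM = [0.05, 0.20, 0.35, 0.30, 0.10]    # spectrum used to generate the "data"

def transmission(s, d):
    return sum(si * math.exp(-mu * d) for si, mu in zip(s, MU))

MEASURED = [transmission(TRUE_SPECTRUM, d) for d in THICKNESS]

def cost(s, lam=0.5):
    misfit = sum((transmission(s, d) - m) ** 2 for d, m in zip(THICKNESS, MEASURED))
    smooth = sum((s[i + 1] - s[i]) ** 2 for i in range(len(s) - 1))
    return misfit + lam * smooth                  # smoothing regularisation term

def normalise(s):
    s = [max(si, 0.0) for si in s]
    total = sum(s)
    if total == 0.0:
        return [1.0 / len(s)] * len(s)            # guard against an all-zero proposal
    return [si / total for si in s]

def anneal_spectrum(t0=1e-3, alpha=0.999, iters=20000, seed=0):
    rng = random.Random(seed)
    s = normalise([rng.random() for _ in MU])
    e = cost(s)
    t = t0
    for _ in range(iters):
        cand = list(s)
        i = rng.randrange(len(cand))
        cand[i] = max(0.0, cand[i] + rng.gauss(0.0, 0.02))
        cand = normalise(cand)
        e_new = cost(cand)
        if e_new <= e or rng.random() < math.exp(-(e_new - e) / t):
            s, e = cand, e_new
        t *= alpha
    return s, e

if __name__ == "__main__":
    s, e = anneal_spectrum()
    print([round(x, 3) for x in s], round(e, 6))
```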

  2. A hybrid hopfield network-simulated annealing approach for frequency assignment in satellite communications systems.

    PubMed

    Salcedo-Sanz, Sancho; Santiago-Mozos, Ricardo; Bousoño-Calzón, Carlos

    2004-04-01

    A hybrid Hopfield network-simulated annealing algorithm (HopSA) is presented for the frequency assignment problem (FAP) in satellite communications. The goal of this NP-complete problem is to minimize the cochannel interference between satellite communication systems by rearranging the frequency assignment so that the systems can accommodate increasing demands. The HopSA algorithm consists of a fast digital Hopfield neural network, which manages the problem constraints, hybridized with simulated annealing, which improves the quality of the solutions obtained. We analyze the problem and its formulation, describe and discuss the HopSA algorithm, and solve a set of benchmark problems. The results obtained are compared with other existing approaches in order to show the performance of the HopSA approach.

  3. Quantum versus simulated annealing in wireless interference network optimization.

    PubMed

    Wang, Chi; Chen, Huo; Jonckheere, Edmond

    2016-05-16

    Quantum annealing (QA) serves as a specialized optimizer that is able to solve many NP-hard problems and that is believed to have a theoretical advantage over simulated annealing (SA) via quantum tunneling. With the introduction of the D-Wave programmable quantum annealer, a considerable amount of effort has been devoted to detecting and quantifying quantum speedup. While the debate over speedup remains inconclusive as of now, instead of attempting to show general quantum advantage, here we focus on a novel real-world application of D-Wave in wireless networking: more specifically, the scheduling of the activation of the air-links for maximum throughput subject to interference avoidance near network nodes. In addition, the D-Wave implementation is made error-insensitive by a novel Hamiltonian extra penalty weight adjustment that enlarges the gap and substantially reduces the occurrence of interference violations resulting from inevitable spin bias and coupling errors. The major result of this paper is that quantum annealing benefits more than simulated annealing from this gap expansion process, both in terms of ST99 speedup and network queue occupancy. The hope is that this could become a real-world application niche where the potential benefits of quantum annealing could be objectively assessed.

  4. Quantum versus simulated annealing in wireless interference network optimization

    NASA Astrophysics Data System (ADS)

    Wang, Chi; Chen, Huo; Jonckheere, Edmond

    2016-05-01

    Quantum annealing (QA) serves as a specialized optimizer that is able to solve many NP-hard problems and that is believed to have a theoretical advantage over simulated annealing (SA) via quantum tunneling. With the introduction of the D-Wave programmable quantum annealer, a considerable amount of effort has been devoted to detecting and quantifying quantum speedup. While the debate over speedup remains inconclusive as of now, instead of attempting to show general quantum advantage, here we focus on a novel real-world application of D-Wave in wireless networking: more specifically, the scheduling of the activation of the air-links for maximum throughput subject to interference avoidance near network nodes. In addition, the D-Wave implementation is made error-insensitive by a novel Hamiltonian extra penalty weight adjustment that enlarges the gap and substantially reduces the occurrence of interference violations resulting from inevitable spin bias and coupling errors. The major result of this paper is that quantum annealing benefits more than simulated annealing from this gap expansion process, both in terms of ST99 speedup and network queue occupancy. The hope is that this could become a real-world application niche where the potential benefits of quantum annealing could be objectively assessed.

  5. Quantum versus simulated annealing in wireless interference network optimization

    PubMed Central

    Wang, Chi; Chen, Huo; Jonckheere, Edmond

    2016-01-01

    Quantum annealing (QA) serves as a specialized optimizer that is able to solve many NP-hard problems and that is believed to have a theoretical advantage over simulated annealing (SA) via quantum tunneling. With the introduction of the D-Wave programmable quantum annealer, a considerable amount of effort has been devoted to detecting and quantifying quantum speedup. While the debate over speedup remains inconclusive as of now, instead of attempting to show general quantum advantage, here we focus on a novel real-world application of D-Wave in wireless networking: more specifically, the scheduling of the activation of the air-links for maximum throughput subject to interference avoidance near network nodes. In addition, the D-Wave implementation is made error-insensitive by a novel Hamiltonian extra penalty weight adjustment that enlarges the gap and substantially reduces the occurrence of interference violations resulting from inevitable spin bias and coupling errors. The major result of this paper is that quantum annealing benefits more than simulated annealing from this gap expansion process, both in terms of ST99 speedup and network queue occupancy. The hope is that this could become a real-world application niche where the potential benefits of quantum annealing could be objectively assessed. PMID:27181056

  6. Neutronic optimization in high conversion Th-233U fuel assembly with simulated annealing

    SciTech Connect

    Kotlyar, D.; Shwageraus, E.

    2012-07-01

    This paper reports on fuel design optimization of a PWR operating in a self-sustainable Th-233U fuel cycle. A Monte Carlo simulated annealing method was used in order to identify the fuel assembly configuration with the most attractive breeding performance. In previous studies, it was shown that breeding may be achieved by employing a heterogeneous Seed-Blanket fuel geometry. The arrangement of seed and blanket pins within the assemblies may be determined by varying the design parameters based on the basic reactor physics phenomena which affect breeding. However, the number of free parameters may still be prohibitively large for a systematic exploration of the design space for the optimal solution. Therefore, the Monte Carlo annealing algorithm for neutronic optimization is applied in order to identify the most favorable design. The objective of simulated annealing optimization is to find a set of design parameters which maximizes some given performance function (such as the relative period of net breeding) under specified constraints (such as fuel cycle length). The first objective of the study was to demonstrate that the simulated annealing optimization algorithm leads to the same fuel pin arrangement as was obtained in the previous studies, which used only basic physics phenomena as guidance for optimization. In the second part of this work, the simulated annealing method was used to optimize the fuel pin arrangement in a much larger fuel assembly, where basic physics intuition does not yield a clearly optimal configuration. The simulated annealing method was found to be very efficient in selecting the optimal design in both cases. In the future, this method will be used for the optimization of fuel assembly designs with a larger number of free parameters in order to determine the most favorable trade-off between the breeding performance and core average power density. (authors)

  7. A deterministic annealing algorithm for a combinatorial optimization problem using replicator equations

    NASA Astrophysics Data System (ADS)

    Tsuchiya, Kazuo; Nishiyama, Takehiro; Tsujita, Katsuyoshi

    2001-02-01

    We have proposed an optimization method for a combinatorial optimization problem using replicator equations. To improve the solution further, a deterministic annealing algorithm may be applied. During the annealing process, bifurcations of equilibrium solutions will occur and affect the performance of the deterministic annealing algorithm. In this paper, the bifurcation structure of the proposed model is analyzed in detail. It is shown that only pitchfork bifurcations occur in the annealing process, and the solution obtained by the annealing is the branch uniquely connected with the uniform solution. It is also shown experimentally that in many cases, this solution corresponds to a good approximate solution of the optimization problem. Based on the results, a deterministic annealing algorithm is proposed and applied to the quadratic assignment problem to verify its performance.

  8. Multiphase Simulated Annealing Based on Boltzmann and Bose-Einstein Distribution Applied to Protein Folding Problem

    PubMed Central

    Liñán-García, Ernesto; Sánchez-Hernández, Juan Paulo; González-Barbosa, J. Javier; González-Flores, Carlos

    2016-01-01

    A new hybrid Multiphase Simulated Annealing Algorithm using Boltzmann and Bose-Einstein distributions (MPSABBE) is proposed. MPSABBE was designed for solving Protein Folding Problem (PFP) instances. This new approach has four phases: (i) Multiquenching Phase (MQP), (ii) Boltzmann Annealing Phase (BAP), (iii) Bose-Einstein Annealing Phase (BEAP), and (iv) Dynamical Equilibrium Phase (DEP). BAP and BEAP are simulated annealing search procedures based on Boltzmann and Bose-Einstein distributions, respectively. DEP is also a simulated annealing search procedure, which is applied at the final temperature of the fourth phase, which can be seen as a second Bose-Einstein phase. MQP is a search process that ranges from extremely high to high temperatures, applies a very fast cooling process, and is not very restrictive in accepting new solutions. However, BAP and BEAP range from high to low and from low to very low temperatures, respectively. They are more restrictive in accepting new solutions. DEP uses a particular heuristic to detect the stochastic equilibrium by applying a least squares method during its execution. MPSABBE parameters are tuned with an analytical method, which considers the maximal and minimal deterioration of problem instances. MPSABBE was tested with several instances of PFP, showing that the use of both distributions is better than using only the Boltzmann distribution, as in the classical SA. PMID:27413369
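
    The abstract does not give the exact acceptance functions used in BAP and BEAP. The sketch below only contrasts the standard Boltzmann acceptance probability with one plausible Bose-Einstein-style factor (capped at 1); the specific Bose-Einstein form is an assumption made purely for illustration.

```python
import math


def boltzmann_acceptance(delta_e, temperature):
    """Classical SA acceptance: accept improvements, otherwise exp(-dE/T)."""
    if delta_e <= 0:
        return 1.0
    return math.exp(-delta_e / temperature)


def bose_einstein_acceptance(delta_e, temperature):
    """Assumed Bose-Einstein-style acceptance, capped at 1 (illustrative only)."""
    if delta_e <= 0:
        return 1.0
    return min(1.0, 1.0 / (math.exp(delta_e / temperature) - 1.0))


for t in (10.0, 1.0, 0.1):
    print(f"T={t:>4}: Boltzmann={boltzmann_acceptance(1.0, t):.4f}  "
          f"Bose-Einstein={bose_einstein_acceptance(1.0, t):.4f}")
```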

  9. Multiphase Simulated Annealing Based on Boltzmann and Bose-Einstein Distribution Applied to Protein Folding Problem.

    PubMed

    Frausto-Solis, Juan; Liñán-García, Ernesto; Sánchez-Hernández, Juan Paulo; González-Barbosa, J Javier; González-Flores, Carlos; Castilla-Valdez, Guadalupe

    2016-01-01

    A new hybrid Multiphase Simulated Annealing Algorithm using Boltzmann and Bose-Einstein distributions (MPSABBE) is proposed. MPSABBE was designed for solving Protein Folding Problem (PFP) instances. This new approach has four phases: (i) Multiquenching Phase (MQP), (ii) Boltzmann Annealing Phase (BAP), (iii) Bose-Einstein Annealing Phase (BEAP), and (iv) Dynamical Equilibrium Phase (DEP). BAP and BEAP are simulated annealing search procedures based on Boltzmann and Bose-Einstein distributions, respectively. DEP is also a simulated annealing search procedure, which is applied at the final temperature of the fourth phase, which can be seen as a second Bose-Einstein phase. MQP is a search process that ranges from extremely high to high temperatures, applies a very fast cooling process, and is not very restrictive in accepting new solutions. However, BAP and BEAP range from high to low and from low to very low temperatures, respectively. They are more restrictive in accepting new solutions. DEP uses a particular heuristic to detect the stochastic equilibrium by applying a least squares method during its execution. MPSABBE parameters are tuned with an analytical method, which considers the maximal and minimal deterioration of problem instances. MPSABBE was tested with several instances of PFP, showing that the use of both distributions is better than using only the Boltzmann distribution, as in the classical SA.

  10. An Improved Simulated Annealing Technique for Enhanced Mobility in Smart Cities

    PubMed Central

    Amer, Hayder; Salman, Naveed; Hawes, Matthew; Chaqfeh, Moumena; Mihaylova, Lyudmila; Mayfield, Martin

    2016-01-01

    Vehicular traffic congestion is a significant problem that arises in many cities. This is due to the increasing number of vehicles that are driving on city roads of limited capacity. The vehicular congestion significantly impacts travel distance, travel time, fuel consumption and air pollution. Avoidance of traffic congestion and providing drivers with optimal paths are not trivial tasks. The key contribution of this work consists of the developed approach for dynamic calculation of optimal traffic routes. Two attributes (the average travel speed of the traffic and the roads’ length) are utilized by the proposed method to find the optimal paths. The average travel speed values can be obtained from the sensors deployed in smart cities and communicated to vehicles via the Internet of Vehicles and roadside communication units. The performance of the proposed algorithm is compared to three other algorithms: the simulated annealing weighted sum, the simulated annealing technique for order preference by similarity to the ideal solution and the Dijkstra algorithm. The weighted sum and technique for order preference by similarity to the ideal solution methods are used to formulate different attributes in the simulated annealing cost function. According to the Sheffield scenario, simulation results show that the improved simulated annealing technique for order preference by similarity to the ideal solution method improves the traffic performance in the presence of congestion by an overall average of 19.22% in terms of travel time, fuel consumption and CO2 emissions as compared to other algorithms; also, similar performance patterns were achieved for the Birmingham test scenario. PMID:27376289
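
    A minimal sketch of the weighted-sum idea referred to here: the cost of a candidate route, as evaluated inside the simulated annealing search, combines travel time (derived from the average travel speed) and road length. The weights and the segment data below are invented for illustration; the paper's exact normalization, weights and TOPSIS formulation are not given in the abstract.

```python
def route_cost(segments, w_time=0.9, w_length=0.1):
    """Weighted-sum cost for a candidate route (illustrative weights).

    `segments` is a list of (length_km, avg_speed_kmh) pairs, where the
    average speeds stand in for the sensor data mentioned in the abstract.
    """
    travel_time = sum(length / speed for length, speed in segments)   # hours
    total_length = sum(length for length, _ in segments)              # km
    return w_time * travel_time + w_length * total_length


congested = [(2.0, 5.0), (3.0, 8.0)]      # short route through congestion
free_flow = [(4.0, 50.0), (3.0, 60.0)]    # longer route at free-flow speed
# With time-dominant weights, the longer free-flowing route scores lower (better).
print(round(route_cost(congested), 3), round(route_cost(free_flow), 3))
```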

  11. Evaluation of the physical annealing strategy for simulated annealing: A function-based analysis in the landscape paradigm

    NASA Astrophysics Data System (ADS)

    Hasegawa, M.

    2012-05-01

    The effectiveness of the actual annealing strategy in finite-time optimization by simulated annealing (SA) is analyzed by focusing on the search function of the relaxation dynamics observed in the multimodal landscape of the cost function. The rate-cycling experiment, which was introduced in the previous study [M. Hasegawa, Phys. Rev. E 83, 036708 (2011)] to examine the role of the relaxation dynamics in optimization, and the temperature-cycling experiment, which was developed for a laboratory experiment on relaxation-related phenomena, are conducted on two types of random traveling salesman problems (TSPs). In each experiment, the SA search starting from a quenched solution is performed systematically under a nonmonotonic temperature control used in the actual heat treatment of metals and glasses. The results show that, as in the previous monotonic cooling from a random solution, the optimizing ability is enhanced by allocating a lot of time to the search performed near an effective intermediate temperature irrespective of the annealing technique. In this productive phase, the relaxation dynamics successfully function as an optimizer and the relevant characteristics analogous to the stabilization phenomenon and the acceleration of relaxation, which are observed in glass-forming materials, play favorable roles in the present optimization. This nonmonotonic approach also has the advantage of a wider operation range of the effective relaxation dynamics, and in conclusion, the actual annealing strategy is useful and more workable than the conventional slow-cooling strategy, at least for the present TSPs. Further discussion is given of an illuminating aspect of computational physics analysis in the optimization algorithm research.
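
    A minimal sketch of the contrast discussed here, between a conventional monotonic cooling schedule and a non-monotonic, cycling temperature control that dwells near an effective intermediate temperature. The cycle shape and all numerical values are illustrative assumptions, not the schedules used in the paper.

```python
def monotonic_schedule(t0, alpha, n_steps):
    """Conventional slow cooling: T_k = T0 * alpha**k."""
    return [t0 * alpha ** k for k in range(n_steps)]


def cycling_schedule(t_low, t_eff, n_cycles, dwell):
    """Heat-dwell-cool cycles around an assumed effective temperature t_eff,
    loosely mimicking the nonmonotonic control of actual heat treatments."""
    schedule = []
    for _ in range(n_cycles):
        schedule += [t_low, t_eff] + [t_eff] * dwell + [t_low]
    return schedule


print(monotonic_schedule(10.0, 0.8, 6))
print(cycling_schedule(t_low=0.5, t_eff=2.0, n_cycles=3, dwell=2))
```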

  12. Clone ordering by simulated annealing: Application to the STS-content map of chromosome 21

    SciTech Connect

    Rigault, P.

    1993-12-31

    This article presents an application of the simulated annealing algorithm used at Genethon for the STS-content map of the chromosome 21. This algorithm is a part of an integrated system which starts from the PCR gel analysis and produces ordered contigs that can be handled with a graphical user interface. For this project, 250 STSs have been used to screen a 14 genome equivalent YAC library. The result is a map of all of the long arm of chromosome 21 (21q). This map contains 210 STSs and 770 YACs and covers a 45 Megabase region with an average resolution of 1 STS/220 kb. The order obtained by simulated annealing is consistent both with genetic data and with other methods of physical mapping (RCRF, alu-PCR). This map will be a powerful tool for gene analysis, especially in the study of Down syndrome and Alzheimer disease.

  13. Molecular dynamics simulation of annealed ZnO surfaces

    SciTech Connect

    Min, Tjun Kit; Yoon, Tiem Leong; Lim, Thong Leng

    2015-04-24

    The effect of thermally annealing a slab of wurtzite ZnO, terminated by two surfaces, (0001) (which is oxygen-terminated) and (0001{sup ¯}) (which is Zn-terminated), is investigated via molecular dynamics simulation by using reactive force field (ReaxFF). We found that upon heating beyond a threshold temperature of ∼700 K, surface oxygen atoms begin to sublimate from the (0001) surface. The ratio of oxygen leaving the surface at a given temperature increases as the heating temperature increases. A range of phenomena occurring at the atomic level on the (0001) surface has also been explored, such as formation of oxygen dimers on the surface and evolution of partial charge distribution in the slab during the annealing process. It was found that the partial charge distribution as a function of the depth from the surface undergoes a qualitative change when the annealing temperature is above the threshold temperature.

  14. Ranking important nodes in complex networks by simulated annealing

    NASA Astrophysics Data System (ADS)

    Sun, Yu; Yao, Pei-Yang; Wan, Lu-Jun; Shen, Jian; Zhong, Yun

    2017-02-01

    In this paper, a new method based on simulated annealing for ranking important nodes in complex networks is presented. First, the concept of an importance sequence (IS), which describes the relative importance of nodes in complex networks, is defined. Then, a measure used to evaluate the reasonability of an IS is designed. By treating an IS as a state of the network and the measure of its reasonability as the energy of that state, the method finds the ground state of the network by simulated annealing. In other words, the method can construct a most reasonable IS. The results of experiments on real and artificial networks show that this ranking method is not only effective but can also be applied to different kinds of complex networks. Project supported by the National Natural Science Foundation of China (Grant No. 61573017) and the Natural Science Foundation of Shaanxi Province, China (Grant No. 2016JQ6062).

  15. Handling time-expensive global optimization problems through the surrogate-enhanced evolutionary annealing-simplex algorithm

    NASA Astrophysics Data System (ADS)

    Tsoukalas, Ioannis; Kossieris, Panagiotis; Efstratiadis, Andreas; Makropoulos, Christos

    2015-04-01

    In water resources optimization problems, the calculation of the objective function usually requires first running a simulation model and then evaluating its outputs. In several cases, however, long simulation times may pose significant barriers to the optimization procedure. Often, to obtain a solution within a reasonable time, the user has to substantially restrict the allowable number of function evaluations, thus terminating the search much earlier than required by the problem's complexity. A promising novel strategy to address these shortcomings is the use of surrogate modelling techniques within global optimization algorithms. Here we introduce the Surrogate-Enhanced Evolutionary Annealing-Simplex (SE-EAS) algorithm that couples the strengths of surrogate modelling with the effectiveness and efficiency of the EAS method. The algorithm combines three different optimization approaches (evolutionary search, simulated annealing and the downhill simplex search scheme), in which key decisions are partially guided by numerical approximations of the objective function. The performance of the proposed algorithm is benchmarked against other surrogate-assisted algorithms, in both theoretical and practical applications (i.e. test functions and hydrological calibration problems, respectively), within a limited budget of trials (from 100 to 1000). Results reveal the significant potential of using SE-EAS in challenging optimization problems, involving time-consuming simulations.

  16. Efficient kinetic Monte Carlo simulation of annealing in semiconductor materials

    NASA Astrophysics Data System (ADS)

    Hargrove, Paul Hamilton

    As the semiconductor manufacturing industry advances, the length scales of devices are shrinking rapidly, in accordance with the predictions of Moore's Law. As the device dimensions shrink, the importance of predictive process modeling to the development of the production process is growing. Of particular importance are predictive models which can be applied to process conditions not easily accessible via experiment. Models based on physical understanding are therefore gaining importance relative to models based on empirical fits alone. One promising research area in physically based models is kinetic Monte Carlo (kMC) modeling of atomistic processes. This thesis explores kMC modeling of annealing and diffusion processes. After providing the necessary background to understand and motivate the research, a detailed review of simulation using this class of models is presented which exposes the motivation for using these models and establishes the state of the field. The author provides a user's manual for ANISRA (ANnealIng Simulation libRAry), a computer code for on-lattice kMC simulations. This library is intended as a reusable tool for the development of simulation codes for atomistic models covering a wide variety of problems. Thus care has been taken to separate the core functionality of a simulation from the specification of the model. This thesis also compares the performance of data structures for the kMC simulation problem and recommends some novel approaches. These recommendations are applicable to a wider class of models than ANISRA, and are thus of potential interest even to researchers who implement their own simulators. Three example simulations are built from ANISRA and are presented to show the applicability of this class of model to problems of interest in semiconductor process modeling. The differences between the models simulated display the versatility of the code library. The small amount of code written to construct and modify these

  17. Stochastic annealing simulations of defect interactions among subcascades

    SciTech Connect

    Heinisch, H.L.; Singh, B.N.

    1997-04-01

    The effects of the subcascade structure of high energy cascades on the temperature dependencies of annihilation, clustering and free defect production are investigated. The subcascade structure is simulated by closely spaced groups of lower energy MD cascades. The simulation results illustrate the strong influence of the defect configuration existing in the primary damage state on subsequent intracascade evolution. Other significant factors affecting the evolution of the defect distribution are the large differences in mobility and stability of vacancy and interstitial defects and the rapid one-dimensional diffusion of small, glissile interstitial loops produced directly in cascades. Annealing simulations are also performed on high-energy, subcascade-producing cascades generated with the binary collision approximation and calibrated to MD results.

  18. Discrete-State Simulated Annealing For Traveling-Wave Tube Slow-Wave Circuit Optimization

    NASA Technical Reports Server (NTRS)

    Wilson, Jeffrey D.; Bulson, Brian A.; Kory, Carol L.; Williams, W. Dan (Technical Monitor)

    2001-01-01

    Algorithms based on the global optimization technique of simulated annealing (SA) have proven useful in designing traveling-wave tube (TWT) slow-wave circuits for high RF power efficiency. The characteristic of SA that enables it to determine a globally optimized solution is its ability to accept non-improving moves in a controlled manner. In the initial stages of the optimization, the algorithm moves freely through configuration space, accepting most of the proposed designs. This freedom of movement allows non-intuitive designs to be explored rather than restricting the optimization to local improvement upon the initial configuration. As the optimization proceeds, the rate of acceptance of non-improving moves is gradually reduced until the algorithm converges to the optimized solution. The rate at which the freedom of movement is decreased is known as the annealing or cooling schedule of the SA algorithm. The main disadvantage of SA is that there is not a rigorous theoretical foundation for determining the parameters of the cooling schedule. The choice of these parameters is highly problem dependent and the designer needs to experiment in order to determine values that will provide a good optimization in a reasonable amount of computational time. This experimentation can absorb a large amount of time, especially when the algorithm is being applied to a new type of design. In order to eliminate this disadvantage, a variation of SA known as discrete-state simulated annealing (DSSA) was recently developed. DSSA provides the theoretical foundation for a generic cooling schedule which is problem independent. Results of similar quality to SA can be obtained, but without the extra computational time required to tune the cooling parameters. Two algorithm variations based on DSSA were developed and programmed into a Microsoft Excel spreadsheet graphical user interface (GUI) to the two-dimensional nonlinear multisignal helix traveling-wave amplifier analysis program TWA3
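
    The cooling-schedule parameters the abstract refers to are the usual knobs of a conventional SA loop: the initial temperature, the cooling rate, the number of moves per temperature and the stopping temperature. The sketch below is a generic minimization loop that exposes exactly those knobs and records the per-temperature acceptance ratio, which shows the gradual reduction in accepted non-improving moves; the objective and move are stand-ins, not the TWT circuit model.

```python
import math
import random


def run_sa(objective, neighbor, x0, t0, alpha, moves_per_temp, t_stop):
    """Conventional SA with the problem-dependent cooling-schedule knobs
    (t0, alpha, moves_per_temp, t_stop) passed in explicitly."""
    x, best, t, ratios = x0, x0, t0, []
    while t > t_stop:
        accepted = 0
        for _ in range(moves_per_temp):
            y = neighbor(x)
            d = objective(y) - objective(x)
            if d <= 0 or random.random() < math.exp(-d / t):
                x, accepted = y, accepted + 1
                if objective(x) < objective(best):
                    best = x
        ratios.append(accepted / moves_per_temp)   # falls as t decreases
        t *= alpha
    return best, ratios


# Toy usage: a 1-D function with competing minima near x = -2 and x = +2.
f = lambda x: (x * x - 4.0) ** 2 + 0.3 * x
step = lambda x: x + random.uniform(-0.5, 0.5)
best, ratios = run_sa(f, step, x0=3.0, t0=5.0, alpha=0.9,
                      moves_per_temp=50, t_stop=0.01)
print(round(best, 3), [round(r, 2) for r in ratios[:5]], "...")
```

    Choosing t0, alpha and moves_per_temp well is exactly the problem-dependent tuning effort that the DSSA variant described above is designed to remove.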

  19. Comparison of particle swarm optimization and simulated annealing for locating additional boreholes considering combined variance minimization

    NASA Astrophysics Data System (ADS)

    Soltani-Mohammadi, Saeed; Safa, Mohammad; Mokhtari, Hadi

    2016-10-01

    One of the most important stages in complementary exploration is the optimal design of the additional drilling pattern, i.e., defining the optimum number and locations of additional boreholes. Quite a lot of research has been carried out in this regard; in most of the proposed algorithms, kriging variance minimization is defined as the objective function, serving as a criterion for uncertainty assessment, and the problem is solved through optimization methods. Although kriging variance implementation is known to have many advantages in objective function definition, it is not sensitive to local variability. As a result, the only factors evaluated for locating the additional boreholes are the initial data configuration and the variogram model parameters, and the effects of local variability are omitted. In this paper, with the goal of considering local variability in the assessment of boundary uncertainty, the application of combined variance to define the objective function is investigated. Thus, in order to verify the applicability of the proposed objective function, it is used to locate the additional boreholes in the Esfordi phosphate mine through the implementation of metaheuristic optimization methods such as simulated annealing and particle swarm optimization. Comparison of results from the proposed objective function and conventional methods indicates that the new changes imposed on the objective function have made the algorithm output sensitive to variations in grade, the domain's boundaries and the thickness of the mineralization domain. The comparison between the results of different optimization algorithms showed that, for the presented case, particle swarm optimization is more appropriate than simulated annealing.

  20. A retrodictive stochastic simulation algorithm

    SciTech Connect

    Vaughan, T.G. Drummond, P.D.; Drummond, A.J.

    2010-05-20

    In this paper we describe a simple method for inferring the initial states of systems evolving stochastically according to master equations, given knowledge of the final states. This is achieved through the use of a retrodictive stochastic simulation algorithm which complements the usual predictive stochastic simulation approach. We demonstrate the utility of this new algorithm by applying it to example problems, including the derivation of likely ancestral states of a gene sequence given a Markovian model of genetic mutation.

  1. Selecting magnet laminations recipes using the method of simulated annealing

    SciTech Connect

    Russell, A.D.; Baiod, R.; Brown, B.C.

    1997-05-01

    The Fermilab Main Injector project is building 344 dipoles using more than 7000 tons of steel. There were significant run-to-run variations in the magnetic properties of the steel. Differences in stress relief in the steel after stamping resulted in variations of gap height. To minimize magnet-to-magnet strength and field shape variations, the laminations were shuffled based on the available magnetic and mechanical data and assigned to magnets using a computer program based on the method of simulated annealing. The lamination sets selected by the program have produced magnets which easily satisfy the design requirements. This paper discusses the observed gap variations, the program structure and the strength uniformity results for the magnets produced.

  2. Simulated annealing and stochastic learning in optical neural nets: An optical Boltzmann machine

    SciTech Connect

    Shae, Zonyin.

    1989-01-01

    This dissertation deals with the study of stochastic learning and neural computation in opto-electronic hardware. It presents the first demonstration of a fully operational optical learning machine. Learning in the machine is stochastic, taking place in a self-organized multi-layered opto-electronic neural net with plastic connectivity weights that are formed in a programmable non-volatile spatial light modulator. Operation of the machine is made possible by two developments in this work: (a) Fast annealing by optically induced tremors in the energy landscape of the net. The objective of this scheme is to exploit the parallelism of the optical noise pattern so as to speed up the simulated annealing process. The procedure can be viewed as that of generating controlled, gradually decreasing deformations or tremors in the energy landscape of the net that prevent entrapment in a local minimum energy state. Both the random drawing of neurons and the state update of the net are now done in parallel at the same time and without having to explicitly compute the change in the energy of the net and the associated Boltzmann factor, as ordinarily required in the Metropolis-Kirkpatrick simulated annealing algorithm. This leads to significant acceleration of the annealing process. (b) Stochastic learning with binary weights. Learning in opto-electronic neural nets can be simplified greatly if binary weights can be used. A third development, namely schemes for driving and enhancing the frame rate of magneto-optic spatial light modulators, can make the machine learning speed potentially fast. Details of these developments together with the principle, architecture, structure, and performance evaluation of this machine are given.

  3. Parameter discovery in stochastic biological models using simulated annealing and statistical model checking.

    PubMed

    Hussain, Faraz; Jha, Sumit K; Jha, Susmit; Langmead, Christopher J

    2014-01-01

    Stochastic models are increasingly used to study the behaviour of biochemical systems. While the structure of such models is often readily available from first principles, unknown quantitative features of the model are incorporated into the model as parameters. Algorithmic discovery of parameter values from experimentally observed facts remains a challenge for the computational systems biology community. We present a new parameter discovery algorithm that uses simulated annealing, sequential hypothesis testing, and statistical model checking to learn the parameters in a stochastic model. We apply our technique to a model of glucose and insulin metabolism used for in-silico validation of artificial pancreata and demonstrate its effectiveness by developing parallel CUDA-based implementation for parameter synthesis in this model.

  4. An efficient optimum Latin hypercube sampling technique based on sequencing optimisation using simulated annealing

    NASA Astrophysics Data System (ADS)

    Pholdee, Nantiwat; Bureerat, Sujin

    2015-07-01

    This paper proposes a new optimal Latin hypercube sampling method (OLHS) for the design of computer experiments. The new method is based on solving sequencing and continuous optimisation using simulated annealing. There are two sets of design variables used in the optimisation process: sequencing and real number variables. A special mutation operator is developed to deal with such design variables. The performance of the proposed numerical strategy is tested and compared with three established OLHS methods, namely genetic algorithm (GA), enhanced stochastic evolutionary algorithm (ESEA) and successive local enumeration (SLE). Based on 30 test problems with various design dimensions and numbers of sampling points, the proposed method gives the best results. The method can generate an optimum set of sampling points within reasonable computing time; therefore, it can be considered as a powerful tool for design of computer experiments.
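
    The sequencing variables mentioned here are, in effect, per-dimension permutations of the sample levels. A common way to optimize a Latin hypercube with simulated annealing (assumed here for illustration; the paper's exact move operator and criterion may differ) is to swap two levels within a randomly chosen column and to accept the swap according to a space-filling criterion such as the minimum pairwise distance.

```python
import math
import random

import numpy as np


def min_pairwise_distance(X):
    """Space-filling criterion to maximize: the smallest inter-point distance."""
    d = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    return d[np.triu_indices(len(X), k=1)].min()


def olhs_sa(n_points, n_dims, t0=1.0, alpha=0.95, iters=2000, seed=0):
    rng = random.Random(seed)
    # Random Latin hypercube: every column is a permutation of the levels.
    X = np.array([rng.sample(range(n_points), n_points)
                  for _ in range(n_dims)], dtype=float).T
    score, t = min_pairwise_distance(X), t0
    for _ in range(iters):
        col = rng.randrange(n_dims)
        i, j = rng.sample(range(n_points), 2)
        X[[i, j], col] = X[[j, i], col]            # swap two levels in a column
        new = min_pairwise_distance(X)
        if new >= score or rng.random() < math.exp((new - score) / t):
            score = new                            # keep the swap
        else:
            X[[i, j], col] = X[[j, i], col]        # undo the swap
        t *= alpha
    return X, score


X, score = olhs_sa(n_points=8, n_dims=2)
print("minimum pairwise distance:", round(float(score), 3))
```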

  5. Parameter discovery in stochastic biological models using simulated annealing and statistical model checking

    PubMed Central

    Jha, Sumit K.; Jha, Susmit; Langmead, Christopher J.

    2015-01-01

    Stochastic models are increasingly used to study the behaviour of biochemical systems. While the structure of such models is often readily available from first principles, unknown quantitative features of the model are incorporated into the model as parameters. Algorithmic discovery of parameter values from experimentally observed facts remains a challenge for the computational systems biology community. We present a new parameter discovery algorithm that uses simulated annealing, sequential hypothesis testing, and statistical model checking to learn the parameters in a stochastic model. We apply our technique to a model of glucose and insulin metabolism used for in-silico validation of artificial pancreata and demonstrate its effectiveness by developing parallel CUDA-based implementation for parameter synthesis in this model. PMID:24989866

  6. Comparative Analysis of Simulated Annealing (SA) and Simplified Generalized SA (SGSA) for Estimation Optimal of Parametric Functional in CATIVIC

    SciTech Connect

    Freitez, Juan A.; Sanchez, Morella; Ruette, Fernando

    2009-08-13

    Application of simulated annealing (SA) and simplified GSA (SGSA) techniques to parameter optimization of the parametric quantum chemistry method CATIVIC was performed. A set of organic molecules was selected to test these techniques. Comparison of the algorithms was carried out for error function minimization with respect to experimental values. Results show that SGSA is more efficient than SA with respect to computer time. Accuracy is similar in both methods; however, there are important differences in the final set of parameters.

  7. APL simulation of Grover's algorithm

    NASA Astrophysics Data System (ADS)

    Lipovaca, Samir

    2012-02-01

    Grover's algorithm is a fast quantum search algorithm. Classically, to solve the search problem for a search space of size N we need approximately N operations. Grover's algorithm offers a quadratic speedup. Since present quantum computers are not robust enough for code writing and execution, to experiment with Grover's algorithm we simulate it using the APL programming language. The APL programming language is especially suited for this task. For example, to compute the Walsh-Hadamard transformation matrix for N quantum states via a tensor product of N Hadamard matrices, we need to iterate only one line of code N-1 times. An initial study indicates that the quantum mechanical amplitude of the solution is almost independent of the search space size and rapidly reaches values of 0.999, with slight variations at higher decimal places.
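
    The Walsh-Hadamard construction described above, a tensor product of N Hadamard matrices built by repeating a single operation N-1 times, translates directly into the short NumPy sketch below (Python stands in for APL here purely for illustration).

```python
import numpy as np


def walsh_hadamard(n_qubits):
    """Build the n-qubit Walsh-Hadamard matrix by n-1 Kronecker products."""
    h = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
    result = h
    for _ in range(n_qubits - 1):       # one line iterated n-1 times
        result = np.kron(result, h)
    return result


# Applying it to |0...0> gives the uniform superposition that starts Grover's algorithm.
n = 3
state = walsh_hadamard(n) @ np.eye(2 ** n)[:, 0]
print(np.round(state, 3))               # every amplitude equals 1/sqrt(8)
```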

  8. Redesigning rain gauges network in Johor using geostatistics and simulated annealing

    SciTech Connect

    Aziz, Mohd Khairul Bazli Mohd; Yusof, Fadhilah; Daud, Zalina Mohd; Yusop, Zulkifli; Kasno, Mohammad Afif

    2015-02-03

    Recently, many rainfall network design techniques have been developed, discussed and compared by many researchers. Present-day hydrological studies require higher levels of accuracy from collected data. In numerous basins, the rain gauge stations are located without clear scientific understanding. In this study, an attempt is made to redesign the rain gauge network for Johor, Malaysia, in order to meet the required level of accuracy preset by rainfall data users. The existing network of 84 rain gauges in Johor is optimized and redesigned into new locations by using rainfall, humidity, solar radiation, temperature and wind speed data collected during the monsoon seasons (November - February) of 1975 until 2008. This study used the combination of a geostatistics method (the variance-reduction method) and simulated annealing as the optimization algorithm during the redesign process. The results show that the new rain gauge locations provide the minimum value of estimated variance. This shows that the combination of the geostatistics method (variance-reduction method) and simulated annealing is successful in the development of the new optimum rain gauge system.

  9. Theoretical simulations of I-center annealing in KCl crystals

    NASA Astrophysics Data System (ADS)

    Popov, A. I.; Kotomin, E. A.; Eglitis, R. I.

    1995-12-01

    This paper focuses on the theory of diffusion-controlled annealing of the most mobile radiation-induced defects, I centers, in KCl crystals. The kinetics of annealing of pairs of close, oppositely charged defects, namely α-I centers (arising as a result of the tunnelling recombination of primary Frenkel defects, the F and H centers) and F-I centers (formed when H centers trap electrons), is calculated taking into account defect diffusion and Coulomb/elastic interaction. Special attention is paid to the conditions under which multi-stage annealing arises; theoretical results are compared with the relevant experimental data.

  10. OBJECT KINETIC MONTE CARLO SIMULATIONS OF CASCADE ANNEALING IN TUNGSTEN

    SciTech Connect

    Nandipati, Giridhar; Setyawan, Wahyu; Heinisch, Howard L.; Roche, Kenneth J.; Kurtz, Richard J.; Wirth, Brian D.

    2014-03-31

    The objective of this work is to study the annealing of primary cascade damage created by primary knock-on atoms (PKAs) of various energies, at various temperatures in bulk tungsten using the object kinetic Monte Carlo (OKMC) method.

  11. Application of simulated annealing to some seismic problems

    NASA Astrophysics Data System (ADS)

    Velis, Danilo Ruben

    Wavelet estimation, ray tracing, and traveltime inversion are fundamental problems in seismic exploration. They can ultimately be reduced to minimizing a highly nonlinear cost function with respect to a certain set of unknown parameters. I use simulated annealing (SA) to avoid the local minima and inaccurate solutions that often arise from the use of linearizing methods. I illustrate all applications using numerical and/or real data examples. The first application concerns the 4th-order cumulant matching (CM) method for wavelet estimation. Here the reliability of the derived wavelets depends strongly on the amount of data. Tapering the trace cumulant estimate significantly reduces this dependency and allows for a trace-by-trace implementation. For this purpose, a hybrid strategy that combines SA and gradient-based techniques provides efficiency and accuracy. In the second application I present SART (SA ray tracing), which is a novel method for solving the two-point ray tracing problem. SART overcomes some well-known difficulties in standard methods, such as the selection of new take-off angles and the multipathing problem. SA finds the take-off angles so that the total traveltime between the endpoints is a global minimum. SART is suitable for tracing direct, reflected, and head waves through complex 2-D and 3-D media. I also develop a versatile model representation in terms of a number of regions delimited by curved interfaces. Traveltime tomography is the third SA application. I parameterize the subsurface geology by using adaptive-grid bicubic B-splines for smooth models, or parametric 2-D functions for anomaly bodies. The second approach may find application in archaeological and other near-surface studies. The nonlinear inversion process attempts to minimize the rms error between observed and predicted traveltimes.

  12. Effective 3D protein structure prediction with local adjustment genetic-annealing algorithm.

    PubMed

    Zhang, Xiao-Long; Lin, Xiao-Li

    2010-09-01

    The protein folding problem consists of predicting the protein tertiary structure from a given amino acid sequence by minimizing an energy function. Protein folding structure prediction is computationally challenging and has been shown to be an NP-hard problem when the 3D off-lattice AB model is employed. In this paper, the local adjustment genetic-annealing (LAGA) algorithm was used to search the ground state of the 3D off-lattice AB model for the protein folding structure. The algorithm included an improved crossover strategy and an improved mutation strategy, and a local adjustment strategy was also used to enhance the search ability. The experiments were carried out with the Fibonacci sequences. The experimental results demonstrate that the LAGA algorithm appears to have better performance and accuracy compared to the previous methods.

  13. Picosecond and nanosecond laser annealing and simulation of amorphous silicon thin films for solar cell applications

    NASA Astrophysics Data System (ADS)

    Theodorakos, I.; Zergioti, I.; Vamvakas, V.; Tsoukalas, D.; Raptis, Y. S.

    2014-01-01

    In this work, a picosecond diode-pumped solid-state laser and a nanosecond Nd:YAG laser have been used for the annealing and partial nano-crystallization of an amorphous silicon layer. These experiments were conducted as an alternative/complement to the plasma-enhanced chemical vapor deposition method for the fabrication of a micromorph tandem solar cell. The laser experimental work was combined with simulations of the annealing process, in terms of temperature distribution evolution, in order to predetermine the optimum annealing conditions. The structural properties of the annealed material were studied, as a function of several annealing parameters (wavelength, pulse duration, fluence), by X-ray diffraction, SEM, and micro-Raman techniques.

  14. Minimizing distortion and internal forces in truss structures by simulated annealing

    NASA Technical Reports Server (NTRS)

    Kincaid, Rex K.

    1989-01-01

    Inaccuracies in the length of members and the diameters of joints of large truss reflector backup structures may produce unacceptable levels of surface distortion and member forces. However, if the member lengths and joint diameters can be measured accurately, it is possible to configure the members and joints so that the root-mean-square (rms) surface error and/or the rms member forces are minimized. Following Greene and Haftka (1989), it is assumed that the force vector f is linearly proportional to the member length errors e(sub M) of dimension NMEMB (the number of members) and joint errors e(sub J) of dimension NJOINT (the number of joints), and that the best-fit displacement vector d is a linear function of f. Let NNODES denote the number of positions on the surface of the truss where error influences are measured. The solution of the problem is discussed. To classify the problem, it was compared to a similar combinatorial optimization problem. In particular, when only the member length errors are considered, minimizing d(sup 2)(sub rms) is equivalent to the quadratic assignment problem. The quadratic assignment problem is a well-known NP-complete problem in the operations research literature. Hence minimizing d(sup 2)(sub rms) is also an NP-complete problem. The focus of the research is the development of a simulated annealing algorithm to reduce d(sup 2)(sub rms). The appeal of this technique lies in its recent success on a variety of NP-complete combinatorial optimization problems, including the quadratic assignment problem. A physical analogy for simulated annealing is the way liquids freeze and crystallize. All computational experiments were done on a MicroVAX. The two-interchange heuristic is very fast but produces widely varying results. The two- and three-interchange heuristic provides less variability in the final objective function values but runs much more slowly. Simulated annealing produced the best objective function values for every starting configuration and

  15. Formation Algorithms and Simulation Testbed

    NASA Technical Reports Server (NTRS)

    Wette, Matthew; Sohl, Garett; Scharf, Daniel; Benowitz, Edward

    2004-01-01

    Formation flying for spacecraft is a rapidly developing field that will enable a new era of space science. For one of its missions, the Terrestrial Planet Finder (TPF) project has selected a formation flying interferometer design to detect earth-like planets orbiting distant stars. In order to advance technology needed for the TPF formation flying interferometer, the TPF project has been developing a distributed real-time testbed to demonstrate end-to-end operation of formation flying with TPF-like functionality and precision. This is the Formation Algorithms and Simulation Testbed (FAST). FAST was conceived to bring out issues in timing, data fusion, inter-spacecraft communication, inter-spacecraft sensing and system-wide formation robustness. In this paper we describe the FAST and show results from a two-spacecraft formation scenario. The two-spacecraft simulation is the first time that precision end-to-end formation flying operation has been demonstrated in a distributed real-time simulation environment.

  16. Studies of molecular docking between fibroblast growth factor and heparin using generalized simulated annealing

    NASA Astrophysics Data System (ADS)

    Pita, Samuel Silva Da Rocha; Fernandes, Tácio Vinício Amorim; Caffarena, Ernesto Raul; Pascutti, Pedro Geraldo

    Since the mid-1970s, the main limitation in molecular docking has been the difficulty of adequately treating the degrees of freedom of the protein (or receptor), due to the roughness of the energy landscape and the high computational cost. Until recently, few algorithms had been developed that treat both ligand and receptor as simultaneously flexible at low computational cost. Built on a recently proposed generalization of statistical mechanics, generalized simulated annealing (GSA) has been employed in diverse works on global optimization problems. In this work, we used this method to explore the molecular docking problem for the FGF-2 and heparin complex. Since the requirements of an efficient docking algorithm are accuracy and speed, we tested the influence of the GSA parameters qA (new configuration acceptance index), qV (energy surface visiting index), and qT (temperature decreasing control) on the performance of the GSADOCK program. Our simulations showed that as the temperature parameter qT increases, the qA parameter follows the same behavior in the interval ranging from 1.1 to 2.3. We found that the GSA parameters give the best performance for qA values ranging from 1.1 to 1.3, qV values from 1.3 to 1.5, and qT values from 1.1 to 1.7. Most of the good qV values were equal or close to the good qT values. Finally, the implemented algorithm is trustworthy and can be employed as a molecular modeling tool. The final version of the program will be free of charge and will be accessible at our home page, or can be requested from the authors by e-mail.
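
    For context, GSA-type methods commonly use a generalized (Tsallis-Stariolo style) acceptance rule governed by a parameter analogous to qA; the form below is the textbook expression and may differ in detail from the one implemented in GSADOCK, so it should be read as an assumption for illustration.

```python
import math


def gsa_acceptance(delta_e, temperature, q_a):
    """Generalized (Tsallis-Stariolo style) acceptance probability.

    Reduces to the Boltzmann factor exp(-dE/T) as q_a -> 1; for q_a > 1 the
    tails are heavier, so uphill moves remain more likely at low temperature.
    """
    if delta_e <= 0:
        return 1.0
    if abs(q_a - 1.0) < 1e-12:
        return math.exp(-delta_e / temperature)
    base = 1.0 + (q_a - 1.0) * delta_e / temperature
    if base <= 0.0:
        return 0.0
    return base ** (-1.0 / (q_a - 1.0))


for q in (1.0, 1.1, 1.3, 2.3):          # spanning the qA range studied above
    print(q, round(gsa_acceptance(1.0, 0.5, q), 4))
```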

  17. Optimization of Sample Points for Monitoring Arable Land Quality by Simulated Annealing while Considering Spatial Variations

    PubMed Central

    Wang, Junxiao; Wang, Xiaorui; Zhou, Shenglu; Wu, Shaohua; Zhu, Yan; Lu, Chunfeng

    2016-01-01

    With China’s rapid economic development, the reduction in arable land has emerged as one of the most prominent problems in the nation. The long-term dynamic monitoring of arable land quality is important for protecting arable land resources. An efficient practice is to select optimal sample points while obtaining accurate predictions. To this end, the selection of effective points from a dense set of soil sample points is an urgent problem. In this study, data were collected from Donghai County, Jiangsu Province, China. The number and layout of soil sample points are optimized by considering the spatial variations in soil properties and by using an improved simulated annealing (SA) algorithm. The conclusions are as follows: (1) Optimization results in the retention of more sample points in the moderate- and high-variation partitions of the study area; (2) The number of optimal sample points obtained with the improved SA algorithm is markedly reduced, while the accuracy of the predicted soil properties is improved by approximately 5% compared with the raw data; (3) With regard to the monitoring of arable land quality, a dense distribution of sample points is needed to monitor the granularity. PMID:27706051

  18. Fast simulated annealing inversion of surface waves on pavement using phase-velocity spectra

    USGS Publications Warehouse

    Ryden, N.; Park, C.B.

    2006-01-01

    The conventional inversion of surface waves depends on modal identification of measured dispersion curves, which can be ambiguous. It is possible to avoid mode-number identification and extraction by inverting the complete phase-velocity spectrum obtained from a multichannel record. We use the fast simulated annealing (FSA) global search algorithm to minimize the difference between the measured phase-velocity spectrum and that calculated from a theoretical layer model, including the field setup geometry. Results show that this algorithm can help one avoid getting trapped in local minima while searching for the best-matching layer model. The entire procedure is demonstrated on synthetic and field data for asphalt pavement. The viscoelastic properties of the top asphalt layer are taken into account, and the inverted asphalt stiffness as a function of frequency compares well with laboratory tests on core samples. The thickness and shear-wave velocity of the deeper embedded layers are resolved within 10% deviation from those values measured separately during pavement construction. The proposed method may be equally applicable to normal soil site investigation and in the field of ultrasonic testing of materials. ?? 2006 Society of Exploration Geophysicists.

  19. Fast simulated annealing with a multivariate Cauchy distribution and the configuration's initial temperature

    NASA Astrophysics Data System (ADS)

    Lee, Chang-Yong

    2015-05-01

    We propose a multi-dimensional fast simulated annealing method based on a multivariate Cauchy probability distribution and an initial temperature estimated from the configuration's variation. While conventional multi-dimensional fast simulated annealing adopts the product of one-dimensional random variables generated by a univariate Cauchy distribution, the proposed method generates a random vector from a multivariate Cauchy distribution. In this way, fast simulated annealing for a multi-dimensional problem maintains the same annealing schedule as that for the one-dimensional case. The proposed method also utilizes the initial temperature estimated from the configuration's variation to generate a candidate state in addition to the conventional initial temperature derived from the variation of the objective function for the acceptance probability. The proposed method is shown not only to guarantee a fast annealing schedule but also to enhance the search capability. The proposed method was tested against the optimization of real-valued functions. We empirically found that the configuration's initial temperature, together with the multivariate Cauchy distribution, is more suitable than the conventional scheme for a fast annealing schedule. Moreover, the proposed method outperforms the conventional one in optimization problems having many variables.
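
    The mechanical difference described here, drawing one candidate step from a multivariate Cauchy distribution rather than forming a product of independent univariate Cauchy draws, can be sketched as follows. The multivariate Cauchy sample is generated as a multivariate t distribution with one degree of freedom, a standard construction; the paper's exact generator is not specified in the abstract.

```python
import numpy as np

rng = np.random.default_rng(42)


def product_univariate_cauchy(dim, scale=1.0):
    """Conventional approach: independent 1-D Cauchy draws per coordinate."""
    return scale * rng.standard_cauchy(dim)


def multivariate_cauchy(dim, scale=1.0):
    """Proposed-style approach: one isotropic multivariate Cauchy draw,
    built as a multivariate t distribution with 1 degree of freedom."""
    z = rng.standard_normal(dim)
    u = rng.chisquare(df=1)
    return scale * z / np.sqrt(u)


print(product_univariate_cauchy(3))
print(multivariate_cauchy(3))
```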

  20. Exploration of DGVM Parameter Solution Space Using Simulated Annealing: Implications for Forecast Uncertainties

    NASA Astrophysics Data System (ADS)

    Wells, J. R.; Kim, J. B.

    2011-12-01

    Parameters in dynamic global vegetation models (DGVMs) are thought to be weakly constrained and can be a significant source of errors and uncertainties. DGVMs use between 5 and 26 plant functional types (PFTs) to represent the average plant life form in each simulated plot, and each PFT typically has a dozen or more parameters that define the way it uses resources and responds to the simulated growing environment. Sensitivity analysis explores how varying parameters affects the output, but does not do a full exploration of the parameter solution space. The solution space for DGVM parameter values is thought to be complex and non-linear, and multiple sets of acceptable parameters may exist. In published studies, PFT parameters are estimated from published literature, and often a parameter value is estimated from a single published value. Further, the parameters are "tuned" using somewhat arbitrary, "trial-and-error" methods. BIOMAP is a new DGVM created by fusing the MAPSS biogeography model with Biome-BGC. It represents the vegetation of North America using 26 PFTs. We are using simulated annealing, a global search method, to systematically and objectively explore the solution space for the BIOMAP PFTs and system parameters important for plant water use. We defined the boundaries of the solution space by obtaining maximum and minimum values from published literature, and where those were not available, using +/-20% of current values. We used stratified random sampling to select a set of grid cells representing the vegetation of the conterminous USA. The simulated annealing algorithm is applied to the parameters for spin-up and a transient run during the historical period 1961-1990. A set of parameter values is considered acceptable if the associated simulation run produces a modern potential vegetation distribution map that is as accurate as one produced by trial-and-error calibration. We expect to confirm that the solution space is non-linear and complex, and that

  1. Identifying fracture-zone geometry using simulated annealing and hydraulic-connection data

    USGS Publications Warehouse

    Day-Lewis, F. D.; Hsieh, P.A.; Gorelick, S.M.

    2000-01-01

    A new approach is presented to condition geostatistical simulation of high-permeability zones in fractured rock to hydraulic-connection data. A simulated-annealing algorithm generates three-dimensional (3-D) realizations conditioned to borehole data, inferred hydraulic connections between packer-isolated borehole intervals, and an indicator (fracture zone or background-K bedrock) variogram model of spatial variability. We apply the method to data from the U.S. Geological Survey Mirror Lake Site in New Hampshire, where connected high-permeability fracture zones exert a strong control on fluid flow at the hundred-meter scale. Single-well hydraulic-packer tests indicate where permeable fracture zones intersect boreholes, and multiple-well pumping tests indicate the degree of hydraulic connection between boreholes. Borehole intervals connected by a fracture zone exhibit similar hydraulic responses, whereas intervals not connected by a fracture zone exhibit different responses. Our approach yields valuable insights into the 3-D geometry of fracture zones at Mirror Lake. Statistical analysis of the realizations yields maps of the probabilities of intersecting specific fracture zones with additional wells. Inverse flow modeling based on the assumption of equivalent porous media is used to estimate hydraulic conductivity and specific storage and to identify those fracture-zone geometries that are consistent with hydraulic test data.

  2. Laser annealing and simulation of amorphous silicon thin films for solar cell applications

    NASA Astrophysics Data System (ADS)

    Theodorakos, I.; Raptis, Y. S.; Vamvakas, V.; Tsoukalas, D.; Zergioti, I.

    2014-03-01

    In this work, a picosecond DPSS laser and a nanosecond Nd:YAG laser have been used for the annealing and partial nanocrystallization of an amorphous silicon layer. These experiments were conducted in order to improve the characteristics of a micromorph tandem solar cell. The laser annealing was attempted at 1064 nm in order to obtain the desired crystallization depth and ratios. Preliminary annealing processes with different annealing parameters, such as fluence, repetition rate and number of pulses, have been tested. Irradiations were applied in the sub-melt regime in order to prevent significant diffusion of p- and n-dopants within the structure. The laser experimental work was combined with simulations of the laser annealing process, in terms of temperature distribution evolution, using the Synopsys Sentaurus Process TCAD software. The optimum annealing conditions for the two different pulse durations were determined. Experimentally determined optical properties of our samples, such as the absorption coefficient and reflectivity, were used for a more realistic simulation. From the simulation results, a temperature profile appropriate to yield the desired recrystallization was obtained for the case of ps pulses, which was verified by the experimental results. The structural properties of the annealed material were studied by XRD, SEM and micro-Raman techniques, providing consistent information on the characteristics of the nanocrystalline material produced by the laser annealing experiments. It was found that, with the use of ps pulses, the resultant polycrystalline region shows crystallization ratios similar to a PECVD-developed poly-silicon layer, with slightly larger nanocrystallite size.

  3. Satisfiability Test with Synchronous Simulated Annealing on the Fujitsu AP1000 Massively-Parallel Multiprocessor

    NASA Technical Reports Server (NTRS)

    Sohn, Andrew; Biswas, Rupak

    1996-01-01

    Solving the hard Satisfiability Problem is time consuming even for modest-sized problem instances. Solving the Random L-SAT Problem is especially difficult due to the ratio of clauses to variables. This report presents a parallel synchronous simulated annealing method for solving the Random L-SAT Problem on a large-scale distributed-memory multiprocessor. In particular, we use a parallel synchronous simulated annealing procedure, called Generalized Speculative Computation, which guarantees the same decision sequence as sequential simulated annealing. To demonstrate the performance of the parallel method, we have selected problem instances varying in size from 100-variables/425-clauses to 5000-variables/21,250-clauses. Experimental results on the AP1000 multiprocessor indicate that our approach can satisfy 99.9 percent of the clauses while giving almost a 70-fold speedup on 500 processors.

  4. A Simulated Annealing Methodology to Multiproduct Capacitated Facility Location with Stochastic Demand

    PubMed Central

    Xiang, Hui; Ye, Yong; Ni, Linglin

    2015-01-01

    A stochastic multiproduct capacitated facility location problem involving a single supplier and multiple customers is investigated. Due to the stochastic demands, a reasonable amount of safety stock must be kept in the facilities to achieve suitable service levels, which results in increased inventory cost. Based on the assumption that all the stochastic demands are normally distributed, a nonlinear mixed-integer programming model is proposed, whose objective is to minimize the total cost, including transportation cost, inventory cost, operation cost, and setup cost. A combined simulated annealing (CSA) algorithm is presented to solve the model, in which the outer-layer subalgorithm optimizes the facility location decision and the inner-layer subalgorithm optimizes the demand allocation based on the determined facility location decision. The results obtained with this approach show that the CSA is a robust and practical approach for solving a multiple-product problem, generating suboptimal facility location decisions and inventory policies. Meanwhile, we also found that the transportation cost and the demand deviation have the strongest influence on the optimal decision compared to the other factors. PMID:25834839

  5. Optimisation of PM scheduling for multi-component systems - a simulated annealing approach

    NASA Astrophysics Data System (ADS)

    Doostparast, Mohammad; Kolahan, Farhad; Doostparast, Mahdi

    2015-05-01

    Proper planning of preventive maintenance (PM) is crucial in many industries, such as oil transmission pipelines and the automotive and food industries. A critical decision in PM plans is to determine the frequencies and types of maintenance actions in order to achieve a certain level of system availability with a minimum total cost. In this paper, we consider the problem of obtaining availability-based non-periodic optimal PM planning for systems with deteriorating components. The objective is to sustain a certain level of availability with minimal total maintenance-related costs. In the proposed approach, the planning horizon is divided into inspection periods of equal length. For any given interval, a decision must be made to perform one of three actions on each component: inspection, preventive repair or preventive replacement. Each of these activities has different effects on the reliability of the components and corresponding distinct costs based on the required resources. The cost function includes the costs for repair, replacement, system downtime and random failures. System availability and PM resources are the main constraints considered. Since the proposed model is combinatorial in nature and involves non-linear decision variables, a simulated annealing algorithm is employed to provide good solutions within a reasonable time.

  6. A simulated annealing methodology to multiproduct capacitated facility location with stochastic demand.

    PubMed

    Qin, Jin; Xiang, Hui; Ye, Yong; Ni, Linglin

    2015-01-01

    A stochastic multiproduct capacitated facility location problem involving a single supplier and multiple customers is investigated. Due to the stochastic demands, a reasonable amount of safety stock must be kept in the facilities to achieve suitable service levels, which results in increased inventory cost. Based on the assumption that all the stochastic demands are normally distributed, a nonlinear mixed-integer programming model is proposed, whose objective is to minimize the total cost, including transportation cost, inventory cost, operation cost, and setup cost. A combined simulated annealing (CSA) algorithm is presented to solve the model, in which the outer-layer subalgorithm optimizes the facility location decision and the inner-layer subalgorithm optimizes the demand allocation based on the determined facility location decision. The results obtained with this approach show that the CSA is a robust and practical approach for solving a multiple-product problem, generating suboptimal facility location decisions and inventory policies. Meanwhile, we also found that the transportation cost and the demand deviation have the strongest influence on the optimal decision compared to the other factors.

  7. Simulated Annealing-Extended Sampling for Multicomponent Decomposition of Spectral Data of DNA Complexed with Peptide

    NASA Astrophysics Data System (ADS)

    Kang, Jiyoung; Yamasaki, Kazuhiko; Sano, Kuniaki; Tsutsui, Ken; Tsutsui, Kimiko M.; Tateno, Masaru

    2017-01-01

    Theoretical analyses of multivariate data have become increasingly important in various scientific disciplines. The multivariate curve resolution alternating least-squares (MCR-ALS) method is an integrated and systematic tool to decompose such various types of spectral data to several pure spectra, corresponding to distinct species. However, in the present study, the MCR-ALS calculation provided only unreasonable solutions, when used to process the circular dichroism spectra of double-stranded DNA (228 bp) in the complex with a DNA-binding peptide under various concentrations. To resolve this problem, we developed an algorithm by including a simulated annealing (SA) protocol (the SA-MCR-ALS method), to facilitate the expansion of the sampling space. The analysis successfully decomposed the aforementioned data into three reasonable pure spectra. Thus, our SA-MCR-ALS scheme provides a useful tool for effective extended sampling, to investigate the substantial and detailed properties of various forms of multivariate data with significant difficulties in the degrees of freedom.

  8. Fractal Landscape Algorithms for Environmental Simulations

    NASA Astrophysics Data System (ADS)

    Mao, H.; Moran, S.

    2014-12-01

    Natural science and geographical research are now able to take advantage of environmental simulations that more accurately test experimental hypotheses, resulting in deeper understanding. Experiments affected by the natural environment can benefit from 3D landscape simulations capable of simulating a variety of terrains and environmental phenomena. Such simulations can employ random terrain generation algorithms that dynamically simulate environments to test specific models against a variety of factors. Through the use of noise functions such as Perlin noise, Simplex noise, and diamond-square algorithms, computers can generate simulations that model a variety of landscapes and ecosystems. This study shows how these algorithms work together to create realistic landscapes. By seeding values into the diamond-square algorithm, one can control the shape of the landscape. Perlin noise and Simplex noise are also used to simulate moisture and temperature. The smooth gradient created by coherent noise allows more realistic landscapes to be simulated. Terrain generation algorithms can be used in environmental studies and physics simulations. Potential studies that would benefit from simulations include the geophysical impact of flash floods or drought on a particular region and regional impacts on low-lying areas due to global warming and rising sea levels. Furthermore, terrain generation algorithms also serve as aesthetic tools to display landscapes (Google Earth) and simulate planetary landscapes. Hence, they can be used as tools to assist science education. Algorithms used to generate these natural phenomena provide scientists a different approach to analyzing our world. The random algorithms used in terrain generation not only contribute to generating the terrains themselves, but are also capable of simulating weather patterns.
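
    As a concrete example of one of the algorithms named above, the following is a minimal diamond-square height-map generator: seeded corner values control the coarse shape of the landscape, and the random perturbation amplitude is halved at each pass. Blending with Perlin or Simplex noise for moisture and temperature, as described in the abstract, is not included, and the parameter names and values are illustrative.

        import random

        def diamond_square(n, roughness=1.0, seed=42):
            """Generate a (2**n + 1) x (2**n + 1) height map with the diamond-square algorithm."""
            random.seed(seed)
            size = 2 ** n + 1
            h = [[0.0] * size for _ in range(size)]
            # seed the four corners; these values control the coarse shape of the landscape
            for r in (0, size - 1):
                for c in (0, size - 1):
                    h[r][c] = random.uniform(-1, 1)
            step, scale = size - 1, roughness
            while step > 1:
                half = step // 2
                # diamond step: centre of each square = mean of its four corners + noise
                for r in range(half, size, step):
                    for c in range(half, size, step):
                        avg = (h[r - half][c - half] + h[r - half][c + half] +
                               h[r + half][c - half] + h[r + half][c + half]) / 4.0
                        h[r][c] = avg + random.uniform(-scale, scale)
                # square step: centre of each diamond edge = mean of its neighbours + noise
                for r in range(0, size, half):
                    for c in range((r + half) % step, size, step):
                        neigh = [h[r + dr][c + dc]
                                 for dr, dc in ((-half, 0), (half, 0), (0, -half), (0, half))
                                 if 0 <= r + dr < size and 0 <= c + dc < size]
                        h[r][c] = sum(neigh) / len(neigh) + random.uniform(-scale, scale)
                step, scale = half, scale / 2.0
            return h

        terrain = diamond_square(4)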

  9. Stochastic annealing simulation of copper under neutron irradiation

    SciTech Connect

    Heinisch, H.L.; Singh, B.N.

    1998-03-01

    This report is a summary of a presentation made at ICFRM-8 on computer simulations of defect accumulation during irradiation of copper to low doses at room temperature. The simulation results are in good agreement with experimental data on defect cluster densities in copper irradiated in RTNS-II.

  10. Obstacle Bypassing in Optimal Ship Routing Using Simulated Annealing

    SciTech Connect

    Kosmas, O. T.; Vlachos, D. S.; Simos, T. E.

    2008-11-06

    In this paper we discuss a variation on the problem of finding the shortest path between two points in optimal ship routing problems involving obstacles that are not allowed to be crossed by the path. Our main goal is the construction of an appropriate algorithm, based on earlier work, for computing the shortest path between two points in the plane that avoids a set of polygonal obstacles.

  11. An exact accelerated stochastic simulation algorithm.

    PubMed

    Mjolsness, Eric; Orendorff, David; Chatelain, Philippe; Koumoutsakos, Petros

    2009-04-14

    An exact method for stochastic simulation of chemical reaction networks, which accelerates the stochastic simulation algorithm (SSA), is proposed. The present "ER-leap" algorithm is derived from analytic upper and lower bounds on the multireaction probabilities sampled by SSA, together with rejection sampling and an adaptive multiplicity for reactions. The algorithm is tested on a number of well-quantified reaction networks and is found experimentally to be very accurate on test problems including a chaotic reaction network. At the same time ER-leap offers a substantial speedup over SSA with a simulation time proportional to the 2/3 power of the number of reaction events in a Galton-Watson process.
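
    ER-leap accelerates the standard stochastic simulation algorithm, so a useful reference point is Gillespie's direct-method SSA itself. The sketch below is that baseline algorithm on a toy two-reaction network, not the ER-leap rejection-sampling scheme; the network and rate constants are arbitrary assumptions.

        import math, random

        def ssa(x, reactions, t_end, seed=1):
            """Gillespie direct-method SSA; `reactions` is a list of (propensity_fn, state_change) pairs."""
            random.seed(seed)
            t, trajectory = 0.0, [(0.0, dict(x))]
            while t < t_end:
                props = [rate(x) for rate, _ in reactions]           # reaction propensities a_j(x)
                a0 = sum(props)
                if a0 == 0.0:
                    break                                            # no reaction can fire any more
                t += -math.log(1.0 - random.random()) / a0           # exponential waiting time
                r, pick = random.random() * a0, 0
                while r > props[pick] and pick < len(props) - 1:     # choose which reaction fires
                    r -= props[pick]
                    pick += 1
                for species, delta in reactions[pick][1].items():    # apply its state change
                    x[species] += delta
                trajectory.append((t, dict(x)))
            return trajectory

        # toy network: A -> B at rate 1.0*A, B -> A at rate 0.5*B
        traj = ssa({"A": 100, "B": 0},
                   [(lambda s: 1.0 * s["A"], {"A": -1, "B": +1}),
                    (lambda s: 0.5 * s["B"], {"A": +1, "B": -1})],
                   t_end=5.0)
        print(len(traj), traj[-1])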

  12. Rapid thermal annealing of magnesium implanted GaAs-GaAIAs heterostructures experimental and simulated distributions

    NASA Astrophysics Data System (ADS)

    Ketata, K.; Debrie, R.; Ketata, M.

    1993-01-01

    The use of rapid thermal annealing (RTA) techniques to anneal ion implanted GaAs compounds is expected to have a significant impact on device technology. Due to the short duration of the heat treatment, the implanted impurities may be activated without significant diffusion. For heterojunction bipolar transistor (HBT) applications, high doses of p-type impurities are required to compensate the doping levels of N-GaAlAs emitter and n+ GaAs contact layers. Multi-implantations were chosen to maintain a flat profile down to the base layer. Energies of 30, 60, 150, and 340 keV with doses of 6 × 10^13, 9 × 10^13, 6 × 10^14, and 9 × 10^14 cm^-2, respectively, have been used. Annealing cycles with time durations of a few seconds and temperatures in the range of 850-950 °C are described. Electrical properties of the annealed samples have been investigated using an electrochemical measurement technique. It was found that hole concentrations as high as 4 × 10^19 cm^-3 and electrical activities near 75 percent can be obtained. There is no evident indiffusion and no significant outdiffusion at the optimal annealing conditions. Simulations of multilayer implantations are also carried out with an accurate model available in the TITAN 2D process simulator using Pearson IV laws and taking into account the diffusion effects on the profile distribution caused by RTA. A first approximation using a simple model allows a rapid evaluation of the data fitting operation. In a second approach, concentration-dependent diffusivity and the contribution of the electric field at the interface are covered to perform an improved data fitting of ion implanted and annealed dopant profiles. A comparative study shows a good agreement between experimental and simulated distributions.

  13. 2D Ultrasound Sparse Arrays Multi-Depth Radiation Optimization Using Simulated Annealing and Spiral-Array Inspired Energy Functions.

    PubMed

    Roux, Emmanuel; Ramalli, Alessandro; Tortoli, Piero; Cachard, Christian; Robini, Marc; Liebgott, Herve

    2016-08-24

    Full matrix arrays are excellent tools for 3D ultrasound imaging, but the required number of active elements is too high to be individually controlled by an equal number of scanner channels. The number of active elements is significantly reduced by the sparse array techniques, but the position of the remaining elements must be carefully optimized. This issue is faced here by introducing novel energy functions in the simulated annealing algorithm. At each iteration step of the optimization process, one element is freely translated and the associated radiated pattern is simulated. To control the pressure field behavior at multiple depths, three energy functions inspired by the pressure field radiated by a Blackman-tapered spiral array are introduced. Such energy functions aim at limiting the main lobe width while lowering the side lobe and grating lobe levels at multiple depths. Numerical optimization results illustrate the influence of the number of iterations, pressure measurement points and depths, as well as the influence of the energy function definition on the optimized layout. It is also shown that performance close to, or even better than, the one provided by a spiral array, here assumed as reference, may be obtained. The finite-time convergence properties of simulated annealing allow the duration of the optimization process to be set in advance.
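
    Each annealing step above requires simulating the pattern radiated by the candidate element layout. As a much-simplified stand-in for that forward model, the sketch below evaluates a narrowband, far-field, continuous-wave beam pattern of point-like elements; the multi-depth pulsed pressure fields and the Blackman-tapered spiral reference of the paper are not reproduced, and the frequency, aperture and element count are arbitrary assumptions.

        import numpy as np

        def far_field_pattern(xy, f0=3e6, c=1540.0, n_angles=721):
            """Narrowband far-field beam pattern (dB) of point elements at positions `xy` (metres).
            A simplified stand-in for a full multi-depth pulsed pressure simulation."""
            k = 2 * np.pi * f0 / c                             # wavenumber
            theta = np.linspace(-np.pi / 2, np.pi / 2, n_angles)
            ux = np.sin(theta)                                 # steering along x only, for brevity
            field = np.exp(1j * k * np.outer(ux, xy[:, 0])).sum(axis=1)
            p = np.abs(field) / len(xy)
            return theta, 20 * np.log10(np.maximum(p, 1e-6))

        # random sparse layout of 128 elements within a 10 mm x 10 mm aperture
        rng = np.random.default_rng(0)
        positions = rng.uniform(-5e-3, 5e-3, size=(128, 2))
        angles, pattern_db = far_field_pattern(positions)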

  14. Back-Analysis of Tunnel Response from Field Monitoring Using Simulated Annealing

    NASA Astrophysics Data System (ADS)

    Vardakos, Sotirios; Gutierrez, Marte; Xia, Caichu

    2016-12-01

    This paper deals with the use of field monitoring data to improve predictions of tunnel response during and after construction from numerical models. Computational models are powerful tools for the performance-based engineering analysis and design of geotechnical structures; however, the main challenge to their use is the paucity of information to establish input data needed to yield reliable predictions that can be used in the design of geotechnical structures. Field monitoring can offer not only the means to verify modeling results but also faster and more reliable ways to determine model parameters and for improving the reliability of model predictions. Back-analysis involves the determination of parameters required in computational models using field-monitored data, and is particularly suited to underground constructions, where more information about ground conditions and response becomes available as the construction progresses. A crucial component of back-analysis is an algorithm to find a set of input parameters that will minimize the difference between predicted and measured performance (e.g., in terms of deformations, stresses, or tunnel support loads). Methods of back-analysis can be broadly classified as direct and gradient-based optimization techniques. An alternative methodology to carry out the nonlinear optimization involved in back-analyses is the use of heuristic techniques. Heuristic methods refer to experience-based techniques for problem-solving, learning, and discovery that find a solution which is not guaranteed to be fully optimal, but good enough for a given set of goals. This paper focuses on the use of the heuristic simulated annealing (SA) method in the back-analysis of tunnel responses from field-monitored data. SA emulates the metallurgical processing of metals such as steel by annealing, which involves a gradual and sufficiently slow cooling of a metal from the heated phase which leads to a final material with a minimum imperfections

  15. Algorithmic quantum simulation of memory effects

    NASA Astrophysics Data System (ADS)

    Alvarez-Rodriguez, U.; Di Candia, R.; Casanova, J.; Sanz, M.; Solano, E.

    2017-02-01

    We propose a method for the algorithmic quantum simulation of memory effects described by integrodifferential evolution equations. It consists in the systematic use of perturbation theory techniques and a Markovian quantum simulator. Our method aims to efficiently simulate both completely positive and nonpositive dynamics without the requirement of engineering non-Markovian environments. Finally, we find that small error bounds can be reached with polynomially scaling resources, evaluated as the time required for the simulation.

  16. Joint Optimization of Vertical Component Gravity and Seismic P-wave First Arrivals by Simulated Annealing

    NASA Astrophysics Data System (ADS)

    Louie, J. N.; Basler-Reeder, K.; Kent, G. M.; Pullammanappallil, S. K.

    2015-12-01

    Simultaneous joint seismic-gravity optimization improves P-wave velocity models in areas with sharp lateral velocity contrasts. Optimization is achieved using simulated annealing, a metaheuristic global optimization algorithm that does not require an accurate initial model. Balancing the seismic-gravity objective function is accomplished by a novel approach based on analysis of Pareto charts. Gravity modeling uses a newly developed convolution algorithm, while seismic modeling utilizes the highly efficient Vidale eikonal equation traveltime generation technique. Synthetic tests show that joint optimization improves velocity model accuracy and provides velocity control below the deepest headwave raypath. Detailed first arrival picking followed by trial velocity modeling remediates inconsistent data. We use a set of highly refined first arrival picks to compare results of a convergent joint seismic-gravity optimization to the Plotrefa™ and SeisOpt® Pro™ velocity modeling packages. Plotrefa™ uses a nonlinear least squares approach that is initial model dependent and produces shallow velocity artifacts. SeisOpt® Pro™ utilizes the simulated annealing algorithm and is limited to depths above the deepest raypath. Joint optimization increases the depth of constrained velocities, improving reflector coherency at depth. Kirchhoff prestack depth migrations reveal that joint optimization ameliorates shallow velocity artifacts caused by limitations in refraction ray coverage. Seismic and gravity data from the San Emidio Geothermal field of the northwest Basin and Range province demonstrate that joint optimization changes interpretation outcomes. The prior shallow-valley interpretation gives way to a deep valley model, while shallow antiformal reflectors that could have been interpreted as antiformal folds are flattened. Furthermore, joint optimization provides a clearer image of the rangefront fault. This technique can readily be applied to existing datasets and could

  17. Quantum Monte Carlo simulation of a particular class of non-stoquastic Hamiltonians in quantum annealing.

    PubMed

    Ohzeki, Masayuki

    2017-01-23

    Quantum annealing is a generic solver of the optimization problem that uses fictitious quantum fluctuation. Its simulation in classical computing is often performed using the quantum Monte Carlo simulation via the Suzuki-Trotter decomposition. However, the negative sign problem sometimes emerges in the simulation of quantum annealing with an elaborate driver Hamiltonian, since it belongs to a class of non-stoquastic Hamiltonians. In the present study, we propose an alternative way to avoid the negative sign problem involved in a particular class of the non-stoquastic Hamiltonians. To check the validity of the method, we demonstrate our method by applying it to a simple problem that includes the anti-ferromagnetic XX interaction, which is a typical instance of the non-stoquastic Hamiltonians.

  18. Quantum Monte Carlo simulation of a particular class of non-stoquastic Hamiltonians in quantum annealing

    NASA Astrophysics Data System (ADS)

    Ohzeki, Masayuki

    2017-01-01

    Quantum annealing is a generic solver of the optimization problem that uses fictitious quantum fluctuation. Its simulation in classical computing is often performed using the quantum Monte Carlo simulation via the Suzuki–Trotter decomposition. However, the negative sign problem sometimes emerges in the simulation of quantum annealing with an elaborate driver Hamiltonian, since it belongs to a class of non-stoquastic Hamiltonians. In the present study, we propose an alternative way to avoid the negative sign problem involved in a particular class of the non-stoquastic Hamiltonians. To check the validity of the method, we demonstrate our method by applying it to a simple problem that includes the anti-ferromagnetic XX interaction, which is a typical instance of the non-stoquastic Hamiltonians.

  19. Quantum Monte Carlo simulation of a particular class of non-stoquastic Hamiltonians in quantum annealing

    PubMed Central

    Ohzeki, Masayuki

    2017-01-01

    Quantum annealing is a generic solver of the optimization problem that uses fictitious quantum fluctuation. Its simulation in classical computing is often performed using the quantum Monte Carlo simulation via the Suzuki–Trotter decomposition. However, the negative sign problem sometimes emerges in the simulation of quantum annealing with an elaborate driver Hamiltonian, since it belongs to a class of non-stoquastic Hamiltonians. In the present study, we propose an alternative way to avoid the negative sign problem involved in a particular class of the non-stoquastic Hamiltonians. To check the validity of the method, we demonstrate our method by applying it to a simple problem that includes the anti-ferromagnetic XX interaction, which is a typical instance of the non-stoquastic Hamiltonians. PMID:28112244

  20. Experimental and Numerical Simulations of Phase Transformations Occurring During Continuous Annealing of DP Steel Strips

    NASA Astrophysics Data System (ADS)

    Wrożyna, Andrzej; Pernach, Monika; Kuziak, Roman; Pietrzyk, Maciej

    2016-04-01

    Due to their exceptional strength properties combined with good workability, the Advanced High-Strength Steels (AHSS) are commonly used in the automotive industry. Manufacturing of these steels is a complex process which requires precise control of technological parameters during thermo-mechanical treatment. Design of these processes can be significantly improved by numerical models of phase transformations. Evaluation of the predictive capabilities of such models, as far as their applicability in the simulation of thermal cycles for AHSS is concerned, was the objective of the paper. Two models were considered. The former was an upgrade of the JMAK equation while the latter was an upgrade of the Leblond model. The models can be applied to any AHSS though the examples quoted in the paper refer to the Dual Phase (DP) steel. Three series of experimental simulations were performed. The first included various thermal cycles going beyond the limitations of continuous annealing lines. The objective was to validate the models' behavior in more complex cooling conditions. The second set of tests included experimental simulations of the thermal cycle characteristic of continuous annealing lines. The capability of the models to properly describe phase transformations in this process was evaluated. The third set included data from an industrial continuous annealing line. Validation and verification of the models confirmed their good predictive capabilities. Since it does not require application of the additivity rule, the upgrade of the Leblond model was selected as the better one for simulation of industrial processes in AHSS production.

  1. Simulated annealing with restrained molecular dynamics using a flexible restraint potential: theory and evaluation with simulated NMR constraints.

    PubMed Central

    Bassolino-Klimas, D.; Tejero, R.; Krystek, S. R.; Metzler, W. J.; Montelione, G. T.; Bruccoleri, R. E.

    1996-01-01

    A new functional representation of NMR-derived distance constraints, the flexible restraint potential, has been implemented in the program CONGEN (Bruccoleri RE, Karplus M, 1987, Biopolymers 26:137-168) for molecular structure generation. In addition, flat-bottomed restraint potentials for representing dihedral angle and vicinal scalar coupling constraints have been introduced into CONGEN. An effective simulated annealing (SA) protocol that combines both weight annealing and temperature annealing is described. Calculations have been performed using ideal simulated NMR constraints, in order to evaluate the use of restrained molecular dynamics (MD) with these target functions as implemented in CONGEN. In this benchmark study, internuclear distance, dihedral angle, and vicinal coupling constant constraints were calculated from the energy-minimized X-ray crystal structure of the 46-amino acid polypeptide crambin (1CRN). Three-dimensional structures of crambin that satisfy these simulated NMR constraints were generated using restrained MD and SA. Polypeptide structures with extended backbone and side-chain conformations were used as starting conformations. Dynamical annealing calculations using extended starting conformations and assignments of initial velocities taken randomly from a Maxwellian distribution were found to adequately sample the conformational space consistent with the constraints. These calculations also show that loosened internuclear constraints can allow molecules to overcome local minima in the search for a global minimum with respect to both the NMR-derived constraints and conformational energy. This protocol and the modified version of the CONGEN program described here are shown to be reliable and robust, and are applicable generally for protein structure determination by dynamical simulated annealing using NMR data. PMID:8845749
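
    The flat-bottomed restraint idea mentioned above can be illustrated with a generic functional form: zero penalty while the restrained quantity lies within its allowed bounds, and a harmonic penalty outside. The force constant and bounds below are placeholders, and this is a generic sketch rather than CONGEN's actual flexible restraint potential.

        def flat_bottom_restraint(r, lower, upper, k=10.0):
            """Flat-bottomed harmonic restraint: no penalty while `lower <= r <= upper`,
            quadratic penalty (force constant k) once the restrained value leaves the bounds."""
            if r < lower:
                return 0.5 * k * (lower - r) ** 2
            if r > upper:
                return 0.5 * k * (r - upper) ** 2
            return 0.0

        # NOE-style distance restraint: allowed range 1.8-5.0 Angstroms
        print(flat_bottom_restraint(5.6, 1.8, 5.0))   # violated -> positive energy
        print(flat_bottom_restraint(3.2, 1.8, 5.0))   # satisfied -> 0.0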

  2. Optimizing the natural connectivity of scale-free networks using simulated annealing

    NASA Astrophysics Data System (ADS)

    Duan, Boping; Liu, Jing; Tang, Xianglong

    2016-09-01

    In real-world networks, the path between two nodes always plays a significant role in the fields of communication or transportation. In some cases, when one path fails, the two nodes cannot communicate any more. Thus, it is necessary to increase the number of alternative paths between nodes. In recent work, Wu et al. (2011) proposed the natural connectivity as a novel robustness measure of complex networks. The natural connectivity considers the redundancy of alternative paths in a network by computing the number of closed paths of all lengths. To enhance the robustness of networks in terms of the natural connectivity, in this paper, we propose a simulated annealing method to optimize the natural connectivity of scale-free networks without changing the degree distribution. The experimental results show that the simulated annealing method clearly outperforms other local search methods.
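
    The natural connectivity can be computed from the adjacency spectrum as the logarithm of the average of exp(lambda_i), and the degree distribution can be held fixed by using double-edge swaps as the annealing move. The sketch below follows that generic recipe on a small random graph; the graph size, cooling schedule and acceptance details are illustrative assumptions, not the authors' exact settings.

        import math, random
        import numpy as np

        def natural_connectivity(adj):
            """Natural connectivity: ln( (1/N) * sum_i exp(lambda_i) ) over adjacency eigenvalues."""
            eig = np.linalg.eigvalsh(adj)
            return math.log(np.mean(np.exp(eig)))

        def double_edge_swap(adj, rng):
            """Degree-preserving move: rewire edges (a,b),(c,d) into (a,d),(c,b) when valid."""
            a2 = adj.copy()
            edges = np.argwhere(np.triu(a2) == 1)
            (a, b), (c, d) = edges[rng.choice(len(edges), 2, replace=False)]
            if len({a, b, c, d}) == 4 and a2[a, d] == 0 and a2[c, b] == 0:
                a2[a, b] = a2[b, a] = a2[c, d] = a2[d, c] = 0
                a2[a, d] = a2[d, a] = a2[c, b] = a2[b, c] = 1
            return a2

        rng = np.random.default_rng(0)
        n = 30
        adj = (rng.random((n, n)) < 0.1).astype(int)
        adj = np.triu(adj, 1); adj = adj + adj.T           # symmetric adjacency, no self-loops
        temp, state = 1.0, adj
        for _ in range(500):                               # SA loop maximizing natural connectivity
            cand = double_edge_swap(state, rng)
            delta = natural_connectivity(cand) - natural_connectivity(state)
            if delta > 0 or rng.random() < math.exp(delta / temp):
                state = cand
            temp *= 0.995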

  3. Proposal of a checking parameter in the simulated annealing method applied to the spin glass model

    NASA Astrophysics Data System (ADS)

    Yamaguchi, Chiaki

    2016-02-01

    We propose a checking parameter utilizing the breaking of the Jarzynski equality in the simulated annealing method using the Monte Carlo method. This parameter is based on the Jarzynski equality. By using this parameter, it is possible to detect whether the system has reached the global minimum of the free energy under gradual temperature reduction. Thus, by using this parameter, one is able to investigate the efficiency of annealing schedules. We apply this parameter to the ± J Ising spin glass model. The application to the Gaussian Ising spin glass model is also mentioned. We discuss that the breaking of the Jarzynski equality is induced by the system being trapped in local minima of the free energy. By performing Monte Carlo simulations of the ± J Ising spin glass model and a glassy spin model proposed by Newman and Moore, we show the efficiency of the use of this parameter.
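
    For reference, in its standard isothermal form the Jarzynski equality relates the exponential average of the work W performed along nonequilibrium realizations of a protocol to the equilibrium free-energy difference (the notation below is the standard one and is not taken from the paper):

        \left\langle e^{-\beta W} \right\rangle = e^{-\beta \Delta F},
        \qquad \beta = \frac{1}{k_{\mathrm{B}} T}.

    Trapping in local minima of the free energy during an annealing schedule shows up as a measurable violation of this identity, which is what the proposed checking parameter monitors.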

  4. Crosshole Tomography, Waveform Inversion, and Anisotropy: A Combined Approach Using Simulated Annealing

    NASA Astrophysics Data System (ADS)

    Afanasiev, M.; Pratt, R. G.; Kamei, R.; McDowell, G.

    2012-12-01

    Crosshole seismic tomography has been used by Vale to provide geophysical images of mineralized massive sulfides in the Eastern Deeps deposit at Voisey's Bay, Labrador, Canada. To date, these data have been processed using traveltime tomography, and we seek to improve the resolution of these images by applying acoustic Waveform Tomography. Due to the computational cost of acoustic waveform modelling, local descent algorithms are employed in Waveform Tomography; due to non-linearity an initial model is required which predicts first-arrival traveltimes to within a half-cycle of the lowest frequency used. Because seismic velocity anisotropy can be significant in hardrock settings, the initial model must quantify the anisotropy in order to meet the half-cycle criterion. In our case study, significant velocity contrasts between the target massive sulfides and the surrounding country rock led to difficulties in generating an accurate anisotropy model through traveltime tomography, and our starting model for Waveform Tomography failed the half-cycle criterion at large offsets. We formulate a new, semi-global approach for finding the best-fit 1-D elliptical anisotropy model using simulated annealing. Through random perturbations to Thomsen's ɛ parameter, we explore the L2 norm of the frequency-domain phase residuals in the space of potential anisotropy models: If a perturbation decreases the residuals, it is always accepted, but if a perturbation increases the residuals, it is accepted with the probability P = exp(-(Ei-E)/T). This is the Metropolis criterion, where Ei is the value of the residuals at the current iteration, E is the value of the residuals for the previously accepted model, and T is a probability control parameter, which is decreased over the course of the simulation via a preselected cooling schedule. Convergence to the global minimum of the residuals is guaranteed only for infinitely slow cooling, but in practice good results are obtained from a variety
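
    The acceptance rule and cooling schedule described above have roughly the shape sketched below: improvements are always accepted, a worse model is accepted with probability exp(-(Ei - E)/T), and T decreases geometrically. The toy residual function, perturbation size and cooling rate are placeholders, not the values used in the study.

        import math, random

        def metropolis_anneal(residual, x0, step=0.05, t0=1.0, cooling=0.99, n_iter=2000, seed=0):
            """Simulated annealing with the Metropolis criterion: always accept improvements,
            accept a worse model with probability exp(-(E_i - E)/T) under geometric cooling."""
            random.seed(seed)
            x, e = x0, residual(x0)
            best_x, best_e, temp = x, e, t0
            for _ in range(n_iter):
                cand = x + random.uniform(-step, step)        # random perturbation of the model parameter
                e_cand = residual(cand)
                if e_cand <= e or random.random() < math.exp(-(e_cand - e) / temp):
                    x, e = cand, e_cand
                    if e < best_e:
                        best_x, best_e = x, e
                temp *= cooling                               # preselected cooling schedule
            return best_x, best_e

        # toy residual with a local minimum near 0.8 and the global minimum near 0.2
        res = lambda eps: (eps - 0.2) ** 2 + 0.05 * math.cos(12 * eps)
        print(metropolis_anneal(res, x0=0.8))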

  5. Green's-function solutions to dynamical-simulated annealing and steepest-descents equations of motion

    SciTech Connect

    Benedek, R.; Min, B.I.; Garner, J.

    1987-08-01

    Solutions to the dynamical-simulated-annealing and the steepest-descents equations of motion for electron states are presented. The relations proposed by Payne et al. and by Williams and Soler can be obtained from the first Born approximation by applying additional decoupling approximations. A numerical example is presented to contrast the behavior of the Green's function and finite-difference solutions to the steepest-descents dynamics. 14 refs., 2 figs.

  6. Molecular structure matching by simulated annealing. IV. Classification of atom correspondences in sets of dissimilar molecules

    NASA Astrophysics Data System (ADS)

    Papadopoulos, M. C.; Dean, P. M.

    1991-04-01

    A set of 6 molecules active at the benzodiazepine GABAA site is matched pairwise against one member of the set in turn. Matchings are performed by simulated annealing using null correspondences to reject poorly matched atom positions. Cluster analysis is employed to identify molecular similarities after an optimal molecular superimposition has been discovered. A statistic for the compactness of clustered atom positions is suggested. The introduction of null correspondences causes the clusters of matched atoms to become more compact.

  7. Asymptotics in Time, Temperature and Size for Optimization by Simulated Annealing: Theory, Practice and Applications

    DTIC Science & Technology

    1990-01-19

    and studying the growth of this bound as the temperature approaches zero asymptotically. Simulated annealing with a time varying temperature gives...rise to a time inhomogeneous Markov chain. This Markov chain is difficult to analyze and study due to the time-inhomogeneity. We have been able to...problem. Moreover, we can study the growth of this bound as the temperature approaches zero or skewness becomes arbitrarily large; thereby, providing

  8. Annealing of ion irradiated high T{sub C} Josephson junctions studied by numerical simulations

    SciTech Connect

    Sirena, M.; Matzen, S.; Bergeal, N.; Lesueur, J.; Faini, G.; Bernard, R.; Briatico, J.; Crete, D. G.

    2009-01-15

    Recently, annealing of ion irradiated high T{sub c} Josephson junctions (JJs) has been studied experimentally in the perspective of improving their reproducibility. Here we present numerical simulations based on random walk and Monte Carlo calculations of the evolution of JJ characteristics such as the transition temperature T{sub c}{sup '} and its spread {delta}T{sub c}{sup '}, and compare them with experimental results on junctions irradiated with 100 and 150 keV oxygen ions, and annealed at low temperatures (below 80 deg. C). We have successfully used a vacancy-interstitial annihilation mechanism to describe the evolution of the T{sub c}{sup '} and the homogeneity of a JJ array, analyzing the evolution of the defects density mean value and its distribution width. The annealing first increases the spread in T{sub c}{sup '} for short annealing times due to the stochastic nature of the process, but then tends to reduce it for longer times, which is interesting for technological applications.

  9. Improve earthquake hypocenter using adaptive simulated annealing inversion in regional tectonic, volcano tectonic, and geothermal observation

    SciTech Connect

    Ry, Rexha Verdhora; Nugraha, Andri Dian

    2015-04-24

    Observation of earthquakes is routinely and widely used in tectonic activity monitoring, and also at local scales such as volcano tectonic and geothermal activity observation. Determining precise hypocenter locations is necessary, a process that involves finding the hypocenter location that minimizes the error between the observed and calculated travel times. When solving this nonlinear inverse problem, the simulated annealing inversion method can be applied as a global optimization technique whose convergence is independent of the initial model. In this study, we developed our own program code by applying adaptive simulated annealing inversion in the Matlab environment. We applied this method to determine earthquake hypocenters using several data cases covering regional tectonic, volcano tectonic, and geothermal field settings. The travel times were calculated using a ray tracing shooting method. We then compared its results with the results of Geiger's method to analyze its reliability. Our results show that the hypocenter locations have smaller RMS errors compared to Geiger's results, which can be statistically associated with better solutions. The earthquake hypocenters also correlate well with the geological structure in the study area. We recommend using adaptive simulated annealing inversion to relocate hypocenters in order to obtain precise and accurate earthquake locations.
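
    A toy version of the underlying optimization problem is sketched below: straight-ray travel times in a homogeneous velocity model, and simulated annealing over (x, y, depth, origin time) minimizing the RMS travel-time residual. The station geometry, velocity and annealing schedule are illustrative assumptions; the paper's adaptive schedule and ray-tracing forward model are not reproduced.

        import math, random

        V = 5.0   # assumed homogeneous P-wave velocity, km/s (toy value)
        stations = [(0.0, 0.0), (10.0, 0.0), (0.0, 12.0), (8.0, 9.0)]   # hypothetical station coordinates (km)

        def travel_times(hypo):
            """Straight-ray travel times from hypocenter (x, y, depth, origin_time) to each station."""
            x, y, z, t0 = hypo
            return [t0 + math.sqrt((x - sx) ** 2 + (y - sy) ** 2 + z ** 2) / V for sx, sy in stations]

        def rms_residual(hypo, observed):
            calc = travel_times(hypo)
            return math.sqrt(sum((o - c) ** 2 for o, c in zip(observed, calc)) / len(observed))

        def locate(observed, n_iter=5000, t0=1.0, cooling=0.999, seed=0):
            """Simulated annealing over (x, y, depth, origin time), minimizing the RMS residual."""
            random.seed(seed)
            hypo = [5.0, 5.0, 5.0, 0.0]
            energy, temp = rms_residual(hypo, observed), t0
            for _ in range(n_iter):
                cand = [p + random.gauss(0, 0.3) for p in hypo]
                cand[2] = abs(cand[2])                        # keep depth non-negative
                e = rms_residual(cand, observed)
                if e < energy or random.random() < math.exp(-(e - energy) / temp):
                    hypo, energy = cand, e
                temp *= cooling
            return hypo, energy

        true = (3.0, 4.0, 7.0, 0.2)
        obs = travel_times(true)
        print(locate(obs))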

  10. Validation of Sensor-Directed Spatial Simulated Annealing Soil Sampling Strategy.

    PubMed

    Scudiero, Elia; Lesch, Scott M; Corwin, Dennis L

    2016-07-01

    Soil spatial variability has a profound influence on most agronomic and environmental processes at field and landscape scales, including site-specific management, vadose zone hydrology and transport, and soil quality. Mobile sensors are a practical means of mapping spatial variability because their measurements serve as a proxy for many soil properties, provided a sensor-soil calibration is conducted. A viable means of calibrating sensor measurements over soil properties is through linear regression modeling of sensor and target property data. In the present study, two sensor-directed, model-based, sampling scheme delineation methods were compared to validate recent applications of soil apparent electrical conductivity (EC)-directed spatial simulated annealing against the more established EC-directed response surface sampling design (RSSD) approach. A 6.8-ha study area near San Jacinto, CA, was surveyed for EC, and 30 soil sampling locations per sampling strategy were selected. Spatial simulated annealing and RSSD were compared for sensor calibration to a target soil property (i.e., salinity) and for evenness of spatial coverage of the study area, which is beneficial for mapping nontarget soil properties (i.e., those not correlated with EC). The results indicate that the linear modeling EC-salinity calibrations obtained from the two sampling schemes provided salinity maps characterized by similar errors. The maps of nontarget soil properties show similar errors across sampling strategies. The Spatial Simulated Annealing methodology is, therefore, validated, and its use in agronomic and environmental soil science applications is justified.

  11. Improve earthquake hypocenter using adaptive simulated annealing inversion in regional tectonic, volcano tectonic, and geothermal observation

    NASA Astrophysics Data System (ADS)

    Ry, Rexha Verdhora; Nugraha, Andri Dian

    2015-04-01

    Observation of earthquakes is routinely and widely used in tectonic activity monitoring, and also at local scales such as volcano tectonic and geothermal activity observation. Determining precise hypocenter locations is necessary, a process that involves finding the hypocenter location that minimizes the error between the observed and calculated travel times. When solving this nonlinear inverse problem, the simulated annealing inversion method can be applied as a global optimization technique whose convergence is independent of the initial model. In this study, we developed our own program code by applying adaptive simulated annealing inversion in the Matlab environment. We applied this method to determine earthquake hypocenters using several data cases covering regional tectonic, volcano tectonic, and geothermal field settings. The travel times were calculated using a ray tracing shooting method. We then compared its results with the results of Geiger's method to analyze its reliability. Our results show that the hypocenter locations have smaller RMS errors compared to Geiger's results, which can be statistically associated with better solutions. The earthquake hypocenters also correlate well with the geological structure in the study area. We recommend using adaptive simulated annealing inversion to relocate hypocenters in order to obtain precise and accurate earthquake locations.

  12. Object Kinetic Monte Carlo Simulations of Annealing of Cascade Damage in Tungsten

    NASA Astrophysics Data System (ADS)

    Nandipati, Giridhar; Setyawan, Wahyu; Heinisch, Howard; Kurtz, Richard; Roche, Kenneth; Roche, Brian

    2013-10-01

    Results are presented for a series of annealing simulations of displacement cascades in tungsten (W) using the kinetic Monte Carlo (KMC) code kSOME (kinetic simulation of microstructure evolution), which is our newly developed lattice-based Object KMC simulation code. In principle, kSOME can deal with migration, emission, transformation and recombination of all types of intrinsic point defects and their complexes. In addition, the interaction of these point defects with sinks such as dislocations, grain boundaries and free surfaces is also treated. We have studied the long-time annealing of displacement cascades in W obtained from molecular dynamics (MD) simulations. A database of displacement cascades in W was obtained using MD simulations for temperatures of 800-1300K and primary knock-on atom (PKA) energies in the range of 2 to 40 keV. The input data for the KMC simulations, such as activation energies for migration and dissociation of defects, and their capture radii were obtained from atomic-level calculations. The evolution of radiation damage was investigated as a function of time, temperature, dose and dose-rate. The results for W are compared with those for similar simulations of cascades in α-Fe.

  13. An exact accelerated stochastic simulation algorithm

    NASA Astrophysics Data System (ADS)

    Mjolsness, Eric; Orendorff, David; Chatelain, Philippe; Koumoutsakos, Petros

    2009-04-01

    An exact method for stochastic simulation of chemical reaction networks, which accelerates the stochastic simulation algorithm (SSA), is proposed. The present "ER-leap" algorithm is derived from analytic upper and lower bounds on the multireaction probabilities sampled by SSA, together with rejection sampling and an adaptive multiplicity for reactions. The algorithm is tested on a number of well-quantified reaction networks and is found experimentally to be very accurate on test problems including a chaotic reaction network. At the same time ER-leap offers a substantial speedup over SSA with a simulation time proportional to the 2/3 power of the number of reaction events in a Galton-Watson process.

  14. An exact accelerated stochastic simulation algorithm

    PubMed Central

    Mjolsness, Eric; Orendorff, David; Chatelain, Philippe; Koumoutsakos, Petros

    2009-01-01

    An exact method for stochastic simulation of chemical reaction networks, which accelerates the stochastic simulation algorithm (SSA), is proposed. The present “ER-leap” algorithm is derived from analytic upper and lower bounds on the multireaction probabilities sampled by SSA, together with rejection sampling and an adaptive multiplicity for reactions. The algorithm is tested on a number of well-quantified reaction networks and is found experimentally to be very accurate on test problems including a chaotic reaction network. At the same time ER-leap offers a substantial speedup over SSA with a simulation time proportional to the 2∕3 power of the number of reaction events in a Galton–Watson process. PMID:19368432

  15. Assessment of a fuzzy based flood forecasting system optimized by simulated annealing

    NASA Astrophysics Data System (ADS)

    Reyhani Masouleh, Aida; Pakosch, Sabine; Disse, Markus

    2010-05-01

    Flood forecasting is an important tool to mitigate the harmful effects of floods. Among the many different approaches to forecasting, Fuzzy Logic (FL) is one that has been increasingly applied over the last decade. This method is principally based on the linguistic description of Rule Systems (RS). A RS is a specific combination of membership functions of input and output variables. Setting up the RS can be done either automatically or manually, and this choice can strongly influence the resulting rule systems. It is therefore the objective of this study to assess the influence that the parameters of an automated rule generation based on Simulated Annealing (SA) have on the resulting RS. The study area is the upper Main River area, located in the northern part of Bavaria, Germany. Data from the Mainleus gauge, with a catchment area of 1165 km2, were investigated for the whole period from 1984 to 2004. The highest observed discharge of 357 m3/s was recorded in 1995. The input arguments of the FL model were daily precipitation, forecasted precipitation, antecedent precipitation index, temperature and melting rate. The FL model of this study has one output variable, daily discharge, and was independently set up for three different forecast lead times, namely one-, two- and three-days ahead. In total, each RS comprised 55 rules and all input and output variables were represented by five sets of trapezoidal and triangular fuzzy numbers. Simulated Annealing, an algorithm that converges towards an optimum solution, was applied for optimizing the RSs in this study. In order to assess the influence of its parameters (number of iterations, temperature decrease rate, initial value for generating random numbers, initial temperature and two other parameters), they were individually varied while keeping the others fixed. With each of the resulting parameter sets, a fully automatic SA was applied to gain optimized fuzzy rule systems for flood forecasting. Evaluation of the performance of the

  16. Genetic Algorithms for Digital Quantum Simulations.

    PubMed

    Las Heras, U; Alvarez-Rodriguez, U; Solano, E; Sanz, M

    2016-06-10

    We propose genetic algorithms, which are robust optimization techniques inspired by natural selection, to enhance the versatility of digital quantum simulations. In this sense, we show that genetic algorithms can be employed to increase the fidelity and optimize the resource requirements of digital quantum simulation protocols while adapting naturally to the experimental constraints. Furthermore, this method allows us to reduce not only digital errors but also experimental errors in quantum gates. Indeed, by adding ancillary qubits, we design a modular gate made out of imperfect gates, whose fidelity is larger than the fidelity of any of the constituent gates. Finally, we prove that the proposed modular gates are resilient against different gate errors.

  17. Simulations of Thermal Quantum Annealing on the D-Wave Device

    NASA Astrophysics Data System (ADS)

    Albash, Tameem; Vinci, Walter; Mishra, Anurag; Warburton, Paul; Lidar, Daniel

    2014-03-01

    We report on classical and quantum simulations to model the open-system dynamics of the D-Wave programmable annealer as we increase the thermal noise level on the device. We consider three models for the device: (1) the evolution is described by a classical simulated annealer acting on the final-time Ising Hamiltonian; (2) the evolution is described by an O(3) model with a time-dependent Hamiltonian; (3) the evolution is described by a quantum adiabatic Markovian master equation with a time dependent Hamiltonian. We increase the thermal noise level by either decreasing the overall energy scale of the final-time Ising Hamiltonian or by increasing the total annealing time. Using a benchmark Ising Hamiltonian, we show that all three models give distinct predictions for the behavior of the system as the noise level on the device is increased. The only model that captures the results of the device over the entire range of noise levels studied is the quantum master equation, ruling out the two classical models considered here.

  18. Image-Based Airborne Sensors: A Combined Approach for Spectral Signatures Classification through Deterministic Simulated Annealing

    PubMed Central

    Guijarro, María; Pajares, Gonzalo; Herrera, P. Javier

    2009-01-01

    The increasing technology of high-resolution image airborne sensors, including those on board Unmanned Aerial Vehicles, demands automatic solutions for processing, either on-line or off-line, the huge amounts of image data sensed during the flights. The classification of natural spectral signatures in images is one potential application. The actual tendency in classification is oriented towards the combination of simple classifiers. In this paper we propose a combined strategy based on the Deterministic Simulated Annealing (DSA) framework. The simple classifiers used are the well tested supervised parametric Bayesian estimator and the Fuzzy Clustering. The DSA is an optimization approach, which minimizes an energy function. The main contribution of DSA is its ability to avoid local minima during the optimization process thanks to the annealing scheme. It outperforms simple classifiers used for the combination and some combined strategies, including a scheme based on the fuzzy cognitive maps and an optimization approach based on the Hopfield neural network paradigm. PMID:22399989

  19. A hierarchical exact accelerated stochastic simulation algorithm

    NASA Astrophysics Data System (ADS)

    Orendorff, David; Mjolsness, Eric

    2012-12-01

    A new algorithm, "HiER-leap" (hierarchical exact reaction-leaping), is derived which improves on the computational properties of the ER-leap algorithm for exact accelerated simulation of stochastic chemical kinetics. Unlike ER-leap, HiER-leap utilizes a hierarchical or divide-and-conquer organization of reaction channels into tightly coupled "blocks" and is thereby able to speed up systems with many reaction channels. Like ER-leap, HiER-leap is based on the use of upper and lower bounds on the reaction propensities to define a rejection sampling algorithm with inexpensive early rejection and acceptance steps. But in HiER-leap, large portions of intra-block sampling may be done in parallel. An accept/reject step is used to synchronize across blocks. This method scales well when many reaction channels are present and has desirable asymptotic properties. The algorithm is exact, parallelizable and achieves a significant speedup over the stochastic simulation algorithm and ER-leap on certain problems. This algorithm offers a potentially important step towards efficient in silico modeling of entire organisms.

  20. The systems biology simulation core algorithm

    PubMed Central

    2013-01-01

    Background With the increasing availability of high dimensional time course data for metabolites, genes, and fluxes, the mathematical description of dynamical systems has become an essential aspect of research in systems biology. Models are often encoded in formats such as SBML, whose structure is very complex and difficult to evaluate due to many special cases. Results This article describes an efficient algorithm to solve SBML models that are interpreted in terms of ordinary differential equations. We begin our consideration with a formal representation of the mathematical form of the models and explain all parts of the algorithm in detail, including several preprocessing steps. We provide a flexible reference implementation as part of the Systems Biology Simulation Core Library, a community-driven project providing a large collection of numerical solvers and a sophisticated interface hierarchy for the definition of custom differential equation systems. To demonstrate the capabilities of the new algorithm, it has been tested with the entire SBML Test Suite and all models of BioModels Database. Conclusions The formal description of the mathematics behind the SBML format facilitates the implementation of the algorithm within specifically tailored programs. The reference implementation can be used as a simulation backend for Java™-based programs. Source code, binaries, and documentation can be freely obtained under the terms of the LGPL version 3 from http://simulation-core.sourceforge.net. Feature requests, bug reports, contributions, or any further discussion can be directed to the mailing list simulation-core-development@lists.sourceforge.net. PMID:23826941

  1. Displacement cascades and defect annealing in tungsten, Part II: Object kinetic Monte Carlo Simulation of Tungsten Cascade Aging

    SciTech Connect

    Nandipati, Giridhar; Setyawan, Wahyu; Heinisch, Howard L.; Roche, Kenneth J.; Kurtz, Richard J.; Wirth, Brian D.

    2015-07-01

    The results of object kinetic Monte Carlo (OKMC) simulations of the annealing of primary cascade damage in bulk tungsten using a comprehensive database of cascades obtained from molecular dynamics (Setyawan et al.) are described as a function of primary knock-on atom (PKA) energy at temperatures of 300, 1025 and 2050 K. An increase in SIA clustering coupled with a decrease in vacancy clustering with increasing temperature, in addition to the disparate mobilities of SIAs versus vacancies, causes an interesting effect of temperature on cascade annealing. The annealing efficiency (the ratio of the number of defects after and before annealing) exhibits an inverse U-shape curve as a function of temperature. The capabilities of the newly developed OKMC code KSOME (kinetic simulations of microstructure evolution) used to carry out these simulations are described.

  2. General simulation algorithm for autocorrelated binary processes

    NASA Astrophysics Data System (ADS)

    Serinaldi, Francesco; Lombardo, Federico

    2017-02-01

    The apparent ubiquity of binary random processes in physics and many other fields has attracted considerable attention from the modeling community. However, generation of binary sequences with prescribed autocorrelation is a challenging task owing to the discrete nature of the marginal distributions, which makes the application of classical spectral techniques problematic. We show that such methods can effectively be used if we focus on the parent continuous process of beta distributed transition probabilities rather than on the target binary process. This change of paradigm results in a simulation procedure effectively embedding a spectrum-based iterative amplitude-adjusted Fourier transform method devised for continuous processes. The proposed algorithm is fully general, requires minimal assumptions, and can easily simulate binary signals with power-law and exponentially decaying autocorrelation functions corresponding, for instance, to Hurst-Kolmogorov and Markov processes. An application to rainfall intermittency shows that the proposed algorithm can also simulate surrogate data preserving the empirical autocorrelation.
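
    The paper's spectrum-based procedure works through a latent process of beta-distributed transition probabilities; as a much simpler illustration of the Markov case mentioned above (exponentially decaying autocorrelation), the sketch below simulates a stationary binary Markov chain with prescribed marginal probability p and lag-1 autocorrelation rho. The parameter values and function name are assumptions for the example only, not the authors' algorithm.

        import random

        def binary_markov(n, p=0.3, rho=0.6, seed=1):
            """Binary {0,1} Markov chain with marginal P(X=1)=p and lag-1 autocorrelation rho;
            the autocorrelation then decays as rho**k, i.e. exponentially (the Markov case)."""
            random.seed(seed)
            p11 = p + rho * (1 - p)      # P(X_t=1 | X_{t-1}=1)
            p01 = p * (1 - rho)          # P(X_t=1 | X_{t-1}=0)
            x = [1 if random.random() < p else 0]
            for _ in range(n - 1):
                prob_one = p11 if x[-1] == 1 else p01
                x.append(1 if random.random() < prob_one else 0)
            return x

        sig = binary_markov(10000)
        mean = sum(sig) / len(sig)
        lag1 = (sum((a - mean) * (b - mean) for a, b in zip(sig, sig[1:]))
                / sum((a - mean) ** 2 for a in sig))
        print(round(mean, 3), round(lag1, 3))    # approximately 0.3 and 0.6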

  3. Annealing effect on thermodynamic and physical properties of mesoporous silicon: A simulation and nitrogen sorption study

    NASA Astrophysics Data System (ADS)

    Kumar, Pushpendra; Huber, Patrick

    2016-04-01

    The discovery of porous silicon formation in silicon substrates in 1956, while electro-polishing crystalline Si in hydrofluoric acid (HF), triggered large-scale investigations of porous silicon formation and of the changes in its physical and chemical properties with thermal and chemical treatment. A nitrogen sorption study is used to investigate the effect of thermal annealing on electrochemically etched mesoporous silicon (PS). The PS was thermally annealed from 200˚C to 800˚C for 1 hr in the presence of air. It was shown that the pore diameter and porosity of PS vary with annealing temperature. The experimentally obtained adsorption/desorption isotherms show hysteresis typical for capillary condensation in porous materials. A simulation study based on the Saam and Cole model was performed and compared with the experimentally observed sorption isotherms to study the physics behind hysteresis formation. We discuss the shape of the hysteresis loops in the framework of the morphology of the layers. The different behavior of adsorption and desorption of nitrogen in PS with pore diameter was discussed in terms of concave menisci formation inside the pore space, which was shown to be related to the induced pressure as the pore diameter varies from 7.2 nm to 3.4 nm.

  4. Restoring low resolution structure of biological macromolecules from solution scattering using simulated annealing.

    PubMed Central

    Svergun, D I

    1999-01-01

    A method is proposed to restore ab initio low resolution shape and internal structure of chaotically oriented particles (e.g., biological macromolecules in solution) from isotropic scattering. A multiphase model of a particle built from densely packed dummy atoms is characterized by a configuration vector assigning the atom to a specific phase or to the solvent. Simulated annealing is employed to find a configuration that fits the data while minimizing the interfacial area. Application of the method is illustrated by the restoration of a ribosome-like model structure and more realistically by the determination of the shape of several proteins from experimental x-ray scattering data. PMID:10354416

  5. Neighbourhood generation mechanism applied in simulated annealing to job shop scheduling problems

    NASA Astrophysics Data System (ADS)

    Cruz-Chávez, Marco Antonio

    2015-11-01

    This paper presents a neighbourhood generation mechanism for the job shop scheduling problems (JSSPs). In order to obtain a feasible neighbour with the generation mechanism, it is only necessary to generate a permutation of an adjacent pair of operations in a schedule of the JSSP. If there is no slack time between the adjacent pair of operations that is permuted, then it is proven, through theory and experimentation, that the new neighbour (schedule) generated is feasible. It is demonstrated that the neighbourhood generation mechanism is very efficient and effective in simulated annealing.
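
    A minimal sketch of the move itself is given below: a schedule is represented as an ordered sequence of operations, and a neighbour is obtained by permuting one randomly chosen adjacent pair. The flat list-of-tuples encoding is a simplifying assumption rather than the paper's exact representation; the paper's feasibility condition (no slack time between the permuted pair) is noted only as a comment.

        import random

        def adjacent_swap_neighbour(sequence, seed=None):
            """Neighbour for SA on the JSSP: permute one randomly chosen adjacent pair of operations.
            Per the abstract, the new schedule is feasible when there is no slack time between the pair."""
            rng = random.Random(seed)
            i = rng.randrange(len(sequence) - 1)
            neighbour = sequence[:]
            neighbour[i], neighbour[i + 1] = neighbour[i + 1], neighbour[i]
            return neighbour

        # toy operation sequence encoded as (job id, operation index) tuples
        schedule = [(1, 0), (2, 0), (3, 0), (1, 1), (2, 1)]
        print(adjacent_swap_neighbour(schedule, seed=7))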

  6. Experimental determination of thermal profiles during laser spike annealing with quantitative comparison to 3-dimensional simulations

    SciTech Connect

    Iyengar, Krishna; Jung, Byungki; Willemann, Michael; Thompson, Michael O.; Clancy, Paulette

    2012-05-21

    Thin film platinum resistors were used to directly measure temperature profiles during laser spike annealing (LSA) with high spatial and temporal resolution. Observed resistance changes were calibrated to absolute temperatures using the melting points of the substrate silicon and thin gold films. Both the time-dependent temperature experienced by the sample during passage of the focussed laser beam and profiles across the spatially dependent laser intensity were obtained with sub-millisecond time resolution and 50 {mu}m spatial resolution. Full 3-dimensional simulations incorporating both optical and thermal variations of material parameters were compared with these results. Accounting properly for the specific material parameters, good agreement between experiments and simulations was achieved. Future temperature measurements in complex environments will permit critical evaluation of LSA simulations methodologies.

  7. Waveform-based simulated annealing of crosshole transmission data: a semi-global method for estimating seismic anisotropy

    NASA Astrophysics Data System (ADS)

    Afanasiev, Michael V.; Pratt, R. Gerhard; Kamei, Rie; McDowell, Glenn

    2014-12-01

    We successfully apply the semi-global inverse method of simulated annealing to determine the best-fitting 1-D anisotropy model for use in acoustic frequency domain waveform tomography. Our forward problem is based on a numerical solution of the frequency domain acoustic wave equation, and we minimize wavefield phase residuals through random perturbations to a 1-D vertically varying anisotropy profile. Both real and synthetic examples are presented in order to demonstrate and validate the approach. For the real data example, we processed and inverted a cross-borehole data set acquired by Vale Technology Development (Canada) Ltd. in the Eastern Deeps deposit, located in Voisey's Bay, Labrador, Canada. The inversion workflow comprises the full suite of acquisition, data processing, starting model building through traveltime tomography, simulated annealing and finally waveform tomography. Waveform tomography is a high resolution method that requires an accurate starting model. A cycle-skipping issue observed in our initial starting model was hypothesized to be due to an erroneous anisotropy model from traveltime tomography. This motivated the use of simulated annealing as a semi-global method for anisotropy estimation. We initially tested the simulated annealing approach on a synthetic data set based on the Voisey's Bay environment; these tests were successful and led to the application of the simulated annealing approach to the real data set. Similar behaviour was observed in the anisotropy models obtained through traveltime tomography in both the real and synthetic data sets, where simulated annealing produced an anisotropy model which solved the cycle-skipping issue. In the real data example, simulated annealing led to a final model that compares well with the velocities independently estimated from borehole logs. By comparing the calculated ray paths and wave paths, we attributed the failure of anisotropic traveltime tomography to the breakdown of the ray

  8. Computational plasticity algorithm for particle dynamics simulations

    NASA Astrophysics Data System (ADS)

    Krabbenhoft, K.; Lyamin, A. V.; Vignes, C.

    2017-03-01

    The problem of particle dynamics simulation is interpreted in the framework of computational plasticity leading to an algorithm which is mathematically indistinguishable from the common implicit scheme widely used in the finite element analysis of elastoplastic boundary value problems. This algorithm provides somewhat of a unification of two particle methods, the discrete element method and the contact dynamics method, which usually are thought of as being quite disparate. In particular, it is shown that the former appears as the special case where the time stepping is explicit while the use of implicit time stepping leads to the kind of schemes usually labelled contact dynamics methods. The framing of particle dynamics simulation within computational plasticity paves the way for new approaches similar (or identical) to those frequently employed in nonlinear finite element analysis. These include mixed implicit-explicit time stepping, dynamic relaxation and domain decomposition schemes.

  9. Computational algorithms for simulations in atmospheric optics.

    PubMed

    Konyaev, P A; Lukin, V P

    2016-04-20

    A computer simulation technique for atmospheric and adaptive optics based on parallel programming is discussed. A parallel propagation algorithm is designed and a modified spectral-phase method for computer generation of 2D time-variant random fields is developed. Temporal power spectra of Laguerre-Gaussian beam fluctuations are considered as an example to illustrate the applications discussed. Implementation of the proposed algorithms using Intel MKL and IPP libraries and NVIDIA CUDA technology is shown to be very fast and accurate. The hardware system for the computer simulation is an off-the-shelf desktop with an Intel Core i7-4790K CPU operating at a turbo-speed frequency up to 5 GHz and an NVIDIA GeForce GTX-960 graphics accelerator with 1024 1.5 GHz processors.

  10. Experimental signature of programmable quantum annealing.

    PubMed

    Boixo, Sergio; Albash, Tameem; Spedalieri, Federico M; Chancellor, Nicholas; Lidar, Daniel A

    2013-01-01

    Quantum annealing is a general strategy for solving difficult optimization problems with the aid of quantum adiabatic evolution. Both analytical and numerical evidence suggests that under idealized, closed system conditions, quantum annealing can outperform classical thermalization-based algorithms such as simulated annealing. Current engineered quantum annealing devices have a decoherence timescale which is orders of magnitude shorter than the adiabatic evolution time. Do they effectively perform classical thermalization when coupled to a decohering thermal environment? Here we present an experimental signature which is consistent with quantum annealing, and at the same time inconsistent with classical thermalization. Our experiment uses groups of eight superconducting flux qubits with programmable spin-spin couplings, embedded on a commercially available chip with >100 functional qubits. This suggests that programmable quantum devices, scalable with current superconducting technology, implement quantum annealing with a surprising robustness against noise and imperfections.

  11. 2-D Ultrasound Sparse Arrays Multidepth Radiation Optimization Using Simulated Annealing and Spiral-Array Inspired Energy Functions.

    PubMed

    Roux, Emmanuel; Ramalli, Alessandro; Tortoli, Piero; Cachard, Christian; Robini, Marc C; Liebgott, Herve

    2016-12-01

    Full matrix arrays are excellent tools for 3-D ultrasound imaging, but the required number of active elements is too high to be individually controlled by an equal number of scanner channels. The number of active elements is significantly reduced by sparse array techniques, but the position of the remaining elements must be carefully optimized. This issue is addressed here by introducing novel energy functions in the simulated annealing (SA) algorithm. At each iteration step of the optimization process, one element is freely translated and the associated radiated pattern is simulated. To control the pressure field behavior at multiple depths, three energy functions inspired by the pressure field radiated by a Blackman-tapered spiral array are introduced. Such energy functions aim at limiting the main lobe width while lowering the side lobe and grating lobe levels at multiple depths. Numerical optimization results illustrate the influence of the number of iterations, pressure measurement points, and depths, as well as the influence of the energy function definition on the optimized layout. It is also shown that performance close to or even better than that provided by a spiral array, assumed here as the reference, may be obtained. The finite-time convergence properties of SA allow the duration of the optimization process to be set in advance.

  12. Kinetic Monte Carlo simulations of boron activation in implanted Si under laser thermal annealing

    NASA Astrophysics Data System (ADS)

    Fisicaro, Giuseppe; Pelaz, Lourdes; Aboy, Maria; Lopez, Pedro; Italia, Markus; Huet, Karim; Cristiano, Filadelfo; Essa, Zahi; Yang, Qui; Bedel-Pereira, Elena; Quillec, Maurice; La Magna, Antonino

    2014-02-01

    We investigate the correlation between dopant activation and damage evolution in boron-implanted silicon under excimer laser irradiation. The dopant activation efficiency in the solid phase was measured under a wide range of irradiation conditions and simulated using coupled phase-field and kinetic Monte Carlo models. With the inclusion of dopant atoms, the presented code extends the capabilities of a previous version, allowing its definitive validation by means of detailed comparisons with experimental data. The stochastic method predicts the post-implant kinetics of the defect-dopant system in the far-from-equilibrium conditions caused by laser irradiation. The simulations explain the dopant activation dynamics and demonstrate that the competitive dopant-defect kinetics during the first laser annealing treatment dominates the activation phenomenon, stabilizing the system against additional laser irradiation steps.

  13. Efficient algorithm for simulation of isoelectric focusing.

    PubMed

    Yoo, Kisoo; Shim, Jaesool; Liu, Jin; Dutta, Prashanta

    2014-03-01

    IEF simulation is an effective tool to investigate transport phenomena and separation performance as well as to design IEF microchips. However, multidimensional IEF simulations are computationally intensive as one has to solve a large number of mass conservation equations for ampholytes to simulate a realistic case. In this study, a parallel scheme for a 2D IEF simulation is developed to reduce the computational time. The calculation time for each equation is analyzed to identify which procedure is suitable for parallelization. As expected, simultaneous solution of the mass conservation equations of ampholytes is identified as the computational hot spot, and the computational time can be significantly reduced by parallelizing that solution procedure. Moreover, to optimize the computing time, the electric potential behavior during the transient state is investigated. It is found that for a straight channel the transient variation of electric potential along the channel is negligible in a narrow pH range (5-8) IEF. Thus the charge conservation equation is solved for the first time step only, and the electric potential obtained from that is used for subsequent calculations. IEF simulations are carried out using this algorithm for separation of cardiac troponin I from serum albumin in a pH range of 5-8 using 192 biprotic ampholytes. Significant reduction in simulation time is achieved using the parallel algorithm. We also study the effect of the number of ampholytes forming the pH gradient on the focusing and separation behavior of cardiac troponin I and albumin. Our results show that, at the completion of the separation phase, the pH profile is stepwise for a lower number of ampholytes, but becomes smooth as the number of ampholytes increases. Numerical results also show that higher protein concentrations can be obtained using a higher number of ampholytes.

  14. Fast computation algorithms for speckle pattern simulation

    SciTech Connect

    Nascov, Victor; Samoilă, Cornel; Ursuţiu, Doru

    2013-11-13

    We present our development of a series of efficient computation algorithms, generally usable to calculate light diffraction and particularly for speckle pattern simulation. We use mainly the scalar diffraction theory in the form of the Rayleigh-Sommerfeld diffraction formula and its Fresnel approximation. Our algorithms are based on a special form of the convolution theorem and the Fast Fourier Transform. They are able to evaluate the diffraction formula much faster than direct computation, and we have circumvented the restrictions regarding the relative sizes of the input and output domains encountered in commonly used procedures. Moreover, the input and output planes can be tilted with respect to each other, and the output domain can be shifted off-axis.
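    A minimal example of the convolution (transfer-function) form of the Fresnel approximation evaluated with FFTs is sketched below; it covers only the standard same-size, on-axis case and does not reproduce the authors' extensions to differently sized, shifted or tilted input and output domains.

      import numpy as np

      def fresnel_propagate(u0, wavelength, dx, z):
          # Propagate a sampled complex field u0 over distance z using the
          # Fresnel approximation written as a convolution evaluated with FFTs
          # (transfer-function form; a standard textbook scheme).
          n = u0.shape[0]
          fx = np.fft.fftfreq(n, d=dx)
          fxx, fyy = np.meshgrid(fx, fx)
          h = np.exp(1j * 2 * np.pi * z / wavelength) * \
              np.exp(-1j * np.pi * wavelength * z * (fxx**2 + fyy**2))
          return np.fft.ifft2(np.fft.fft2(u0) * h)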

  15. Cluster hybrid Monte Carlo simulation algorithms.

    PubMed

    Plascak, J A; Ferrenberg, Alan M; Landau, D P

    2002-06-01

    We show that addition of Metropolis single spin flips to the Wolff cluster-flipping Monte Carlo procedure leads to a dramatic increase in performance for the spin-1/2 Ising model. We also show that adding Wolff cluster flipping to the Metropolis or heat bath algorithms in systems where just cluster flipping is not immediately obvious (such as the spin-3/2 Ising model) can substantially reduce the statistical errors of the simulations. A further advantage of these methods is that systematic errors introduced by the use of imperfect random-number generation may be largely healed by hybridizing single spin flips with cluster flipping.
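    The hybrid update described above can be sketched for the 2-D spin-1/2 Ising model as one Metropolis sweep followed by one Wolff cluster flip; the lattice size, temperature and the mixing ratio of the two update types are arbitrary choices in this illustration.

      import math
      import random

      def hybrid_sweep(spins, L, beta, J=1.0):
          # One hybrid Monte Carlo step: a Metropolis pass over all sites,
          # then a single Wolff cluster flip, on an L x L periodic lattice.
          def nbrs(i, j):
              return [((i + 1) % L, j), ((i - 1) % L, j), (i, (j + 1) % L), (i, (j - 1) % L)]

          # Metropolis single-spin flips
          for i in range(L):
              for j in range(L):
                  dE = 2.0 * J * spins[i][j] * sum(spins[a][b] for a, b in nbrs(i, j))
                  if dE <= 0 or random.random() < math.exp(-beta * dE):
                      spins[i][j] *= -1

          # One Wolff cluster flip
          p_add = 1.0 - math.exp(-2.0 * beta * J)
          i0, j0 = random.randrange(L), random.randrange(L)
          seed_spin = spins[i0][j0]
          cluster, stack = {(i0, j0)}, [(i0, j0)]
          while stack:
              i, j = stack.pop()
              for a, b in nbrs(i, j):
                  if (a, b) not in cluster and spins[a][b] == seed_spin and random.random() < p_add:
                      cluster.add((a, b))
                      stack.append((a, b))
          for i, j in cluster:
              spins[i][j] *= -1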

  16. Cluster hybrid Monte Carlo simulation algorithms

    NASA Astrophysics Data System (ADS)

    Plascak, J. A.; Ferrenberg, Alan M.; Landau, D. P.

    2002-06-01

    We show that addition of Metropolis single spin flips to the Wolff cluster-flipping Monte Carlo procedure leads to a dramatic increase in performance for the spin-1/2 Ising model. We also show that adding Wolff cluster flipping to the Metropolis or heat bath algorithms in systems where just cluster flipping is not immediately obvious (such as the spin-3/2 Ising model) can substantially reduce the statistical errors of the simulations. A further advantage of these methods is that systematic errors introduced by the use of imperfect random-number generation may be largely healed by hybridizing single spin flips with cluster flipping.

  17. Parallel algorithm strategies for circuit simulation.

    SciTech Connect

    Thornquist, Heidi K.; Schiek, Richard Louis; Keiter, Eric Richard

    2010-01-01

    Circuit simulation tools (e.g., SPICE) have become invaluable in the development and design of electronic circuits. However, they have been pushed to their performance limits in addressing circuit design challenges that come from the technology drivers of smaller feature scales and higher integration. Improving the performance of circuit simulation tools through exploiting new opportunities in widely-available multi-processor architectures is a logical next step. Unfortunately, not all traditional simulation applications are inherently parallel, and quickly adapting mature application codes (even codes designed as parallel applications) to new parallel paradigms can be prohibitively difficult. In general, performance is influenced by many choices: hardware platform, runtime environment, languages and compilers used, algorithm choice and implementation, and more. In this complicated environment, the use of mini-applications, small self-contained proxies for real applications, is an excellent approach for rapidly exploring the parameter space of all these choices. In this report we present a multi-core performance study of Xyce, a transistor-level circuit simulation tool, and describe the future development of a mini-application for circuit simulation.

  18. Central line simulation: a new training algorithm.

    PubMed

    Britt, Rebecca C; Reed, Scott F; Britt, L D

    2007-07-01

    Recent development of a partial task simulator for central line placement has altered the training algorithm from one of supervised learning on patients to mannequin-based practice to proficiency before patient interaction. Little data have been published on the efficacy of this type of simulator. We reviewed our initial resident experience with central line simulation. Education to proficiency using the CentralLine Man simulator is completed by all interns during orientation. At the completion of training, the residents were asked to complete a voluntary, anonymous questionnaire with a 5-point Likert scale as well as open-ended questions. Additionally, the residents were asked to maintain a log of the initial 10 central lines placed. Retrospective review of the questionnaires and logs was done with analysis of simulator experience as well as initial line experience. Seventeen trainees completed the central line simulation course and returned the initial survey. Before the course, the trainees had placed an average of 0.4 internal jugular (IJ) and 1 subclavian (SC) line. On the simulator, an average of 3 SC attempts and 2.5 IJ attempts led to resident comfort with the procedure. On the first attempt, the vessel was accessed after an average of 1.5 SC and 1.9 IJ needlesticks, which improved to 1 SC and 1.3 IJ by the fifth simulated attempt. A total of 4 pneumothoraces and 5 carotid sticks occurred. Overall, the residents were highly satisfied with the course with an average score of 4.8 for didactics, 4.8 for equipment, 4.5 for the mannequin, and 4.8 for practice opportunity. Nine of the 11 residents who completed logs felt the simulation improved performance on the patient. On the first patient attempt, an average of 1.8 needlesticks was done with an average of 1.3 by the tenth line. For the first patient line documented in the logs, comfort with the anatomy was rated 3.8 with comfort with the procedure rated 2.8. Central line simulation before actual performance on

  19. Application of simulated annealing to solve multi-objectives for aggregate production planning

    NASA Astrophysics Data System (ADS)

    Atiya, Bayda; Bakheet, Abdul Jabbar Khudhur; Abbas, Iraq Tereq; Bakar, Mohd. Rizam Abu; Soon, Lee Lai; Monsi, Mansor Bin

    2016-06-01

    Aggregate production planning (APP) is one of the most significant and complicated problems in production planning. It aims to set overall production levels for each product category to meet fluctuating or uncertain future demand, and to set decisions concerning hiring, firing, overtime, subcontracting, and inventory carrying levels. In this paper, we present a simulated annealing (SA) approach for multi-objective linear programming to solve APP. SA is considered to be a good tool for imprecise optimization problems. The proposed model minimizes total production and workforce costs. In this study, the proposed SA is compared with particle swarm optimization (PSO). The results show that the proposed SA is effective in reducing total production costs and requires minimal time.

  20. Simulated annealing approach to vascular structure with application to the coronary arteries

    PubMed Central

    Keelan, Jonathan; Chung, Emma M. L.; Hague, James P.

    2016-01-01

    Do the complex processes of angiogenesis during organism development ultimately lead to a near optimal coronary vasculature in the organs of adult mammals? We examine this hypothesis using a powerful and universal method, built on physical and physiological principles, for the determination of globally energetically optimal arterial trees. The method is based on simulated annealing, and can be used to examine arteries in hollow organs with arbitrary tissue geometries. We demonstrate that the approach can generate in silico vasculatures which closely match porcine anatomical data for the coronary arteries on all length scales, and that the optimized arterial trees improve systematically as computational time increases. The method presented here is general, and could in principle be used to examine the arteries of other organs. Potential applications include improvement of medical imaging analysis and the design of vascular trees for artificial organs. PMID:26998317

  1. Efficient algorithms for wildland fire simulation

    NASA Astrophysics Data System (ADS)

    Kondratenko, Volodymyr Y.

    In this dissertation, we develop multiple-source shortest path algorithms and examine their importance in real-world applications such as wildfire modeling. The theoretical basis and its implementation in the Weather Research and Forecasting (WRF) model coupled with the fire spread code SFIRE (WRF-SFIRE model) are described. We present a data assimilation method that gives the fire spread model the ability to start the fire simulation from an observed fire perimeter instead of an ignition point. While the model is running, the fire state in the model changes in accordance with newly arriving data by data assimilation. As the fire state changes, the atmospheric state (which is strongly affected by heat flux) does not stay consistent with the fire state. The main difficulty of this methodology occurs in coupled fire-atmosphere models, because once the fire state is modified to match a given starting perimeter, the atmospheric circulation is no longer in sync with it. One of the possible solutions to this problem is the formation of an artificial ignition-time history from an earlier fire state, which is later used to replay the fire progression to the new perimeter with the proper heat fluxes fed into the atmosphere, so that the fire-induced circulation is established. In this work, we develop efficient algorithms that start from the fire arrival times given at a set of points (called a perimeter) and create an artificial ignition-time and fire-spread-rate history. Different algorithms were developed in order to suit possible demands of the user, such as implementation in parallel programming, minimization of the required number of iterations and memory use, and use of the rate of spread as a time-dependent variable. For the algorithms that deal with a homogeneous rate of spread, it was proven that the values of fire arrival times they produce are optimal. It was also shown that starting from an arbitrary initial state the algorithms have
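    To make the idea of computing fire arrival times from multiple ignition sources concrete, the sketch below runs a Dijkstra-style multiple-source shortest-path search on a grid; the cell graph, the cost model and the rate_of_spread function are simplified assumptions for illustration and do not reproduce the dissertation's algorithms or their optimality guarantees.

      import heapq

      def fire_arrival_times(grid_shape, sources, rate_of_spread):
          # sources maps ignition cells (i, j) to known arrival times, e.g. a perimeter;
          # rate_of_spread((i, j)) gives a positive local spread rate used to cost each move.
          rows, cols = grid_shape
          times = dict(sources)
          heap = [(t, cell) for cell, t in sources.items()]
          heapq.heapify(heap)
          while heap:
              t, (i, j) = heapq.heappop(heap)
              if t > times.get((i, j), float("inf")):
                  continue                              # stale heap entry
              for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                  ni, nj = i + di, j + dj
                  if 0 <= ni < rows and 0 <= nj < cols:
                      step = 1.0 / rate_of_spread((ni, nj))   # time to cross one cell
                      if t + step < times.get((ni, nj), float("inf")):
                          times[(ni, nj)] = t + step
                          heapq.heappush(heap, (t + step, (ni, nj)))
          return times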

  2. Laser pulse design using optimal control theory-based adaptive simulated annealing technique: vibrational transitions and photo-dissociation

    NASA Astrophysics Data System (ADS)

    Nath, Bikram; Mondal, Chandan Kumar

    2014-08-01

    We have designed and optimised a combined laser pulse using an optimal control theory-based adaptive simulated annealing technique for selective vibrational excitations and photo-dissociation. Since a proper choice of pulses for specific excitation and dissociation phenomena is very difficult, we have designed a linearly combined pulse for such processes and optimised the different parameters involved in those pulses so that we can obtain an efficient combined pulse. The technique frees us from choosing any arbitrary type of pulse and provides a basis for checking their suitability. We have also emphasised how the performance of the simulated annealing technique can be improved by introducing an adaptive step length for the different variables during the optimisation process. We have also pointed out how the initial temperature for the optimisation can be chosen by introducing a heating/cooling step to reduce the number of annealing steps, so that the method becomes cost effective.

  3. Heavy Tails in the Distribution of Time to Solution for Classical and Quantum Annealing*

    NASA Astrophysics Data System (ADS)

    Steiger, Damian S.; Rønnow, Troels F.; Troyer, Matthias

    2015-12-01

    For many optimization algorithms the time to solution depends not only on the problem size but also on the specific problem instance and may vary by many orders of magnitude. It is then necessary to investigate the full distribution and especially its tail. Here, we analyze the distributions of annealing times for simulated annealing and simulated quantum annealing (by path integral quantum Monte Carlo simulation) for random Ising spin glass instances. We find power-law distributions with very heavy tails, corresponding to extremely hard instances, but far broader distributions—and thus worse performance for hard instances—for simulated quantum annealing than for simulated annealing. Fast, nonadiabatic, annealing schedules can improve the performance of simulated quantum annealing for very hard instances by many orders of magnitude.

  4. Heavy Tails in the Distribution of Time to Solution for Classical and Quantum Annealing.

    PubMed

    Steiger, Damian S; Rønnow, Troels F; Troyer, Matthias

    2015-12-04

    For many optimization algorithms the time to solution depends not only on the problem size but also on the specific problem instance and may vary by many orders of magnitude. It is then necessary to investigate the full distribution and especially its tail. Here, we analyze the distributions of annealing times for simulated annealing and simulated quantum annealing (by path integral quantum Monte Carlo simulation) for random Ising spin glass instances. We find power-law distributions with very heavy tails, corresponding to extremely hard instances, but far broader distributions-and thus worse performance for hard instances-for simulated quantum annealing than for simulated annealing. Fast, nonadiabatic, annealing schedules can improve the performance of simulated quantum annealing for very hard instances by many orders of magnitude.

  5. Clutter discrimination algorithm simulation in pulse laser radar imaging

    NASA Astrophysics Data System (ADS)

    Zhang, Yan-mei; Li, Huan; Guo, Hai-chao; Su, Xuan; Zhu, Fule

    2015-10-01

    Pulse laser radar imaging performance is greatly influenced by different kinds of clutter. Various algorithms have been developed to mitigate clutter; however, estimating the performance of a new algorithm is difficult. Here, a simulation model for evaluating clutter discrimination algorithms is presented. This model consists of laser pulse emission, clutter jamming, laser pulse reception and target image production. Additionally, a hardware platform is set up to gather clutter data reflected by ground and trees, and the logged data serve as the clutter jamming input in the simulation model. The hardware platform includes a laser diode, a laser detector and a high-sample-rate data logging circuit. The laser diode transmits short laser pulses (40 ns FWHM) at a 12.5 kHz pulse rate and a 905 nm wavelength. An analog-to-digital converter chip integrated in the sampling circuit works at 250 megasamples per second. The simulation model and the hardware platform together form a clutter discrimination algorithm simulation system. Using this system, after analyzing the logged clutter data, a new compound pulse detection algorithm is developed. This new algorithm combines the matched filter algorithm and the constant fraction discrimination (CFD) algorithm: the laser echo pulse signal is first processed by the matched filter, and the CFD algorithm is then applied. Finally, clutter jamming from ground and trees is discriminated and a target image is produced. Laser radar images are simulated using the CFD algorithm, the matched filter algorithm and the new algorithm, respectively. The simulation results demonstrate that the new algorithm achieves the best target imaging performance in mitigating clutter reflected by ground and trees.
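    A toy version of such a compound detector is sketched below: a matched filter implemented as correlation with the time-reversed pulse template, followed by a constant fraction discriminator that locates zero crossings; the fraction, delay and any thresholding are illustrative values only, not those of the paper.

      import numpy as np

      def detect_pulse(echo, template, fraction=0.3, delay=5):
          # Matched filter: correlate the echo with the time-reversed template.
          filtered = np.convolve(echo, template[::-1], mode="same")
          # CFD: combine an attenuated copy with a delayed copy and look for the
          # zero crossing, which is largely independent of pulse amplitude.
          delayed = np.roll(filtered, delay)
          cfd = fraction * filtered - delayed
          crossings = np.where(np.diff(np.signbit(cfd)))[0]
          return filtered, crossings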

  6. Quantitative tomography simulations and reconstruction algorithms

    SciTech Connect

    Martz, H E; Aufderheide, M B; Goodman, D; Schach von Wittenau, A; Logan, C; Hall, J; Jackson, J; Slone, D

    2000-11-01

    X-ray, neutron and proton transmission radiography and computed tomography (CT) are important diagnostic tools that are at the heart of LLNL's effort to meet the goals of the DOE's Advanced Radiography Campaign. This campaign seeks to improve radiographic simulation and analysis so that radiography can be a useful quantitative diagnostic tool for stockpile stewardship. Current radiographic accuracy does not allow satisfactory separation of experimental effects from the true features of an object's tomographically reconstructed image. This can lead to difficult and sometimes incorrect interpretation of the results. By improving our ability to simulate the whole radiographic and CT system, it will be possible to examine the contribution of system components to various experimental effects, with the goal of removing or reducing them. In this project, we are merging this simulation capability with a maximum-likelihood (constrained conjugate gradient, CCG) reconstruction technique, yielding a physics-based, forward-model image-reconstruction code. In addition, we seek to improve the accuracy of computed tomography from transmission radiographs by studying what physics is needed in the forward model. During FY 2000, an improved version of the LLNL ray-tracing code called HADES has been coupled with a recently developed LLNL CT algorithm known as CCG. The problem of image reconstruction is expressed as a large matrix equation relating a model for the object being reconstructed to its projections (radiographs). Using a constrained-conjugate-gradient search algorithm, a maximum likelihood solution is sought. This search continues until the difference between the input measured radiographs or projections and the simulated or calculated projections is satisfactorily small. We developed a 2D HADES-CCG CT code that uses full ray-tracing simulations from HADES as the projector. Often an object has axial symmetry and it is desirable to reconstruct into a 2D r-z mesh with a limited

  7. Modernizing quantum annealing using local searches

    NASA Astrophysics Data System (ADS)

    Chancellor, Nicholas

    2017-02-01

    I describe how real quantum annealers may be used to perform local (in state space) searches around specified states, rather than the global searches traditionally implemented in the quantum annealing algorithm (QAA). Such protocols will have numerous advantages over simple quantum annealing. By using such searches the effect of problem mis-specification can be reduced, as only energy differences between the searched states will be relevant. The QAA is an analogue of simulated annealing, a classical numerical technique which has now been superseded. Hence, I explore two strategies to use an annealer in a way which takes advantage of modern classical optimization algorithms. Specifically, I show how sequential calls to quantum annealers can be used to construct analogues of population annealing and parallel tempering which use quantum searches as subroutines. The techniques given here can be applied not only to optimization, but also to sampling. I examine the feasibility of these protocols on real devices and note that implementing such protocols should require minimal if any change to the current design of the flux qubit-based annealers by D-Wave Systems Inc. I further provide proof-of-principle numerical experiments based on quantum Monte Carlo that demonstrate simple examples of the discussed techniques.

  8. Retrieval of the pulse amplitude and phase from cross-phase modulation spectrograms using the simulated annealing method.

    PubMed

    Honzatko, Pavel; Kanka, J; Vrany, B

    2004-11-29

    The simulated annealing method is used for retrieving the amplitude and phase from cross-phase modulation spectrograms. The method allows us to take into account the birefringence of the measurement fiber and the resolution of the optical spectrum analyzer. The influences of the birefringence and the analyzer resolution are discussed.

  9. Population annealing: Theory and application in spin glasses.

    PubMed

    Wang, Wenlong; Machta, Jonathan; Katzgraber, Helmut G

    2015-12-01

    Population annealing is an efficient sequential Monte Carlo algorithm for simulating equilibrium states of systems with rough free-energy landscapes. The theory of population annealing is presented, and systematic and statistical errors are discussed. The behavior of the algorithm is studied in the context of large-scale simulations of the three-dimensional Ising spin glass and the performance of the algorithm is compared to parallel tempering. It is found that the two algorithms are similar in efficiency though with different strengths and weaknesses.

  10. Population annealing: Theory and application in spin glasses

    NASA Astrophysics Data System (ADS)

    Wang, Wenlong; Machta, Jonathan; Katzgraber, Helmut G.

    2015-12-01

    Population annealing is an efficient sequential Monte Carlo algorithm for simulating equilibrium states of systems with rough free-energy landscapes. The theory of population annealing is presented, and systematic and statistical errors are discussed. The behavior of the algorithm is studied in the context of large-scale simulations of the three-dimensional Ising spin glass and the performance of the algorithm is compared to parallel tempering. It is found that the two algorithms are similar in efficiency though with different strengths and weaknesses.

  11. Population Annealing: Theory and Application in Spin Glasses

    NASA Astrophysics Data System (ADS)

    Machta, Jonathan; Wang, Wenlong; Katzgraber, Helmut G.

    Population annealing is an efficient sequential Monte Carlo algorithm for simulating equilibrium states of systems with rough free energy landscapes. The theory of population annealing is presented, and systematic and statistical errors are discussed. The behavior of the algorithm is studied in the context of large-scale simulations of the three-dimensional Ising spin glass and the performance of the algorithm is compared to parallel tempering. It is found that the two algorithms are similar in efficiency though with different strengths and weaknesses. Supported by NSF DMR-1151387, DMR-1208046 and DMR-1507506.

  12. Integration of electromagnetic induction sensor data in soil sampling scheme optimization using simulated annealing.

    PubMed

    Barca, E; Castrignanò, A; Buttafuoco, G; De Benedetto, D; Passarella, G

    2015-07-01

    Soil survey is generally time-consuming, labor-intensive, and costly. Optimization of the sampling scheme allows one to reduce the number of sampling points without decreasing, and possibly even while increasing, the accuracy of the investigated attribute. Maps of bulk soil electrical conductivity (ECa) recorded with electromagnetic induction (EMI) sensors could be effectively used to direct soil sampling design for assessing the spatial variability of soil moisture. A protocol, using a field-scale bulk ECa survey, has been applied in an agricultural field in the Apulia region (southeastern Italy). Spatial simulated annealing was used as a method to optimize the spatial soil sampling scheme, taking into account sampling constraints, field boundaries, and preliminary observations. Three optimization criteria were used: the first criterion (minimization of the mean of the shortest distances, MMSD) optimizes the spreading of the point observations over the entire field by minimizing the expectation of the distance between an arbitrarily chosen point and its nearest observation; the second criterion (minimization of the weighted mean of the shortest distances, MWMSD) is a weighted version of the MMSD, which uses the digital gradient of the grid ECa data as the weighting function; and the third criterion (mean of average ordinary kriging variance, MAOKV) minimizes the mean kriging estimation variance of the target variable. The last criterion utilizes the variogram model of soil water content estimated in a previous trial. The procedures, or a combination of them, were tested and compared in a real case. Simulated annealing was implemented by the software MSANOS, which is able to define or redesign any sampling scheme by increasing or decreasing the original sampling locations. The output consists of the computed sampling scheme, the convergence time, and the cooling law, which can be an invaluable support to the process of sampling design. The proposed approach has found the optimal solution in a reasonable computation time. The

  13. Numerical assessment for a broadband and tuned noise using hybrid mufflers and a simulated annealing method

    NASA Astrophysics Data System (ADS)

    Chiu, Min-Chie

    2013-06-01

    A broadband noise hybridized with pure tones often occurs in practical engineering work. However, assessments of a muffler's optimal shape design that would simultaneously overcome a broadband noise hybridized with multiple tones within a constrained space have rarely been addressed. In order to promote the best acoustical performance in mufflers, five kinds of hybrid mufflers composed of a reactive unit, a dissipative unit, and Helmholtz resonator (HR) units will be proposed. Moreover, to strengthen the noise elimination at the pure tone, mufflers having parallel multiple-sectioned HRs or having multiple HR connections in series (muffler D and muffler E) will also be presented for noise abatement. On the basis of the plane wave theory, the four-pole system matrix used to evaluate the acoustic performance of a multi-tone hybrid Helmholtz muffler will be presented. A numerical case for eliminating broadband noise hybridized with a pure tone emitted from a machine room using five kinds of mufflers (mufflers A-E) will also be introduced. To find the best acoustical performance of a space-constrained muffler, a numerical assessment using a simulated annealing (SA) method is adopted. To verify the applicability of the SA optimization, a numerical optimization of muffler A at a pure tone (280 Hz) is exemplified. Before the SA operation can be carried out, the accuracy of the mathematical model will be checked using the experimental data. The influence on the sound transmission loss (STL) of an N1-array HR, and of a one-array HR sectioned into N2 divisions, has also been assessed. Also, the influence on the STL of design parameters such as the ratio of d1/d2, the diameter of the perforated hole (dH), the porosity (p%) of the perforated plate, and the outer diameter (d2) of the dissipative unit has been analyzed. Consequently, a successful approach in eliminating a broadband noise hybridized with a pure tone using optimally

  14. Constructing Cross-Linked Polymer Networks Using Monte Carlo Simulated Annealing Technique for Atomistic Molecular Simulations

    DTIC Science & Technology

    2014-10-01


  15. An Integrated Qualitative and Quantitative Biochemical Model Learning Framework Using Evolutionary Strategy and Simulated Annealing.

    PubMed

    Wu, Zujian; Pang, Wei; Coghill, George M

    Both qualitative and quantitative model learning frameworks for biochemical systems have been studied in computational systems biology. In this research, after introducing two forms of pre-defined component patterns to represent biochemical models, we propose an integrative qualitative and quantitative modelling framework for inferring biochemical systems. In the proposed framework, interactions between reactants in the candidate models for a target biochemical system are evolved and eventually identified by the application of a qualitative model learning approach with an evolution strategy. Kinetic rates of the models generated from qualitative model learning are then further optimised by employing a quantitative approach with simulated annealing. Experimental results indicate that our proposed integrative framework is able to learn the relationships between biochemical reactants qualitatively and to make the models replicate the behaviours of the target system by optimising the kinetic rates quantitatively. Moreover, potential reactants of a target biochemical system can be discovered by hypothesising complex reactants in the synthetic models. Based on the biochemical models learned from the proposed framework, biologists can further perform experimental studies in the wet laboratory. In this way, natural biochemical systems can be better understood.

  16. An interactive system for creating object models from range data based on simulated annealing

    SciTech Connect

    Hoff, W.A.; Hood, F.W.; King, R.H.

    1997-05-01

    In hazardous applications such as remediation of buried waste and dismantlement of radioactive facilities, robots are an attractive solution. Sensing to recognize and locate objects is a critical need for robotic operations in unstructured environments. An accurate 3-D model of objects in the scene is necessary for efficient high level control of robots. Drawing upon concepts from supervisory control, the authors have developed an interactive system for creating object models from range data, based on simulated annealing. Site modeling is a task that is typically performed using purely manual or autonomous techniques, each of which has inherent strengths and weaknesses. However, an interactive modeling system combines the advantages of both manual and autonomous methods, to create a system that has high operator productivity as well as high flexibility and robustness. The system is unique in that it can work with very sparse range data, tolerate occlusions, and tolerate cluttered scenes. The authors have performed an informal evaluation with four operators on 16 different scenes, and have shown that the interactive system is superior to either manual or automatic methods in terms of task time and accuracy.

  17. Improvement of automatic scaling of vertical incidence ionograms by simulated annealing

    NASA Astrophysics Data System (ADS)

    Jiang, Chunhua; Yang, Guobin; Lan, Ting; Zhu, Peng; Song, Huan; Zhou, Chen; Cui, Xiao; Zhao, Zhengyu; Zhang, Yuannong

    2015-10-01

    The ionogram autoscaling technique is very important for facilitating the statistical investigation of the ionosphere. Jiang et al. (2013) proposed an autoscaling technique for extracting ionospheric characteristics from vertical incidence ionograms. However, extensive effort continues to be invested in improving its performance. Simulated annealing (SA) is used to improve the autoscaling technique in this paper. To be capable of automatic scaling of ionograms recorded at different locations, SA is applied instead of Empirical Orthogonal Functions (EOFs) to search for the best-fit parameters in the autoscaling technique. In order to validate the improvement of this autoscaling technique, ionograms recorded at Wuhan (30.5°N, 114.3°E), Puer (22.7°N, 101.05°E) and Leshan (29.6°N, 103.75°E) are investigated by comparing the autoscaled results with the values scaled by an operator. Results show that the presented work is effective for scaling ionograms recorded at different geographic positions. Moreover, the additional procedure can improve the accuracy of the autoscaling technique compared to the results presented by Jiang et al. (2013).

  18. Equilibrium properties of transition-metal ion-argon clusters via simulated annealing

    NASA Technical Reports Server (NTRS)

    Asher, Robert L.; Micha, David A.; Brucat, Philip J.

    1992-01-01

    The geometrical structures of M(+) (Ar)n ions, with n = 1-14, have been studied by the minimization of a many-body potential surface with a simulated annealing procedure. The minimization method is justified for finite systems through the use of an information theory approach. It is carried out for eight potential-energy surfaces constructed with two- and three-body terms parametrized from experimental data and ab initio results. The potentials should be representative of clusters of argon atoms with first-row transition-metal monocations of varying size. The calculated geometries for M(+) = Co(+) and V(+) possess radial shells with small (ca. 4-8) first-shell coordination number. The inclusion of an ion-induced-dipole-ion-induced-dipole interaction between argon atoms raises the energy and generally lowers the symmetry of the cluster by promoting incomplete shell closure. Rotational constants as well as electric dipole and quadrupole moments are quoted for the Co(+) (Ar)n and V(+) (Ar)n predicted structures.

  19. Parallel helix bundles and ion channels: molecular modeling via simulated annealing and restrained molecular dynamics.

    PubMed Central

    Kerr, I D; Sankararamakrishnan, R; Smart, O S; Sansom, M S

    1994-01-01

    A parallel bundle of transmembrane (TM) alpha-helices surrounding a central pore is present in several classes of ion channel, including the nicotinic acetylcholine receptor (nAChR). We have modeled bundles of hydrophobic and of amphipathic helices using simulated annealing via restrained molecular dynamics. Bundles of Ala20 helices, with N = 4, 5, or 6 helices/bundle were generated. For all three N values the helices formed left-handed coiled coils, with pitches ranging from 160 A (N = 4) to 240 A (N = 6). Pore radius profiles revealed constrictions at residues 3, 6, 10, 13, and 17. A left-handed coiled coil and a similar pattern of pore constrictions were observed for N = 5 bundles of Leu20. In contrast, N = 5 bundles of Ile20 formed right-handed coiled coils, reflecting loosened packing of helices containing beta-branched side chains. Bundles formed by each of two classes of amphipathic helices were examined: (a) M2a, M2b, and M2c derived from sequences of M2 helices of nAChR; and (b) (LSSLLSL)3, a synthetic channel-forming peptide. Both classes of amphipathic helix formed left-handed coiled coils. For (LSSLLSL)3 the pitch of the coil increased as N increased from 4 to 6. The M2c N = 5 helix bundle is discussed in the context of possible models of the pore domain of nAChR. PMID:7529585

  20. Molecular dynamics simulations of solid state recrystallization I: Observation of grain growth in annealed iron nanoparticles

    SciTech Connect

    Huang Jinfan; Bartell, Lawrence S.

    2012-01-15

    Molecular dynamics simulations of solid state recrystallization and grain growth in iron nanoparticles containing 1436 atoms were carried out. During the period of relaxation of supercooled liquid drops and during thermal annealing of the solids they froze to, changes in disorder were followed by monitoring changes in energy and the migration of grain boundaries. All 27 polycrystalline nanoparticles, which were generated with different grain boundaries, were observed to recrystallize into single crystals during annealing. Larger grains consumed the smaller ones. In particular, two sets of solid particles, designated as A and B, each with two grains, were treated to generate 18 members of each set with different thermal histories. This provided small ensembles (of 18 members each) from which the rates at which the larger grain engulfed the smaller one could be determined. The rate was higher, the smaller the degree of misorientation between the grains, a result contrary to the general rule based on published experiments, but the reason was clear. Crystal A, which happened to have a somewhat lower angle of misorientation, also had a higher population of defects, as confirmed by its higher energy. Accordingly, its driving force to recrystallize was greater. Although the mechanism of recrystallization is commonly called nucleation, our results, which probe the system on an atomic scale, were not able to identify nuclei unequivocally. By contrast, our technique can and does reveal nuclei in the freezing of liquids and in transformations from one solid phase to another. An alternative rationale for a nucleation-like process in our results is proposed. Graphical Abstract: Time dependence of energy per atom in the quenching of liquid nanoparticles A-C of iron. Nanoparticle C freezes directly into a single crystal but A and B freeze to solids with two grains. A and B eventually recrystallize into single crystals.

  1. Bio-inspired algorithms applied to molecular docking simulations.

    PubMed

    Heberlé, G; de Azevedo, W F

    2011-01-01

    Nature as a source of inspiration has been shown to have a great beneficial impact on the development of new computational methodologies. In this scenario, analyses of the interactions between a protein target and a ligand can be simulated by biologically inspired algorithms (BIAs). These algorithms mimic biological systems to create new paradigms for computation, such as neural networks, evolutionary computing, and swarm intelligence. This review provides a description of the main concepts behind BIAs applied to molecular docking simulations. Special attention is devoted to evolutionary algorithms, guided-directed evolutionary algorithms, and Lamarckian genetic algorithms. Recent applications of these methodologies to protein targets identified in the Mycobacterium tuberculosis genome are described.

  2. A simulation algorithm for ultrasound liver backscattered signals.

    PubMed

    Zatari, D; Botros, N; Dunn, F

    1995-11-01

    In this study, we present a simulation algorithm for the backscattered ultrasound signal from liver tissue. The algorithm simulates backscattered signals from normal liver and three different liver abnormalities. The performance of the algorithm has been tested by statistically comparing the simulated signals with corresponding signals obtained from a previous in vivo study. To verify that the simulated signals can be classified correctly we have applied a classification technique based on an artificial neural network. The acoustic features extracted from the spectrum over a 2.5 MHz bandwidth are the attenuation coefficient and the change of speed of sound with frequency (dispersion). Our results show that the algorithm performs satisfactorily. Further testing of the algorithm is conducted by the use of a data acquisition and analysis system designed by the authors, where several simulated signals are stored in memory chips and classified according to their abnormalities.

  3. Prediction of crystal structures from crystal chemistry rules by simulated annealing

    NASA Astrophysics Data System (ADS)

    Pannetier, J.; Bassas-Alsina, J.; Rodriguez-Carvajal, J.; Caignaert, V.

    1990-07-01

    The prediction of the structure of inorganic crystalline solids from the knowledge of their chemical composition is still a largely unresolved problem [1-3]. The usual approach to this problem is to minimize, for a selection of candidate models, the potential energy of the system with respect to the structural parameters of these models: the solution is the arrangement that comes out lowest in energy. Methods using this procedure may differ in the origin (ab initio or empirical) of the interatomic potentials used, but they usually restrict themselves to optimizing a structural arrangement within the constraints of given symmetry and bond topology. As a result, they do not truly address the problem of predicting the unknown structure of a real compound. The method we describe here is an attempt at solving the following problem: given the chemical composition of a crystalline compound and the values of its unit-cell parameters, find its structure (topology and bond distances) by optimizing the arrangement of ions, atoms or molecules in accordance with a set of prescribed rules. The procedure uses simple, empirical crystal chemistry arguments (Pauling's principles for ionic compounds [4]) and a powerful stochastic search procedure, known as simulated annealing [5], to identify the best atomic model or models. We discuss the potential of the method for structure determination and refinement, using results obtained for several known inorganic structures, and by the determination of a previously unknown structure. Although the approach is limited to the case of inorganic compounds, it is nevertheless very general, and would apply to any crystalline structure provided that the principles governing the architecture of the solid can be properly described.

  4. Automated integration of genomic physical mapping data via parallel simulated annealing

    SciTech Connect

    Slezak, T.

    1994-06-01

    The Human Genome Center at the Lawrence Livermore National Laboratory (LLNL) is nearing closure on a high-resolution physical map of human chromosome 19. We have built automated tools to assemble 15,000 fingerprinted cosmid clones into 800 contigs with minimal spanning paths identified. These islands are being ordered, oriented, and spanned by a variety of other techniques including: Fluorescence In Situ Hybridization (FISH) at 3 levels of resolution, ECO restriction fragment mapping across all contigs, and a multitude of different hybridization and PCR techniques to link cosmid, YAC, BAC, PAC, and P1 clones. The FISH data provide us with partial order and distance data as well as orientation. We made the observation that map builders need a much rougher presentation of data than do map readers; the former wish to see raw data since these can expose errors or interesting biology. We further noted that by ignoring our length and distance data we could simplify our problem into one that could be readily attacked with optimization techniques. The data integration problem could then be seen as an M x N ordering of our N cosmid clones which "intersect" M larger objects, by defining "intersection" to mean either contig/map membership or hybridization results. Clearly, the goal of making an integrated map is now to rearrange the N cosmid clone "columns" such that the number of gaps on the object "rows" is minimized. Our FISH partially-ordered cosmid clones provide us with a set of constraints that cannot be violated by the rearrangement process. We solved the optimization problem via simulated annealing performed on a network of 40+ Unix machines in parallel, using a server/client model built on explicit socket calls. For current maps we can create a map in about 4 hours on the parallel net versus 4+ days on a single workstation. Our biologists are now using this software on a daily basis to guide their efforts toward final closure.
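    A toy serial version of the annealing step described above might look like the following, where the objective simply counts gaps in each object's row under the current column order; the FISH ordering constraints, the parallel server/client scheme and the real cost function are omitted, and all names here are illustrative.

      import math
      import random

      def order_columns(membership, n_iter=20000, t0=5.0, cooling=0.9995):
          # membership[m] is the set of column (clone) indices intersecting object m.
          n = 1 + max(c for row in membership for c in row)
          order = list(range(n))

          def gaps(order):
              pos = {c: k for k, c in enumerate(order)}
              total = 0
              for row in membership:
                  ks = sorted(pos[c] for c in row)
                  # each break in consecutive positions counts as one gap
                  total += sum(1 for a, b in zip(ks, ks[1:]) if b - a > 1)
              return total

          cost, t = gaps(order), t0
          for _ in range(n_iter):
              i, j = random.sample(range(n), 2)            # propose swapping two columns
              order[i], order[j] = order[j], order[i]
              new_cost = gaps(order)
              if new_cost <= cost or random.random() < math.exp((cost - new_cost) / t):
                  cost = new_cost
              else:
                  order[i], order[j] = order[j], order[i]  # reject: undo the swap
              t *= cooling
          return order, cost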

  5. Performance of a parallel algorithm for standard cell placement on the Intel Hypercube

    NASA Technical Reports Server (NTRS)

    Jones, Mark; Banerjee, Prithviraj

    1987-01-01

    A parallel simulated annealing algorithm for standard cell placement on the Intel Hypercube is presented. A novel tree broadcasting strategy is used extensively for updating cell locations in the parallel environment. Studies on the performance of the algorithm on example industrial circuits show that it is faster and gives better final placement results than uniprocessor simulated annealing algorithms.

  6. A splitting algorithm for Vlasov simulation with filamentation filtration

    NASA Technical Reports Server (NTRS)

    Klimas, A. J.; Farrell, W. M.

    1994-01-01

    A Fourier-Fourier transformed version of the splitting algorithm for simulating solutions of the Vlasov-Poisson system of equations is introduced. It is shown that with the inclusion of filamentation filtration in this transformed algorithm it is both faster and more stable than the standard splitting algorithm. It is further shown that in a scalar computer environment this new algorithm is approximately equal in speed and far less noisy than its particle-in-cell counterpart. It is conjectured that in a multiprocessor environment the filtered splitting algorithm would be faster while producing more precise results.

  7. Fast simulated annealing and adaptive Monte Carlo sampling based parameter optimization for dense optical-flow deformable image registration of 4DCT lung anatomy

    NASA Astrophysics Data System (ADS)

    Dou, Tai H.; Min, Yugang; Neylon, John; Thomas, David; Kupelian, Patrick; Santhanam, Anand P.

    2016-03-01

    Deformable image registration (DIR) is an important step in radiotherapy treatment planning. An optimal input registration parameter set is critical to achieve the best registration performance with a specific algorithm. In this paper, we investigated a parameter optimization strategy for optical-flow-based DIR of the 4DCT lung anatomy. A novel fast simulated annealing with adaptive Monte Carlo sampling algorithm (FSA-AMC) was investigated for solving the complex non-convex parameter optimization problem. The metric for registration error for a given parameter set was computed using the landmark-based mean target registration error (mTRE) between a given volumetric image pair. To reduce the computational time in the parameter optimization process, a GPU-based 3D dense optical-flow algorithm was employed for registering the lung volumes. Numerical analyses on the parameter optimization for the DIR were performed using 4DCT datasets generated with breathing motion models and open-source 4DCT datasets. Results showed that the proposed method efficiently estimated the optimum parameters for the optical-flow DIR, and the estimated parameters closely matched the best registration parameters obtained using an exhaustive parameter search method.

  8. AlCoCrCuFeNi high entropy alloy cluster growth and annealing on silicon: A classical molecular dynamics simulation study

    NASA Astrophysics Data System (ADS)

    Xie, Lu; Brault, Pascal; Thomann, Anne-Lise; Bauchire, Jean-Marc

    2013-11-01

    Molecular dynamics simulations are carried out for describing deposition and annealing processes of AlCoCrCuFeNi high entropy alloy (HEA) thin films. Deposition results in the growth of HEA clusters. Further annealing between 300 K and 1500 K leads to a coalescence phenomenon, as described by successive jumps in the root mean square displacement of atoms. The simulated X-ray diffraction patterns during annealing reproduce the main feature of the experiments: a phase transition of the cluster structure from bcc to fcc.

  9. Adaptive Mesh and Algorithm Refinement Using Direct Simulation Monte Carlo

    NASA Astrophysics Data System (ADS)

    Garcia, Alejandro L.; Bell, John B.; Crutchfield, William Y.; Alder, Berni J.

    1999-09-01

    Adaptive mesh and algorithm refinement (AMAR) embeds a particle method within a continuum method at the finest level of an adaptive mesh refinement (AMR) hierarchy. The coupling between the particle region and the overlaying continuum grid is algorithmically equivalent to that between the fine and coarse levels of AMR. Direct simulation Monte Carlo (DSMC) is used as the particle algorithm embedded within a Godunov-type compressible Navier-Stokes solver. Several examples are presented and compared with purely continuum calculations.

  10. Improved ant colony algorithm and its simulation study

    NASA Astrophysics Data System (ADS)

    Wang, Zongjiang

    2013-03-01

    The ant colony algorithm is a heuristic algorithm developed by simulating ant foraging behaviour. Because its convergence rate is slow and it easily falls into local optima, adjustments to the key parameters and an improved pheromone update scheme are proposed. Experiments on the TSP show that the improved algorithm has better global search capability, demonstrating the feasibility and effectiveness of the method.
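    For reference, a plain ant colony optimisation loop for the TSP is sketched below; it uses the standard pheromone/visibility selection rule and global evaporation, and does not reproduce the specific parameter adjustments or pheromone-update improvements proposed in the paper.

      import random

      def aco_tsp(dist, n_ants=20, n_iter=100, alpha=1.0, beta=3.0, rho=0.5, q=1.0):
          # dist[i][j] is the distance between cities i and j (i != j).
          n = len(dist)
          tau = [[1.0] * n for _ in range(n)]               # pheromone levels
          best_tour, best_len = None, float("inf")
          for _ in range(n_iter):
              tours = []
              for _ in range(n_ants):
                  start = random.randrange(n)
                  tour, unvisited = [start], set(range(n)) - {start}
                  while unvisited:
                      i = tour[-1]
                      # probability proportional to pheromone^alpha * (1/distance)^beta
                      weights = [(j, (tau[i][j] ** alpha) * ((1.0 / dist[i][j]) ** beta))
                                 for j in unvisited]
                      total = sum(w for _, w in weights)
                      r, acc = random.random() * total, 0.0
                      for j, w in weights:
                          acc += w
                          if acc >= r:
                              break
                      tour.append(j)
                      unvisited.remove(j)
                  length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
                  tours.append((tour, length))
                  if length < best_len:
                      best_tour, best_len = tour, length
              # evaporate, then deposit pheromone along each tour
              for i in range(n):
                  for j in range(n):
                      tau[i][j] *= (1.0 - rho)
              for tour, length in tours:
                  for k in range(n):
                      i, j = tour[k], tour[(k + 1) % n]
                      tau[i][j] += q / length
                      tau[j][i] += q / length
          return best_tour, best_len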

  11. Duality quantum algorithm efficiently simulates open quantum systems

    NASA Astrophysics Data System (ADS)

    Wei, Shi-Jie; Ruan, Dong; Long, Gui-Lu

    2016-07-01

    Because of inevitable coupling with the environment, nearly all practical quantum systems are open systems, where the evolution is not necessarily unitary. In this paper, we propose a duality quantum algorithm for simulating the Hamiltonian evolution of an open quantum system. In contrast to the unitary evolution in a usual quantum computer, the evolution operator in a duality quantum computer is a linear combination of unitary operators. In this duality quantum algorithm, the time evolution of the open quantum system is realized by using Kraus operators, which are naturally implemented in a duality quantum computer. This duality quantum algorithm has two distinct advantages compared to existing quantum simulation algorithms with unitary evolution operations. Firstly, the query complexity of the algorithm is O(d^3), in contrast to O(d^4) in the existing unitary simulation algorithm, where d is the dimension of the open quantum system. Secondly, by using a truncated Taylor series of the evolution operators, this duality quantum algorithm provides an exponential improvement in precision compared with the previous unitary simulation algorithm.
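    The Kraus-operator evolution that the algorithm implements on a duality quantum computer can be written classically as rho -> sum_k K_k rho K_k^dagger. The small example below applies one amplitude-damping step to a single qubit purely to illustrate the map itself; the choice of channel and decay probability is arbitrary and unrelated to the paper's implementation.

      import numpy as np

      def kraus_step(rho, kraus_ops):
          # Apply one step of open-system evolution: rho -> sum_k K_k rho K_k^dagger.
          return sum(k @ rho @ k.conj().T for k in kraus_ops)

      # Example: amplitude damping of a single qubit with decay probability gamma
      gamma = 0.1
      k0 = np.array([[1.0, 0.0], [0.0, np.sqrt(1.0 - gamma)]])
      k1 = np.array([[0.0, np.sqrt(gamma)], [0.0, 0.0]])
      rho = np.array([[0.0, 0.0], [0.0, 1.0]], dtype=complex)   # excited state |1><1|
      rho = kraus_step(rho, [k0, k1])                            # population decays toward |0>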

  12. Duality quantum algorithm efficiently simulates open quantum systems.

    PubMed

    Wei, Shi-Jie; Ruan, Dong; Long, Gui-Lu

    2016-07-28

    Because of inevitable coupling with the environment, nearly all practical quantum systems are open systems, where the evolution is not necessarily unitary. In this paper, we propose a duality quantum algorithm for simulating the Hamiltonian evolution of an open quantum system. In contrast to the unitary evolution in a usual quantum computer, the evolution operator in a duality quantum computer is a linear combination of unitary operators. In this duality quantum algorithm, the time evolution of the open quantum system is realized by using Kraus operators, which are naturally implemented in a duality quantum computer. This duality quantum algorithm has two distinct advantages compared to existing quantum simulation algorithms with unitary evolution operations. Firstly, the query complexity of the algorithm is O(d^3), in contrast to O(d^4) in the existing unitary simulation algorithm, where d is the dimension of the open quantum system. Secondly, by using a truncated Taylor series of the evolution operators, this duality quantum algorithm provides an exponential improvement in precision compared with the previous unitary simulation algorithm.

  13. LAWS simulation: Sampling strategies and wind computation algorithms

    NASA Technical Reports Server (NTRS)

    Emmitt, G. D. A.; Wood, S. A.; Houston, S. H.

    1989-01-01

    In general, work has continued on developing and evaluating algorithms designed to manage the Laser Atmospheric Wind Sounder (LAWS) lidar pulses and to compute the horizontal wind vectors from the line-of-sight (LOS) measurements. These efforts fall into three categories: Improvements to the shot management and multi-pair algorithms (SMA/MPA); observing system simulation experiments; and ground-based simulations of LAWS.

  14. Multiple Simulated Annealing-Molecular Dynamics (MSA-MD) for Conformational Space Search of Peptide and Miniprotein.

    PubMed

    Hao, Ge-Fei; Xu, Wei-Fang; Yang, Sheng-Gang; Yang, Guang-Fu

    2015-10-23

    Protein and peptide structure predictions are of paramount importance for understanding their functions, as well as the interactions with other molecules. However, the use of molecular simulation techniques to directly predict the peptide structure from the primary amino acid sequence is always hindered by the rough topology of the conformational space and the limited simulation time scale. We developed here a new strategy, named Multiple Simulated Annealing-Molecular Dynamics (MSA-MD) to identify the native states of a peptide and miniprotein. A cluster of near native structures could be obtained by using the MSA-MD method, which turned out to be significantly more efficient in reaching the native structure compared to continuous MD and conventional SA-MD simulation.

  15. Automated medial axis seeding and guided evolutionary simulated annealing for optimization of gamma knife radiosurgery treatment plans

    NASA Astrophysics Data System (ADS)

    Zhang, Pengpeng

    The Leksell Gamma Knife (LGK) is a tool for providing accurate stereotactic radiosurgical treatment of brain lesions, especially tumors. Currently, the treatment planning team "forward" plans radiation treatment parameters while viewing a series of 2D MR scans. This primarily manual process is cumbersome and time consuming because of the difficulty in visualizing the large search space for the radiation parameters (i.e., shot overlap, number, location, size, and weight). I hypothesize that a computer-aided "inverse" planning procedure that utilizes tumor geometry and treatment goals could significantly improve the planning process and therapeutic outcome of LGK radiosurgery. My basic observation is that the treatment team is best at identification of the location of the lesion and prescribing a lethal, yet safe, radiation dose. The treatment planning computer is best at determining both the 3D tumor geometry and the optimal LGK shot parameters necessary to deliver a desirable dose pattern to the tumor while sparing adjacent normal tissue. My treatment planning procedure asks the neurosurgeon to identify the tumor and critical structures in MR images and the oncologist to prescribe a tumoricidal radiation dose. Computer assistance begins with geometric modeling of the 3D tumor's medial axis properties. This begins with a new algorithm, a Gradient-Phase Plot (G-P Plot) decomposition of the tumor object's medial axis. I have found that medial axis seeding, while insufficient in most cases to produce an acceptable treatment plan, greatly reduces the solution space for Guided Evolutionary Simulated Annealing (GESA) treatment plan optimization by specifying an initial estimate for shot number, size, and location, but not weight. They are used to generate multiple initial plans which become initial seed plans for GESA. The shot location and weight parameters evolve and compete in the GESA procedure. The GESA objective function optimizes tumor irradiation (i.e., as close to

  16. Efficient Parallel Algorithm For Direct Numerical Simulation of Turbulent Flows

    NASA Technical Reports Server (NTRS)

    Moitra, Stuti; Gatski, Thomas B.

    1997-01-01

    A distributed algorithm for a high-order-accurate finite-difference approach to the direct numerical simulation (DNS) of transition and turbulence in compressible flows is described. This work has two major objectives. The first objective is to demonstrate that parallel and distributed-memory machines can be successfully and efficiently used to solve computationally intensive and input/output intensive algorithms of the DNS class. The second objective is to show that the computational complexity involved in solving the tridiagonal systems inherent in the DNS algorithm can be reduced by algorithm innovations that obviate the need to use a parallelized tridiagonal solver.

  17. Multidiscontinuity algorithm for world-line Monte Carlo simulations.

    PubMed

    Kato, Yasuyuki

    2013-01-01

    We introduce a multidiscontinuity algorithm for the efficient global update of world-line configurations in Monte Carlo simulations of interacting quantum systems. This algorithm is a generalization of the two-discontinuity algorithms introduced in Refs. [N. Prokof'ev, B. Svistunov, and I. Tupitsyn, Phys. Lett. A 238, 253 (1998)] and [O. F. Syljuåsen and A. W. Sandvik, Phys. Rev. E 66, 046701 (2002)]. This generalization is particularly effective for studying Bose-Einstein condensates (BECs) of composite particles. In particular, we demonstrate the utility of the generalized algorithm by simulating a Hamiltonian for an S=1 antiferromagnet with strong uniaxial single-ion anisotropy. The multidiscontinuity algorithm not only solves the freezing problem that arises in this limit, but also allows the efficient computing of the off-diagonal correlator that characterizes a BEC of composite particles.

  18. Extrapolated gradientlike algorithms for molecular dynamics and celestial mechanics simulations.

    PubMed

    Omelyan, I P

    2006-09-01

    A class of symplectic algorithms is introduced to integrate the equations of motion in many-body systems. The algorithms are derived on the basis of an advanced gradientlike decomposition approach. Its main advantage over the standard gradient scheme is the avoidance of time-consuming evaluations of force gradients by force extrapolation without any loss of precision. As a result, the efficiency of the integration improves significantly. The algorithms obtained are analyzed and optimized using an error-function theory. The best among them are tested in actual molecular dynamics and celestial mechanics simulations for comparison with well-known nongradient and gradient algorithms such as the Störmer-Verlet, Runge-Kutta, Cowell-Numerov, Forest-Ruth, Suzuki-Chin, and others. It is demonstrated that for moderate and high accuracy, the extrapolated algorithms should be considered as the most efficient for the integration of motion in molecular dynamics simulations.
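
    For context on the baseline schemes named above, a minimal Störmer-Verlet (velocity Verlet) integrator for a 1D harmonic oscillator is sketched below; the mass, stiffness, time step, and duration are illustrative assumptions, and this is the standard nongradient reference method, not the extrapolated gradient-like algorithms of the abstract.

      import numpy as np

      # Minimal velocity-Verlet integrator for a 1D harmonic oscillator (assumed parameters).
      m, k, dt, steps = 1.0, 1.0, 0.05, 2000

      def force(x):
          return -k * x

      x, v = 1.0, 0.0
      energies = []
      for _ in range(steps):
          a = force(x) / m
          x = x + v * dt + 0.5 * a * dt * dt          # position update
          a_new = force(x) / m
          v = v + 0.5 * (a + a_new) * dt              # velocity update with averaged force
          energies.append(0.5 * m * v * v + 0.5 * k * x * x)

      drift = max(energies) - min(energies)
      print(f"energy stays within a band of {drift:.2e} (bounded oscillation, no secular drift)")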

  19. A Coulomb collision algorithm for weighted particle simulations

    NASA Technical Reports Server (NTRS)

    Miller, Ronald H.; Combi, Michael R.

    1994-01-01

    A binary Coulomb collision algorithm is developed for weighted particle simulations employing Monte Carlo techniques. Charged particles within a given spatial grid cell are pair-wise scattered, explicitly conserving momentum and implicitly conserving energy. A similar algorithm developed by Takizuka and Abe (1977) conserves momentum and energy provided the particles are unweighted (each particle representing equal fractions of the total particle density). If applied as is to simulations incorporating weighted particles, the plasma temperatures equilibrate to an incorrect temperature, as compared to theory. Using the appropriate pairing statistics, a Coulomb collision algorithm is developed for weighted particles. The algorithm conserves energy and momentum and produces the appropriate relaxation time scales as compared to theoretical predictions. Such an algorithm is necessary for future work studying self-consistent multi-species kinetic transport.

  20. Improved mapping of the travelling salesman problem for quantum annealing

    NASA Astrophysics Data System (ADS)

    Troyer, Matthias; Heim, Bettina; Brown, Ethan; Wecker, David

    2015-03-01

    We consider the quantum adiabatic algorithm as applied to the travelling salesman problem (TSP). We introduce a novel mapping of TSP to an Ising spin glass Hamiltonian and compare it to previously known mappings. Through direct perturbative analysis, unitary evolution, and simulated quantum annealing, we show this new mapping to be significantly superior. We discuss how this advantage can translate to actual physical implementations of TSP on quantum annealers.
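
    The abstract does not spell out the new mapping, but the widely used baseline encoding it improves upon assigns a binary variable x[v, t] meaning "city v is visited at step t" and adds quadratic penalties for violated constraints. A small sketch of that baseline encoding, with an assumed distance matrix and penalty weight A, is given below.

      import numpy as np
      from itertools import permutations

      # Baseline TSP encoding: binary x[v, t] = 1 if city v is visited at step t.
      # Penalty A enforces "each city once" and "one city per step" (illustrative value).
      D = np.array([[0.0, 1.0, 2.0, 1.5],
                    [1.0, 0.0, 1.2, 2.2],
                    [2.0, 1.2, 0.0, 1.0],
                    [1.5, 2.2, 1.0, 0.0]])  # assumed symmetric distance matrix
      n = len(D)
      A = 10.0 * D.max()

      def tour_energy(x):
          """Quadratic (QUBO-style) energy of an assignment x[v, t] in {0, 1}."""
          penalty = sum((x[v, :].sum() - 1) ** 2 for v in range(n))       # each city once
          penalty += sum((x[:, t].sum() - 1) ** 2 for t in range(n))      # one city per step
          cost = sum(D[u, v] * x[u, t] * x[v, (t + 1) % n]
                     for u in range(n) for v in range(n) for t in range(n))
          return A * penalty + cost

      # Sanity check: the assignment built from the shortest tour has zero penalty
      # and an energy equal to the tour length.
      best = min(permutations(range(n)),
                 key=lambda p: sum(D[p[t], p[(t + 1) % n]] for t in range(n)))
      x = np.zeros((n, n))
      for t, v in enumerate(best):
          x[v, t] = 1
      print("shortest tour:", best, "energy:", tour_energy(x))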

  1. Fully explicit algorithms for fluid simulation

    NASA Astrophysics Data System (ADS)

    Clausen, Jonathan

    2011-11-01

    Computing hardware is trending towards distributed, massively parallel architectures in order to achieve high computational throughput. For example, Intrepid at Argonne uses 163,840 cores, and next generation machines, such as Sequoia at Lawrence Livermore, will use over one million cores. Harnessing the increasingly parallel nature of computational resources will require algorithms that scale efficiently on these architectures. The advent of GPU-based computation will serve to accelerate this behavior, as a single GPU contains hundreds of processor "cores." Explicit algorithms avoid the communication associated with a linear solve, thus parallel scalability of these algorithms is typically high. This work will explore the efficiency and accuracy of three explicit solution methodologies for the Navier-Stokes equations: traditional artificial compressibility schemes, the lattice-Boltzmann method, and the recently proposed kinetically reduced local Navier-Stokes equations [Borok, Ansumali, and Karlin (2007)]. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  2. Domain splitting algorithms for the Li-ion battery simulation

    NASA Astrophysics Data System (ADS)

    Iliev, O.; Zakharov, P. E.

    2016-11-01

    Numerical simulation of electrochemical processes in rechargeable batteries has important applications in energy technology. In this paper we have developed and compared three domain splitting algorithms for Li-ion battery simulation. The Li-ion battery simulation is based on a microscopic model, which contains nonlinear equations for the Li-ion concentration and potential. On the interface of the electrodes and electrolyte, the intercalation of lithium ions is described by a nonlinear equation. This nonlinear interface condition affects the Newton's method iterations and the computation time. To simplify numerical simulations we use domain splitting algorithms, which split the original problem into three independent subproblems in the two electrodes and the electrolyte. We investigate the numerical convergence and efficiency of the algorithms on a 2D model problem.

  3. Simulations of optical autofocus algorithms based on PGA in SAIL

    NASA Astrophysics Data System (ADS)

    Xu, Nan; Liu, Liren; Xu, Qian; Zhou, Yu; Sun, Jianfeng

    2011-09-01

    The phase perturbations due to propagation effects can destroy the high resolution imagery of Synthetic Aperture Imaging Ladar (SAIL). Some autofocus algorithms for Synthetic Aperture Radar (SAR) were developed and implemented. The Phase Gradient Algorithm (PGA) is a well-known one for its robustness and wide application, and the Phase Curvature Algorithm (PCA), as a similar algorithm, expands its applied field to strip-map mode. In this paper, autofocus algorithms utilized in the optical frequency domain are proposed, including optical PGA and PCA, respectively implemented in spotlight and strip-map mode. Firstly, the mathematical flows of optical PGA and PCA in SAIL are derived. The simulation model of the airborne SAIL is established, and compensation simulations of synthetic aperture laser images corrupted by random errors, linear phase errors and quadratic phase errors are executed. The compensation effect and the cycle index of the simulation are discussed. The simulation results show that both optical autofocus algorithms are effective, while the optical PGA outperforms the optical PCA, which is consistent with the theory.

  4. Daylighting simulation: methods, algorithms, and resources

    SciTech Connect

    Carroll, William L.

    1999-12-01

    This document presents work conducted as part of Subtask C, "Daylighting Design Tools", Subgroup C2, "New Daylight Algorithms", of the IEA SHC Task 21 and the ECBCS Program Annex 29 "Daylight in Buildings". The search for and collection of daylighting analysis methods and algorithms led to two important observations. First, there is a wide range of needs for different types of methods to produce a complete analysis tool. These include: Geometry; Light modeling; Characterization of the natural illumination resource; Materials and components properties, representations; and Usability issues (interfaces, interoperability, representation of analysis results, etc). Second, very advantageously, there have been rapid advances in many basic methods in these areas, due to other forces. They are in part driven by: The commercial computer graphics community (commerce, entertainment); The lighting industry; Architectural rendering and visualization for projects; and Academia: Course materials, research. This has led to a very rich set of information resources that have direct applicability to the small daylighting analysis community. Furthermore, much of this information is in fact available online. Because much of the information about methods and algorithms is now online, an innovative reporting strategy was used: the core formats are electronic, and are used to produce a printed form only secondarily. The electronic forms include both online WWW pages and a downloadable .PDF file with the same appearance and content. Both electronic forms include live primary and indirect links to actual information sources on the WWW. In most cases, little additional commentary is provided regarding the information links or citations that are provided. This in turn allows the report to be very concise. The links are expected to speak for themselves. The report consists of only about 10+ pages, with about 100+ primary links, but with potentially thousands of indirect links. For purposes of

  5. Solution structures of the melanocyte-stimulating hormones by two-dimensional NMR spectroscopy and dynamical simulated-annealing calculations.

    PubMed

    Lee, J H; Lim, S K; Huh, S H; Lee, D; Lee, W

    1998-10-01

    Melanocortins, which are involved in melanocyte pigmentation control and glucocorticoid stimulation, have functional roles in various physiological mechanisms and have been shown to participate in higher cortical functions. Recently, it has also been reported that melanocyte-stimulating hormone (MSH) and melanocortin 4 receptor (MC4R) are the key components of the hypothalamic response to obesity. The solution structures of both melanocyte-stimulating hormone alpha-MSH (Ac-Ser-Tyr-Ser-Met-Glu-His-Phe-Arg-Trp-Gly-Lys-Pro-Val-NH2) and its analog alpha-MSH-ND (Ac-Ahx-Asp-His-DPhe-Arg-Trp-Lys-NH2) (Ahx, 2-aminohexanoic acid) have been determined by two-dimensional NMR spectroscopy and simulated-annealing calculations. The NMR data revealed that alpha-MSH forms a hairpin loop conformation which includes conserved message sequences, whereas alpha-MSH-ND prefers a type I beta-turn comprising residues of Asp2-His3-DPhe4-Arg5. Final simulated-annealing structures of both alpha-MSH-ND and alpha-MSH peptides converged with rmsd of 0.07 nm for alpha-MSH-ND and 0.1 nm for alpha-MSH between backbone atoms, respectively. This result will provide the structural bases of melanocortin functions as well as valuable information for structure-based drug design involving the regulation of obesity and feeding.

  6. 1-Dimensional simulation of thermal annealing in a commercial nuclear power plant reactor pressure vessel wall section

    SciTech Connect

    Nakos, J.T.; Rosinski, S.T.; Acton, R.U.

    1994-11-01

    The objective of this work was to provide experimental heat transfer boundary condition and reactor pressure vessel (RPV) section thermal response data that can be used to benchmark computer codes that simulate thermal annealing of RPVs. This specific project was designed to provide the Electric Power Research Institute (EPRI) with experimental data that could be used to support the development of a thermal annealing model. A secondary benefit is to provide additional experimental data (e.g., thermal response of the concrete reactor cavity wall) that could be of use in an annealing demonstration project. The setup comprised a heater assembly, a 1.2 m × 1.2 m × 17.1 cm thick [4 ft × 4 ft × 6.75 in] section of an RPV (A533B ferritic steel with stainless steel cladding), a mockup of the "mirror" insulation between the RPV and the concrete reactor cavity wall, and a 25.4 cm [10 in] thick concrete wall, 2.1 m × 2.1 m [10 ft × 10 ft] square. Experiments were performed at temperature heat-up/cooldown rates of 7, 14, and 28 °C/hr [12.5, 25, and 50 °F/hr] as measured on the heated face. A peak temperature of 454 °C [850 °F] was maintained on the heated face until the concrete wall temperature reached equilibrium. Results are most representative of those RPV locations where the heat transfer would be 1-dimensional. Temperature was measured at multiple locations on the heated and unheated faces of the RPV section and the concrete wall. Incident heat flux was measured on the heated face, and absorbed heat flux estimates were generated from temperature measurements and an inverse heat conduction code. Through-wall temperature differences, concrete wall temperature response, and the heat flux absorbed into and incident on the RPV surface are presented. All of these data are useful to modelers developing codes to simulate RPV annealing.

  7. Simulation of the Galileo spacecraft axial - Delta-V algorithm

    NASA Technical Reports Server (NTRS)

    Longuski, J. M.

    1983-01-01

    Preliminary results are presented from the analysis of the Galileo spacecraft axial delta-V algorithm. The Galileo spacecraft is a dual spin interplanetary spacecraft which will study the four Galilean moons of Jupiter as well as the Jovian environment and atmosphere. In order to achieve orbit about Jupiter and accurately deliver the probe to the planet's upper atmosphere, the Galileo spacecraft must be capable of performing many trajectory corrections or delta-V maneuvers. Twelve 10 Newton thrusters and one 400 Newton engine are utilized for this purpose. There are many maneuver modes and control algorithms available to the spacecraft. In this paper only the analysis of the axial delta-V algorithm will be discussed. The analysis consists of two parts: an analytic study and a simulation study. The analytic results are based on rigid body dynamics, while the simulation includes the first order effect of the flexible magnetometer boom and nutation damper. The simulation utilizes a program developed at JPL which allows flexible body effects to be simulated by modeling a collection of rigid bodies attached together by hinges, springs and dampers. In this preliminary study of the Galileo only two rigid bodies were used in the simulation, but many more can and will be used in the final tests. In this analysis, the algorithm appears to be working correctly and the analytic and simulation results agree very well.

  8. X-ray simulation algorithms used in ISP

    SciTech Connect

    Sullivan, John P.

    2016-07-29

    ISP is a simulation code which is sometimes used in the USNDS program. ISP is maintained by Sandia National Lab. However, the X-ray simulation algorithm used by ISP was written by scientists at LANL – mainly by Ed Fenimore with some contributions from John Sullivan and George Neuschaefer and probably others. In an email to John Sullivan on July 25, 2016, Jill Rivera, ISP project lead, said "ISP uses the function xdosemeters_sim from the xgen library." This is a Fortran subroutine which is also used to simulate the X-ray response in consim (a descendant of xgen). Therefore, no separate documentation of the X-ray simulation algorithms in ISP has been written – the documentation for the consim simulation can be used.

  9. Phase-field simulations of intragranular fission gas bubble evolution in UO2 under post-irradiation thermal annealing

    SciTech Connect

    Li, Yulan; Hu, Shenyang Y.; Montgomery, Robert O.; Gao, Fei; Sun, Xin

    2013-05-15

    Fission gas bubbles are among the evolving microstructures that affect thermomechanical properties, such as thermal conductivity, gas release, volume swelling, and cracking, in operating nuclear fuels. Therefore, a fundamental understanding of gas bubble evolution kinetics is essential to predict the thermodynamic property and performance changes of fuels. In this work, a generic phase-field model was developed to describe the evolution kinetics of intra-granular fission gas bubbles in UO2 fuels under post-irradiation thermal annealing conditions. The free energy functional and model parameters are evaluated from atomistic simulations and experiments. The critical nucleus size of the gas bubble and the gas bubble evolution were simulated. A linear relationship between the logarithmic bubble number density and the logarithmic mean bubble diameter is predicted, which is in good agreement with experimental data.

  10. Comparative testing of DNA segmentation algorithms using benchmark simulations.

    PubMed

    Elhaik, Eran; Graur, Dan; Josic, Kresimir

    2010-05-01

    Numerous segmentation methods for the detection of compositionally homogeneous domains within genomic sequences have been proposed. Unfortunately, these methods yield inconsistent results. Here, we present a benchmark consisting of two sets of simulated genomic sequences for testing the performances of segmentation algorithms. Sequences in the first set are composed of fixed-sized homogeneous domains, distinct in their between-domain guanine and cytosine (GC) content variability. The sequences in the second set are composed of a mosaic of many short domains and a few long ones, distinguished by sharp GC content boundaries between neighboring domains. We use these sets to test the performance of seven segmentation algorithms in the literature. Our results show that recursive segmentation algorithms based on the Jensen-Shannon divergence outperform all other algorithms. However, even these algorithms perform poorly in certain instances because of the arbitrary choice of a segmentation-stopping criterion.
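
    As a rough illustration of the recursive Jensen-Shannon idea singled out above (not any of the seven benchmarked implementations), the sketch below splits a 0/1 GC-indicator sequence at the point of maximum Jensen-Shannon divergence and recurses; the stopping threshold and minimum segment length are illustrative stand-ins for a proper segmentation-stopping criterion.

      import numpy as np

      def entropy(p):
          p = p[p > 0]
          return -np.sum(p * np.log2(p))

      def js_divergence(seq, i):
          """Weighted Jensen-Shannon divergence of splitting seq (0/1 array) at position i."""
          left, right, whole = seq[:i], seq[i:], seq
          def dist(s):
              gc = s.mean()
              return np.array([gc, 1.0 - gc])
          w_l, w_r = len(left) / len(whole), len(right) / len(whole)
          return entropy(dist(whole)) - (w_l * entropy(dist(left)) + w_r * entropy(dist(right)))

      def segment(seq, start=0, threshold=0.05, min_len=20):
          """Recursively report segment boundaries (threshold and min_len are illustrative)."""
          if len(seq) < 2 * min_len:
              return []
          cuts = range(min_len, len(seq) - min_len)
          best = max(cuts, key=lambda i: js_divergence(seq, i))
          if js_divergence(seq, best) < threshold:
              return []
          return (segment(seq[:best], start, threshold, min_len) + [start + best]
                  + segment(seq[best:], start + best, threshold, min_len))

      rng = np.random.default_rng(0)
      seq = np.concatenate([rng.random(300) < 0.3, rng.random(300) < 0.7]).astype(int)
      print("detected boundaries:", segment(seq))  # expect a cut near position 300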

  11. Concluding Report: Quantitative Tomography Simulations and Reconstruction Algorithms

    SciTech Connect

    Aufderheide, M B; Martz, H E; Slone, D M; Jackson, J A; Schach von Wittenau, A E; Goodman, D M; Logan, C M; Hall, J M

    2002-02-01

    In this report we describe the original goals and final achievements of this Laboratory Directed Research and Development project. The Quantitative Tomography Simulations and Reconstruction Algorithms project (99-ERD-015) was funded as a multi-directorate, three-year effort to advance the state of the art in radiographic simulation and tomographic reconstruction by improving simulation and including this simulation in the tomographic reconstruction process. Goals were to improve the accuracy of radiographic simulation, and to couple advanced radiographic simulation tools with a robust, many-variable optimization algorithm. In this project, we were able to demonstrate accuracy in X-ray simulation at the 2% level, which is an improvement of roughly a factor of 5 in accuracy, and we have successfully coupled our simulation tools with the CCG (Constrained Conjugate Gradient) optimization algorithm, allowing reconstructions that include spectral effects and blurring. Another result of the project was the assembly of a low-scatter X-ray imaging facility for use in nondestructive evaluation applications. We conclude with a discussion of future work.

  12. The multinomial simulation algorithm for discrete stochastic simulation of reaction-diffusion systems

    NASA Astrophysics Data System (ADS)

    Lampoudi, Sotiria; Gillespie, Dan T.; Petzold, Linda R.

    2009-03-01

    The Inhomogeneous Stochastic Simulation Algorithm (ISSA) is a variant of the stochastic simulation algorithm in which the spatially inhomogeneous volume of the system is divided into homogeneous subvolumes, and the chemical reactions in those subvolumes are augmented by diffusive transfers of molecules between adjacent subvolumes. The ISSA can be prohibitively slow when the system is such that diffusive transfers occur much more frequently than chemical reactions. In this paper we present the Multinomial Simulation Algorithm (MSA), which is designed to, on the one hand, outperform the ISSA when diffusive transfer events outnumber reaction events, and on the other, to handle small reactant populations with greater accuracy than deterministic-stochastic hybrid algorithms. The MSA treats reactions in the usual ISSA fashion, but uses appropriately conditioned binomial random variables for representing the net numbers of molecules diffusing from any given subvolume to a neighbor within a prescribed distance. Simulation results illustrate the benefits of the algorithm.
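
    To make the diffusive-transfer step concrete, a small sketch is given below for a 1D ring of subvolumes in which the numbers of molecules hopping to the left and right neighbours within a time step tau are drawn from a multinomial distribution; the subvolume count, hop rate, and time step are assumed values, and the reaction part of the MSA is omitted entirely.

      import numpy as np

      rng = np.random.default_rng(1)
      n_sub, tau, d_rate = 10, 0.01, 5.0   # assumed subvolume count, time step, hop rate
      counts = np.zeros(n_sub, dtype=int)
      counts[0] = 1000                     # all molecules start in subvolume 0

      def diffuse(counts):
          """One diffusive step: each molecule stays or hops left/right with prob d_rate*tau each."""
          new = counts.copy()
          for i, n in enumerate(counts):
              stay, left, right = rng.multinomial(n, [1 - 2 * d_rate * tau,
                                                      d_rate * tau, d_rate * tau])
              new[i] += stay - n
              new[(i - 1) % n_sub] += left
              new[(i + 1) % n_sub] += right
          return new

      for _ in range(500):
          counts = diffuse(counts)
      # Total molecule number is conserved; the profile relaxes toward uniform.
      print("molecule counts per subvolume:", counts, "total:", counts.sum())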

  13. Method for estimation of refractive index and size distribution of aerosol using direct and diffuse solar irradiance and aureole by means of simulated annealing

    NASA Astrophysics Data System (ADS)

    Arai, Kohei; Liang, XingMing

    2003-12-01

    An estimation method for the refractive index and size distribution of aerosols using measurements of direct and diffuse solar irradiance as well as the solar aureole, by means of a modified simulated annealing, is proposed. The proposed method is based on simulated annealing modified to accelerate the learning process by using a gradually decreasing oscillation function for the annealing temperature. Using a Gauss-Seidel-based atmospheric code, simulation data of direct and diffuse solar irradiance are generated together with estimated aureole measurements by means of an empirical method derived from experimental data. A comparison between the existing method proposed by P. Romanov et al., based on a linear inversion method, and the proposed method is made. The results show a twofold improvement in the estimation accuracy for both the aerosol size distribution and the refractive index.
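
    The key modification described above is a gradually decreasing oscillation superimposed on the annealing temperature. The toy sketch below contrasts a plain geometric cooling schedule with such an oscillating schedule on a simple one-dimensional test function; the schedule constants and the test function are illustrative assumptions, not the aerosol retrieval problem.

      import math
      import random

      def objective(x):
          """Toy multimodal test function standing in for the retrieval cost."""
          return 0.1 * x * x + math.sin(5.0 * x)

      def anneal(temperature, steps=5000, seed=0):
          rng = random.Random(seed)
          x, best = 4.0, 4.0
          for k in range(steps):
              T = temperature(k)
              cand = x + rng.gauss(0.0, 0.5)
              delta = objective(cand) - objective(x)
              # Metropolis acceptance: always take improvements, sometimes take uphill moves
              if delta < 0 or rng.random() < math.exp(-delta / max(T, 1e-9)):
                  x = cand
              if objective(x) < objective(best):
                  best = x
          return best, objective(best)

      plain = lambda k: 2.0 * 0.999 ** k
      # Gradually decreasing oscillation superimposed on the cooling curve (assumed form)
      oscillating = lambda k: 2.0 * 0.999 ** k * (1.0 + 0.5 * math.cos(0.05 * k))

      print("plain schedule:      ", anneal(plain))
      print("oscillating schedule:", anneal(oscillating))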

  14. Final Technical Report "Multiscale Simulation Algorithms for Biochemical Systems"

    SciTech Connect

    Petzold, Linda R.

    2012-10-25

    Biochemical systems are inherently multiscale and stochastic. In microscopic systems formed by living cells, the small numbers of reactant molecules can result in dynamical behavior that is discrete and stochastic rather than continuous and deterministic. An analysis tool that respects these dynamical characteristics is the stochastic simulation algorithm (SSA, Gillespie, 1976), a numerical simulation procedure that is essentially exact for chemical systems that are spatially homogeneous or well stirred. Despite recent improvements, as a procedure that simulates every reaction event, the SSA is necessarily inefficient for most realistic problems. There are two main reasons for this, both arising from the multiscale nature of the underlying problem: (1) stiffness, i.e. the presence of multiple timescales, the fastest of which are stable; and (2) the need to include in the simulation both species that are present in relatively small quantities and should be modeled by a discrete stochastic process, and species that are present in larger quantities and are more efficiently modeled by a deterministic differential equation (or at some scale in between). This project has focused on the development of fast and adaptive algorithms, and the fundamental theory upon which they must be based, for the multiscale simulation of biochemical systems. Areas addressed by this project include: (1) Theoretical and practical foundations for accelerated discrete stochastic simulation (tau-leaping); (2) Dealing with stiffness (fast reactions) in an efficient and well-justified manner in discrete stochastic simulation; (3) Development of adaptive multiscale algorithms for spatially homogeneous discrete stochastic simulation; (4) Development of high-performance SSA algorithms.
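
    For reference, the exact SSA that the project's accelerated and multiscale methods build on can be stated in a few lines. The sketch below runs Gillespie's direct method on an assumed birth-death system with two reactions; the rate constants, end time, and sample count are illustrative.

      import numpy as np

      rng = np.random.default_rng(2)

      # Birth-death system: 0 -> X (rate k1), X -> 0 (rate k2 * X); constants are assumed.
      k1, k2 = 10.0, 0.1

      def propensities(x):
          return np.array([k1, k2 * x])

      def ssa(x0=0, t_end=50.0):
          """Gillespie direct method: sample the waiting time and reaction index exactly."""
          t, x = 0.0, x0
          while t < t_end:
              a = propensities(x)
              a0 = a.sum()
              if a0 == 0:
                  break
              t += rng.exponential(1.0 / a0)           # time to next reaction
              reaction = rng.choice(len(a), p=a / a0)  # which reaction fires
              x += 1 if reaction == 0 else -1
          return x

      samples = [ssa() for _ in range(100)]
      print("mean copy number:", np.mean(samples), "(theory: k1/k2 =", k1 / k2, ")")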

  15. Algorithms for Model Calibration of Ground Water Simulators

    DTIC Science & Technology

    2014-11-20

    ...Jacobian, and Jacobian-vector products are computed with a Monte Carlo simulation. This situation differs from the textbook case [5] in that one does not...Anderson acceleration is a natural method for multi-physics coupling (for example subsurface flow, chemistry, and heat transfer) when the individual physics...Online publication 7/12/2014. [11] J. Nance and C. T. Kelley, A sparse interpolation algorithm for dynamical simulations in computational chemistry

  16. Multipole Algorithms for Molecular Dynamics Simulation on High Performance Computers.

    NASA Astrophysics Data System (ADS)

    Elliott, William Dewey

    1995-01-01

    A fundamental problem in modeling large molecular systems with molecular dynamics (MD) simulations is the underlying N-body problem of computing the interactions between all pairs of N atoms. The simplest algorithm to compute pair-wise atomic interactions scales in runtime O(N^2), making it impractical for interesting biomolecular systems, which can contain millions of atoms. Recently, several algorithms have become available that solve the N-body problem by computing the effects of all pair-wise interactions while scaling in runtime less than O(N^2). One algorithm, which scales O(N) for a uniform distribution of particles, is called the Greengard-Rokhlin Fast Multipole Algorithm (FMA). This work describes an FMA-like algorithm called the Molecular Dynamics Multipole Algorithm (MDMA). The algorithm contains several features that are new to N-body algorithms. MDMA uses new, efficient series expansion equations to compute general 1/r^n potentials to arbitrary accuracy. In particular, the 1/r Coulomb potential and the 1/r^6 portion of the Lennard-Jones potential are implemented. The new equations are based on multivariate Taylor series expansions. In addition, MDMA uses a cell-to-cell interaction region of cells that is closely tied to worst case error bounds. The worst case error bounds for MDMA are derived in this work also. These bounds apply to other multipole algorithms as well. Several implementation enhancements are described which apply to MDMA as well as other N-body algorithms such as FMA and tree codes. The mathematics of the cell-to-cell interactions are converted to the Fourier domain for reduced operation count and faster computation. A relative indexing scheme was devised to locate cells in the interaction region which allows efficient pre-computation of redundant information and prestorage of much of the cell-to-cell interaction. Also, MDMA was integrated into the MD program SIgMA to demonstrate the performance of the program over

  17. Fast stochastic algorithm for simulating evolutionary population dynamics

    NASA Astrophysics Data System (ADS)

    Tsimring, Lev; Hasty, Jeff; Mather, William

    2012-02-01

    Evolution and co-evolution of ecological communities are stochastic processes often characterized by vastly different rates of reproduction and mutation and a coexistence of very large and very small sub-populations of co-evolving species. This creates serious difficulties for accurate statistical modeling of evolutionary dynamics. In this talk, we introduce a new exact algorithm for fast fully stochastic simulations of birth/death/mutation processes. It produces a significant speedup compared to the direct stochastic simulation algorithm in a typical case when the total population size is large and the mutation rates are much smaller than birth/death rates. We illustrate the performance of the algorithm on several representative examples: evolution on a smooth fitness landscape, NK model, and stochastic predator-prey system.

  18. Evidence for quantum annealing with more than one hundred qubits

    NASA Astrophysics Data System (ADS)

    Boixo, Sergio; Rønnow, Troels F.; Isakov, Sergei V.; Wang, Zhihui; Wecker, David; Lidar, Daniel A.; Martinis, John M.; Troyer, Matthias

    2014-03-01

    Quantum technology is maturing to the point where quantum devices, such as quantum communication systems, quantum random number generators and quantum simulators may be built with capabilities exceeding classical computers. A quantum annealer, in particular, solves optimization problems by evolving a known initial configuration at non-zero temperature towards the ground state of a Hamiltonian encoding a given problem. Here, we present results from tests on a 108 qubit D-Wave One device based on superconducting flux qubits. By studying correlations we find that the device performance is inconsistent with classical annealing or that it is governed by classical spin dynamics. In contrast, we find that the device correlates well with simulated quantum annealing. We find further evidence for quantum annealing in the form of small-gap avoided level crossings characterizing the hard problems. To assess the computational power of the device we compare it against optimized classical algorithms.

  19. Exploring photometric redshifts as an optimization problem: an ensemble MCMC and simulated annealing-driven template-fitting approach

    NASA Astrophysics Data System (ADS)

    Speagle, Joshua S.; Capak, Peter L.; Eisenstein, Daniel J.; Masters, Daniel C.; Steinhardt, Charles L.

    2016-10-01

    Using a 4D grid of ˜2 million model parameters (Δz = 0.005) adapted from Cosmological Origins Survey photometric redshift (photo-z) searches, we investigate the general properties of template-based photo-z likelihood surfaces. We find these surfaces are filled with numerous local minima and large degeneracies that generally confound simplistic gradient-descent optimization schemes. We combine ensemble Markov Chain Monte Carlo sampling with simulated annealing to robustly and efficiently explore these surfaces in approximately constant time. Using a mock catalogue of 384 662 objects, we show our approach samples ˜40 times more efficiently compared to a `brute-force' counterpart while maintaining similar levels of accuracy. Our results represent first steps towards designing template-fitting photo-z approaches limited mainly by memory constraints rather than computation time.

  20. Scheduling algorithms

    NASA Astrophysics Data System (ADS)

    Wolfe, William J.; Wood, David; Sorensen, Stephen E.

    1996-12-01

    This paper discusses automated scheduling as it applies to complex domains such as factories, transportation, and communications systems. The window-constrained-packing problem is introduced as an ideal model of the scheduling trade-offs. Specific algorithms are compared in terms of simplicity, speed, and accuracy. In particular, dispatch, look-ahead, and genetic algorithms are statistically compared on randomly generated job sets. The conclusion is that dispatch methods are fast and fairly accurate, while modern algorithms, such as genetic algorithms and simulated annealing, have excessive run times and are too complex to be practical.
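
    As a concrete example of the dispatch class of methods discussed above (a generic sketch, not one of the algorithms tested in the paper), the following rule schedules jobs with release times, deadlines, and durations on a single resource by always running the earliest-deadline job that still fits inside its window; the job data are made up.

      from dataclasses import dataclass

      @dataclass
      class Job:
          name: str
          release: int
          deadline: int
          duration: int

      def dispatch_edf(jobs):
          """Dispatch rule: at each decision point, run the released job with the earliest
          deadline that can still finish inside its window (illustrative single-resource model)."""
          t, schedule, pending = 0, [], sorted(jobs, key=lambda j: j.deadline)
          while pending:
              ready = [j for j in pending if j.release <= t and t + j.duration <= j.deadline]
              if not ready:
                  future = [j for j in pending if t + j.duration <= j.deadline]
                  if not future:
                      break                      # remaining jobs can no longer fit
                  t = min(j.release for j in future)
                  continue
              job = ready[0]                     # earliest deadline among ready jobs
              schedule.append((t, job.name))
              t += job.duration
              pending.remove(job)
          return schedule, [j.name for j in pending]

      jobs = [Job("A", 0, 4, 2), Job("B", 0, 6, 3), Job("C", 5, 9, 2), Job("D", 1, 3, 1)]
      done, dropped = dispatch_edf(jobs)
      print("scheduled:", done, "unscheduled:", dropped)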

  1. Haplotyping algorithms

    SciTech Connect

    Sobel, E.; Lange, K.; O'Connell, J.R.

    1996-12-31

    Haplotyping is the logical process of inferring gene flow in a pedigree based on phenotyping results at a small number of genetic loci. This paper formalizes the haplotyping problem and suggests four algorithms for haplotype reconstruction. These algorithms range from exhaustive enumeration of all haplotype vectors to combinatorial optimization by simulated annealing. Application of the algorithms to published genetic analyses shows that manual haplotyping is often erroneous. Haplotyping is employed in screening pedigrees for phenotyping errors and in positional cloning of disease genes from conserved haplotypes in population isolates. 26 refs., 6 figs., 3 tabs.

  2. Simulating and Synthesizing Substructures Using Neural Network and Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Liu, Youhua; Kapania, Rakesh K.; VanLandingham, Hugh F.

    1997-01-01

    The feasibility of simulating and synthesizing substructures by computational neural network models is illustrated by investigating a statically indeterminate beam, using both a 1-D and a 2-D plane stress modelling. The beam can be decomposed into two cantilevers with free-end loads. By training neural networks to simulate the cantilever responses to different loads, the original beam problem can be solved as a match-up between two subsystems under compatible interface conditions. The genetic algorithms are successfully used to solve the match-up problem. Simulated results are found in good agreement with the analytical or FEM solutions.

  3. A robust hybrid fuzzy-simulated annealing-intelligent water drops approach for tuning a distribution static compensator nonlinear controller in a distribution system

    NASA Astrophysics Data System (ADS)

    Bagheri Tolabi, Hajar; Hosseini, Rahil; Shakarami, Mahmoud Reza

    2016-06-01

    This article presents a novel hybrid optimization approach for a nonlinear controller of a distribution static compensator (DSTATCOM). The DSTATCOM is connected to a distribution system with the distributed generation units. The nonlinear control is based on partial feedback linearization. Two proportional-integral-derivative (PID) controllers regulate the voltage and track the output in this control system. In the conventional scheme, the trial-and-error method is used to determine the PID controller coefficients. This article uses a combination of a fuzzy system, simulated annealing (SA) and intelligent water drops (IWD) algorithms to optimize the parameters of the controllers. The obtained results reveal that the response of the optimized controlled system is effectively improved by finding a high-quality solution. The results confirm that using the tuning method based on the fuzzy-SA-IWD can significantly decrease the settling and rising times, the maximum overshoot and the steady-state error of the voltage step response of the DSTATCOM. The proposed hybrid tuning method for the partial feedback linearizing (PFL) controller achieved better regulation of the direct current voltage for the capacitor within the DSTATCOM. Furthermore, in the event of a fault the proposed controller tuned by the fuzzy-SA-IWD method showed better performance than the conventional controller or the PFL controller without optimization by the fuzzy-SA-IWD method with regard to both fault duration and clearing times.

  4. Anisotropy evolution of nanoparticles under annealing: Benefits of isothermal remanent magnetization simulation

    NASA Astrophysics Data System (ADS)

    Tournus, Florent; Tamion, Alexandre; Hillion, Arnaud; Dupuis, Véronique

    2016-12-01

    Isothermal remanent magnetization (IRM) combined with direct current demagnetization (DcD) measurements are powerful tools to qualitatively study the interactions (through the Δm parameter) between magnetic particles in granular media. For magnetic nanoparticles diluted in a matrix, it is possible to reach a regime where Δm is equal to zero, i.e. where interparticle interactions are negligible: one can then infer the intrinsic properties of nanoparticles through measurements on an assembly, which are analyzed by a combined fit procedure (based on the Stoner-Wohlfarth and Néel models). Here we illustrate the benefits of a quantitative analysis of IRM curves for Co nanoparticles embedded in amorphous carbon (before and after annealing): while a large anisotropy increase might have been deduced from the other measurements, IRM curves provide an improved characterization of the nanomagnets' intrinsic properties, revealing that this is in fact not the case. This shows that IRM curves, which only probe the irreversible switching of nanomagnets, are complementary to the widely used low-field susceptibility curves.

  5. An Evaluation of a Modified Simulated Annealing Algorithm for Various Formulations

    DTIC Science & Technology

    1990-08-01

    multiconstraint zero-one knapsack problem, Computing, 40, 1-8. Dreyfus, Stuart E., & Law, Averill M. (1977). The art and theory...Formulations 1. Zero-One Programming. Drexl (1988) formulates a 0-1 multiconstraint knapsack problem as: maximize z = Σ_j c_j x_j subject to Σ_j a_ij x_j ≤ b_i ...pruned using some approach tailored to the problem being considered. The most common methods are forms of dynamic programming and branch-and-bound
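
    Since the report evaluates a modified simulated annealing algorithm on formulations of this kind, a minimal annealing sketch for the 0-1 multiconstraint knapsack is given below; the instance data, cooling schedule, and infeasibility penalty are illustrative assumptions, not the report's test problems or its modified algorithm.

      import math
      import random

      # Illustrative 0-1 multiconstraint knapsack instance (values, weights, capacities).
      c = [10, 13, 7, 8, 12]                    # objective coefficients c_j
      a = [[2, 3, 1, 4, 3],                     # constraint coefficients a_ij
           [3, 1, 2, 2, 4]]
      b = [7, 8]                                # capacities b_i

      def value(x, penalty=100.0):
          """Objective with a penalty for violated constraints (penalty weight is assumed)."""
          z = sum(cj * xj for cj, xj in zip(c, x))
          excess = sum(max(0, sum(aij * xj for aij, xj in zip(row, x)) - bi)
                       for row, bi in zip(a, b))
          return z - penalty * excess

      def anneal(steps=20000, T0=5.0, alpha=0.9995, seed=3):
          rng = random.Random(seed)
          x = [0] * len(c)
          best, best_val, T = x[:], value(x), T0
          for _ in range(steps):
              j = rng.randrange(len(c))
              cand = x[:]
              cand[j] ^= 1                       # flip one item in or out
              delta = value(cand) - value(x)
              if delta > 0 or rng.random() < math.exp(delta / T):
                  x = cand
              if value(x) > best_val:
                  best, best_val = x[:], value(x)
              T *= alpha                         # geometric cooling
          return best, best_val

      print(anneal())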

  6. Homology modeling using simulated annealing of restrained molecular dynamics and conformational search calculations with CONGEN: application in predicting the three-dimensional structure of murine homeodomain Msx-1.

    PubMed Central

    Li, H.; Tejero, R.; Monleon, D.; Bassolino-Klimas, D.; Abate-Shen, C.; Bruccoleri, R. E.; Montelione, G. T.

    1997-01-01

    We have developed an automatic approach for homology modeling using restrained molecular dynamics and simulated annealing procedures, together with conformational search algorithms available in the molecular mechanics program CONGEN (Bruccoleri RE, Karplus M, 1987, Biopolymers 26:137-168). The accuracy of the method is validated by "predicting" structures of two homeodomain proteins with known three-dimensional structures, and then applied to predict the three-dimensional structure of the homeodomain of the murine Msx-1 transcription factor. Regions of the unknown protein structure that are highly homologous to the known template structure are constrained by "homology distance constraints," whereas the conformations of nonhomologous regions of the unknown protein are defined only by the potential energy function. A full energy function (excluding explicit solvent) is employed to ensure that the calculated structures have good conformational energies and are physically reasonable. As in NMR structure determinations, information on the consistency of the structure prediction is obtained by superposition of the resulting family of protein structures. In this paper, our homology modeling algorithm is described and compared with related homology modeling methods using spatial constraints derived from the structures of homologous proteins. The software is then used to predict the DNA-bound structures of three homeodomain proteins from the X-ray crystal structure of the engrailed homeodomain protein (Kissinger CR et al., 1990, Cell 63:579-590). The resulting backbone and side-chain conformations of the modeled yeast Mat alpha 2 and D. melanogaster Antennapedia homeodomains are excellent matches to the corresponding published X-ray crystal (Wolberger C et al., 1991, Cell 67:517-528) and NMR (Billeter M et al., 1993, J Mol Biol 234:1084-1097) structures, respectively. Examination of these structures of Msx-1 reveals a network of highly conserved surface salt bridges that

  7. Anatomy-Based Inverse Planning Simulated Annealing Optimization in High-Dose-Rate Prostate Brachytherapy: Significant Dosimetric Advantage Over Other Optimization Techniques

    SciTech Connect

    Jacob, Dayee; Raben, Adam; Sarkar, Abhirup; Grimm, Jimm; Simpson, Larry

    2008-11-01

    Purpose: To perform an independent validation of an anatomy-based inverse planning simulated annealing (IPSA) algorithm in obtaining superior target coverage and reducing the dose to the organs at risk. Method and Materials: In a recent prostate high-dose-rate brachytherapy protocol study by the Radiation Therapy Oncology Group (0321), our institution treated 20 patients between June 1, 2005 and November 30, 2006. These patients had received a high-dose-rate boost dose of 19 Gy to the prostate, in addition to an external beam radiotherapy dose of 45 Gy with intensity-modulated radiotherapy. Three-dimensional dosimetry was obtained for the following optimization schemes in the Plato Brachytherapy Planning System, version 14.3.2, using the same dose constraints for all the patients treated during this period: anatomy-based IPSA optimization, geometric optimization, and dose point optimization. Dose-volume histograms were generated for the planning target volume and organs at risk for each optimization method, from which the volume receiving at least 75% of the dose (V{sub 75%}) for the rectum and bladder, volume receiving at least 125% of the dose (V{sub 125%}) for the urethra, and total volume receiving the reference dose (V{sub 100%}) and volume receiving 150% of the dose (V{sub 150%}) for the planning target volume were determined. The dose homogeneity index and conformal index for the planning target volume for each optimization technique were compared. Results: Despite suboptimal needle position in some implants, the IPSA algorithm was able to comply with the tight Radiation Therapy Oncology Group dose constraints for 90% of the patients in this study. In contrast, the compliance was only 30% for dose point optimization and only 5% for geometric optimization. Conclusions: Anatomy-based IPSA optimization proved to be the superior technique and also the fastest for reducing the dose to the organs at risk without compromising the target coverage.

  8. Performance of a parallel algorithm for standard cell placement on the Intel Hypercube

    NASA Technical Reports Server (NTRS)

    Jones, Mark; Banerjee, Prithviraj

    1987-01-01

    A parallel simulated annealing algorithm for standard cell placement that is targeted to run on the Intel Hypercube is presented. A tree broadcasting strategy that is used extensively in our algorithm for updating cell locations in the parallel environment is presented. Studies on the performance of our algorithm on example industrial circuits show that it is faster and gives better final placement results than the uniprocessor simulated annealing algorithms.

  9. A performance comparison of integration algorithms in simulating flexible structures

    NASA Technical Reports Server (NTRS)

    Howe, R. M.

    1989-01-01

    Asymptotic formulas for the characteristic root errors as well as transfer function gain and phase errors are presented for a number of traditional and new integration methods. Normalized stability regions in the lambda-h plane are compared for the various methods. In particular, it is shown that a modified form of Euler integration with root matching is an especially efficient method for simulating lightly-damped structural modes. The method has been used successfully for structural bending modes in the real-time simulation of missiles. Performance of this algorithm is compared with other special algorithms, including the state-transition method. A predictor-corrector version of the modified Euler algorithm permits it to be extended to the simulation of nonlinear models of the type likely to be obtained when using the discretized structure approach. Performance of the different integration methods is also compared for integration step sizes larger than those for which the asymptotic formulas are valid. It is concluded that many traditional integration methods, such as RK-4, are not competitive in the simulation of lightly damped structures.

  10. Time parallelization of plasma simulations using the parareal algorithm

    SciTech Connect

    Samaddar, D.; Houlberg, Wayne A; Berry, Lee A; Elwasif, Wael R; Huysmans, G; Batchelor, Donald B

    2011-01-01

    Simulation of fusion plasmas involves a broad range of timescales. In magnetically confined plasmas, such as in ITER, the timescales associated with the microturbulence responsible for transport and the confinement timescales vary by a factor of 10^6 to 10^9. Simulating this entire range of timescales is currently impossible, even on the most powerful supercomputers available. Space parallelization has so far been the most common approach to solving partial differential equations. Space parallelization alone has led to computational saturation for fluid codes, which means that the walltime for computation does not decrease linearly with the increasing number of processors used. The application of the parareal algorithm to simulations of fusion plasmas ushers in a new avenue of parallelization, namely temporal parallelization. The algorithm has been successfully applied to plasma turbulence simulations, having previously been applied to other, relatively simpler problems. This work explores the extension of the applicability of the parareal algorithm to ITER-relevant problems, starting with a diffusion-convection model.
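
    A schematic of the parareal iteration described above, applied to a scalar test ODE rather than a plasma model, is sketched below: a cheap coarse propagator G sweeps serially over time slices, while an accurate fine propagator F, which in practice runs concurrently across the slices, supplies corrections of the form U_{n+1} = G(U_n^new) + F(U_n^old) - G(U_n^old). The ODE, slice count, and propagators are illustrative assumptions.

      import numpy as np

      lam, T, N = -1.0, 2.0, 10          # assumed test ODE dy/dt = lam*y on [0, T], N time slices
      dt = T / N

      def coarse(y, dt):                 # coarse propagator G: one explicit Euler step
          return y + dt * lam * y

      def fine(y, dt, substeps=100):     # fine propagator F: many small Euler steps
          h = dt / substeps
          for _ in range(substeps):
              y = y + h * lam * y
          return y

      # Initial serial coarse sweep
      U = np.zeros(N + 1)
      U[0] = 1.0
      for n in range(N):
          U[n + 1] = coarse(U[n], dt)

      # Parareal iterations: U_{n+1}^{k+1} = G(U_n^{k+1}) + F(U_n^k) - G(U_n^k)
      for k in range(5):
          F_prev = [fine(U[n], dt) for n in range(N)]      # parallel across slices in practice
          G_prev = [coarse(U[n], dt) for n in range(N)]
          for n in range(N):
              U[n + 1] = coarse(U[n], dt) + F_prev[n] - G_prev[n]
          err = abs(U[-1] - np.exp(lam * T))
          print(f"iteration {k + 1}: error at t=T is {err:.2e}")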

  11. Simulations and measurements of annealed pyrolytic graphite-metal composite baseplates

    NASA Astrophysics Data System (ADS)

    Streb, F.; Ruhl, G.; Schubert, A.; Zeidler, H.; Penzel, M.; Flemmig, S.; Todaro, I.; Squatrito, R.; Lampke, T.

    2016-03-01

    We investigated the usability of anisotropic materials as inserts in aluminum-matrix-composite baseplates for typical high-performance power semiconductor modules using finite-element simulations and transient plane source measurements. For the simulations, several physical modules can be used, which are suitable for different thermal boundary conditions. By comparing different modules and options of heat transfer we found non-isothermal simulations to be closest to reality for the temperature distribution at the surface of the heat sink. We optimized the geometry of the graphite inserts for best heat dissipation and, based on these results, evaluated the thermal resistance of a typical power module using calculation-time-optimized steady-state simulations. Here we investigated the influence of the thermal contact conductance (TCC) between the metal matrix and the inserts on the heat dissipation. We found improved heat dissipation compared to the plain metal baseplate for a TCC of 200 kW/m²/K and above. To verify the simulations we evaluated cast composite baseplates with two different insert geometries and measured their averaged lateral thermal conductivity using a transient plane source (HotDisk) technique at room temperature. For the composite baseplate we achieved local improvements in heat dissipation compared to the plain metal baseplate.

  12. Molecular dynamics algorithm enforcing energy conservation for microcanonical simulations.

    PubMed

    Salueña, Clara; Avalos, Josep Bonet

    2014-05-01

    A reversible algorithm [enforced energy conservation (EEC)] that enforces total energy conservation for microcanonical simulations is presented. The key point is the introduction of the discrete-gradient method to define the forces from the conservative potentials, instead of the direct use of the force field at the actual position of the particle. We have studied the performance and accuracy of the EEC in two cases, namely Lennard-Jones fluid and a simple electrolyte model. Truncated potentials that usually induce inaccuracies in energy conservation are used. In particular, the reaction field approach is used in the latter. The EEC is able to preserve energy conservation for a long time, and, in addition, it performs better than the Verlet algorithm for these kinds of simulations.

  13. Potts-model grain growth simulations: Parallel algorithms and applications

    SciTech Connect

    Wright, S.A.; Plimpton, S.J.; Swiler, T.P.

    1997-08-01

    Microstructural morphology and grain boundary properties often control the service properties of engineered materials. This report uses the Potts model to simulate the development of microstructures in realistic materials. Three areas of microstructural morphology simulations were studied. They include the development of massively parallel algorithms for Potts-model grain growth simulations, modeling of mass transport via diffusion in these simulated microstructures, and the development of a gradient-dependent Hamiltonian to simulate columnar grain growth. Potts grain growth models for massively parallel supercomputers were developed for the conventional Potts model in both two and three dimensions. Simulations using these parallel codes showed self-similar grain growth and no finite size effects for previously unapproachable large scale problems. In addition, new enhancements to the conventional Metropolis algorithm used in the Potts model were developed to accelerate the calculations. These techniques enable both the sequential and parallel algorithms to run faster and use essentially an infinite number of grain orientation values to avoid non-physical grain coalescence events. Mass transport phenomena in polycrystalline materials were studied in two dimensions using numerical diffusion techniques on microstructures generated using the Potts model. The results of the mass transport modeling showed excellent quantitative agreement with one-dimensional diffusion problems; however, the results also suggest that transient multi-dimensional diffusion effects cannot be parameterized as the product of the grain boundary diffusion coefficient and the grain boundary width. Instead, both properties are required. Gradient-dependent grain growth mechanisms were included in the Potts model by adding an extra term to the Hamiltonian. Under normal grain growth, the primary driving term is the curvature of the grain boundary, which is included in the standard Potts-model Hamiltonian.
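
    For readers unfamiliar with the serial kernel that the report parallelizes, a conventional Potts-model grain-growth sweep can be sketched as below; the lattice size, number of spin states, sweep count, and the zero-temperature-style acceptance rule are illustrative simplifications, not the report's parallel implementation.

      import numpy as np

      rng = np.random.default_rng(4)
      L, Q, sweeps = 32, 32, 40          # assumed lattice size, spin states, sweep count
      spins = rng.integers(0, Q, size=(L, L))

      def site_energy(s, i, j, value):
          """Number of unlike nearest neighbours (periodic boundaries): Potts boundary energy."""
          nbrs = [s[(i + 1) % L, j], s[(i - 1) % L, j], s[i, (j + 1) % L], s[i, (j - 1) % L]]
          return sum(value != n for n in nbrs)

      for _ in range(sweeps):
          for _ in range(L * L):
              i, j = rng.integers(0, L, size=2)
              new = rng.integers(0, Q)
              dE = site_energy(spins, i, j, new) - site_energy(spins, i, j, spins[i, j])
              if dE <= 0:                # accept moves that do not raise the boundary energy
                  spins[i, j] = new

      print("distinct grains remaining:", len(np.unique(spins)))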

  14. Improved Contact Algorithms for Implicit FE Simulation of Sheet Forming

    NASA Astrophysics Data System (ADS)

    Zhuang, S.; Lee, M. G.; Keum, Y. T.; Wagoner, R. H.

    2007-05-01

    Implicit finite element simulations of sheet forming processes do not always converge, particularly for complex tool geometries and rapidly changing contact. The SHEET-3 program exhibits remarkable stability and strong convergence by use of its special N-CFS algorithm and a sheet normal defined by the mesh, but these features alone do not always guarantee convergence and accuracy. An improved contact capability within the N-CFS algorithm is formulated taking into account sheet thickness within the framework of shell elements. Two imaginary surfaces offset from the mid-plane of shell elements are implemented along the mesh normal direction. An efficient contact searching algorithm based on the mesh-patch tool description is formulated along the mesh normal direction. The contact search includes a general global searching procedure and a new local searching procedure enforcing the contact condition along the mesh normal direction. The processes of unconstrained cylindrical bending and drawing through a drawbead are simulated to verify the accuracy and convergence of the improved contact algorithm.

  15. Advanced Dynamically Adaptive Algorithms for Stochastic Simulations on Extreme Scales

    SciTech Connect

    Xiu, Dongbin

    2016-06-21

    The focus of the project is the development of mathematical methods and high-performance computational tools for stochastic simulations, with a particular emphasis on computations on extreme scales. The core of the project revolves around the design of highly efficient and scalable numerical algorithms that can adaptively and accurately, in high dimensional spaces, resolve stochastic problems with limited smoothness, even containing discontinuities.

  16. A fast MPP algorithm for Ising spin exchange simulations

    NASA Technical Reports Server (NTRS)

    Sullivan, Francis; Mountain, Raymond D.

    1987-01-01

    A very efficient massively parallel processor (MPP) algorithm is described for performing one important class of Ising spin simulations. Results and physical significance of MPP calculations using the method described is discussed elsewhere. A few comments, however, are made on the problem under study and results so far are reported. Ted Einstein provided guidance in interpreting the initial results and in suggesting calculations to perform.

  17. SMMR Simulator radiative transfer calibration model. 2: Algorithm development

    NASA Technical Reports Server (NTRS)

    Link, S.; Calhoon, C.; Krupp, B.

    1980-01-01

    Passive microwave measurements performed from Earth orbit can be used to provide global data on a wide range of geophysical and meteorological phenomena. A Scanning Multichannel Microwave Radiometer (SMMR) is being flown on the Nimbus-G satellite. The SMMR Simulator duplicates the frequency bands utilized in the spacecraft instruments through an amalgamate of radiometer systems. The algorithm developed utilizes data from the fall 1978 NASA CV-990 Nimbus-G underflight test series and subsequent laboratory testing.

  18. Sampling of general correlators in worm-algorithm based simulations

    NASA Astrophysics Data System (ADS)

    Rindlisbacher, Tobias; Åkerlund, Oscar; de Forcrand, Philippe

    2016-08-01

    Using the complex ϕ^4 model as a prototype for a system which is simulated by a worm algorithm, we show that not only the charged correlator <ϕ*(x)ϕ(y)>, but also more general correlators such as <|ϕ(x)||ϕ(y)|> or <arg(ϕ(x)) arg(ϕ(y))>, as well as condensates like <|ϕ|>, can be measured at every step of the Monte Carlo evolution of the worm instead of on closed-worm configurations only. The method generalizes straightforwardly to other systems simulated by worms, such as spin or sigma models.

  19. A graph-based watershed merging using fuzzy C-means and simulated annealing for image segmentation

    NASA Astrophysics Data System (ADS)

    Vadiveloo, Mogana; Abdullah, Rosni; Rajeswari, Mandava

    2015-12-01

    In this paper, we have addressed the issue of over-segmented regions produced by watershed segmentation by merging the regions using a global feature. The global feature information is obtained by clustering the image in its feature space using Fuzzy C-Means (FCM) clustering. The over-segmented regions produced by performing watershed on the gradient of the image are then mapped to this global information in the feature space. Further to this, the global feature information is optimized using Simulated Annealing (SA). The optimal global feature information is used to derive the similarity criterion to merge the over-segmented watershed regions, which are represented by a region adjacency graph (RAG). The proposed method has been tested on a digital brain phantom simulated dataset to segment the white matter (WM), gray matter (GM) and cerebrospinal fluid (CSF) soft tissue regions. The experiments showed that the proposed method performs statistically better than immersion watershed, with an average of 95.242% of regions merged, and gives an average accuracy improvement of 8.850% in comparison with RAG-based immersion watershed merging using global and local features.

  20. Reliable prediction of adsorption isotherms via genetic algorithm molecular simulation.

    PubMed

    LoftiKatooli, L; Shahsavand, A

    2017-01-01

    Conventional molecular simulation techniques such as grand canonical Monte Carlo (GCMC) strictly rely on a purely random search inside the simulation box for predicting adsorption isotherms. This blind search is usually extremely time demanding for providing a faithful approximation of the real isotherm and in some cases may lead to non-optimal solutions. A novel approach is presented in this article which does not use any of the classical steps of the standard GCMC method, such as displacement, insertion, and removal. The new approach is based on the well-known genetic algorithm to find the optimal configuration for adsorption of any adsorbate on a structured adsorbent under prevailing pressure and temperature. The proposed approach considers the molecular simulation problem as a global optimization challenge. A detailed flow chart of our so-called genetic algorithm molecular simulation (GAMS) method is presented, which is entirely different from traditional molecular simulation approaches. Three real case studies (for adsorption of CO2 and H2 over various zeolites) are borrowed from the literature to clearly illustrate the superior performance of the proposed method over the standard GCMC technique. For the present method, the average absolute values of percentage errors are around 11% (RHO-H2), 5% (CHA-CO2), and 16% (BEA-CO2), while they were about 70%, 15%, and 40% for the standard GCMC technique, respectively.
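
    The record above recasts molecular simulation as a global optimization problem solved by a genetic algorithm. The following sketch illustrates a generic genetic-algorithm minimization loop of that flavor only; the toy energy function, real-valued encoding, and parameter values are assumptions for illustration, not the published GAMS method.

```python
import numpy as np

rng = np.random.default_rng(0)

def energy(x):
    # Toy stand-in for the configuration energy that a real molecular
    # simulation would evaluate (illustrative only).
    return np.sum((x - 0.3) ** 2) + 0.5 * np.sum(np.cos(8 * np.pi * x))

def evolve(pop_size=40, n_genes=12, n_gen=200, mut_rate=0.1):
    pop = rng.random((pop_size, n_genes))          # random initial configurations
    for _ in range(n_gen):
        fitness = np.array([energy(ind) for ind in pop])
        parents = pop[np.argsort(fitness)[: pop_size // 2]]   # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n_genes)          # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            mask = rng.random(n_genes) < mut_rate   # uniform mutation
            child[mask] = rng.random(mask.sum())
            children.append(child)
        pop = np.vstack([parents, children])
    best = pop[np.argmin([energy(ind) for ind in pop])]
    return best, energy(best)

best, e = evolve()
print("best configuration energy:", round(float(e), 4))
```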

  1. An improved sink particle algorithm for SPH simulations

    NASA Astrophysics Data System (ADS)

    Hubber, D. A.; Walch, S.; Whitworth, A. P.

    2013-04-01

    Numerical simulations of star formation frequently rely on the implementation of sink particles: (a) to avoid expending computational resource on the detailed internal physics of individual collapsing protostars, (b) to derive mass functions, binary statistics and clustering kinematics (and hence to make comparisons with observation), and (c) to model radiative and mechanical feedback; sink particles are also used in other contexts, for example to represent accreting black holes in galactic nuclei. We present a new algorithm for creating and evolving sink particles in smoothed particle hydrodynamic (SPH) simulations, which appears to represent a significant improvement over existing algorithms - particularly in situations where sinks are introduced after the gas has become optically thick to its own cooling radiation and started to heat up by adiabatic compression. (i) It avoids spurious creation of sinks. (ii) It regulates the accretion of matter on to a sink so as to mitigate non-physical perturbations in the vicinity of the sink. (iii) Sinks accrete matter, but the associated angular momentum is transferred back to the surrounding medium. With the new algorithm - and modulo the need to invoke sufficient resolution to capture the physics preceding sink formation - the properties of sinks formed in simulations are essentially independent of the user-defined parameters of sink creation, or the number of SPH particles used.

  2. Optimization Via Open System Quantum Annealing

    DTIC Science & Technology

    2016-01-07

    ... mapping between the Ising spin glass partition function and circuit model decision problems, discovered in a previous ARO Quantum Algorithms funded ... of tunneling in providing quantum annealing speedup over classical algorithms; characterized the effects of classical hardness on the performance ... 15 Annual APS March meeting, Tutorial on Quantum Annealing; 12/14 Quantum Sensing, Metrology, and Algorithms Workshop, Northrop Grumman, Los ...

  3. Motion Cueing Algorithm Modification for Improved Turbulence Simulation

    NASA Technical Reports Server (NTRS)

    Ercole, Anthony V.; Cardullo, Frank M.; Zaychik, Kirill; Kelly, Lon C.; Houck, Jacob

    2009-01-01

    Atmospheric turbulence cueing produced by flight simulator motion systems has been less than satisfactory because the turbulence profiles have been attenuated by the motion cueing algorithms. Cardullo and Ellor initially addressed this problem by directly porting the turbulence model output to the motion system. Reid and Robinson addressed the problem by employing a parallel aircraft model, which is stimulated only by the turbulence inputs, and by adding a filter specially designed to pass the higher turbulence frequencies. There have been advances in motion cueing algorithm development at the Man-Machine Systems Laboratory at SUNY Binghamton. In particular, the system used to generate turbulence cues has been studied. The Reid approach, implemented by Telban and Cardullo, was employed to augment the optimal motion cueing algorithm installed at the NASA LaRC Simulation Laboratory, driving the Visual Motion Simulator. In this implementation, the output of the primary flight channel was added to the output of the turbulence channel and then sent through a non-linear cueing filter. The cueing filter is an adaptive filter; therefore, it is not desirable for the output of the turbulence channel to be augmented by this type of filter. The likelihood of the signal becoming divergent was also an issue in this design. After on-site testing it became apparent that the architecture of the turbulence algorithm was generating unacceptable cues. As mentioned above, this cueing algorithm comprised a filter that was designed to operate at low bandwidth. Therefore, the turbulence was also filtered, augmenting the cues generated by the model. If any filtering is to be done to the turbulence, it will utilize a filter with a much higher bandwidth, above the frequencies produced by the aircraft response to turbulence. The authors have developed an implementation wherein only the signal from the primary flight channel passes through the nonlinear cueing filter. This paper discusses three

  4. Massively parallel algorithms for trace-driven cache simulations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Greenberg, Albert G.; Lubachevsky, Boris D.

    1991-01-01

    Trace driven cache simulation is central to computer design. A trace is a very long sequence of reference lines from main memory. At the t-th instant, reference x_t is hashed into a set of cache locations, the contents of which are then compared with x_t. If at the t-th instant x_t is not present in the cache, then it is said to be a miss, and is loaded into the cache set, possibly forcing the replacement of some other memory line, and making x_t present for the (t+1)-st instant. The problem of parallel simulation of a subtrace of N references directed to a C line cache set is considered, with the aim of determining which references are misses and related statistics. A simulation method is presented for the Least Recently Used (LRU) policy, which regardless of the set size C runs in time O(log N) using N processors on the exclusive read, exclusive write (EREW) parallel model. A simpler LRU simulation algorithm is given that runs in O(C log N) time using N/log N processors. Timings are presented of the second algorithm's implementation on the MasPar MP-1, a machine with 16384 processors. A broad class of reference-based line replacement policies is considered, which includes LRU as well as the Least Frequently Used and Random replacement policies. A simulation method is presented for any such policy that on any trace of length N directed to a C line set runs in O(C log N) time with high probability using N processors on the EREW model. The algorithms are simple, have very little space overhead, and are well suited for SIMD implementation.
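
    For orientation, the sketch below is a plain sequential version of the LRU miss-counting task that the parallel algorithms above accelerate; the set size C and the toy trace are illustrative assumptions.

```python
from collections import OrderedDict

def lru_misses(trace, c):
    """Count misses for a single C-line, fully associative set under LRU."""
    cache = OrderedDict()                  # ordered from least to most recently used
    misses = 0
    for line in trace:
        if line in cache:
            cache.move_to_end(line)        # hit: mark as most recently used
        else:
            misses += 1                    # miss: load the line
            if len(cache) >= c:
                cache.popitem(last=False)  # evict the least recently used line
            cache[line] = True
    return misses

trace = [1, 2, 3, 1, 4, 2, 5, 1, 2, 3]
print(lru_misses(trace, c=3))              # 8 misses for this toy trace
```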

  5. Algorithm for Simulating Atmospheric Turbulence and Aeroelastic Effects on Simulator Motion Systems

    NASA Technical Reports Server (NTRS)

    Ercole, Anthony V.; Cardullo, Frank M.; Kelly, Lon C.; Houck, Jacob A.

    2012-01-01

    Atmospheric turbulence produces high frequency accelerations in aircraft, typically greater than the response to pilot input. Motion system equipped flight simulators must present cues representative of the aircraft response to turbulence in order to maintain the integrity of the simulation. Currently, turbulence motion cueing produced by flight simulator motion systems has been less than satisfactory because the turbulence profiles have been attenuated by the motion cueing algorithms. This report presents a new turbulence motion cueing algorithm, referred to as the augmented turbulence channel. Like the previous turbulence algorithms, the output of the channel only augments the vertical degree of freedom of motion. This algorithm employs a parallel aircraft model and an optional high bandwidth cueing filter. Simulation of aeroelastic effects is also an area where frequency content must be preserved by the cueing algorithm. The current aeroelastic implementation uses a similar secondary channel that supplements the primary motion cue. Two studies were conducted using the NASA Langley Visual Motion Simulator and Cockpit Motion Facility to evaluate the effect of the turbulence channel and aeroelastic model on pilot control input. Results indicate that the pilot is better correlated with the aircraft response, when the augmented channel is in place.

  6. To Propose an Algorithm for Team Forming: Simulated Annealing K Team-Forming Algorithm for Heterogeneous Grouping.

    ERIC Educational Resources Information Center

    Zhi-Feng Liu, Eric

    2005-01-01

    In recent studies, some researchers have sought an answer to how to form a perfect dream team. Various grouping methods have been proposed, e.g. random assignment, homogeneous grouping by personality or achievement, and heterogeneous grouping by personality or achievement. Some instructors could put some students in a team better…

  7. An Initial Examination for Verifying Separation Algorithms by Simulation

    NASA Technical Reports Server (NTRS)

    White, Allan L.; Neogi, Natasha; Herencia-Zapana, Heber

    2012-01-01

    An open question in algorithms for aircraft is what can be validated by simulation where the simulation shows that the probability of undesirable events is below some given level at some confidence level. The problem is including enough realism to be convincing while retaining enough efficiency to run the large number of trials needed for high confidence. The paper first proposes a goal based on the number of flights per year in several regions. The paper examines the probabilistic interpretation of this goal and computes the number of trials needed to establish it at an equivalent confidence level. Since any simulation is likely to consider the algorithms for only one type of event and there are several types of events, the paper examines under what conditions this separate consideration is valid. This paper is an initial effort, and as such, it considers separation maneuvers, which are elementary but include numerous aspects of aircraft behavior. The scenario includes decisions under uncertainty since the position of each aircraft is only known to the other by broadcasting where GPS believes each aircraft to be (ADS-B). Each aircraft operates under feedback control with perturbations. It is shown that a scenario three or four orders of magnitude more complex is feasible. The question of what can be validated by simulation remains open, but there is reason to be optimistic.
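
    As a worked illustration of the trial-count argument (the event probability and confidence level below are assumed figures, not the paper's): if no undesirable event is observed in n independent trials, the claim that the per-trial event probability is below p holds at confidence 1 - alpha once (1 - p)^n <= alpha.

```python
import math

def trials_needed(p, alpha):
    """Smallest n with (1 - p)**n <= alpha: observing zero events in n trials
    rules out an event probability of p at confidence level 1 - alpha."""
    return math.ceil(math.log(alpha) / math.log(1.0 - p))

# Illustrative numbers only: ruling out a 1e-7 per-flight event probability
# at 95% confidence requires roughly 3e7 simulated flights.
print(trials_needed(p=1e-7, alpha=0.05))
```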

  8. Annealing Simulations of Nano-Sized Amorphous Structures in SiC

    SciTech Connect

    Gao, Fei; Devanathan, Ram; Zhang, Yanwen; Weber, William J.

    2005-01-01

    A two-dimensional model of a nano-sized amorphous layer embedded in a perfect crystal has been developed, and the amorphous-to-crystalline (a-c) transition in 3C-SiC at 2000 K has been studied using molecular dynamics methods, with simulation times of up to 88 ns. Analysis of the a-c interfaces reveals that the recovery of the bond defects existing at the a-c interfaces plays an important role in recrystallization. During the recrystallization process, a second ordered phase, crystalline 2H-SiC, can be nucleated and grow, and is stable for long simulation times. The crystallization mechanism is a two-step process that is separated by a longer period of second-phase stability. The kink sites formed at the interfaces between 2H- and 3C-SiC provide a low energy path for 2H-SiC atoms to transfer to 3C-SiC atoms, which can be defined as a solid-phase epitaxial transformation (SPET). It is observed that the nano-sized amorphous structure can be fully recrystallized at 2000 K in SiC, which is in agreement with experimental observations.

  9. Displacement cascades and defects annealing in tungsten, Part I: Defect database from molecular dynamics simulations

    SciTech Connect

    Setyawan, Wahyu; Nandipati, Giridhar; Roche, Kenneth J.; Heinisch, Howard L.; Wirth, Brian D.; Kurtz, Richard J.

    2015-07-01

    Molecular dynamics simulations have been used to generate a comprehensive database of surviving defects due to displacement cascades in bulk tungsten. Twenty-one data points of primary knock-on atom (PKA) energies ranging from 100 eV (sub-threshold energy) to 100 keV (~780×Ed, where Ed = 128 eV is the average displacement threshold energy) have been completed at 300 K, 1025 K and 2050 K. Within this range of PKA energies, two regimes of power-law energy-dependence of the defect production are observed. A distinct power-law exponent characterizes the number of Frenkel pairs produced within each regime. The two regimes intersect at a transition energy which occurs at approximately 250×Ed. The transition energy also marks the onset of the formation of large self-interstitial atom (SIA) clusters (size 14 or more). The observed defect clustering behavior is asymmetric, with SIA clustering increasing with temperature, while the vacancy clustering decreases. This asymmetry increases with temperature such that at 2050 K (~0.5Tm) practically no large vacancy clusters are formed, meanwhile large SIA clusters appear in all simulations. The implication of such asymmetry on the long-term defect survival and damage accumulation is discussed. In addition, <100> {110} SIA loops are observed to form directly in the highest energy cascades, while vacancy <100> loops are observed to form at the lowest temperature and highest PKA energies, although the appearance of both the vacancy and SIA loops with Burgers vector of <100> type is relatively rare.

  10. Dielectric functions of Pd and Zr transition metals: an application of Drude-Lorentz models with simulated annealing optimization.

    PubMed

    Vargas, William E

    2017-02-01

    An accepted-probability-controlled simulated annealing (APCSA) method has been shown to be a valuable tool to describe, in parametric form, by means of an extended Drude-Lorentz model, the dielectric function of several metals through infrared, visible, and ultraviolet photon energies [Appl. Opt. 37, 5271 (1998)]. In this work, an improved APCSA approach is used to estimate the parameters involved in an extended Drude-Lorentz type model which incorporates the dielectric constant due to a background electronic polarization in the Drude term and the normalization of the individual oscillation strengths involved in the Lorentz contributions to the dielectric function. This approach allows us to introduce a new parameter z to be optimized: the number density ratio, i.e., the ratio between the number density of conduction electrons and the number density of metal ions. From the optimization of the z value within this approach, we evaluate other parameters: electrical resistivity, electron mean free path, effective mass of conduction electrons and relaxation time, Fermi energy, electronic density of states at the Fermi level, and electronic heat capacity coefficient. The model is applied to describe the dielectric functions of two transition metals, Pd and Zr, through ultraviolet, visible, and infrared photon energies.
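
    A minimal sketch of an extended Drude-Lorentz dielectric function of the kind being fitted is given below; the background constant, the use of a shared plasma frequency in the Lorentz terms, and all numerical values are illustrative assumptions, and the APCSA fitting procedure itself is not reproduced.

```python
import numpy as np

def drude_lorentz(omega, eps_b, omega_p, gamma_d, f, omega_0, gamma_l):
    """Background constant eps_b plus a Drude free-electron term and
    Lorentz oscillators with normalized strengths f (illustrative form)."""
    f = np.asarray(f, dtype=float)
    f = f / f.sum()                                   # normalize oscillator strengths
    drude = -omega_p**2 / (omega * (omega + 1j * gamma_d))
    lorentz = sum(fj * omega_p**2 / (w0**2 - omega**2 - 1j * omega * gl)
                  for fj, w0, gl in zip(f, omega_0, gamma_l))
    return eps_b + drude + lorentz

# Illustrative parameters in eV units, not fitted values for Pd or Zr.
w = np.linspace(0.1, 6.0, 500)
eps = drude_lorentz(w, eps_b=1.5, omega_p=7.0, gamma_d=0.1,
                    f=[0.6, 0.4], omega_0=[2.0, 4.5], gamma_l=[0.8, 1.2])
print(eps[:3])
```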

  11. Optoelectronic analogs of self-programming neural nets - Architecture and methodologies for implementing fast stochastic learning by simulated annealing

    NASA Technical Reports Server (NTRS)

    Farhat, Nabil H.

    1987-01-01

    Self-organization and learning is a distinctive feature of neural nets and processors that sets them apart from conventional approaches to signal processing. It leads to self-programmability which alleviates the problem of programming complexity in artificial neural nets. In this paper architectures for partitioning an optoelectronic analog of a neural net into distinct layers with prescribed interconnectivity pattern to enable stochastic learning by simulated annealing in the context of a Boltzmann machine are presented. Stochastic learning is of interest because of its relevance to the role of noise in biological neural nets. Practical considerations and methodologies for appreciably accelerating stochastic learning in such a multilayered net are described. These include the use of parallel optical computing of the global energy of the net, the use of fast nonvolatile programmable spatial light modulators to realize fast plasticity, optical generation of random number arrays, and an adaptive noisy thresholding scheme that also makes stochastic learning more biologically plausible. The findings reported predict optoelectronic chips that can be used in the realization of optical learning machines.
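
    A software sketch of the annealed stochastic unit update at the core of Boltzmann-machine learning, which the optoelectronic architecture above implements in hardware; the weights, temperature schedule, and network size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def anneal_boltzmann(W, bias, n_steps=2000, t_start=4.0, t_end=0.05):
    """Relax the binary units of a Boltzmann machine while lowering the temperature."""
    n = len(bias)
    s = rng.integers(0, 2, size=n).astype(float)       # random initial state
    for k in range(n_steps):
        t = t_start * (t_end / t_start) ** (k / (n_steps - 1))  # geometric cooling
        i = rng.integers(n)                            # pick one unit at random
        gap = W[i] @ s + bias[i]                       # energy gap for setting s_i = 1
        s[i] = 1.0 if rng.random() < 1.0 / (1.0 + np.exp(-gap / t)) else 0.0
    return s

n = 8
W = rng.normal(size=(n, n))
W = (W + W.T) / 2                                      # symmetric coupling matrix
np.fill_diagonal(W, 0.0)
print(anneal_boltzmann(W, bias=np.zeros(n)))
```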

  12. The solution conformation of the antibacterial peptide cecropin A: A nuclear magnetic resonance and dynamical simulated annealing study

    SciTech Connect

    Holak, T.A.; Gronenborn, A.M.; Clore, G.M. ); Engstroem, A.; Kraulis, P.J.; Lindeberg, G.; Bennich, H.; Jones, T.A. )

    1988-10-04

    The solution conformation of the antibacterial polypeptide cecropin A from the Cecropia moth is investigated by nuclear magnetic resonance (NMR) spectroscopy under conditions where it adopts a fully ordered structure, as judged by previous circular dichroism studies. By use of a combination of two-dimensional NMR techniques the ¹H NMR spectrum of cecropin A is completely assigned. A set of 243 approximate interproton distance restraints is derived from nuclear Overhauser enhancement (NOE) measurements. These, together with 32 restraints for the 16 intrahelical hydrogen bonds identified on the basis of the pattern of short-range NOEs, form the basis of a three-dimensional structure determination by dynamical simulated annealing. The calculations are carried out starting from three initial structures, an α-helix, an extended β-strand, and a mixed α/β structure. Seven independent structures are computed from each starting structure by using different random number seeds for the assignment of the initial velocities. Analysis of the 21 converged structures indicates that there are two helical regions extending from residues 5 to 21 and from residues 24 to 37 which are very well defined in terms of both atomic root mean square differences and backbone torsion angles. The long axes of the two helices lie in two planes, which are at an angle of 70-100° to each other. The orientation of the helices within these planes, however, cannot be determined due to the paucity of NOEs between the two helices.

  13. Optoelectronic analogs of self-programming neural nets: architecture and methodologies for implementing fast stochastic learning by simulated annealing.

    PubMed

    Farhat, N H

    1987-12-01

    Self-organization and learning is a distinctive feature of neural nets and processors that sets them apart from conventional approaches to signal processing. It leads to self-programmability which alleviates the problem of programming complexity in artificial neural nets. In this paper architectures for partitioning an optoelectronic analog of a neural net into distinct layers with prescribed interconnectivity pattern to enable stochastic learning by simulated annealing in the context of a Boltzmann machine are presented. Stochastic learning is of interest because of its relevance to the role of noise in biological neural nets. Practical considerations and methodologies for appreciably accelerating stochastic learning in such a multilayered net are described. These include the use of parallel optical computing of the global energy of the net, the use of fast nonvolatile programmable spatial light modulators to realize fast plasticity, optical generation of random number arrays, and an adaptive noisy thresholding scheme that also makes stochastic learning more biologically plausible. The findings reported predict optoelectronic chips that can be used in the realization of optical learning machines.

  14. [Application of simulated annealing method and neural network on optimizing soil sampling schemes based on road distribution].

    PubMed

    Han, Zong-wei; Huang, Wei; Luo, Yun; Zhang, Chun-di; Qi, Da-cheng

    2015-03-01

    Taking the soil organic matter in eastern Zhongxiang County, Hubei Province, as the research object, thirteen sample sets from different regions were arranged surrounding the road network, and their spatial configuration was optimized by the simulated annealing approach. The topographic factors of these thirteen sample sets, including slope, plane curvature, profile curvature, topographic wetness index, stream power index, and sediment transport index, were extracted by terrain analysis. Based on the results of the optimization, a multiple linear regression model with topographic factors as independent variables was built. At the same time, a multilayer perceptron model based on the neural network approach was implemented. A comparison between these two models was then carried out. The results revealed that the proposed approach was practicable for optimizing the soil sampling scheme. The optimal configuration captured soil-landscape knowledge accurately, and its accuracy was better than that of the original samples. This study designed a sampling configuration for studying the soil attribute distribution by referring to the spatial layout of the road network, historical samples, and digital elevation data, which provides an effective means as well as a theoretical basis for determining a sampling configuration and mapping the spatial distribution of soil organic matter at low cost and high efficiency.
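
    A generic sketch of simulated annealing applied to choosing a sampling configuration, in the spirit of the study above; the candidate sites, the spread-based objective, and the cooling schedule are illustrative assumptions rather than the study's actual criterion.

```python
import math, random

random.seed(0)

# Hypothetical candidate sites (x, y); the toy objective rewards spatial spread
# by maximizing the smallest pairwise distance among selected sites.
candidates = [(random.random(), random.random()) for _ in range(200)]

def objective(sample):
    d_min = min(math.dist(a, b) for i, a in enumerate(sample) for b in sample[i + 1:])
    return -d_min                          # smaller is better for the annealer

def anneal(k=15, n_iter=5000, t0=0.1, alpha=0.999):
    current = random.sample(candidates, k)
    cost = objective(current)
    best, best_cost, t = list(current), cost, t0
    for _ in range(n_iter):
        proposal = list(current)
        proposal[random.randrange(k)] = random.choice(candidates)  # swap one site
        new_cost = objective(proposal)
        if new_cost < cost or random.random() < math.exp((cost - new_cost) / t):
            current, cost = proposal, new_cost
            if cost < best_cost:
                best, best_cost = list(current), cost
        t *= alpha                         # geometric cooling
    return best, -best_cost

sites, spread = anneal()
print("minimum pairwise distance of selected sites:", round(spread, 3))
```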

  15. Control algorithm for multiscale flow simulations of water

    NASA Astrophysics Data System (ADS)

    Kotsalis, Evangelos M.; Walther, Jens H.; Kaxiras, Efthimios; Koumoutsakos, Petros

    2009-04-01

    We present a multiscale algorithm to couple atomistic water models with continuum incompressible flow simulations via a Schwarz domain decomposition approach. The coupling introduces an inhomogeneity in the description of the atomistic domain and prevents the use of periodic boundary conditions. The use of a mass conserving specular wall results in turn to spurious oscillations in the density profile of the atomistic description of water. These oscillations can be eliminated by using an external boundary force that effectively accounts for the virial component of the pressure. In this Rapid Communication, we extend a control algorithm, previously introduced for monatomic molecules, to the case of atomistic water and demonstrate the effectiveness of this approach. The proposed computational method is validated for the cases of equilibrium and Couette flow of water.

  16. Control algorithm for multiscale flow simulations of water.

    PubMed

    Kotsalis, Evangelos M; Walther, Jens H; Kaxiras, Efthimios; Koumoutsakos, Petros

    2009-04-01

    We present a multiscale algorithm to couple atomistic water models with continuum incompressible flow simulations via a Schwarz domain decomposition approach. The coupling introduces an inhomogeneity in the description of the atomistic domain and prevents the use of periodic boundary conditions. The use of a mass conserving specular wall results in turn to spurious oscillations in the density profile of the atomistic description of water. These oscillations can be eliminated by using an external boundary force that effectively accounts for the virial component of the pressure. In this Rapid Communication, we extend a control algorithm, previously introduced for monatomic molecules, to the case of atomistic water and demonstrate the effectiveness of this approach. The proposed computational method is validated for the cases of equilibrium and Couette flow of water.

  17. Searching for stable Si(n)C(n) clusters: combination of stochastic potential surface search and pseudopotential plane-wave Car-Parinello simulated annealing simulations.

    PubMed

    Duan, Xiaofeng F; Burggraf, Larry W; Huang, Lingyu

    2013-07-22

    To find low energy Si(n)C(n) structures out of hundreds to thousands of isomers we have developed a general method to search for stable isomeric structures that combines Stochastic Potential Surface Search and Pseudopotential Plane-Wave Density Functional Theory Car-Parinello Molecular Dynamics simulated annealing (PSPW-CPMD-SA). We enhanced the Sunders stochastic search method to generate random cluster structures used as seed structures for PSPW-CPMD-SA simulations. This method ensures that each SA simulation samples a different potential surface region to find the regional minimum structure. By iterating this automated, parallel process on a high performance computer we located hundreds to more than a thousand stable isomers for each Si(n)C(n) cluster. Among these, five to ten of the lowest energy isomers were further optimized using the B3LYP/cc-pVTZ method. We applied this method to Si(n)C(n) (n = 4-12) clusters and found the lowest energy structures, most not previously reported. By analyzing the bonding patterns of the low energy structures of each Si(n)C(n) cluster, we observed that carbon segregations tend to form condensed conjugated rings, while Si connects to unsaturated bonds at the periphery of the carbon segregation as single atoms or clusters when n is small; when n is large, a silicon network spans the carbon segregation region.

  18. Concurrent Algorithm For Particle-In-Cell Simulations

    NASA Technical Reports Server (NTRS)

    Liewer, Paulett C.; Decyk, Viktor K.

    1990-01-01

    Separate decompositions are used for the particle-motion and field calculations. The General Concurrent Particle-in-Cell (GCPIC) algorithm is used to implement particle-in-cell (PIC) computer codes on concurrent processors. It simulates the motions of individual plasma particles (ions and electrons) under the influence of the electromagnetic fields generated by the particles themselves. Such simulations are performed to study a variety of nonlinear problems in plasma physics, including magnetic and inertial fusion, plasmas in outer space, propagation of electron and ion beams, free-electron lasers, and particle accelerators.

  19. A gene network simulator to assess reverse engineering algorithms.

    PubMed

    Di Camillo, Barbara; Toffolo, Gianna; Cobelli, Claudio

    2009-03-01

    In the context of reverse engineering of biological networks, simulators are helpful to test and compare the accuracy of different reverse-engineering approaches in a variety of experimental conditions. A novel gene-network simulator is presented that resembles some of the main features of transcriptional regulatory networks related to topology, interaction among regulators of transcription, and expression dynamics. The simulator generates network topology according to the current knowledge of biological network organization, including scale-free distribution of the connectivity and clustering coefficient independent of the number of nodes in the network. It uses fuzzy logic to represent interactions among the regulators of each gene, integrated with differential equations to generate continuous data, comparable to real data for variety and dynamic complexity. Finally, the simulator accounts for saturation in the response to regulation and transcription activation thresholds and shows robustness to perturbations. It therefore provides a reliable and versatile test bed for reverse engineering algorithms applied to microarray data. Since the simulator describes regulatory interactions and expression dynamics as two distinct, although interconnected aspects of regulation, it can also be used to test reverse engineering approaches that use both microarray and protein-protein interaction data in the process of learning. A first software release is available at http://www.dei.unipd.it/~dicamill/software/netsim as an R programming language package.

  20. Simulated optimum gate and encapsulant properties for a refractory gate GaAs metal-semiconductor field effect transistor during annealing

    SciTech Connect

    Kitajo, S.; Kanamori, M.

    1993-03-01

    The stress distribution in a refractory gate GaAs substrate during annealing was calculated by computer simulation, using the finite element method. Simulations were used to investigate the correlation between the thermal expansion coefficient of the gate and the internal stress of the encapsulant. The conditions under which minimal or no dislocations were induced in the GaAs substrate were studied. It was demonstrated that the best thermal expansion coefficient value for the gate was close to the value reported for tungsten. It was concluded that, by controlling the thermal stress of the SiO2 or SiN encapsulant during high temperature annealing, a dislocation-free GaAs substrate could be obtained. 6 refs., 6 figs., 1 tab.

  1. Analysis of optimisation method for a two-stroke piston ring using the Finite Element Method and the Simulated Annealing Method

    NASA Astrophysics Data System (ADS)

    Kaliszewski, M.; Mazuro, P.

    2016-09-01

    The Simulated Annealing Method of optimisation is tested for the geometry of a sealing piston ring. The aim of the optimisation is to develop a ring geometry which exerts the demanded pressure on the cylinder while being bent to fit it. A method of FEM analysis of an arbitrary piston ring geometry is implemented in ANSYS software. The demanded pressure function (based on formulae presented by A. Iskra) as well as the objective function are introduced. A geometry definition constructed from polynomials in a radial coordinate system is presented and discussed. A possible application of the Simulated Annealing Method to a piston ring optimisation task is proposed and visualised. Difficulties leading to a possible lack of convergence of the optimisation are presented. An example of an unsuccessful optimisation performed in APDL is discussed. A possible line of further improvement of the optimisation is proposed.

  2. Constant-complexity stochastic simulation algorithm with optimal binning

    SciTech Connect

    Sanft, Kevin R.; Othmer, Hans G.

    2015-08-21

    At the molecular level, biochemical processes are governed by random interactions between reactant molecules, and the dynamics of such systems are inherently stochastic. When the copy numbers of reactants are large, a deterministic description is adequate, but when they are small, such systems are often modeled as continuous-time Markov jump processes that can be described by the chemical master equation. Gillespie’s Stochastic Simulation Algorithm (SSA) generates exact trajectories of these systems, but the amount of computational work required for each step of the original SSA is proportional to the number of reaction channels, leading to computational complexity that scales linearly with the problem size. The original SSA is therefore inefficient for large problems, which has prompted the development of several alternative formulations with improved scaling properties. We describe an exact SSA that uses a table data structure with event time binning to achieve constant computational complexity with respect to the number of reaction channels for weakly coupled reaction networks. We present a novel adaptive binning strategy and discuss optimal algorithm parameters. We compare the computational efficiency of the algorithm to existing methods and demonstrate excellent scaling for large problems. This method is well suited for generating exact trajectories of large weakly coupled models, including those that can be described by the reaction-diffusion master equation that arises from spatially discretized reaction-diffusion processes.
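
    For contrast with the constant-complexity variant described above, the sketch below is a minimal direct-method SSA whose per-step cost grows with the number of reaction channels; the birth-death model is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(2)

def gillespie_direct(x0, stoich, rate_fns, t_end):
    """Direct-method SSA: per-step cost is linear in the number of reactions."""
    t, x = 0.0, np.array(x0, dtype=int)
    times, states = [0.0], [tuple(x)]
    while t < t_end:
        a = np.array([r(x) for r in rate_fns])   # propensities
        a0 = a.sum()
        if a0 == 0.0:
            break
        t += rng.exponential(1.0 / a0)           # time to next reaction
        j = rng.choice(len(a), p=a / a0)         # which reaction fires
        x = x + stoich[j]
        times.append(t)
        states.append(tuple(x))
    return times, states

# Illustrative birth-death model: 0 -> X at rate k1, X -> 0 at rate k2 * X.
k1, k2 = 10.0, 0.1
stoich = np.array([[+1], [-1]])
rates = [lambda x: k1, lambda x: k2 * x[0]]
times, states = gillespie_direct([0], stoich, rates, t_end=50.0)
print("final copy number:", states[-1][0])
```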

  3. Constant-complexity stochastic simulation algorithm with optimal binning.

    PubMed

    Sanft, Kevin R; Othmer, Hans G

    2015-08-21

    At the molecular level, biochemical processes are governed by random interactions between reactant molecules, and the dynamics of such systems are inherently stochastic. When the copy numbers of reactants are large, a deterministic description is adequate, but when they are small, such systems are often modeled as continuous-time Markov jump processes that can be described by the chemical master equation. Gillespie's Stochastic Simulation Algorithm (SSA) generates exact trajectories of these systems, but the amount of computational work required for each step of the original SSA is proportional to the number of reaction channels, leading to computational complexity that scales linearly with the problem size. The original SSA is therefore inefficient for large problems, which has prompted the development of several alternative formulations with improved scaling properties. We describe an exact SSA that uses a table data structure with event time binning to achieve constant computational complexity with respect to the number of reaction channels for weakly coupled reaction networks. We present a novel adaptive binning strategy and discuss optimal algorithm parameters. We compare the computational efficiency of the algorithm to existing methods and demonstrate excellent scaling for large problems. This method is well suited for generating exact trajectories of large weakly coupled models, including those that can be described by the reaction-diffusion master equation that arises from spatially discretized reaction-diffusion processes.

  4. Modifications to Axially Symmetric Simulations Using New DSMC (2007) Algorithms

    NASA Technical Reports Server (NTRS)

    Liechty, Derek S.

    2008-01-01

    Several modifications aimed at improving physical accuracy are proposed for solving axially symmetric problems building on the DSMC (2007) algorithms introduced by Bird. Originally developed to solve nonequilibrium, rarefied flows, the DSMC method is now regularly used to solve complex problems over a wide range of Knudsen numbers. These new algorithms include features such as nearest neighbor collisions excluding the previous collision partners, separate collision and sampling cells, automatically adaptive variable time steps, a modified no-time counter procedure for collisions, and discontinuous and event-driven physical processes. Axially symmetric solutions require radial weighting for the simulated molecules since the molecules near the axis represent fewer real molecules than those farther away from the axis due to the difference in volume of the cells. In the present methodology, these radial weighting factors are continuous, linear functions that vary with the radial position of each simulated molecule. It is shown that how one defines the number of tentative collisions greatly influences the mean collision time near the axis. The method by which the grid is treated for axially symmetric problems also plays an important role near the axis, especially for scalar pressure. A new method to treat how the molecules are traced through the grid is proposed to alleviate the decrease in scalar pressure at the axis near the surface. Also, a modification to the duplication buffer is proposed to vary the duplicated molecular velocities while retaining the molecular kinetic energy and axially symmetric nature of the problem.
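
    A small sketch of the continuous, linear radial weighting idea described above: a simulated molecule at radius r represents W(r) real molecules, and when it moves the expected number of simulated copies follows the weight ratio. The constants and the cloning rule details are illustrative assumptions.

```python
import random

random.seed(5)

def radial_weight(r, w0=1.0, slope=50.0):
    """Continuous, linear radial weighting: molecules far from the axis represent
    more real molecules than those near it (constants are illustrative)."""
    return w0 + slope * r

def copies_after_move(r_old, r_new):
    """Expected number of simulated copies after a move, preserving the
    represented number of real molecules on average."""
    ratio = radial_weight(r_old) / radial_weight(r_new)
    n = int(ratio)                        # guaranteed copies
    if random.random() < ratio - n:       # probabilistic remainder
        n += 1
    return n                              # 0 removes the molecule, >1 duplicates it

print(copies_after_move(0.10, 0.02))      # moving toward the axis: likely duplication
print(copies_after_move(0.02, 0.10))      # moving away from the axis: likely removal
```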

  5. Verifying Algorithms for Autonomous Aircraft by Simulation Generalities and Example

    NASA Technical Reports Server (NTRS)

    White, Allan L.

    2010-01-01

    An open question in Air Traffic Management is what procedures can be validated by simulation where the simulation shows that the probability of undesirable events is below the required level at some confidence level. The problem is including enough realism to be convincing while retaining enough efficiency to run the large number of trials needed for high confidence. The paper first examines the probabilistic interpretation of a typical requirement by a regulatory agency and computes the number of trials needed to establish the requirement at an equivalent confidence level. Since any simulation is likely to consider only one type of event and there are several types of events, the paper examines under what conditions this separate consideration is valid. The paper establishes a separation algorithm at the required confidence level where the aircraft operates under feedback control and is subject to perturbations. There is a discussion where it is shown that a scenario three or four orders of magnitude more complex is feasible. The question of what can be validated by simulation remains open, but there is reason to be optimistic.

  6. On constructing optimistic simulation algorithms for the discrete event system specification

    SciTech Connect

    Nutaro, James J

    2008-01-01

    This article describes a Time Warp simulation algorithm for discrete event models that are described in terms of the Discrete Event System Specification (DEVS). The article shows how the total state transition and total output function of a DEVS atomic model can be transformed into an event processing procedure for a logical process. A specific Time Warp algorithm is constructed around this logical process, and it is shown that the algorithm correctly simulates a DEVS coupled model that consists entirely of interacting atomic models. The simulation algorithm is presented abstractly; it is intended to provide a basis for implementing efficient and scalable parallel algorithms that correctly simulate DEVS models.

  7. Algorithmic approach to simulate Hamiltonian dynamics and an NMR simulation of quantum state transfer

    NASA Astrophysics Data System (ADS)

    Ajoy, Ashok; Rao, Rama Koteswara; Kumar, Anil; Rungta, Pranaw

    2012-03-01

    We propose an iterative algorithm to simulate the dynamics generated by any n-qubit Hamiltonian. The simulation entails decomposing the unitary time evolution operator U into a product of different time-step unitaries. The algorithm product-decomposes U in a chosen operator basis by identifying a certain symmetry of U that is intimately related to the number of gates in the decomposition. We illustrate the algorithm by first obtaining a polynomial decomposition in the Pauli basis of the n-qubit quantum state transfer unitary by Di Franco [Phys. Rev. Lett. 101, 230502 (2008)] that transports quantum information from one end of a spin chain to the other, and then implement it in nuclear magnetic resonance to demonstrate that the decomposition is experimentally viable. We further experimentally test the resilience of the state transfer to static errors in the coupling parameters of the simulated Hamiltonian. This is done by decomposing and simulating the corresponding imperfect unitaries.

  8. MTR-Fill: A Simulated Annealing-Based X-Filling Technique to Reduce Test Power Dissipation for scan-Based Designs

    NASA Astrophysics Data System (ADS)

    Song, Dong-Sup; Ahn, Jin-Ho; Kim, Tae-Jin; Kang, Sungho

    This paper proposes the minimum transition random X-filling (MTR-fill) technique, a new X-filling method, to reduce power dissipation during scan-based testing. In order to model the power dissipated during scan load/unload cycles, the total weighted transition metric (TWTM) is introduced, which is calculated as the sum of the weighted transitions in a scan-load of a test pattern and a scan-unload of a test response. The proposed MTR-fill is implemented using a simulated annealing method. During the annealing process, the TWTM of a pair of test patterns and test responses is minimized. Simultaneously, the MTR-fill attempts to increase the randomness of test patterns in order to reduce the number of test patterns needed to achieve adequate fault coverage. The effectiveness of the proposed technique is shown through experiments on ISCAS'89 benchmark circuits.
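
    The sketch below computes a position-weighted transition count of the kind used in the TWTM; the exact weighting convention of the paper may differ, and this one simply assumes that earlier transitions are shifted through more scan cells.

```python
def weighted_transitions(scan_vector):
    """Sum of bit transitions weighted by how many scan cells they are shifted
    through (an assumed convention; the paper's TWTM weighting may differ)."""
    length = len(scan_vector)
    return sum(length - i - 1
               for i in range(length - 1)
               if scan_vector[i] != scan_vector[i + 1])

# X-filling comparison on a toy pattern '1XX0' filled two different ways.
print(weighted_transitions("1110"))   # low-transition fill -> weighted count 1
print(weighted_transitions("1010"))   # a poor fill         -> weighted count 6
```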

  9. Analogue Simulation and Orbital Solving Algorithm of Astrometric Exoplanet Detection

    NASA Astrophysics Data System (ADS)

    Huang, P. H.; Ji, J. H.

    2016-09-01

    Astrometry is an effective method to detect exoplanets. It has many advantages that other detection methods lack, such as providing the three-dimensional planetary orbit and determining the planetary mass. Astrometry will enrich the sample of exoplanets. Since the high-precision astrometric satellite Gaia (Global Astrometric Interferometer for Astrophysics) was launched in 2013, abundant long-period Jupiter-size planets are expected to be discovered by Gaia. In this paper, we consider the α Centauri A, HD 62509, and GJ 876 systems, and generate synthetic astrometric data with the single-measurement astrometric precision of Gaia. Then we use the Lomb-Scargle periodogram to analyse the signature of planets and the Markov Chain Monte Carlo (MCMC) algorithm to fit the orbits of the planets. The simulation results coincide well with the initial solutions.
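
    A minimal sketch of the periodogram step of such an analysis, using SciPy's Lomb-Scargle routine on a synthetic, unevenly sampled one-planet signal; all numbers are illustrative assumptions rather than Gaia data, and the MCMC orbit fit is not reproduced.

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(3)

# Hypothetical astrometric residuals: unevenly sampled sinusoid plus noise.
t = np.sort(rng.uniform(0.0, 5.0 * 365.25, 70))      # observation epochs in days
period_true = 420.0                                   # assumed planet period in days
y = 0.05 * np.sin(2 * np.pi * t / period_true) + rng.normal(scale=0.02, size=t.size)

periods = np.linspace(50.0, 1000.0, 4000)
ang_freqs = 2 * np.pi / periods
power = lombscargle(t, y - y.mean(), ang_freqs, normalize=True)

print("strongest period [days]:", round(periods[np.argmax(power)], 1))
```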

  10. Magnetic Storm Simulation With Multiple Ion Fluids: Algorithm

    NASA Astrophysics Data System (ADS)

    Toth, G.; Glocer, A.; Gombosi, T.

    2008-12-01

    We describe our progress in extending the capabilities of the BATS-R-US MHD code to model multiple ion fluids. We solve the full multiion equations with no assumptions about the relative motion of the ion fluids. We discuss the numerical difficulties and the algorithmic solutions: the use of a total ion fluid in combination with the individual ion fluids, the use of point-implicit source terms with analytic Jacobian, using a simple criterion to separate the single-ion and multiion regions in our magnetosphere applications, and an artificial friction term to limit the relative velocities of the ion fluids to reasonable values. This latter term is used to mimic the effect of two-stream instabilities in a crude manner. The new code is fully integrated into the Space Weather Modeling Framework and it has been coupled with the ionosphere, inner magnetosphere and polar wind models to simulate the May 4 1998 magnetic storm.

  11. An algorithm to build mock galaxy catalogues using MICE simulations

    NASA Astrophysics Data System (ADS)

    Carretero, J.; Castander, F. J.; Gaztañaga, E.; Crocce, M.; Fosalba, P.

    2015-02-01

    We present a method to build mock galaxy catalogues starting from a halo catalogue that uses halo occupation distribution (HOD) recipes as well as the subhalo abundance matching (SHAM) technique. Combining both prescriptions we are able to push the absolute magnitude of the resulting catalogue to fainter luminosities than using just the SHAM technique and can interpret our results in terms of the HOD modelling. We optimize the method by populating with galaxies friends-of-friends dark matter haloes extracted from the Marenostrum Institut de Ciències de l'Espai dark matter simulations and comparing them to observational constraints. Our resulting mock galaxy catalogues manage to reproduce the observed local galaxy luminosity function and the colour-magnitude distribution as observed by the Sloan Digital Sky Survey. They also reproduce the observed galaxy clustering properties as a function of luminosity and colour. In order to achieve that, the algorithm also includes scatter in the halo mass-galaxy luminosity relation derived from direct SHAM and a modified Navarro-Frenk-White mass density profile to place satellite galaxies in their host dark matter haloes. Improving on general usage of the HOD that fits the clustering for given magnitude limited samples, our catalogues are constructed to fit observations at all luminosities considered and therefore for any luminosity subsample. Overall, our algorithm is an economic procedure of obtaining galaxy mock catalogues down to faint magnitudes that are necessary to understand and interpret galaxy surveys.

  12. Simulating Future GPS Clock Scenarios with Two Composite Clock Algorithms

    NASA Technical Reports Server (NTRS)

    Suess, Matthias; Matsakis, Demetrios; Greenhall, Charles A.

    2010-01-01

    Using the GPS Toolkit, the GPS constellation is simulated with 31 satellites (SV) and a ground network of 17 monitor stations (MS). At every 15-minute measurement epoch, the monitor stations measure the time signals of all satellites above a parameterized elevation angle. Once a day, the station and satellite clocks are estimated. The first composite clock (B) is based on the Brown algorithm, and is now used by GPS. The second one (G) is based on the Greenhall algorithm. The performance of the G and B composite clocks is investigated using three ground-clock models. Model C simulates the current GPS configuration, in which all stations are equipped with cesium clocks, except for masers at USNO and Alternate Master Clock (AMC) sites. Model M is an improved situation in which every station is equipped with active hydrogen masers. Finally, Models F and O are future scenarios in which the USNO and AMC stations are equipped with fountain clocks instead of masers: Model F uses a rubidium fountain, while Model O uses a more precise but futuristic optical fountain. Each model is evaluated using three performance metrics. The timing-related user range error with all satellites available is the first performance index (PI1). The second performance index (PI2) relates to the stability of the broadcast GPS system time itself. The third performance index (PI3) evaluates the stability of the time scales computed by the two composite clocks. A distinction is made between the "Signal-in-Space" accuracy and that available through a GNSS receiver.

  13. Robotic space simulation integration of vision algorithms into an orbital operations simulation

    NASA Technical Reports Server (NTRS)

    Bochsler, Daniel C.

    1987-01-01

    In order to successfully plan and analyze future space activities, computer-based simulations of activities in low earth orbit will be required to model and integrate vision and robotic operations with vehicle dynamics and proximity operations procedures. The orbital operations simulation (OOS) is configured and enhanced as a testbed for robotic space operations. Vision integration algorithms are being developed in three areas: preprocessing, recognition, and attitude/attitude rates. The vision program (Rice University) was modified for use in the OOS. Systems integration testing is now in progress.

  14. Simulated annealing reveals the kinetic activity of SGLT1, a member of the LeuT structural family.

    PubMed

    Longpré, Jean-Philippe; Sasseville, Louis J; Lapointe, Jean-Yves

    2012-10-01

    The Na(+)/glucose cotransporter (SGLT1) is the archetype of membrane proteins that use the electrochemical Na(+) gradient to drive uphill transport of a substrate. The crystal structure recently obtained for vSGLT strongly suggests that SGLT1 adopts the inverted repeat fold of the LeuT structural family for which several crystal structures are now available. What is largely missing is an accurate view of the rates at which SGLT1 transits between its different conformational states. In the present study, we used simulated annealing to analyze a large set of steady-state and pre-steady-state currents measured for human SGLT1 at different membrane potentials, and in the presence of different Na(+) and α-methyl-d-glucose (αMG) concentrations. The simplest kinetic model that could accurately reproduce the time course of the measured currents (down to the 2 ms time range) is a seven-state model (C(1) to C(7)) where the binding of the two Na(+) ions (C(4)→C(5)) is highly cooperative. In the forward direction (Na(+)/glucose influx), the model is characterized by two slow, electroneutral conformational changes (59 and 100 s(-1)) which represent reorientation of the free and of the fully loaded carrier between inside-facing and outside-facing conformations. From the inward-facing (C(1)) to the outward-facing Na-bound configuration (C(5)), 1.3 negative elementary charges are moved outward. Although extracellular glucose binding (C(5)→C(6)) is electroneutral, the next step (C(6)→C(7)) carries 0.7 positive charges inside the cell. Alignment of the seven-state model with a generalized model suggested by the structural data of the LeuT fold family suggests that electrogenic steps are associated with the movement of the so-called thin gates on each side of the substrate binding site. To our knowledge, this is the first model that can quantitatively describe the behavior of SGLT1 down to the 2 ms time domain. The model is highly symmetrical and in good agreement with the

  15. Implementation of low communication frequency 3D FFT algorithm for ultra-large-scale micromagnetics simulation

    NASA Astrophysics Data System (ADS)

    Tsukahara, Hiroshi; Iwano, Kaoru; Mitsumata, Chiharu; Ishikawa, Tadashi; Ono, Kanta

    2016-10-01

    We implement low communication frequency three-dimensional fast Fourier transform algorithms in a micromagnetics simulator for calculation of the magnetostatic field, which occupies a significant portion of large-scale micromagnetics simulations. This fast Fourier transform algorithm reduces the number of all-to-all communications per transform from six to two. Simulation times with our simulator show high scalability in parallelization, even when the micromagnetics simulation is performed using 32 768 physical computing cores. This low communication frequency fast Fourier transform algorithm enables micromagnetics simulations among the largest in the world, with over one billion calculation cells, to be carried out.

  16. Combination of a latin hypercube sampling and of an simulated annealing method to optimize a physically based hydrological model

    NASA Astrophysics Data System (ADS)

    Robert, D.; Braud, I.; Cohard, J.; Zin, I.; Vauclin, M.

    2010-12-01

    correlations between several criteria (cost functions) and each parameter is necessary. Correlation coefficients can be chosen as sensitivity factors. The advantage is that they are independent of the chosen range of each parameter (unlike the regression coefficients). Nonetheless, as all parameters vary simultaneously, total correlation coefficients do not provide the expected information, whereas partial correlation coefficients indicate parameter influence while accounting for the other parameters. The multiple correlation coefficient between a result and the parameters indicates the quality of the multiple linear regression. To optimize the parameter set, we use the simulated annealing method. It is thus possible to search for a global extremum (while most other methods find only a local extremum), since the method includes a random component that explores a large range of possibilities; this is its main advantage. The optimal parameter set allows a significant improvement of the cost functions. Despite this enhancement, a certain degree of uncertainty still remains about the optimal parameter set.
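
    A compact sketch of the Latin hypercube sampling step that seeds this kind of sensitivity analysis; the parameter names and ranges are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

def latin_hypercube(n_samples, bounds):
    """One stratified sample per interval and per parameter, randomly paired."""
    d = len(bounds)
    u = (rng.random((n_samples, d)) + np.arange(n_samples)[:, None]) / n_samples
    for j in range(d):
        rng.shuffle(u[:, j])              # pair strata randomly across parameters
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    return lo + u * (hi - lo)

# Hypothetical hydrological parameters: saturated conductivity, porosity, roughness.
bounds = [(1e-7, 1e-4), (0.3, 0.6), (0.01, 0.5)]
samples = latin_hypercube(20, bounds)
print(samples.shape)                      # (20, 3): one model run per row
```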

  17. Understanding disordered systems through numerical simulation and algorithm development

    NASA Astrophysics Data System (ADS)

    Sweeney, Sean Michael

    Disordered systems arise in many physical contexts. Not all matter is uniform, and impurities or heterogeneities can be modeled by fixed random disorder. Numerous complex networks also possess fixed disorder, leading to applications in transportation systems, telecommunications, social networks, and epidemic modeling, to name a few. Due to their random nature and power law critical behavior, disordered systems are difficult to study analytically. Numerical simulation can help overcome this hurdle by allowing for the rapid computation of system states. In order to get precise statistics and extrapolate to the thermodynamic limit, large systems must be studied over many realizations. Thus, innovative algorithm development is essential in order to reduce memory or running time requirements of simulations. This thesis presents a review of disordered systems, as well as a thorough study of two particular systems through numerical simulation, algorithm development and optimization, and careful statistical analysis of scaling properties. Chapter 1 provides a thorough overview of disordered systems, the history of their study in the physics community, and the development of techniques used to study them. Topics of quenched disorder, phase transitions, the renormalization group, criticality, and scale invariance are discussed. Several prominent models of disordered systems are also explained. Lastly, analysis techniques used in studying disordered systems are covered. In Chapter 2, minimal spanning trees on critical percolation clusters are studied, motivated in part by an analytic perturbation expansion by Jackson and Read that I check against numerical calculations. This system has a direct mapping to the ground state of the strongly disordered spin glass. We compute the path length fractal dimension of these trees in dimensions d = {2, 3, 4, 5} and find our results to be compatible with the analytic results suggested by Jackson and Read. In Chapter 3, the random bond Ising

  18. A fast variable step size integration algorithm suitable for computer simulations of physiological systems

    NASA Technical Reports Server (NTRS)

    Neal, L.

    1981-01-01

    A simple numerical algorithm was developed for use in computer simulations of systems which are both stiff and stable. The method is implemented in subroutine form and applied to the simulation of physiological systems.
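
    A minimal sketch of a variable-step integrator in the same spirit, using step doubling with Heun's method for error control; the tolerance, step adaptation rule, and toy test system are illustrative assumptions rather than the report's algorithm.

```python
import math

def heun_step(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + h, y + h * k1)
    return y + 0.5 * h * (k1 + k2)

def integrate(f, t0, y0, t_end, h=0.1, tol=1e-5):
    """Advance with Heun's method, comparing one full step against two half steps
    and adapting the step size to keep the local error estimate below tol."""
    t, y = t0, y0
    while t < t_end:
        h = min(h, t_end - t)
        y_full = heun_step(f, t, y, h)
        y_half = heun_step(f, t + h / 2, heun_step(f, t, y, h / 2), h / 2)
        err = abs(y_half - y_full)
        if err <= tol:
            t, y = t + h, y_half                # accept the more accurate value
            h *= 1.5 if err < tol / 4 else 1.0  # grow the step when comfortably accurate
        else:
            h *= 0.5                            # reject and retry with a smaller step
    return y

# Toy first-order relaxation toward a set point: dy/dt = -2 (y - 1), y(0) = 5.
sol = integrate(lambda t, y: -2.0 * (y - 1.0), 0.0, 5.0, 10.0)
print(round(sol, 6), "vs exact", round(1.0 + 4.0 * math.exp(-20.0), 6))
```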

  19. Open-System Quantum Annealing in Mean-Field Models with Exponential Degeneracy*

    NASA Astrophysics Data System (ADS)

    Kechedzhi, Kostyantyn; Smelyanskiy, Vadim N.

    2016-04-01

    Real-life quantum computers are inevitably affected by intrinsic noise resulting in dissipative nonunitary dynamics realized by these devices. We consider an open-system quantum annealing algorithm optimized for such a realistic analog quantum device which takes advantage of noise-induced thermalization and relies on incoherent quantum tunneling at finite temperature. We theoretically analyze the performance of this algorithm considering a p -spin model that allows for a mean-field quasiclassical solution and, at the same time, demonstrates the first-order phase transition and exponential degeneracy of states, typical characteristics of spin glasses. We demonstrate that finite-temperature effects introduced by the noise are particularly important for the dynamics in the presence of the exponential degeneracy of metastable states. We determine the optimal regime of the open-system quantum annealing algorithm for this model and find that it can outperform simulated annealing in a range of parameters. Large-scale multiqubit quantum tunneling is instrumental for the quantum speedup in this model, which is possible because of the unusual nonmonotonous temperature dependence of the quantum-tunneling action in this model, where the most efficient transition rate corresponds to zero temperature. This model calculation is the first analytically tractable example where open-system quantum annealing algorithm outperforms simulated annealing, which can, in principle, be realized using an analog quantum computer.

  20. The clinical potential of high energy, intensity and energy modulated electron beams optimized by simulated annealing for conformal radiation therapy

    NASA Astrophysics Data System (ADS)

    Salter, Bill Jean, Jr.

    Purpose. The advent of new, so-called IVth Generation, external beam radiation therapy treatment machines (e.g. Scanditronix' MM50 Racetrack Microtron) has raised the question of how the capabilities of these new machines might be exploited to produce extremely conformal dose distributions. Such machines possess the ability to produce electron energies as high as 50 MeV and, due to their scanned-beam delivery of electron treatments, to modulate intensity and even energy within a broad field. Materials and methods. Two patients with 'challenging' tumor geometries were selected from the patient archives of the Cancer Therapy and Research Center (CTRC) in San Antonio, Texas. The treatment scheme that was tested allowed for twelve energy- and intensity-modulated beams, equi-spaced about the patient; only intensity was modulated for the photon treatment. The elementary beams, incident from any of the twelve allowed directions, were assumed parallel, and the elementary electron beams were modeled by elementary beam data. The arrangement of elementary beam energies and/or intensities was optimized by Szu-Hartley Fast Simulated Annealing optimization. Optimized treatment plans were determined for each patient using both the high energy, intensity and energy modulated electron (HIEME) modality and the 6 MV photon modality. The 'quality' of rival plans was scored using three different, popular objective functions: Root Mean Square (RMS), Maximize Dose Subject to Dose and Volume Limitations (MDVL, Morrill et al.), and Probability of Uncomplicated Tumor Control (PUTC). The scores of the two optimized treatments (i.e. HIEME and intensity modulated photons) were compared to the score of the conventional plan with which the patient was actually treated. Results. The first patient evaluated presented a deeply located target volume, partially surrounding the spinal cord. A healthy right kidney was immediately adjacent to the tumor volume, separated

  1. A Non-Intrusive Algorithm for Sensitivity Analysis of Chaotic Flow Simulations

    NASA Technical Reports Server (NTRS)

    Blonigan, Patrick J.; Wang, Qiqi; Nielsen, Eric J.; Diskin, Boris

    2017-01-01

    We demonstrate a novel algorithm for computing the sensitivity of statistics in chaotic flow simulations to parameter perturbations. The algorithm is non-intrusive but requires exposing an interface. Based on the principle of shadowing in dynamical systems, this algorithm is designed to reduce the effect of the sampling error in computing sensitivity of statistics in chaotic simulations. We compare the effectiveness of this method to that of the conventional finite difference method.

  2. A Grand Canonical Monte Carlo-Brownian dynamics algorithm for simulating ion channels.

    PubMed Central

    Im, W; Seefeld, S; Roux, B

    2000-01-01

    A computational algorithm based on Grand Canonical Monte Carlo (GCMC) and Brownian Dynamics (BD) is described to simulate the movement of ions in membrane channels. The proposed algorithm, GCMC/BD, allows the simulation of ion channels with a realistic implementation of boundary conditions of concentration and transmembrane potential. The method is consistent with a statistical mechanical formulation of the equilibrium properties of ion channels (; Biophys. J. 77:139-153). The GCMC/BD algorithm is illustrated with simulations of simple test systems and of the OmpF porin of Escherichia coli. The approach provides a framework for simulating ion permeation in the context of detailed microscopic models. PMID:10920012
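    The grand canonical part of GCMC/BD keeps ion concentrations in the buffer regions at prescribed values by attempting particle insertions and deletions with the standard GCMC acceptance rules. The snippet below sketches those acceptance probabilities for a reservoir at chemical potential mu; the energy model and all numbers in the example call are illustrative assumptions, not the published implementation.

      import math, random

      def accept_insertion(n, volume, beta, mu, delta_u, lam3):
          """Acceptance test for inserting one ion into the buffer region.
          delta_u is the potential-energy change of the insertion; lam3 is the thermal wavelength cubed."""
          prob = volume / (lam3 * (n + 1)) * math.exp(beta * (mu - delta_u))
          return random.random() < min(1.0, prob)

      def accept_deletion(n, volume, beta, mu, delta_u, lam3):
          """Acceptance test for deleting one ion (delta_u = U_after - U_before)."""
          prob = lam3 * n / volume * math.exp(-beta * (mu + delta_u))
          return random.random() < min(1.0, prob)

      # Example: attempt an insertion into a buffer of 1000 A^3 holding 12 ions, non-interacting case.
      print(accept_insertion(n=12, volume=1000.0, beta=1.0, mu=-3.0, delta_u=0.0, lam3=1.0))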

  3. A parallel algorithm for switch-level timing simulation on a hypercube multiprocessor

    NASA Technical Reports Server (NTRS)

    Rao, Hariprasad Nannapaneni

    1989-01-01

    The parallel approach to speeding up simulation is studied, specifically the simulation of digital LSI MOS circuitry on the Intel iPSC/2 hypercube. The simulation algorithm is based on RSIM, an event driven switch-level simulator that incorporates a linear transistor model for simulating digital MOS circuits. Parallel processing techniques based on the concepts of Virtual Time and rollback are utilized so that portions of the circuit may be simulated on separate processors, in parallel for as large an increase in speed as possible. A partitioning algorithm is also developed in order to subdivide the circuit for parallel processing.

  4. Material growth in thermoelastic continua: Theory, algorithmics, and simulation

    NASA Astrophysics Data System (ADS)

    Vignes, Chet Monroe

    Within the medical community, there has been increasing interest in understanding material growth in biomaterials. Material growth is the capability of a biomaterial to gain or lose mass. This research interest is driven by the host of health implications and medical problems related to this unique biomaterial property. Health providers are keen to understand the role of growth in healing and recovery so that surgical techniques, medical procedures, and physical therapy may be designed and implemented to stimulate healing and minimize recovery time. With this motivation, research seeks to identify and model mechanisms of material growth as well as growth-inducing factors in biomaterials. To this end, a theoretical formulation of stress-induced volumetric material growth in thermoelastic continua is developed. The theory derives, without the classical continuum mechanics assumption of mass conservation, the balance laws governing the mechanics of solids capable of growth. Also, a proposed extension of classical thermodynamic theory provides a foundation for developing general constitutive relations. The theory is consistent in the sense that classical thermoelastic continuum theory is embedded as a special case. Two growth mechanisms, a kinematic and a constitutive contribution, coupled in the most general case of growth, are identified. This identification allows for the commonly employed special cases of density-preserving growth and volume-preserving growth to be easily recovered. In the theory, material growth is regulated by a three-surface activation criterion and corresponding flow rules. A simple model for rate-independent finite growth is proposed based on this formulation. The associated algorithmic implementation, including a method for solving the underlying differential/algebraic equations for growth, is examined in the context of an implicit finite element method. Selected numerical simulations are presented that showcase the predictive capacity of the

  5. D-leaping: Accelerating stochastic simulation algorithms for reactions with delays

    SciTech Connect

    Bayati, Basil; Chatelain, Philippe; Koumoutsakos, Petros

    2009-09-01

    We propose a novel, accelerated algorithm for the approximate stochastic simulation of biochemical systems with delays. The present work extends existing accelerated algorithms by distributing, in a time adaptive fashion, the delayed reactions so as to minimize the computational effort while preserving their accuracy. The accuracy of the present algorithm is assessed by comparing its results to those of the corresponding delay differential equations for a representative biochemical system. In addition, the fluctuations produced from the present algorithm are comparable to those from an exact stochastic simulation with delays. The algorithm is used to simulate biochemical systems that model oscillatory gene expression. The results indicate that the present algorithm is competitive with existing works for several benchmark problems while it is orders of magnitude faster for certain systems of biochemical reactions.
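    The exact reference method that D-leaping accelerates, a stochastic simulation with delayed completions held in a queue, can be sketched briefly. The example below is a generic rejection-style delay SSA for a single delayed conversion reaction with assumed rate constants; it is not the authors' adaptive leaping scheme.

      import heapq
      import numpy as np

      def delay_ssa(x0, rates, delay, t_end, seed=0):
          """Exact SSA with one delayed reaction: A --(k1, delay tau)--> B, plus B --(k2)--> 0.
          Pending delayed completions are held in a min-heap keyed by completion time."""
          rng = np.random.default_rng(seed)
          a_count, b_count = x0
          k1, k2 = rates
          t, pending = 0.0, []
          history = [(t, a_count, b_count)]
          while t < t_end:
              props = np.array([k1 * a_count, k2 * b_count])
              total = props.sum()
              dt = rng.exponential(1.0 / total) if total > 0 else np.inf
              # If a queued delayed product appears before the tentative next reaction, handle it first.
              if pending and pending[0] <= t + dt:
                  t = heapq.heappop(pending)
                  b_count += 1
              elif total == 0:
                  break
              else:
                  t += dt
                  if rng.random() < props[0] / total:      # initiate the delayed conversion of A
                      a_count -= 1
                      heapq.heappush(pending, t + delay)   # product B appears only after the delay
                  else:                                    # immediate degradation of B
                      b_count -= 1
              history.append((t, a_count, b_count))
          return history

      print(delay_ssa((100, 0), rates=(0.5, 0.1), delay=5.0, t_end=50.0)[-1])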

  6. Developments in Human Centered Cueing Algorithms for Control of Flight Simulator Motion Systems

    NASA Technical Reports Server (NTRS)

    Houck, Jacob A.; Telban, Robert J.; Cardullo, Frank M.

    1997-01-01

    The authors conducted further research with cueing algorithms for control of flight simulator motion systems. A variation of the so-called optimal algorithm was formulated using simulated aircraft angular velocity input as a basis. Models of the human vestibular sensation system, i.e. the semicircular canals and otoliths, are incorporated within the algorithm. Comparisons of angular velocity cueing responses showed a significant improvement over a formulation using angular acceleration input. Results also compared favorably with the coordinated adaptive washout algorithm, yielding similar results for angular velocity cues while eliminating false cues and reducing the tilt rate for longitudinal cues. These results were confirmed in piloted tests on the current motion system at NASA-Langley, the Visual Motion Simulator (VMS). Proposed future developments by the authors in cueing algorithms are revealed. The new motion system, the Cockpit Motion Facility (CMF), where the final evaluation of the cueing algorithms will be conducted, is also described.

  7. Application of integration algorithms in a parallel processing environment for the simulation of jet engines

    NASA Technical Reports Server (NTRS)

    Krosel, S. M.; Milner, E. J.

    1982-01-01

    The application of predictor-corrector integration algorithms developed for the digital parallel processing environment is investigated. The algorithms are implemented and evaluated through the use of a software simulator which provides an approximate representation of the parallel processing hardware. Test cases which focus on the use of the algorithms are presented and a specific application using a linear model of a turbofan engine is considered. Results are presented showing the effects of integration step size and the number of processors on simulation accuracy. Real-time performance, interprocessor communication, and algorithm startup are also discussed.

  8. Two-Dimensional Inlet Simulation Using a Diagonal Implicit Algorithm

    NASA Technical Reports Server (NTRS)

    Chaussee, D.S.; Pulliam, T. H.

    1981-01-01

    A modification of an implicit approximate-factorization finite-difference algorithm applied to the two-dimensional Euler and Navier-Stokes equations in general curvilinear coordinates is presented for supersonic freestream flow about and through inlets. The modification transforms the coupled system of equations into an uncoupled diagonal form that requires less computational work. For steady-state applications the resulting diagonal algorithm retains the stability and accuracy characteristics of the original algorithm. Solutions are given for inviscid and laminar flow about a two-dimensional wedge inlet configuration. Comparisons are made between computed results and exact theory.

  9. A fast and efficient algorithm for Slater determinant updates in quantum Monte Carlo simulations.

    PubMed

    Nukala, Phani K V V; Kent, P R C

    2009-05-28

    We present an efficient low-rank updating algorithm for updating the trial wave functions used in quantum Monte Carlo (QMC) simulations. The algorithm is based on low-rank updating of the Slater determinants. In particular, the computational complexity of the algorithm is O(kN) during the kth step compared to traditional algorithms that require O(N^2) computations, where N is the system size. For single determinant trial wave functions the new algorithm is faster than the traditional O(N^2) Sherman-Morrison algorithm for up to O(N) updates. For multideterminant configuration-interaction-type trial wave functions of M+1 determinants, the new algorithm is significantly more efficient, saving both O(MN^2) work and O(MN^2) storage. The algorithm enables more accurate and significantly more efficient QMC calculations using configuration-interaction-type wave functions.
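    The traditional baseline the abstract compares against, the Sherman-Morrison rank-1 update of the inverse Slater matrix after a single-particle move, fits in a few lines of NumPy. This is a generic sketch of that O(N^2)-per-update formula, not the authors' O(kN) low-rank scheme; the test matrices are arbitrary.

      import numpy as np

      def update_inverse_after_row_change(a, a_inv, k, new_row):
          """Sherman-Morrison update of A^-1 when row k of A is replaced by new_row.

          Writes A_new = A + e_k v^T with v = new_row - A[k], so
          A_new^-1 = A^-1 - (A^-1 e_k)(v^T A^-1) / (1 + v^T A^-1 e_k), an O(N^2) update.
          """
          v = new_row - a[k]
          ainv_ek = a_inv[:, k]                  # A^-1 e_k
          vt_ainv = v @ a_inv                    # v^T A^-1
          denom = 1.0 + vt_ainv[k]               # also the determinant ratio det(A_new)/det(A)
          return a_inv - np.outer(ainv_ek, vt_ainv) / denom, denom

      # Quick check against a direct inverse.
      rng = np.random.default_rng(0)
      a = rng.normal(size=(5, 5))
      a_inv = np.linalg.inv(a)
      new_row = rng.normal(size=5)
      a_inv_updated, ratio = update_inverse_after_row_change(a, a_inv, 2, new_row)
      a_new = a.copy(); a_new[2] = new_row
      print(np.allclose(a_inv_updated, np.linalg.inv(a_new)))   # True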

  10. Synchronization Algorithms for Co-Simulation of Power Grid and Communication Networks

    SciTech Connect

    Ciraci, Selim; Daily, Jeffrey A.; Agarwal, Khushbu; Fuller, Jason C.; Marinovici, Laurentiu D.; Fisher, Andrew R.

    2014-09-11

    The ongoing modernization of power grids consists of integrating them with communication networks in order to achieve robust and resilient control of grid operations. To understand the operation of the new smart grid, one approach is to use simulation software. Unfortunately, current power grid simulators at best utilize inadequate approximations to simulate communication networks, if at all. Cooperative simulation of specialized power grid and communication network simulators promises to more accurately reproduce the interactions of real smart grid deployments. However, co-simulation is a challenging problem. A co-simulation must manage the exchange of information, including the synchronization of simulator clocks, between all simulators while maintaining adequate computational performance. This paper describes two new conservative algorithms for reducing the overhead of time synchronization, namely Active Set Conservative and Reactive Conservative. We provide a detailed analysis of their performance characteristics with respect to the current state of the art including both conservative and optimistic synchronization algorithms. In addition, we provide guidelines for selecting the appropriate synchronization algorithm based on the requirements of the co-simulation. The newly proposed algorithms are shown to achieve as much as 14% and 63% improvement, respectively, over the existing conservative algorithm.

  11. Simulating Future GPS Clock Scenarios with Two Composite Clock Algorithms

    DTIC Science & Technology

    2010-11-01

    (Alexandria, Virginia), pp. 223-242. [8] C. A. Greenhall, 2007, "A Kalman filter clock ensemble algorithm that admits measurement noise," Metrologia, 43, S311-S321. [9] J. A. Davis, C. A. Greenhall, and P. W. Stacey, 2005, "A Kalman filter clock algorithm for use in the presence of flicker frequency modulation noise," Metrologia, 42, 1-10.

  12. An optimization algorithm for multipath parallel allocation for service resource in the simulation task workflow.

    PubMed

    Wang, Zhiteng; Zhang, Hongjun; Zhang, Rui; Li, Yong; Zhang, Xuliang

    2014-01-01

    Service-oriented modeling and simulation are active research topics in the field, and service resources must be called while a simulation task workflow is running. Optimizing the allocation of these service resources so that the task completes effectively is therefore an important issue in this area. In the military modeling and simulation field, it is important to improve both the probability of success and the timeliness of the simulation task workflow. This paper therefore proposes an optimization algorithm for multipath parallel allocation of service resources, in which a multipath parallel allocation model is built and a multiple-chain coding quantum optimization algorithm is used to solve it. The multiple-chain coding scheme extends the parallel search space to improve search efficiency. Through simulation experiments, the paper investigates how the choice of optimization algorithm, service allocation strategy, and number of paths affects the probability of success of the simulation task workflow; the results show that the proposed multipath parallel allocation algorithm is an effective way to improve both the probability of success and the timeliness of the simulation task workflow.

  13. Improved delay-leaping simulation algorithm for biochemical reaction systems with delays

    NASA Astrophysics Data System (ADS)

    Yi, Na; Zhuang, Gang; Da, Liang; Wang, Yifei

    2012-04-01

    In biochemical reaction systems dominated by delays, the simulation speed of the stochastic simulation algorithm depends on the size of the wait queue. As a result, it is important to control the size of the wait queue to improve the efficiency of the simulation. An improved accelerated delay stochastic simulation algorithm for biochemical reaction systems with delays, termed the improved delay-leaping algorithm, is proposed in this paper. The update method for the wait queue is effective in reducing the size of the queue as well as shortening the storage and access time, thereby accelerating the simulation. Numerical simulations on two examples indicate that this method not only achieves significantly higher efficiency than existing methods but can also be applied widely to biochemical reaction systems with delays.

  14. Quantum Annealing at Google: Recent Learnings and Next Steps

    NASA Astrophysics Data System (ADS)

    Neven, Hartmut

    Recently we studied optimization problems with rugged energy landscapes that featured tall and narrow energy barriers separating energy minima. We found that for a crafted problem of this kind, called the weak-strong cluster glass, the D-Wave 2X processor achieves a significant advantage in runtime scaling relative to Simulated Annealing (SA). For instances with 945 variables this results in a time-to-99%-success-probability 10^9 times shorter than SA running on a single core. When comparing to the Quantum Monte Carlo (QMC) algorithm we only observe a pre-factor advantage but the pre-factor is large, about 10^6 for an implementation on a single core. We should note that we expect QMC to scale like physical quantum annealing only for problems for which the tunneling transitions can be described by a dominant purely imaginary instanton. We expect these findings to carry over to other problems with similar energy landscapes. A class of practical interest are k-th order binary optimization problems. We studied 4-spin problems using numerical methods and found again that simulated quantum annealing has better scaling than SA. This leaves us with a final step to achieve a wall clock speedup of practical relevance. We need to develop an annealing architecture that supports embedding of k-th order binary optimization in a manner that preserves the runtime advantage seen prior to embedding.

  15. Thermoluminescence curves simulation using genetic algorithm with factorial design

    NASA Astrophysics Data System (ADS)

    Popko, E. A.; Weinstein, I. A.

    2016-05-01

    The evolutionary approach is an effective optimization tool for the numerical analysis of thermoluminescence (TL) processes, allowing the microparameters of kinetic models to be assessed and their effects on the shape of TL peaks to be determined. In this paper, a procedure for tuning a genetic algorithm (GA) is presented. The approach is based on a multifactorial experiment and allows the evolutionary operators that give the most efficient algorithm performance to be chosen. The proposed method is tested on the "one trap-one recombination center" (OTOR) model as an example, and its advantages for approximating experimental TL curves are shown.
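    A genetic algorithm of the kind being tuned here, with selectable crossover and mutation operators, fits in a short script. The sketch below is a generic real-coded GA minimizing a placeholder misfit; the OTOR kinetic model, the factorial-design tuning procedure, and all operator settings are outside its scope and are assumptions made only for illustration.

      import numpy as np

      def genetic_algorithm(fitness, bounds, pop_size=40, generations=200,
                            crossover_rate=0.9, mutation_rate=0.1, seed=0):
          """Minimal real-coded GA: tournament selection, blend crossover, Gaussian mutation, elitism."""
          rng = np.random.default_rng(seed)
          lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
          pop = rng.uniform(lo, hi, size=(pop_size, lo.size))
          for _ in range(generations):
              scores = np.array([fitness(ind) for ind in pop])
              new_pop = [pop[scores.argmin()].copy()]            # elitism: keep the best individual
              while len(new_pop) < pop_size:
                  i, j = rng.integers(pop_size, size=2), rng.integers(pop_size, size=2)
                  p1 = pop[i[0]] if scores[i[0]] < scores[i[1]] else pop[i[1]]   # tournament selection
                  p2 = pop[j[0]] if scores[j[0]] < scores[j[1]] else pop[j[1]]
                  child = p1.copy()
                  if rng.random() < crossover_rate:               # blend (arithmetic) crossover
                      w = rng.random(lo.size)
                      child = w * p1 + (1 - w) * p2
                  if rng.random() < mutation_rate:                # Gaussian mutation
                      child += rng.normal(0, 0.1 * (hi - lo), size=lo.size)
                  new_pop.append(np.clip(child, lo, hi))
              pop = np.array(new_pop)
          scores = np.array([fitness(ind) for ind in pop])
          return pop[scores.argmin()], scores.min()

      # Placeholder objective: recover a known parameter vector.
      target = np.array([1.2, 0.4, 2.5])
      best, err = genetic_algorithm(lambda x: float(np.sum((x - target) ** 2)),
                                    bounds=([0, 0, 0], [5, 5, 5]))
      print(best, err)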

  16. Cross-comparison of three electromyogram decomposition algorithms assessed with experimental and simulated data.

    PubMed

    Dai, Chenyun; Li, Yejin; Christie, Anita; Bonato, Paolo; McGill, Kevin C; Clancy, Edward A

    2015-01-01

    The reliability of clinical and scientific information provided by algorithms that automatically decompose the electromyogram (EMG) depends on the algorithms' accuracies. We used experimental and simulated data to assess the agreement and accuracy of three publicly available decomposition algorithms: EMGlab (McGill, 2005) (single channel data only), Fuzzy Expert (Erim and Lim, 2008), and Montreal (Florestal, 2009). Data consisted of quadrifilar needle EMGs from the tibialis anterior of 12 subjects at 10%, 20% and 50% maximum voluntary contraction (MVC); single channel needle EMGs from the biceps brachii of 10 controls and 10 patients during contractions just above threshold; and matched simulated data. Performance was assessed via agreement between pairs of algorithms for experimental data and accuracy with respect to the known decomposition for simulated data. For the quadrifilar experimental data, median agreements between the Montreal and Fuzzy Expert algorithms at 10%, 20%, and 50% MVC were 95%, 86%, and 64%, respectively. For the single channel control and patient data, median agreements between the three algorithm pairs were statistically similar at ∼97% and ∼92%, respectively. Accuracy on the simulated data exceeded this performance. Agreement/accuracy was strongly related to the Decomposability Index (Florestal, 2009). When agreement was high between algorithm pairs applied to simulated data, so was accuracy.

  17. Temporal Gillespie Algorithm: Fast Simulation of Contagion Processes on Time-Varying Networks

    PubMed Central

    Vestergaard, Christian L.; Génois, Mathieu

    2015-01-01

    Stochastic simulations are one of the cornerstones of the analysis of dynamical processes on complex networks, and are often the only accessible way to explore their behavior. The development of fast algorithms is paramount to allow large-scale simulations. The Gillespie algorithm can be used for fast simulation of stochastic processes, and variants of it have been applied to simulate dynamical processes on static networks. However, its adaptation to temporal networks remains non-trivial. We here present a temporal Gillespie algorithm that solves this problem. Our method is applicable to general Poisson (constant-rate) processes on temporal networks, stochastically exact, and up to multiple orders of magnitude faster than traditional simulation schemes based on rejection sampling. We also show how it can be extended to simulate non-Markovian processes. The algorithm is easily applicable in practice, and as an illustration we detail how to simulate both Poissonian and non-Markovian models of epidemic spreading. Namely, we provide pseudocode and its implementation in C++ for simulating the paradigmatic Susceptible-Infected-Susceptible and Susceptible-Infected-Recovered models and a Susceptible-Infected-Recovered model with non-constant recovery rates. For empirical networks, the temporal Gillespie algorithm is here typically from 10 to 100 times faster than rejection sampling. PMID:26517860
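    The baseline that the temporal Gillespie algorithm generalizes is the standard direct-method Gillespie simulation of a contagion process on a fixed contact network. The sketch below shows that static-network SIS case only; the time-varying edge lists, the rejection-sampling comparison, and the non-Markovian extensions of the paper are not modeled, and the toy network and rates are assumptions.

      import numpy as np

      def gillespie_sis(adj, beta, mu, infected0, t_end, seed=0):
          """Direct-method Gillespie for SIS spreading on a static network."""
          rng = np.random.default_rng(seed)
          n = len(adj)
          infected = np.zeros(n, dtype=bool)
          infected[list(infected0)] = True
          t, times, prevalence = 0.0, [0.0], [int(infected.sum())]
          while t < t_end and infected.any():
              # Per-node rates: recovery mu if infected, beta * (# infected neighbours) if susceptible.
              inf_neighbours = adj @ infected
              rates = np.where(infected, mu, beta * inf_neighbours)
              total = rates.sum()
              if total == 0:
                  break
              t += rng.exponential(1.0 / total)                 # waiting time to the next event
              node = rng.choice(n, p=rates / total)             # which node changes state
              infected[node] = ~infected[node]
              times.append(t); prevalence.append(int(infected.sum()))
          return times, prevalence

      # Tiny example: a 4-node ring with one initial infection.
      adj = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], dtype=float)
      print(gillespie_sis(adj, beta=0.8, mu=0.5, infected0=[0], t_end=10.0)[1])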

  18. Temporal Gillespie Algorithm: Fast Simulation of Contagion Processes on Time-Varying Networks.

    PubMed

    Vestergaard, Christian L; Génois, Mathieu

    2015-10-01

    Stochastic simulations are one of the cornerstones of the analysis of dynamical processes on complex networks, and are often the only accessible way to explore their behavior. The development of fast algorithms is paramount to allow large-scale simulations. The Gillespie algorithm can be used for fast simulation of stochastic processes, and variants of it have been applied to simulate dynamical processes on static networks. However, its adaptation to temporal networks remains non-trivial. We here present a temporal Gillespie algorithm that solves this problem. Our method is applicable to general Poisson (constant-rate) processes on temporal networks, stochastically exact, and up to multiple orders of magnitude faster than traditional simulation schemes based on rejection sampling. We also show how it can be extended to simulate non-Markovian processes. The algorithm is easily applicable in practice, and as an illustration we detail how to simulate both Poissonian and non-Markovian models of epidemic spreading. Namely, we provide pseudocode and its implementation in C++ for simulating the paradigmatic Susceptible-Infected-Susceptible and Susceptible-Infected-Recovered models and a Susceptible-Infected-Recovered model with non-constant recovery rates. For empirical networks, the temporal Gillespie algorithm is here typically from 10 to 100 times faster than rejection sampling.

  19. Annealing Ant Colony Optimization with Mutation Operator for Solving TSP.

    PubMed

    Mohsen, Abdulqader M

    2016-01-01

    Ant Colony Optimization (ACO) has been successfully applied to solve a wide range of combinatorial optimization problems such as minimum spanning tree, traveling salesman problem, and quadratic assignment problem. Basic ACO has the drawbacks of becoming trapped in local minima and a low convergence rate. Simulated annealing (SA) and the mutation operator provide the ability to jump out of local minima and global convergence, while local search speeds up convergence. Therefore, this paper proposes a hybrid ACO algorithm integrating the advantages of ACO, SA, the mutation operator, and a local search procedure to solve the traveling salesman problem. The core of the algorithm is based on ACO. SA and the mutation operator are used to increase the ant population's diversity from time to time, and local search is used to exploit the current search area efficiently. The comparative experiments, using 24 TSP instances from TSPLIB, show that the proposed algorithm outperformed some well-known algorithms in the literature in terms of solution quality.
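    The simulated-annealing ingredient of such a hybrid is easy to isolate: accept 2-opt moves on the tour with a Metropolis criterion and a decreasing temperature. The sketch below shows only that SA/local-search piece on random city coordinates; the pheromone (ACO) and mutation machinery of the published hybrid is not included, and the cooling constants are arbitrary.

      import numpy as np

      def tour_length(tour, dist):
          return dist[tour, np.roll(tour, -1)].sum()

      def sa_two_opt(dist, t0=1.0, cooling=0.999, iters=20000, seed=0):
          """Simulated annealing over 2-opt moves (segment reversals) on a TSP tour."""
          rng = np.random.default_rng(seed)
          n = len(dist)
          tour = rng.permutation(n)
          length = tour_length(tour, dist)
          temp = t0
          for _ in range(iters):
              i, j = sorted(rng.choice(n, size=2, replace=False))
              cand = tour.copy()
              cand[i:j + 1] = cand[i:j + 1][::-1]               # reverse a segment (2-opt move)
              cand_len = tour_length(cand, dist)
              if cand_len <= length or rng.random() < np.exp(-(cand_len - length) / temp):
                  tour, length = cand, cand_len                 # Metropolis acceptance
              temp *= cooling                                   # geometric cooling
          return tour, length

      cities = np.random.default_rng(1).random((30, 2))
      dist = np.linalg.norm(cities[:, None] - cities[None, :], axis=-1)
      print(sa_two_opt(dist)[1])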

  20. Annealing Ant Colony Optimization with Mutation Operator for Solving TSP

    PubMed Central

    2016-01-01

    Ant Colony Optimization (ACO) has been successfully applied to solve a wide range of combinatorial optimization problems such as minimum spanning tree, traveling salesman problem, and quadratic assignment problem. Basic ACO has the drawbacks of becoming trapped in local minima and a low convergence rate. Simulated annealing (SA) and the mutation operator provide the ability to jump out of local minima and global convergence, while local search speeds up convergence. Therefore, this paper proposes a hybrid ACO algorithm integrating the advantages of ACO, SA, the mutation operator, and a local search procedure to solve the traveling salesman problem. The core of the algorithm is based on ACO. SA and the mutation operator are used to increase the ant population's diversity from time to time, and local search is used to exploit the current search area efficiently. The comparative experiments, using 24 TSP instances from TSPLIB, show that the proposed algorithm outperformed some well-known algorithms in the literature in terms of solution quality. PMID:27999590

  1. DESIGNING SUSTAINABLE PROCESSES WITH SIMULATION: THE WASTE REDUCTION (WAR) ALGORITHM

    EPA Science Inventory

    The WAR Algorithm, a methodology for determining the potential environmental impact (PEI) of a chemical process, is presented with modifications that account for the PEI of the energy consumed within that process. From this theory, four PEI indexes are used to evaluate the envir...

  2. A Linac Simulation Code for Macro-Particles Tracking and Steering Algorithm Implementation

    SciTech Connect

    Sun, Yipeng

    2012-05-03

    In this paper, a linac simulation code written in Fortran90 is presented and several simulation examples are given. This code is optimized to implement linac alignment and steering algorithms, and evaluate the accelerator errors such as RF phase and acceleration gradient, quadrupole and BPM misalignment. It can track a single particle or a bunch of particles through normal linear accelerator elements such as quadrupole, RF cavity, dipole corrector and drift space. One-to-one steering algorithm and a global alignment (steering) algorithm are implemented in this code.

  3. A Modified Shake Algorithm for Maintaining Rigid Bonds in Molecular Dynamics Simulations of Large Molecules

    NASA Astrophysics Data System (ADS)

    Lambrakos, S. G.; Boris, J. P.; Oran, E. S.; Chandrasekhar, I.; Nagumo, M.

    1989-12-01

    We present a new modification of the SHAKE algorithm, MSHAKE, that maintains fixed distances in molecular dynamics simulations of polyatomic molecules. The MSHAKE algorithm, which is applied by modifying the leapfrog algorithm to include forces of constraint, computes an initial estimate of constraint forces, then iteratively corrects the constraint forces required to maintain the fixed distances. Thus MSHAKE should always converge more rapidly than SHAKE. Further, the explicit determination of the constraint forces at each timestep makes MSHAKE convenient for use in molecular dynamics simulations where bond stress is a significant dynamical quantity.
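    The underlying SHAKE step, iteratively correcting particle positions until every constrained distance is satisfied to a tolerance, can be sketched compactly. The example below is a textbook-style SHAKE position correction for pairwise bond-length constraints with unit masses, not the MSHAKE constraint-force variant described in the abstract; the two-particle test case is an assumption.

      import numpy as np

      def shake(positions, old_positions, bonds, lengths, tol=1e-8, max_iter=100):
          """Iteratively adjust positions so each bonded pair (i, j) sits at its fixed length.
          Classic SHAKE with equal (unit) masses; corrections act along the pre-move bond vector."""
          pos = positions.copy()
          for _ in range(max_iter):
              converged = True
              for (i, j), d0 in zip(bonds, lengths):
                  r = pos[i] - pos[j]
                  diff = r @ r - d0**2                      # constraint violation
                  if abs(diff) > tol:
                      converged = False
                      r_old = old_positions[i] - old_positions[j]
                      g = diff / (4.0 * (r @ r_old))        # Lagrange-multiplier-like correction
                      pos[i] -= g * r_old
                      pos[j] += g * r_old
              if converged:
                  break
          return pos

      old = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
      moved = old + np.array([[0.0, 0.1, 0.0], [0.05, -0.02, 0.0]])        # unconstrained update
      corrected = shake(moved, old, bonds=[(0, 1)], lengths=[1.0])
      print(np.linalg.norm(corrected[0] - corrected[1]))                    # ~1.0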

  4. Forced detection Monte Carlo algorithms for accelerated blood vessel image simulations.

    PubMed

    Fredriksson, Ingemar; Larsson, Marcus; Strömberg, Tomas

    2009-03-01

    Two forced detection (FD) variance reduction Monte Carlo algorithms for image simulations of tissue-embedded objects with matched refractive index are presented. The principle of the algorithms is to force a fraction of the photon weight to the detector at each and every scattering event. The fractional weight is given by the probability for the photon to reach the detector without further interactions. Two imaging setups are applied to a tissue model including blood vessels, where the FD algorithms produce identical results as traditional brute force simulations, while being accelerated with two orders of magnitude. Extending the methods to include refraction mismatches is discussed.
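    The core of the forced-detection idea is a per-event contribution: at every scattering event a fraction of the photon weight, equal to the probability of scattering toward the detector and reaching it without further interaction, is scored, and the photon then continues its random walk. The snippet below sketches that bookkeeping for a single photon in a homogeneous medium with an idealized point detector; the isotropic phase function, attenuation coefficient, albedo, and geometry are simplifying assumptions, not the published imaging setups.

      import numpy as np

      def random_direction(rng):
          """Uniform random unit vector (isotropic scattering assumed)."""
          v = rng.normal(size=3)
          return v / np.linalg.norm(v)

      def forced_detection_photon(detector, mu_t=1.0, albedo=0.9, n_events=50, seed=0):
          """Track one photon; at every scattering event score the weight fraction that would
          reach the detector without further interaction (forced detection)."""
          rng = np.random.default_rng(seed)
          phase = 1.0 / (4.0 * np.pi)                          # isotropic phase function per steradian
          pos, weight, detected = np.zeros(3), 1.0, 0.0
          for _ in range(n_events):
              pos = pos + rng.exponential(1.0 / mu_t) * random_direction(rng)   # free flight
              distance = np.linalg.norm(detector - pos)
              detected += weight * phase * np.exp(-mu_t * distance)             # forced contribution
              weight *= albedo                                                   # absorption at the event
              if weight < 1e-6:
                  break
          return detected

      print(forced_detection_photon(detector=np.array([0.0, 0.0, 5.0])))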

  5. Adaptive Sampling Algorithms for Probabilistic Risk Assessment of Nuclear Simulations

    SciTech Connect

    Diego Mandelli; Dan Maljovec; Bei Wang; Valerio Pascucci; Peer-Timo Bremer

    2013-09-01

    Nuclear simulations are often computationally expensive, time-consuming, and high-dimensional with respect to the number of input parameters. Thus exploring the space of all possible simulation outcomes is infeasible using finite computing resources. During simulation-based probabilistic risk analysis, it is important to discover the relationship between a potentially large number of input parameters and the output of a simulation using as few simulation trials as possible. This is a typical context for performing adaptive sampling where a few observations are obtained from the simulation, a surrogate model is built to represent the simulation space, and new samples are selected based on the model constructed. The surrogate model is then updated based on the simulation results of the sampled points. In this way, we attempt to gain the most information possible with a small number of carefully selected sampled points, limiting the number of expensive trials needed to understand features of the simulation space. We analyze the specific use case of identifying the limit surface, i.e., the boundaries in the simulation space between system failure and system success. In this study, we explore several techniques for adaptively sampling the parameter space in order to reconstruct the limit surface. We focus on several adaptive sampling schemes. First, we seek to learn a global model of the entire simulation space using prediction models or neighborhood graphs and extract the limit surface as an iso-surface of the global model. Second, we estimate the limit surface by sampling in the neighborhood of the current estimate based on topological segmentations obtained locally. Our techniques draw inspiration from a topological structure known as the Morse-Smale complex. We highlight the advantages and disadvantages of using a global prediction model versus local topological view of the simulation space, comparing several different strategies for adaptive sampling in both

  6. Stochastic simulation for imaging spatial uncertainty: Comparison and evaluation of available algorithms

    SciTech Connect

    Gotway, C.A.; Rutherford, B.M.

    1993-09-01

    Stochastic simulation has been suggested as a viable method for characterizing the uncertainty associated with the prediction of a nonlinear function of a spatially-varying parameter. Geostatistical simulation algorithms generate realizations of a random field with specified statistical and geostatistical properties. A nonlinear function is evaluated over each realization to obtain an uncertainty distribution of a system response that reflects the spatial variability and uncertainty in the parameter. Crucial management decisions, such as potential regulatory compliance of proposed nuclear waste facilities and optimal allocation of resources in environmental remediation, are based on the resulting system response uncertainty distribution. Many geostatistical simulation algorithms have been developed to generate the random fields, and each algorithm will produce fields with different statistical properties. These different properties will result in different distributions for system response, and potentially, different managerial decisions. The statistical properties of the resulting system response distributions are not completely understood, nor is the ability of the various algorithms to generate response distributions that adequately reflect the associated uncertainty. This paper reviews several of the algorithms available for generating random fields. Algorithms are compared in a designed experiment using seven exhaustive data sets with different statistical and geostatistical properties. For each exhaustive data set, a number of realizations are generated using each simulation algorithm. The realizations are used with each of several deterministic transfer functions to produce a cumulative uncertainty distribution function of a system response. The uncertainty distributions are then compared to the single value obtained from the corresponding exhaustive data set.

  7. A novel wavefront-based algorithm for numerical simulation of quasi-optical systems

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaoling; Lou, Zheng; Hu, Jie; Zhou, Kangmin; Zuo, Yingxi; Shi, Shengcai

    2016-11-01

    A novel wavefront-based algorithm for the beam simulation of both reflective and refractive optics in a complicated quasi-optical system is proposed. The algorithm can be regarded as an extension of the conventional Physical Optics algorithm to handle dielectrics. Internal reflections are modeled in an accurate fashion, and coatings and lossy materials can be treated in a straightforward manner. A parallel implementation of the algorithm has been developed, and numerical examples show that the algorithm yields sufficient accuracy when compared with experimental results, while its computational complexity is much lower than that of full-wave methods. The algorithm offers an alternative approach to the modeling of quasi-optical systems in addition to Geometrical Optics modeling and full-wave methods.

  8. On the rejection-based algorithm for simulation and analysis of large-scale reaction networks

    SciTech Connect

    Thanh, Vo Hong; Zunino, Roberto; Priami, Corrado

    2015-06-28

    Stochastic simulation for in silico studies of large biochemical networks requires a great amount of computational time. We recently proposed a new exact simulation algorithm, called the rejection-based stochastic simulation algorithm (RSSA) [Thanh et al., J. Chem. Phys. 141(13), 134116 (2014)], to improve simulation performance by postponing and collapsing as much as possible the propensity updates. In this paper, we analyze the performance of this algorithm in detail, and improve it for simulating large-scale biochemical reaction networks. We also present a new algorithm, called simultaneous RSSA (SRSSA), which generates many independent trajectories simultaneously for the analysis of the biochemical behavior. SRSSA improves simulation performance by utilizing a single data structure across simulations to select reaction firings and forming trajectories. The memory requirement for building and storing the data structure is thus independent of the number of trajectories. The updating of the data structure when needed is performed collectively in a single operation across the simulations. The trajectories generated by SRSSA are exact and independent of each other by exploiting the rejection-based mechanism. We test our new improvement on real biological systems with a wide range of reaction networks to demonstrate its applicability and efficiency.
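    The rejection idea at the heart of RSSA is to select candidate reactions from precomputed propensity upper bounds and evaluate the exact propensity only when a candidate needs validating, which is what lets propensity updates be postponed. The sketch below shows that selection/validation loop with static bounds and conservative lower bounds of zero; fluctuation intervals, bound refitting, and the SRSSA multi-trajectory data structure of the paper are not modeled, and the two-reaction example is an assumption.

      import numpy as np

      def rssa_step(state, t, reactions, lower, upper, rng):
          """One rejection-based SSA step.
          reactions: list of (propensity_fn, state_change); lower/upper: propensity bounds."""
          total_upper = upper.sum()
          while True:
              t += rng.exponential(1.0 / total_upper)            # clock advances on every trial
              j = rng.choice(len(reactions), p=upper / total_upper)
              u = rng.random() * upper[j]
              if u <= lower[j]:                                  # accepted without an exact evaluation
                  break
              if u <= reactions[j][0](state):                    # exact propensity only when needed
                  break
              # otherwise rejected: keep the advanced clock and draw again
          return state + reactions[j][1], t

      # Example: A -> B (rate 1.0*A) and B -> A (rate 0.5*B); A + B = 60 is conserved.
      rng = np.random.default_rng(0)
      state, t = np.array([50, 10]), 0.0
      reactions = [(lambda s: 1.0 * s[0], np.array([-1, 1])),
                   (lambda s: 0.5 * s[1], np.array([1, -1]))]
      lower = np.array([0.0, 0.0])                               # trivially valid lower bounds
      upper = np.array([1.0 * 60, 0.5 * 60])                     # valid since populations stay <= 60
      for _ in range(5):
          state, t = rssa_step(state, t, reactions, lower, upper, rng)
      print(state, t)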

  9. Sensitivity Simulation of Compressed Sensing Based Electronic Warfare Receiver Using Orthogonal Matching Pursuit Algorithm

    DTIC Science & Technology

    2016-02-01

    Report AFRL-RY-WP-TR-2016-0006, August 2014. The wideband coverage of the traditional fast Fourier transform (FFT)-based electronic warfare

  10. On the rejection-based algorithm for simulation and analysis of large-scale reaction networks

    NASA Astrophysics Data System (ADS)

    Thanh, Vo Hong; Zunino, Roberto; Priami, Corrado

    2015-06-01

    Stochastic simulation for in silico studies of large biochemical networks requires a great amount of computational time. We recently proposed a new exact simulation algorithm, called the rejection-based stochastic simulation algorithm (RSSA) [Thanh et al., J. Chem. Phys. 141(13), 134116 (2014)], to improve simulation performance by postponing and collapsing as much as possible the propensity updates. In this paper, we analyze the performance of this algorithm in detail, and improve it for simulating large-scale biochemical reaction networks. We also present a new algorithm, called simultaneous RSSA (SRSSA), which generates many independent trajectories simultaneously for the analysis of the biochemical behavior. SRSSA improves simulation performance by utilizing a single data structure across simulations to select reaction firings and forming trajectories. The memory requirement for building and storing the data structure is thus independent of the number of trajectories. The updating of the data structure when needed is performed collectively in a single operation across the simulations. The trajectories generated by SRSSA are exact and independent of each other by exploiting the rejection-based mechanism. We test our new improvement on real biological systems with a wide range of reaction networks to demonstrate its applicability and efficiency.

  11. Thermal Performance Simulation of MWNT/NR composites Based on Levenberg-Marquardt Algorithm

    NASA Astrophysics Data System (ADS)

    Yu, Z. Z.; Liu, J. S.

    2017-02-01

    In this paper, the Levenberg-Marquardt algorithm was used to simulate the thermal performance of an aligned carbon-nanotube-filled rubber composite, and the effects of temperature, filler amount, MWNT orientation, and other factors on thermal performance were studied. The results showed that MWNT orientation can greatly improve the thermal conductivity of the composite, with overall orientation giving a larger improvement than local orientation. Thermal conductivity increased with increasing volume fraction, while temperature had no significant effect on it. The simulation results correlated well with experimental results, showing that the simulation algorithm is effective and feasible.

  12. A global optimization algorithm for simulation-based problems via the extended DIRECT scheme

    NASA Astrophysics Data System (ADS)

    Liu, Haitao; Xu, Shengli; Wang, Xiaofang; Wu, Junnan; Song, Yang

    2015-11-01

    This article presents a global optimization algorithm via the extension of the DIviding RECTangles (DIRECT) scheme to handle problems with computationally expensive simulations efficiently. The new optimization strategy improves the regular partition scheme of DIRECT to a flexible irregular partition scheme in order to utilize information from irregular points. The metamodelling technique is introduced to work with the flexible partition scheme to speed up the convergence, which is meaningful for simulation-based problems. Comparative results on eight representative benchmark problems and an engineering application with some existing global optimization algorithms indicate that the proposed global optimization strategy is promising for simulation-based problems in terms of efficiency and accuracy.

  13. Room Acoustical Simulation Algorithm Based on the Free Path Distribution

    NASA Astrophysics Data System (ADS)

    VORLÄNDER, M.

    2000-04-01

    A new algorithm is presented which provides estimates of impulse responses in rooms. It is applicable to arbitrarily shaped rooms, including non-diffuse spaces such as workrooms or offices. In the latter cases, for instance, sound propagation curves of interest for noise control can be obtained. For concert halls and opera houses, the method enables very fast predictions of room acoustical criteria such as reverberation time, strength, or clarity. The method is based on low-resolution ray tracing with recording of the free paths. Estimates of impulse responses are derived from the free path distribution and the free path transition probabilities.

  14. The control algorithm improving performance of electric load simulator

    NASA Astrophysics Data System (ADS)

    Guo, Chenxia; Yang, Ruifeng; Zhang, Peng; Fu, Mengyao

    2017-01-01

    In order to improve the dynamic performance and signal-tracking accuracy of an electric load simulator, this paper analyzes the influence of the moment of inertia, stiffness, friction, gaps, and other factors on system performance, based on a study of the working principle of the load simulator. A PID controller based on a wavelet neural network was used to compensate for the friction nonlinearity, while a gap inverse model was used to compensate for the gap nonlinearity. The compensation results were simulated in MATLAB. It was shown that after compensation the tracking of the sine response improved, the tracking error was significantly reduced, and the accuracy and dynamic performance of the system were greatly improved.

  15. A fast algorithm for the simulation of arterial pulse waves

    NASA Astrophysics Data System (ADS)

    Du, Tao; Hu, Dan; Cai, David

    2016-06-01

    One-dimensional models have been widely used in studies of the propagation of blood pulse waves in large arterial trees. Under a periodic driving of the heartbeat, traditional numerical methods, such as the Lax-Wendroff method, are employed to obtain asymptotic periodic solutions at large times. However, these methods are severely constrained by the CFL condition due to large pulse wave speed. In this work, we develop a new numerical algorithm to overcome this constraint. First, we reformulate the model system of pulse wave propagation using a set of Riemann variables and derive a new form of boundary conditions at the inlet, the outlets, and the bifurcation points of the arterial tree. The new form of the boundary conditions enables us to design a convergent iterative method to enforce the boundary conditions. Then, after exchanging the spatial and temporal coordinates of the model system, we apply the Lax-Wendroff method in the exchanged coordinate system, which turns the large pulse wave speed from a liability to a benefit, to solve the wave equation in each artery of the model arterial system. Our numerical studies show that our new algorithm is stable and can perform ∼15 times faster than the traditional implementation of the Lax-Wendroff method under the requirement that the relative numerical error of blood pressure be smaller than one percent, which is much smaller than the modeling error.
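    The Lax-Wendroff discretization referenced here is a standard second-order explicit scheme; for the scalar advection equation u_t + a u_x = 0 it reads u_i^{n+1} = u_i^n - (c/2)(u_{i+1} - u_{i-1}) + (c^2/2)(u_{i+1} - 2 u_i + u_{i-1}) with c = a dt/dx. The sketch below applies it to that scalar model problem on a periodic grid, not to the Riemann-variable arterial system or the coordinate-exchange trick of the paper.

      import numpy as np

      def lax_wendroff_advection(u0, a, dx, dt, steps):
          """Second-order Lax-Wendroff scheme for u_t + a u_x = 0 on a periodic grid.
          Stability requires the CFL condition |a| * dt / dx <= 1."""
          c = a * dt / dx
          u = u0.copy()
          for _ in range(steps):
              up, um = np.roll(u, -1), np.roll(u, 1)             # u_{i+1}, u_{i-1} (periodic)
              u = u - 0.5 * c * (up - um) + 0.5 * c**2 * (up - 2 * u + um)
          return u

      x = np.linspace(0.0, 1.0, 200, endpoint=False)
      u0 = np.exp(-200 * (x - 0.3) ** 2)                          # Gaussian pulse
      u = lax_wendroff_advection(u0, a=1.0, dx=x[1] - x[0], dt=0.8 * (x[1] - x[0]), steps=100)
      print(float(u.max()))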

  16. An extended molecular statics algorithm simulating the electromechanical continuum response of ferroelectric materials

    NASA Astrophysics Data System (ADS)

    Endres, F.; Steinmann, P.

    2014-12-01

    Molecular dynamics (MD) simulations of ferroelectric materials have improved tremendously over the last few decades. Specifically, the core-shell model has been commonly used for the simulation of ferroelectric materials such as barium titanate. However, due to the computational costs of MD, the calculation of ferroelectric hysteresis behaviour, and especially the stress-strain relation, has been a computationally intense task. In this work a molecular statics algorithm, similar to a finite element method for nonlinear trusses, has been implemented. From this, an algorithm to calculate the stress dependent continuum deformation of a discrete particle system, such as a ferroelectric crystal, has been devised. Molecular statics algorithms for the atomistic simulation of ferroelectric materials have been previously described. However, in contrast to the prior literature the algorithm proposed in this work is also capable of effectively computing the macroscopic ferroelectric butterfly hysteresis behaviour. Therefore the advocated algorithm is able to calculate the piezoelectric effect as well as the converse piezoelectric effect simultaneously on atomistic and continuum length scales. Barium titanate has been simulated using the core-shell model to validate the developed algorithm.

  17. A fast 3D image simulation algorithm of moving target for scanning laser radar

    NASA Astrophysics Data System (ADS)

    Li, Jicheng; Shi, Zhiguang; Chen, Xiao; Chen, Dong

    2014-10-01

    Scanning Laser Radar has been widely used in many military and civil areas. Usually there are relative movements between the target and the radar, so the moving target image modeling and simulation is an important research content in the field of signal processing and system design of scan-imaging laser radar. In order to improve the simulation speed and hold the accuracy of the image simulation simultaneously, a novel fast simulation algorithm is proposed in this paper. Firstly, for moving target or varying scene, an inequation that can judge the intersection relations between the pixel and target bins is obtained by deriving the projection of target motion trajectories on the image plane. Then, by utilizing the time subdivision and approximate treatments, the potential intersection relations of pixel and target bins are determined. Finally, the goal of reducing the number of intersection operations could be achieved by testing all the potential relations and finding which of them is real intersection. To test the method's performance, we perform computer simulations of both the new proposed algorithm and a literature's algorithm for six targets. The simulation results show that the two algorithm yield the same imaging result, whereas the number of intersection operations of former is equivalent to only 1% of the latter, and the calculation efficiency increases a hundredfold. The novel simulation acceleration idea can be applied extensively in other more complex application environments and provide equally acceleration effect. It is very suitable for the case to produce a great large number of laser radar images.

  18. Interactive Computational Algorithms for Acoustic Simulation in Complex Environments

    DTIC Science & Technology

    2015-07-19

    simulation for urban and other complex propagation environments. The PIs will also collaborate with Stephen Ketcham and Keith Wilson at USACE and...Albert, Keith Wilson, Dinesh Manocha. Validation of 3D numerical simulation for acoustic pulse propagation in an urban environment, The Journal of

  19. A process-based algorithm for simulating terraces in SWAT

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Terraces in crop fields are one of the most important soil and water conservation measures that affect runoff and erosion processes in a watershed. In large hydrological programs such as the Soil and Water Assessment Tool (SWAT), terrace effects are simulated by adjusting the slope length and the US...

  20. Simulating Multivariate Nonnormal Data Using an Iterative Algorithm

    ERIC Educational Resources Information Center

    Ruscio, John; Kaczetow, Walter

    2008-01-01

    Simulating multivariate nonnormal data with specified correlation matrices is difficult. One especially popular method is Vale and Maurelli's (1983) extension of Fleishman's (1978) polynomial transformation technique to multivariate applications. This requires the specification of distributional moments and the calculation of an intermediate…

  1. An Event-Driven Hybrid Molecular Dynamics and Direct Simulation Monte Carlo Algorithm

    SciTech Connect

    Donev, A; Garcia, A L; Alder, B J

    2007-07-30

    A novel algorithm is developed for the simulation of polymer chains suspended in a solvent. The polymers are represented as chains of hard spheres tethered by square wells and interact with the solvent particles with hard core potentials. The algorithm uses event-driven molecular dynamics (MD) for the simulation of the polymer chain and the interactions between the chain beads and the surrounding solvent particles. The interactions between the solvent particles themselves are not treated deterministically as in event-driven algorithms; rather, the momentum and energy exchange in the solvent is determined stochastically using the Direct Simulation Monte Carlo (DSMC) method. The coupling between the solvent and the solute is consistently represented at the particle level; however, unlike full MD simulations of both the solvent and the solute, the spatial structure of the solvent is ignored. The algorithm is described in detail and applied to the study of the dynamics of a polymer chain tethered to a hard wall subjected to uniform shear. The algorithm closely reproduces full MD simulations with two orders of magnitude greater efficiency. Results do not confirm the existence of periodic (cycling) motion of the polymer chain.

  2. Parallel Algorithms for Monte Carlo Particle Transport Simulation on Exascale Computing Architectures

    NASA Astrophysics Data System (ADS)

    Romano, Paul Kollath

    Monte Carlo particle transport methods are being considered as a viable option for high-fidelity simulation of nuclear reactors. While Monte Carlo methods offer several potential advantages over deterministic methods, there are a number of algorithmic shortcomings that would prevent their immediate adoption for full-core analyses. In this thesis, algorithms are proposed both to ameliorate the degradation in parallel efficiency typically observed for large numbers of processors and to offer a means of decomposing large tally data that will be needed for reactor analysis. A nearest-neighbor fission bank algorithm was proposed and subsequently implemented in the OpenMC Monte Carlo code. A theoretical analysis of the communication pattern shows that the expected cost is O(√N) whereas traditional fission bank algorithms are O(N) at best. The algorithm was tested on two supercomputers, the Intrepid Blue Gene/P and the Titan Cray XK7, and demonstrated nearly linear parallel scaling up to 163,840 processor cores on a full-core benchmark problem. An algorithm for reducing network communication arising from tally reduction was analyzed and implemented in OpenMC. The proposed algorithm groups only particle histories on a single processor into batches for tally purposes---in doing so it prevents all network communication for tallies until the very end of the simulation. The algorithm was tested, again on a full-core benchmark, and shown to reduce network communication substantially. A model was developed to predict the impact of load imbalances on the performance of domain decomposed simulations. The analysis demonstrated that load imbalances in domain decomposed simulations arise from two distinct phenomena: non-uniform particle densities and non-uniform spatial leakage. The dominant performance penalty for domain decomposition was shown to come from these physical effects rather than insufficient network bandwidth or high latency. The model predictions were verified with

  3. First application of quantum annealing to IMRT beamlet intensity optimization.

    PubMed

    Nazareth, Daryl P; Spaans, Jason D

    2015-05-21

    Optimization methods are critical to radiation therapy. A new technology, quantum annealing (QA), employs novel hardware and software techniques to address various discrete optimization problems in many fields. We report on the first application of quantum annealing to the process of beamlet intensity optimization for IMRT. We apply recently-developed hardware which natively exploits quantum mechanical effects for improved optimization. The new algorithm, called QA, is most similar to simulated annealing, but relies on natural processes to directly minimize a system's free energy. A simple quantum system is slowly evolved into a classical system representing the objective function. If the evolution is sufficiently slow, there are probabilistic guarantees that a global minimum will be located. To apply QA to IMRT-type optimization, two prostate cases were considered. A reduced number of beamlets were employed, due to the current QA hardware limitations. The beamlet dose matrices were computed using CERR and an objective function was defined based on typical clinical constraints, including dose-volume objectives, which result in a complex non-convex search space. The objective function was discretized and the QA method was compared to two standard optimization methods, simulated annealing and Tabu search, run on a conventional computing cluster. Based on several runs, the average final objective function value achieved by the QA was 16.9 for the first patient, compared with 10.0 for Tabu and 6.7 for the simulated annealing (SA) method. For the second patient, the values were 70.7 for the QA, 120.0 for Tabu and 22.9 for the SA. The QA algorithm required 27-38% of the time required by the other two methods. In this first application of hardware-enabled QA to IMRT optimization, its performance is comparable to Tabu search, but less effective than the SA in terms of final objective function values. However, its speed was 3-4 times faster than the other two methods. This

  4. Scheduling Earth Observing Satellites with Evolutionary Algorithms

    NASA Technical Reports Server (NTRS)

    Globus, Al; Crawford, James; Lohn, Jason; Pryor, Anna

    2003-01-01

    We hypothesize that evolutionary algorithms can effectively schedule coordinated fleets of Earth observing satellites. The constraints are complex and the bottlenecks are not well understood, a condition where evolutionary algorithms are often effective. This is, in part, because evolutionary algorithms require only that one can represent solutions, modify solutions, and evaluate solution fitness. To test the hypothesis we have developed a representative set of problems, produced optimization software (in Java) to solve them, and run experiments comparing techniques. This paper presents initial results of a comparison of several evolutionary and other optimization techniques; namely the genetic algorithm, simulated annealing, squeaky wheel optimization, and stochastic hill climbing. We also compare separate satellite vs. integrated scheduling of a two satellite constellation. While the results are not definitive, tests to date suggest that simulated annealing is the best search technique and integrated scheduling is superior.

  5. Advanced time integration algorithms for dislocation dynamics simulations of work hardening

    NASA Astrophysics Data System (ADS)

    Sills, Ryan B.; Aghaei, Amin; Cai, Wei

    2016-05-01

    Efficient time integration is a necessity for dislocation dynamics simulations of work hardening to achieve experimentally relevant strains. In this work, an efficient time integration scheme using a high order explicit method with time step subcycling and a newly-developed collision detection algorithm are evaluated. First, time integrator performance is examined for an annihilating Frank-Read source, showing the effects of dislocation line collision. The integrator with subcycling is found to significantly out-perform other integration schemes. The performance of the time integration and collision detection algorithms is then tested in a work hardening simulation. The new algorithms show a 100-fold speed-up relative to traditional schemes. Subcycling is shown to improve efficiency significantly while maintaining an accurate solution, and the new collision algorithm allows an arbitrarily large time step size without missing collisions.

  6. Advanced time integration algorithms for dislocation dynamics simulations of work hardening

    DOE PAGES

    Sills, Ryan B.; Aghaei, Amin; Cai, Wei

    2016-04-25

    Efficient time integration is a necessity for dislocation dynamics simulations of work hardening to achieve experimentally relevant strains. In this work, an efficient time integration scheme using a high order explicit method with time step subcycling and a newly-developed collision detection algorithm are evaluated. First, time integrator performance is examined for an annihilating Frank–Read source, showing the effects of dislocation line collision. The integrator with subcycling is found to significantly out-perform other integration schemes. The performance of the time integration and collision detection algorithms is then tested in a work hardening simulation. The new algorithms show a 100-fold speed-up relative to traditional schemes. As a result, subcycling is shown to improve efficiency significantly while maintaining an accurate solution, and the new collision algorithm allows an arbitrarily large time step size without missing collisions.

  7. Advanced time integration algorithms for dislocation dynamics simulations of work hardening

    SciTech Connect

    Sills, Ryan B.; Aghaei, Amin; Cai, Wei

    2016-04-25

    Efficient time integration is a necessity for dislocation dynamics simulations of work hardening to achieve experimentally relevant strains. In this work, an efficient time integration scheme using a high order explicit method with time step subcycling and a newly-developed collision detection algorithm are evaluated. First, time integrator performance is examined for an annihilating Frank–Read source, showing the effects of dislocation line collision. The integrator with subcycling is found to significantly out-perform other integration schemes. The performance of the time integration and collision detection algorithms is then tested in a work hardening simulation. The new algorithms show a 100-fold speed-up relative to traditional schemes. As a result, subcycling is shown to improve efficiency significantly while maintaining an accurate solution, and the new collision algorithm allows an arbitrarily large time step size without missing collisions.

  8. Parallel algorithms for simulating continuous time Markov chains

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Heidelberger, Philip

    1992-01-01

    We have previously shown that the mathematical technique of uniformization can serve as the basis of synchronization for the parallel simulation of continuous-time Markov chains. This paper reviews the basic method and compares five different methods based on uniformization, evaluating their strengths and weaknesses as a function of problem characteristics. The methods vary in their use of optimism, logical aggregation, communication management, and adaptivity. Performance evaluation is conducted on the Intel Touchstone Delta multiprocessor, using up to 256 processors.
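
    For readers unfamiliar with uniformization, the following minimal sketch (serial, not the parallel synchronization scheme studied in the paper; the rate-matrix representation is an assumption) shows how a continuous-time Markov chain can be simulated by embedding it in a Poisson process with a single uniform rate:

```python
import random

def uniformized_ctmc_path(q_matrix, x0, t_end, rng=random.Random(0)):
    """Simulate one path of a continuous-time Markov chain via uniformization.

    q_matrix[i][j] is the transition rate from state i to state j (i != j).
    A uniform rate `lam` bounds every exit rate; events arrive as a Poisson
    process of rate `lam`, and each event is either a real jump (with
    probability exit_rate/lam) or a pseudo self-transition.
    """
    n = len(q_matrix)
    exit_rates = [sum(q_matrix[i][j] for j in range(n) if j != i) for i in range(n)]
    lam = max(exit_rates)
    path = [(0.0, x0)]
    if lam <= 0.0:
        return path  # every state is absorbing
    t, x = 0.0, x0
    while True:
        t += rng.expovariate(lam)                 # next potential event
        if t >= t_end:
            break
        if rng.random() < exit_rates[x] / lam:    # real jump
            r = rng.random() * exit_rates[x]
            acc = 0.0
            for j in range(n):
                if j == x:
                    continue
                acc += q_matrix[x][j]
                if r <= acc:
                    x = j
                    break
            path.append((t, x))
        # otherwise: pseudo event, the state is unchanged
    return path
```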

  9. Direct dynamics simulations using Hessian-based predictor-corrector integration algorithms.

    PubMed

    Lourderaj, Upakarasamy; Song, Kihyung; Windus, Theresa L; Zhuang, Yu; Hase, William L

    2007-01-28

In previous research [J. Chem. Phys. 111, 3800 (1999)] a Hessian-based integration algorithm was derived for performing direct dynamics simulations. In the work presented here, improvements to this algorithm are described. The algorithm has a predictor step based on a local second-order Taylor expansion of the potential in Cartesian coordinates, within a trust radius, and a fifth-order correction to this predicted trajectory. The current algorithm determines the predicted trajectory in Cartesian coordinates, instead of the instantaneous normal mode coordinates used previously, to ensure angular momentum conservation. For the previous algorithm the corrected step was evaluated in rotated Cartesian coordinates. Since the local potential expanded in Cartesian coordinates is not invariant to rotation, the constants of motion are not necessarily conserved during the corrector step. An approximate correction to this shortcoming was made by projecting translation and rotation out of the rotated coordinates. For the current algorithm unrotated Cartesian coordinates are used for the corrected step to assure the constants of motion are conserved. An algorithm is proposed for updating the trust radius to enhance the accuracy and efficiency of the numerical integration. This modified Hessian-based integration algorithm, with its new components, has been implemented into the VENUS/NWChem software package and compared with the velocity-Verlet algorithm for the H₂CO→H₂+CO, O₃+C₃H₆, and F⁻+CH₃OOH chemical reactions.

  10. Developing Subdomain Allocation Algorithms Based on Spatial and Communicational Constraints to Accelerate Dust Storm Simulation

    PubMed Central

    Gui, Zhipeng; Yu, Manzhu; Yang, Chaowei; Jiang, Yunfeng; Chen, Songqing; Xia, Jizhe; Huang, Qunying; Liu, Kai; Li, Zhenlong; Hassan, Mohammed Anowarul; Jin, Baoxuan

    2016-01-01

Dust storms have serious, disastrous impacts on the environment, human health, and assets. The development and application of dust storm models have contributed significantly to better understanding and predicting the distribution, intensity, and structure of dust storms. However, dust storm simulation is a data- and computing-intensive process. To improve computing performance, high performance computing has been widely adopted by dividing the entire study area into multiple subdomains and allocating each subdomain to different computing nodes in a parallel fashion. Inappropriate allocation may introduce imbalanced task loads and unnecessary communications among computing nodes. Therefore, allocation is a key factor that may impact the efficiency of the parallel process. An allocation algorithm is expected to consider the computing cost and communication cost for each computing node to minimize total execution time and reduce overall communication cost for the entire simulation. This research introduces three algorithms to optimize the allocation by considering the spatial and communicational constraints: 1) an Integer Linear Programming (ILP) based algorithm from a combinatorial optimization perspective; 2) a K-Means and Kernighan-Lin combined heuristic algorithm (K&K) integrating geometric and coordinate-free methods by merging local and global partitioning; 3) an automatic seeded region growing based geometric and local partitioning algorithm (ASRG). The performance and effectiveness of the three algorithms are compared based on different factors. Further, we adopt the K&K algorithm as the demonstration algorithm for the experiment of dust model simulation with the non-hydrostatic mesoscale model (NMM-dust) and compare its performance with the MPI default sequential allocation. The results demonstrate that the K&K method significantly improves the simulation performance with better subdomain allocation. This method can also be adopted for other relevant atmospheric and numerical

  11. Predicting patchy particle crystals: Variable box shape simulations and evolutionary algorithms

    NASA Astrophysics Data System (ADS)

    Bianchi, Emanuela; Doppelbauer, Günther; Filion, Laura; Dijkstra, Marjolein; Kahl, Gerhard

    2012-06-01

We consider several patchy particle models that have been proposed in the literature and we investigate their candidate crystal structures in a systematic way. We compare two different algorithms for predicting crystal structures: (i) an approach based on Monte Carlo simulations in the isobaric-isothermal ensemble and (ii) an optimization technique based on ideas of evolutionary algorithms. We show that the two methods are equally successful and provide consistent results on crystalline phases of patchy particle systems.

  12. Developing Subdomain Allocation Algorithms Based on Spatial and Communicational Constraints to Accelerate Dust Storm Simulation.

    PubMed

    Gui, Zhipeng; Yu, Manzhu; Yang, Chaowei; Jiang, Yunfeng; Chen, Songqing; Xia, Jizhe; Huang, Qunying; Liu, Kai; Li, Zhenlong; Hassan, Mohammed Anowarul; Jin, Baoxuan

    2016-01-01

Dust storms have serious, disastrous impacts on the environment, human health, and assets. The development and application of dust storm models have contributed significantly to better understanding and predicting the distribution, intensity, and structure of dust storms. However, dust storm simulation is a data- and computing-intensive process. To improve computing performance, high performance computing has been widely adopted by dividing the entire study area into multiple subdomains and allocating each subdomain to different computing nodes in a parallel fashion. Inappropriate allocation may introduce imbalanced task loads and unnecessary communications among computing nodes. Therefore, allocation is a key factor that may impact the efficiency of the parallel process. An allocation algorithm is expected to consider the computing cost and communication cost for each computing node to minimize total execution time and reduce overall communication cost for the entire simulation. This research introduces three algorithms to optimize the allocation by considering the spatial and communicational constraints: 1) an Integer Linear Programming (ILP) based algorithm from a combinatorial optimization perspective; 2) a K-Means and Kernighan-Lin combined heuristic algorithm (K&K) integrating geometric and coordinate-free methods by merging local and global partitioning; 3) an automatic seeded region growing based geometric and local partitioning algorithm (ASRG). The performance and effectiveness of the three algorithms are compared based on different factors. Further, we adopt the K&K algorithm as the demonstration algorithm for the experiment of dust model simulation with the non-hydrostatic mesoscale model (NMM-dust) and compare its performance with the MPI default sequential allocation. The results demonstrate that the K&K method significantly improves the simulation performance with better subdomain allocation. This method can also be adopted for other relevant atmospheric and numerical

  13. Confidence range estimate of extended source imagery acquisition algorithms via computer simulations. [in optical communication systems

    NASA Technical Reports Server (NTRS)

Chen, Chien-C.; Hui, Elliot; Okamoto, Garret

    1992-01-01

Spatial acquisition using the sun-lit Earth as a beacon source provides several advantages over active beacon-based systems for deep-space optical communication systems. However, since the angular extent of the Earth image is large compared to the laser beam divergence, the acquisition subsystem must be capable of resolving the image to derive the proper pointing orientation. The algorithms used must be capable of deducing the receiver location given the blurring introduced by the imaging optics and the large Earth albedo fluctuation. Furthermore, because of the complexity of modelling the Earth and the tracking algorithms, an accurate estimate of the algorithm accuracy can only be made via simulation using realistic Earth images. An image simulator was constructed for this purpose, and the results of the simulation runs are reported.

  14. Multi-Rate Digital Control Systems with Simulation Applications. Volume II. Computer Algorithms

    DTIC Science & Technology

    1980-09-01

AFWAL-TR-80-3101, Volume II: Multi-Rate Digital Control Systems with Simulation Applications, Computer Algorithms. The analytical basis for the computer algorithms is discussed in Ref. 12. However, to provide a complete description of the program, some

  15. Parallel implementation of the FETI-DPEM algorithm for general 3D EM simulations

    NASA Astrophysics Data System (ADS)

    Li, Yu-Jia; Jin, Jian-Ming

    2009-05-01

    A parallel implementation of the electromagnetic dual-primal finite element tearing and interconnecting algorithm (FETI-DPEM) is designed for general three-dimensional (3D) electromagnetic large-scale simulations. As a domain decomposition implementation of the finite element method, the FETI-DPEM algorithm provides fully decoupled subdomain problems and an excellent numerical scalability, and thus is well suited for parallel computation. The parallel implementation of the FETI-DPEM algorithm on a distributed-memory system using the message passing interface (MPI) is discussed in detail along with a few practical guidelines obtained from numerical experiments. Numerical examples are provided to demonstrate the efficiency of the parallel implementation.

  16. A syncopated leap-frog algorithm for orbit consistent plasma simulation of materials processing reactors

    SciTech Connect

    Cobb, J.W.; Leboeuf, J.N.

    1994-10-01

    The authors present a particle algorithm to extend simulation capabilities for plasma based materials processing reactors. The orbit integrator uses a syncopated leap-frog algorithm in cylindrical coordinates, which maintains second order accuracy, and minimizes computational complexity. Plasma source terms are accumulated orbit consistently directly in the frequency and azimuthal mode domains. Finally they discuss the numerical analysis of this algorithm. Orbit consistency greatly reduces the computational cost for a given level of precision. The computational cost is independent of the degree of time scale separation.
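
    The syncopated scheme builds on the standard second-order leap-frog integrator; a minimal Cartesian kick-drift-kick sketch (not the cylindrical, frequency-domain variant described in the abstract) is:

```python
def leapfrog_step(x, v, accel, dt):
    """One kick-drift-kick leap-frog step (second order).

    `x` and `v` are coordinate/velocity sequences and `accel(x)` returns the
    acceleration; this is the plain Cartesian scheme, not the syncopated
    cylindrical-coordinate algorithm of the paper.
    """
    a0 = accel(x)
    v_half = [vi + 0.5 * dt * ai for vi, ai in zip(v, a0)]        # half kick
    x_new = [xi + dt * vhi for xi, vhi in zip(x, v_half)]         # drift
    a1 = accel(x_new)
    v_new = [vhi + 0.5 * dt * ai for vhi, ai in zip(v_half, a1)]  # half kick
    return x_new, v_new
```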

  17. Simulating chemical energies to high precision with fully-scalable quantum algorithms on superconducting qubits

    NASA Astrophysics Data System (ADS)

    O'Malley, Peter; Babbush, Ryan; Kivlichan, Ian; Romero, Jhonathan; McClean, Jarrod; Tranter, Andrew; Barends, Rami; Kelly, Julian; Chen, Yu; Chen, Zijun; Jeffrey, Evan; Fowler, Austin; Megrant, Anthony; Mutus, Josh; Neill, Charles; Quintana, Christopher; Roushan, Pedram; Sank, Daniel; Vainsencher, Amit; Wenner, James; White, Theodore; Love, Peter; Aspuru-Guzik, Alan; Neven, Hartmut; Martinis, John

Quantum simulations of molecules have the potential to calculate industrially-important chemical parameters beyond the reach of classical methods with relatively modest quantum resources. Recent years have seen dramatic progress in both superconducting qubits and quantum chemistry algorithms. Here, we present experimental demonstrations of two fully-scalable algorithms for finding the dissociation energy of hydrogen: the variational quantum eigensolver and iterative phase estimation. This represents the first calculation of a dissociation energy to chemical accuracy with a non-precompiled algorithm. These results show the promise of chemistry as the ``killer app'' for quantum computers, even before the advent of full error-correction.

  18. Multiscale stochastic simulation algorithm with stochastic partial equilibrium assumption for chemically reacting systems

    SciTech Connect

Cao, Yang (E-mail: ycao@cs.ucsb.edu); Gillespie, Dan (E-mail: GillespieDT@mailaps.org); Petzold, Linda (E-mail: petzold@engineering.ucsb.edu)

    2005-07-01

In this paper, we introduce a multiscale stochastic simulation algorithm (MSSA) which makes use of Gillespie's stochastic simulation algorithm (SSA) together with a new stochastic formulation of the partial equilibrium assumption (PEA). This method is much more efficient than SSA alone. It works even with a very small population of fast species. Implementation details are discussed, and an application to the modeling of the heat shock response of E. coli is presented which demonstrates the excellent efficiency and accuracy obtained with the new method.
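
    MSSA builds on Gillespie's direct-method SSA; a minimal sketch of that underlying exact algorithm (the state, stoichiometry, and propensity representations here are illustrative assumptions) is:

```python
import random

def gillespie_ssa(x0, stoich, propensity_fns, t_end, rng=random.Random(1)):
    """Gillespie's direct-method stochastic simulation algorithm.

    `stoich[r]` is the state-change vector of reaction r and
    `propensity_fns[r](x)` its propensity at state x.
    """
    x = list(x0)
    t = 0.0
    history = [(t, tuple(x))]
    while t < t_end:
        props = [a(x) for a in propensity_fns]
        a0 = sum(props)
        if a0 == 0.0:
            break                        # no reaction can fire
        t += rng.expovariate(a0)         # time to the next reaction
        r, target = 0, rng.random() * a0
        acc = props[0]
        while acc < target:              # pick the reaction channel
            r += 1
            acc += props[r]
        x = [xi + si for xi, si in zip(x, stoich[r])]
        history.append((t, tuple(x)))
    return history
```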

  19. Turning Simulation into Estimation: Generalized Exchange Algorithms for Exponential Family Models

    PubMed Central

    Maris, Gunter; Bechger, Timo; Glas, Cees

    2017-01-01

The Single Variable Exchange algorithm is based on a simple idea: any model that can be simulated can be estimated by producing draws from the posterior distribution. We build on this simple idea by framing the Exchange algorithm as a mixture of Metropolis transition kernels and propose strategies that automatically select the more efficient transition kernels. In this manner we achieve significant improvements in convergence rate and autocorrelation of the Markov chain without relying on more than being able to simulate from the model. Our focus is on statistical models in the Exponential Family, and we use two simple models from educational measurement to illustrate the contribution. PMID:28076429

  20. Aggressively Parallel Algorithms of Collision and Nearest Neighbor Detection for GPU Planetesimal Disk Simulation

    NASA Astrophysics Data System (ADS)

    Quillen, Alice C.; Moore, A.

    2008-09-01

Planetesimal and dust dynamical simulations require collision and nearest neighbor detection. A brute force implementation for sorting interparticle distances requires O(N²) computations for N particles, limiting the numbers of particles that have been simulated. Parallel algorithms recently developed for the GPU (graphics processing unit), such as the radix sort, can run as fast as O(N) and sort distances between a million particles in a few hundred milliseconds. We introduce improvements in collision and nearest neighbor detection algorithms and how we have incorporated them into our efficient parallel 2nd order democratic heliocentric method symplectic integrator written in NVIDIA's CUDA for the GPU.
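
    The core idea, sorting particles by cell key so that candidate collision pairs come only from the same or adjacent cells, can be sketched serially as follows (a 2-D CPU illustration, not the CUDA radix-sort implementation described above):

```python
def neighbor_candidates(positions, cell_size):
    """Sort-based cell binning for collision/nearest-neighbor candidate pairs (2-D).

    Each particle is mapped to an integer cell key, the (key, index) pairs are
    sorted, and candidate pairs are drawn only from the same or adjacent cells.
    The sort is O(N log N) here; a GPU radix sort makes it effectively O(N).
    """
    keyed = sorted(
        (tuple(int(c // cell_size) for c in p), i) for i, p in enumerate(positions)
    )
    cells = {}
    for key, idx in keyed:                 # contiguous runs of equal keys
        cells.setdefault(key, []).append(idx)
    pairs = set()
    for (cx, cy), members in cells.items():
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for j in cells.get((cx + dx, cy + dy), []):
                    for i in members:
                        if i < j:
                            pairs.add((i, j))
    return pairs
```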

  1. A sweep algorithm for massively parallel simulation of circuit-switched networks

    NASA Technical Reports Server (NTRS)

    Gaujal, Bruno; Greenberg, Albert G.; Nicol, David M.

    1992-01-01

    A new massively parallel algorithm is presented for simulating large asymmetric circuit-switched networks, controlled by a randomized-routing policy that includes trunk-reservation. A single instruction multiple data (SIMD) implementation is described, and corresponding experiments on a 16384 processor MasPar parallel computer are reported. A multiple instruction multiple data (MIMD) implementation is also described, and corresponding experiments on an Intel IPSC/860 parallel computer, using 16 processors, are reported. By exploiting parallelism, our algorithm increases the possible execution rate of such complex simulations by as much as an order of magnitude.

  2. Simulation of Biochemical Pathway Adaptability Using Evolutionary Algorithms

    SciTech Connect

    Bosl, W J

    2005-01-26

    The systems approach to genomics seeks quantitative and predictive descriptions of cells and organisms. However, both the theoretical and experimental methods necessary for such studies still need to be developed. We are far from understanding even the simplest collective behavior of biomolecules, cells or organisms. A key aspect to all biological problems, including environmental microbiology, evolution of infectious diseases, and the adaptation of cancer cells is the evolvability of genomes. This is particularly important for Genomes to Life missions, which tend to focus on the prospect of engineering microorganisms to achieve desired goals in environmental remediation and climate change mitigation, and energy production. All of these will require quantitative tools for understanding the evolvability of organisms. Laboratory biodefense goals will need quantitative tools for predicting complicated host-pathogen interactions and finding counter-measures. In this project, we seek to develop methods to simulate how external and internal signals cause the genetic apparatus to adapt and organize to produce complex biochemical systems to achieve survival. This project is specifically directed toward building a computational methodology for simulating the adaptability of genomes. This project investigated the feasibility of using a novel quantitative approach to studying the adaptability of genomes and biochemical pathways. This effort was intended to be the preliminary part of a larger, long-term effort between key leaders in computational and systems biology at Harvard University and LLNL, with Dr. Bosl as the lead PI. Scientific goals for the long-term project include the development and testing of new hypotheses to explain the observed adaptability of yeast biochemical pathways when the myosin-II gene is deleted and the development of a novel data-driven evolutionary computation as a way to connect exploratory computational simulation with hypothesis

  3. Simulated annealing with restrained molecular dynamics using CONGEN: energy refinement of the NMR solution structures of epidermal and type-alpha transforming growth factors.

    PubMed Central

    Tejero, R.; Bassolino-Klimas, D.; Bruccoleri, R. E.; Montelione, G. T.

    1996-01-01

    The new functionality of the program CONGEN (Bruccoleri RE, Karplus M, 1987, Biopolymers 26:137-168; Bassolino-Klimas D et al., 1996, Protein Sci 5:593-603) has been applied for energy refinement of two previously determined solution NMR structures, murine epidermal growth factor (mEGF) and human type-alpha transforming growth factor (hTGF alpha). A summary of considerations used in converting experimental NMR data into distance constraints for CONGEN is presented. A general protocol for simulated annealing with restrained molecular dynamics is applied to generate NMR solution structures using CONGEN together with real experimental NMR data. A total of 730 NMR-derived constraints for mEGF and 424 NMR-derived constraints for hTGF alpha were used in these energy-refinement calculations. Different weighting schemes and starting conformations were studied to check and/or improve the sampling of the low-energy conformational space that is consistent with all constraints. The results demonstrate that loosened (i.e., "relaxed") sets of the EGF and hTGF alpha internuclear distance constraints allow molecules to overcome local minima in the search for a global minimum with respect to both distance restraints and conformational energy. The resulting energy-refined structures of mEGF and hTGF alpha are compared with structures determined previously and with structures of homologous proteins determined by NMR and X-ray crystallography. PMID:8845748

  4. Improvement of bio-corrosion resistance for Ti42Zr40Si15Ta3 metallic glasses in simulated body fluid by annealing within supercooled liquid region.

    PubMed

    Huang, C H; Lai, J J; Wei, T Y; Chen, Y H; Wang, X; Kuan, S Y; Huang, J C

    2015-01-01

The effects of the nanocrystalline phases on the bio-corrosion behavior of highly bio-friendly Ti42Zr40Si15Ta3 metallic glasses in simulated body fluid were investigated, and the findings are compared with our previous observations from the Zr53Cu30Ni9Al8 metallic glasses. The Ti42Zr40Si15Ta3 metallic glasses were annealed at temperatures above the glass transition temperature, Tg, for different time periods to produce different degrees of α-Ti nano-phases in the amorphous matrix. The nanocrystallized Ti42Zr40Si15Ta3 metallic glasses containing corrosion-resistant α-Ti phases exhibited more promising bio-corrosion resistance, due to their superior pitting resistance. This is distinctly different from the previous case of the Zr53Cu30Ni9Al8 metallic glasses, in which the reactive Zr2Cu phases induced serious galvanic corrosion and lowered the bio-corrosion resistance. Thus, whether a fully amorphous or a partially crystallized metallic glass exhibits better bio-corrosion resistance depends on the nature of the crystallized phase.

  5. Foam flooding reservoir simulation algorithm improvement and application

    NASA Astrophysics Data System (ADS)

    Wang, Yining; Wu, Xiaodong; Wang, Ruihe; Lai, Fengpeng; Zhang, Hanhan

    2014-05-01

As an important enhanced oil recovery (EOR) technology, foam flooding is being used more and more widely in oil field development. To describe and predict foam flooding, researchers at home and abroad have established a number of mathematical models of foam flooding (mechanistic, empirical, and semi-empirical models). Empirical models require less data and are convenient to apply, but their accuracy is limited. The aggregate equilibrium model can describe foam generation, bursting, and coalescence mechanistically, but it is very difficult to parameterize accurately. This research considers the effects of critical water saturation, critical foaming-agent concentration, and critical oil saturation on the sealing ability of foam, as well as the effect of oil saturation on the resistance factor used to obtain the gas-phase relative permeability; the results were calibrated against laboratory tests, so the accuracy is higher. Conceptual reservoir development simulations and practical field applications show that the resulting calculations are more accurate.

  6. Simulated performance of remote sensing ocean colour algorithms during the 1996 PRIME cruise

    NASA Astrophysics Data System (ADS)

    Westbrook, A. G.; Pinkerton, M. H.; Aiken, J.; Pilgrim, D. A.

Coincident pigment and underwater radiometric data were collected during a cruise along the 20°W meridian from 60°N to 37°N in the north-eastern Atlantic Ocean as part of the Natural Environment Research Council (NERC) thematic programme: plankton reactivity in the marine environment (PRIME). These data were used to simulate the retrieval of two bio-optical variables from remotely sensed measurements of ocean colour (for example by the NASA Sea-viewing wide field-of-view sensor, SeaWiFS), using two-band semi-empirical algorithms. The variables considered were the diffuse attenuation coefficient at 490 nm (Kd(490), units: m⁻¹) and the phytoplankton pigment concentration expressed as optically-weighted chlorophyll-a concentration [Ca, units: mg m⁻³]. There was good agreement between the measured and the retrieved bio-optical values. Algorithms based on the PRIME data were generated to compare the performance of local algorithms (algorithms which apply to a restricted area and/or season) with global algorithms (algorithms developed on data from a wide variety of water masses). The use of local algorithms improved the average accuracy, but not the precision, of the retrievals: errors were still ±36% (Kd) and ±117% (Ca) using local algorithms.
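
    A two-band semi-empirical algorithm of the kind evaluated here is essentially a power law of a band ratio; a minimal sketch (the coefficients below are placeholders, not the PRIME or SeaWiFS values) is:

```python
import math

def two_band_retrieval(r_blue, r_green, a=0.07, b=-1.5):
    """Generic two-band semi-empirical ocean-colour retrieval.

    Returns a bio-optical quantity (e.g., Kd(490) or pigment concentration) as
    a power law of a blue/green band ratio; the coefficients a and b are
    placeholders, not values fitted to the PRIME or SeaWiFS data.
    """
    return a * math.pow(r_blue / r_green, b)
```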

  7. Ground return signal simulation and retrieval algorithm of spaceborne integrated path DIAL for CO2 measurements

    NASA Astrophysics Data System (ADS)

    Liu, Bing-Yi; Wang, Jun-Yang; Liu, Zhi-Shen

    2014-11-01

Spaceborne integrated path differential absorption (IPDA) lidar is an active detection system able to perform global CO2 measurements with a high accuracy of 1 ppmv, day and night, over ground and clouds. To evaluate the detection performance of the system, simulation of the ground return signal and a retrieval algorithm for CO2 concentration are presented in this paper. Ground return signals of the spaceborne IPDA lidar under various ground surface reflectivities and atmospheric aerosol optical depths are simulated using given system parameters, standard atmosphere profiles, and the HITRAN database, and can be used as a reference for determining system parameters. The simulated signals are further applied to research on the retrieval algorithm for CO2 concentration. The column-weighted dry-air mixing ratio of CO2, denoted XCO2, is obtained. Since the deviations of XCO2 between the initial values used for simulation and the results of the retrieval algorithm are within the expected error ranges, the simulation and retrieval algorithm are shown to be reliable.

  8. An adaptive multi-level simulation algorithm for stochastic biological systems.

    PubMed

    Lester, C; Yates, C A; Giles, M B; Baker, R E

    2015-01-14

Discrete-state, continuous-time Markov models are widely used in the modeling of biochemical reaction networks. Their complexity often precludes analytic solution, and we rely on stochastic simulation algorithms (SSA) to estimate system statistics. The Gillespie algorithm is exact, but computationally costly as it simulates every single reaction. As such, approximate stochastic simulation algorithms such as the tau-leap algorithm are often used. Although potentially more efficient computationally, the system statistics they generate suffer from significant bias unless tau is relatively small, in which case the computational time can be comparable to that of the Gillespie algorithm. The multi-level method [Anderson and Higham, "Multi-level Monte Carlo for continuous time Markov chains, with applications in biochemical kinetics," SIAM Multiscale Model. Simul. 10(1), 146-179 (2012)] tackles this problem. A base estimator is computed using many (cheap) sample paths at low accuracy. The bias inherent in this estimator is then reduced using a number of corrections. Each correction term is estimated using a collection of paired sample paths where one path of each pair is generated at a higher accuracy compared to the other (and so more expensive). By sharing random variables between these paired paths, the variance of each correction estimator can be reduced. This renders the multi-level method very efficient as only a relatively small number of paired paths are required to calculate each correction term. In the original multi-level method, each sample path is simulated using the tau-leap algorithm with a fixed value of τ. This approach can result in poor performance when the reaction activity of a system changes substantially over the timescale of interest. By introducing a novel adaptive time-stepping approach where τ is chosen according to the stochastic behaviour of each sample path, we extend the applicability of the multi-level method to such cases. We demonstrate the
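
    For context, a minimal fixed-step tau-leap sketch, the base method that the adaptive multi-level approach above refines by choosing τ per sample path, might look like this (state and propensity representations are illustrative assumptions):

```python
import math
import random

def poisson(mean, rng):
    """Poisson sample via Knuth's method (fine for the small means used here)."""
    limit, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def tau_leap(x0, stoich, propensity_fns, t_end, tau, rng=random.Random(2)):
    """Fixed-step tau-leaping over [0, t_end].

    In each leap the propensities are frozen and reaction r fires a
    Poisson(a_r(x) * tau) number of times.
    """
    x = list(x0)
    t = 0.0
    while t < t_end:
        props = [a(x) for a in propensity_fns]
        for r, a_r in enumerate(props):
            k = poisson(a_r * tau, rng)
            x = [xi + k * si for xi, si in zip(x, stoich[r])]
        x = [max(xi, 0) for xi in x]   # crude guard against negative populations
        t += tau
    return x
```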

  9. An optimization method of relativistic backward wave oscillator using particle simulation and genetic algorithms

    NASA Astrophysics Data System (ADS)

    Chen, Zaigao; Wang, Jianguo; Wang, Yue; Qiao, Hailiang; Zhang, Dianhui; Guo, Weijie

    2013-11-01

An optimal design method for high-power microwave sources using particle simulation and parallel genetic algorithms is presented in this paper. The output power of the high-power microwave device, simulated by the fully electromagnetic particle simulation code UNIPIC, is taken as the fitness function, and float-encoding genetic algorithms are used to optimize the high-power microwave devices. Using this method, we encode the heights of the non-uniform slow wave structure in the relativistic backward wave oscillators (RBWO), and optimize the parameters on massively parallel processors. Simulation results demonstrate that we can obtain the optimal parameters of the non-uniform slow wave structure in the RBWO, and the output microwave power is enhanced by 52.6% after the device is optimized.

  10. A fast sorting algorithm for a hypersonic rarefied flow particle simulation on the connection machine

    NASA Technical Reports Server (NTRS)

    Dagum, Leonardo

    1989-01-01

    The data parallel implementation of a particle simulation for hypersonic rarefied flow described by Dagum associates a single parallel data element with each particle in the simulation. The simulated space is divided into discrete regions called cells containing a variable and constantly changing number of particles. The implementation requires a global sort of the parallel data elements so as to arrange them in an order that allows immediate access to the information associated with cells in the simulation. Described here is a very fast algorithm for performing the necessary ranking of the parallel data elements. The performance of the new algorithm is compared with that of the microcoded instruction for ranking on the Connection Machine.

  11. Comparative Study of Algorithms for the Numerical Simulation of Lattice QCD

    SciTech Connect

    Luz, Fernando H. P.; Mendes, Tereza

    2010-11-12

    Large-scale numerical simulations are the prime method for a nonperturbative study of QCD from first principles. Although the lattice simulation of the pure-gauge (or quenched-QCD) case may be performed very efficiently on parallel machines, there are several additional difficulties in the simulation of the full-QCD case, i.e. when dynamical quark effects are taken into account. We discuss the main aspects of full-QCD simulations, describing the most common algorithms. We present a comparative analysis of performance for two versions of the hybrid Monte Carlo method (the so-called R and RHMC algorithms), as provided in the MILC software package. We consider two degenerate flavors of light quarks in the staggered formulation, having in mind the case of finite-temperature QCD.

  12. An optimization method of relativistic backward wave oscillator using particle simulation and genetic algorithms

    SciTech Connect

    Chen, Zaigao; Wang, Jianguo; Wang, Yue; Qiao, Hailiang; Zhang, Dianhui; Guo, Weijie

    2013-11-15

An optimal design method for high-power microwave sources using particle simulation and parallel genetic algorithms is presented in this paper. The output power of the high-power microwave device, simulated by the fully electromagnetic particle simulation code UNIPIC, is taken as the fitness function, and float-encoding genetic algorithms are used to optimize the high-power microwave devices. Using this method, we encode the heights of the non-uniform slow wave structure in the relativistic backward wave oscillators (RBWO), and optimize the parameters on massively parallel processors. Simulation results demonstrate that we can obtain the optimal parameters of the non-uniform slow wave structure in the RBWO, and the output microwave power is enhanced by 52.6% after the device is optimized.

  13. Simulation of a navigator algorithm for a low-cost GPS receiver

    NASA Technical Reports Server (NTRS)

    Hodge, W. F.

    1980-01-01

    The analytical structure of an existing navigator algorithm for a low cost global positioning system receiver is described in detail to facilitate its implementation on in-house digital computers and real-time simulators. The material presented includes a simulation of GPS pseudorange measurements, based on a two-body representation of the NAVSTAR spacecraft orbits, and a four component model of the receiver bias errors. A simpler test for loss of pseudorange measurements due to spacecraft shielding is also noted.

  14. Hierarchical tree algorithm for collisional N-body simulations on GRAPE

    NASA Astrophysics Data System (ADS)

    Fukushige, Toshiyuki; Kawai, Atsushi

    2016-06-01

We present an implementation of the hierarchical tree algorithm on the individual timestep algorithm (the Hermite scheme) for collisional N-body simulations, running on the GRAPE-9 system, a special-purpose hardware accelerator for gravitational many-body simulations. Such a combination of the tree algorithm and the individual timestep algorithm was not easy on the previous GRAPE system, mainly because its memory addressing scheme was limited to sequential access to a full set of particle data. The present GRAPE-9 system has an indirect memory addressing unit and a particle memory large enough to store all the particle data and also the tree node data. The indirect memory addressing unit stores interaction lists for the tree algorithm, which are constructed on the host computer, and, according to the interaction lists, the force pipelines calculate only the interactions necessary. In our implementation, the interaction calculations are significantly reduced compared to direct N² summation in the original Hermite scheme. For example, we can achieve a speedup of about a factor of 30 (equivalent to about 17 teraflops) over the Hermite scheme for a simulation of an N = 10⁶ system, using hardware with a peak speed of 0.6 teraflops for the Hermite scheme.

  15. Simulation approach to charge sharing compensation algorithms with experimental cross-check

    NASA Astrophysics Data System (ADS)

    Krzyżanowska, A.; Deptuch, G.; Maj, P.; Gryboś, P.; Szczygieł, R.

    2017-03-01

Hybrid pixel detectors for X-ray imaging, working in a single photon counting mode, find applications in a variety of fields, such as medical imaging, material science, and industry. However, charge sharing, which occurs when a photon hits a detector in the area between two or four pixels, becomes more significant with decreasing pixel size. If the charge generated when a photon interacts with a detector is collected by more than one pixel, the photon energy and the event position may be improperly detected. Therefore, algorithms for minimizing the impact of charge sharing on a pixel detector for X-ray detection need to be implemented. Such algorithms must first be assessed at the simulation level. The goal is to implement the simulations in such a way that the simulation accuracy and simulation time are optimized. A model should be flexible enough that it can be quickly adapted for other uses. We propose behavioral models implemented in the Cadence® Virtuoso® environment. This solution enables fast validation of the system at a higher level of abstraction while allowing deep verification. A readout channel of a chip is represented using parameterized behavioral blocks of different functionality, such as a charge-sensitive amplifier, shapers, discriminators, and comparators. The inter-pixel connections are taken into account. This approach enables top-down design and optimization of parameters. The model was implemented in particular to test the C8P1 algorithm used in the Chase Jr. chip; however, due to its modular implementation, it can easily be adjusted to test further algorithms. The simulation approach is described and the simulation results are presented together with the experimental data obtained during synchrotron measurements for the Chase Jr. chip with the C8P1 algorithm implemented.

  16. Sensitivity of CO2 Simulation in a GCM to the Convective Transport Algorithms

    NASA Technical Reports Server (NTRS)

    Zhu, Z.; Pawson, S.; Collatz, G. J.; Gregg, W. W.; Kawa, S. R.; Baker, D.; Ott, L.

    2014-01-01

Convection plays an important role in the transport of heat, moisture, and trace gases. In this study, we simulated CO2 concentrations with an atmospheric general circulation model (GCM). Three different convective transport algorithms were used. One is a modified Arakawa-Schubert scheme that was native to the GCM; two others used in two off-line chemical transport models (CTMs) were added to the GCM here for comparison purposes. Advanced CO2 surface fluxes were used for the simulations. The results were compared to a large quantity of CO2 observation data. We find that the simulation results are sensitive to the convective transport algorithms. Overall, the three simulations are quite realistic and similar to each other in the remote marine regions, but are significantly different in some land regions with strong fluxes, such as the Amazon and Siberia, during the convection seasons. Large biases against CO2 measurements are found in these regions in the control run, which uses the original GCM. The simulation with the simple diffusive algorithm is better. The difference between the two simulations is related to the very different convective transport speeds.

  17. Simulation of biochemical reactions with time-dependent rates by the rejection-based algorithm

    SciTech Connect

    Thanh, Vo Hong; Priami, Corrado

    2015-08-07

    We address the problem of simulating biochemical reaction networks with time-dependent rates and propose a new algorithm based on our rejection-based stochastic simulation algorithm (RSSA) [Thanh et al., J. Chem. Phys. 141(13), 134116 (2014)]. The computation for selecting next reaction firings by our time-dependent RSSA (tRSSA) is computationally efficient. Furthermore, the generated trajectory is exact by exploiting the rejection-based mechanism. We benchmark tRSSA on different biological systems with varying forms of reaction rates to demonstrate its applicability and efficiency. We reveal that for nontrivial cases, the selection of reaction firings in existing algorithms introduces approximations because the integration of reaction rates is very computationally demanding and simplifying assumptions are introduced. The selection of the next reaction firing by our approach is easier while preserving the exactness.
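
    The rejection-based selection of firing times for a time-dependent rate can be illustrated with classical thinning, where candidate times are drawn from a bounding constant-rate process and accepted with probability rate/bound (a generic sketch, not the tRSSA implementation):

```python
import random

def next_firing_time_thinning(rate_fn, rate_bound, t0, t_max, rng=random.Random(6)):
    """Thinning (rejection) sampler for the next firing time of a reaction
    whose propensity rate_fn(t) varies in time but is bounded above by rate_bound.

    Candidate times come from a homogeneous Poisson process of rate rate_bound
    and are accepted with probability rate_fn(t)/rate_bound, so no integral of
    the rate function is ever needed.
    """
    t = t0
    while t < t_max:
        t += rng.expovariate(rate_bound)              # candidate event
        if t >= t_max:
            break
        if rng.random() < rate_fn(t) / rate_bound:    # accept?
            return t
    return None                                       # no firing before t_max
```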

  18. R-leaping: accelerating the stochastic simulation algorithm by reaction leaps.

    PubMed

    Auger, Anne; Chatelain, Philippe; Koumoutsakos, Petros

    2006-08-28

    A novel algorithm is proposed for the acceleration of the exact stochastic simulation algorithm by a predefined number of reaction firings (R-leaping) that may occur across several reaction channels. In the present approach, the numbers of reaction firings are correlated binomial distributions and the sampling procedure is independent of any permutation of the reaction channels. This enables the algorithm to efficiently handle large systems with disparate rates, providing substantial computational savings in certain cases. Several mechanisms for controlling the accuracy and the appearance of negative species are described. The advantages and drawbacks of R-leaping are assessed by simulations on a number of benchmark problems and the results are discussed in comparison with established methods.
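
    A single R-leap step as described above, distributing exactly L firings over the reaction channels via conditional binomials and advancing time by a Gamma-distributed increment, can be sketched as follows (the representations and helper names are assumptions):

```python
import random

def binomial(n, p, rng):
    """Simple Binomial(n, p) sample (illustrative; O(n))."""
    return sum(1 for _ in range(n) if rng.random() < p)

def r_leap_step(x, stoich, propensity_fns, L, rng=random.Random(3)):
    """One R-leap step: exactly L reaction firings shared among the channels.

    Channel counts follow the conditional (correlated) binomial scheme, and
    the elapsed time for L firings is the sum of L exponentials of rate a0,
    i.e. Gamma(L, 1/a0).
    """
    props = [a(x) for a in propensity_fns]
    a0 = sum(props)
    if a0 == 0.0:
        return list(x), float("inf")     # nothing can fire
    counts, remaining, a_rest = [], L, a0
    for a_r in props:
        if remaining == 0 or a_rest <= 0.0:
            counts.append(0)
            continue
        k = binomial(remaining, min(1.0, a_r / a_rest), rng)
        counts.append(k)
        remaining -= k
        a_rest -= a_r
    new_x = list(x)
    for r, k in enumerate(counts):
        new_x = [xi + k * si for xi, si in zip(new_x, stoich[r])]
    dt = sum(rng.expovariate(a0) for _ in range(L))   # Gamma(L, 1/a0) time advance
    return new_x, dt
```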

  19. R-leaping: Accelerating the stochastic simulation algorithm by reaction leaps

    NASA Astrophysics Data System (ADS)

    Auger, Anne; Chatelain, Philippe; Koumoutsakos, Petros

    2006-08-01

    A novel algorithm is proposed for the acceleration of the exact stochastic simulation algorithm by a predefined number of reaction firings (R-leaping) that may occur across several reaction channels. In the present approach, the numbers of reaction firings are correlated binomial distributions and the sampling procedure is independent of any permutation of the reaction channels. This enables the algorithm to efficiently handle large systems with disparate rates, providing substantial computational savings in certain cases. Several mechanisms for controlling the accuracy and the appearance of negative species are described. The advantages and drawbacks of R-leaping are assessed by simulations on a number of benchmark problems and the results are discussed in comparison with established methods.

  20. An improved algorithm of three B-spline curve interpolation and simulation

    NASA Astrophysics Data System (ADS)

    Zhang, Wanjun; Xu, Dongmei; Meng, Xinhong; Zhang, Feng

    2017-03-01

As a key interpolation technique in CNC machine tool systems, the cubic (degree-three) B-spline curve interpolator has been proposed to overcome the drawbacks of linear and circular interpolators, such as longer interpolation times and step errors that are not easily controlled. In this paper an improved algorithm for cubic B-spline curve interpolation and its simulation are proposed. A cubic B-spline curve interpolation routine was developed in MATLAB 7.0 to verify the proposed modified interpolation algorithm experimentally. The simulation results show that the algorithm is correct and that it satisfies the requirements of cubic B-spline curve interpolation.
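
    A cubic B-spline interpolator of this kind ultimately evaluates points from four control points and a parameter u; a minimal sketch of the uniform cubic basis (illustrative only, not the paper's improved algorithm) is:

```python
def cubic_bspline_point(control_points, u):
    """Evaluate one uniform cubic B-spline segment at parameter u in [0, 1].

    `control_points` is a list of four (x, y) control points; the classic
    uniform cubic basis blends them into the interpolated tool-path point.
    """
    b0 = (1 - u) ** 3 / 6.0
    b1 = (3 * u ** 3 - 6 * u ** 2 + 4) / 6.0
    b2 = (-3 * u ** 3 + 3 * u ** 2 + 3 * u + 1) / 6.0
    b3 = u ** 3 / 6.0
    weights = (b0, b1, b2, b3)
    x = sum(w * p[0] for w, p in zip(weights, control_points))
    y = sum(w * p[1] for w, p in zip(weights, control_points))
    return x, y

# Example: sample one segment at eleven parameter steps (interpolation ticks).
pts = [(0.0, 0.0), (1.0, 2.0), (2.0, 2.0), (3.0, 0.0)]
samples = [cubic_bspline_point(pts, k / 10.0) for k in range(11)]
```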

  1. Semirigorous synchronous sublattice algorithm for parallel kinetic Monte Carlo simulations of thin film growth

    NASA Astrophysics Data System (ADS)

    Shim, Yunsic; Amar, Jacques G.

    2005-03-01

    The standard kinetic Monte Carlo algorithm is an extremely efficient method to carry out serial simulations of dynamical processes such as thin film growth. However, in some cases it is necessary to study systems over extended time and length scales, and therefore a parallel algorithm is desired. Here we describe an efficient, semirigorous synchronous sublattice algorithm for parallel kinetic Monte Carlo simulations. The accuracy and parallel efficiency are studied as a function of diffusion rate, processor size, and number of processors for a variety of simple models of epitaxial growth. The effects of fluctuations on the parallel efficiency are also studied. Since only local communications are required, linear scaling behavior is observed, e.g., the parallel efficiency is independent of the number of processors for fixed processor size.

  2. A novel algorithm for non-bonded-list updating in molecular simulations.

    PubMed

    Maximova, Tatiana; Keasar, Chen

    2006-06-01

    Simulations of molecular systems typically handle interactions within non-bonded pairs. Generating and updating a list of these pairs can be the most time-consuming part of energy calculations for large systems. Thus, efficient non-bonded list processing can speed up the energy calculations significantly. While the asymptotic complexity of current algorithms (namely O(N), where N is the number of particles) is probably the lowest possible, a wide space for optimization is still left. This article offers a heuristic extension to the previously suggested grid based algorithms. We show that, when the average particle movements are slow, simulation time can be reduced considerably. The proposed algorithm has been implemented in the DistanceMatrix class of the molecular modeling package MESHI. MESHI is freely available at .
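
    The general idea of deferring non-bonded-list updates while particle movements are slow can be illustrated with a cutoff-plus-skin (Verlet) list that is rebuilt only when some particle has drifted more than half the skin; this is a generic sketch, not MESHI's grid-based DistanceMatrix implementation:

```python
import math

class VerletNeighborList:
    """Cutoff-plus-skin neighbor list rebuilt only when particles drift far.

    Pairs within cutoff + skin are stored; the stored list remains valid until
    some particle has moved more than half the skin since the last rebuild.
    The rebuild here is a brute-force O(N^2) pass purely for clarity; a grid
    (cell) scan would normally replace it.
    """

    def __init__(self, cutoff, skin):
        self.cutoff, self.skin = cutoff, skin
        self.reference = None   # positions at the last rebuild
        self.pairs = []

    def _needs_rebuild(self, positions):
        if self.reference is None or len(self.reference) != len(positions):
            return True
        limit = 0.5 * self.skin
        return any(math.dist(p, q) > limit
                   for p, q in zip(positions, self.reference))

    def neighbors(self, positions):
        if self._needs_rebuild(positions):
            r = self.cutoff + self.skin
            self.pairs = [(i, j)
                          for i in range(len(positions))
                          for j in range(i + 1, len(positions))
                          if math.dist(positions[i], positions[j]) <= r]
            self.reference = [tuple(p) for p in positions]
        return self.pairs
```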

3. Simulating the time-dependent Schrödinger equation with a quantum lattice-gas algorithm

    NASA Astrophysics Data System (ADS)

    Prezkuta, Zachary; Coffey, Mark

    2007-03-01

Quantum computing algorithms promise remarkable improvements in speed or memory for certain applications. Currently, the Type II (or hybrid) quantum computer is the most feasible to build. This consists of a large number of small Type I (pure) quantum computers that compute with quantum logic, but communicate with nearest neighbors in a classical way. The arrangement thus formed is suitable for computations that execute a quantum lattice gas algorithm (QLGA). We report QLGA simulations for both the linear and nonlinear time-dependent Schrödinger equation. These evidence the stable, efficient, and at least second order convergent properties of the algorithm. The simulation capability provides a computational tool for applications in nonlinear optics, superconducting and superfluid materials, Bose-Einstein condensates, and elsewhere.

  4. SIMULATION OF AEROSOL DYNAMICS: A COMPARATIVE REVIEW OF ALGORITHMS USED IN AIR QUALITY MODELS

    EPA Science Inventory

    A comparative review of algorithms currently used in air quality models to simulate aerosol dynamics is presented. This review addresses coagulation, condensational growth, nucleation, and gas/particle mass transfer. Two major approaches are used in air quality models to repres...

  5. An optimization algorithm for individualized biomechanical analysis and simulation of tibia fractures.

    PubMed

    Roland, M; Tjardes, T; Otchwemah, R; Bouillon, B; Diebels, S

    2015-04-13

An algorithmic strategy to determine the minimal fusion area of a tibia pseudarthrosis needed to achieve mechanical stability is presented. For this purpose, a workflow suitable for implementation into the clinical routine workup of tibia pseudarthrosis was developed, using visual computing algorithms for image segmentation and a coarsening protocol to reduce computational effort, resulting in an individualized volume mesh based on computed tomography data. An algorithm is developed that detects the minimal amount of fracture union necessary to allow physiological loading without subjecting the implant to stresses and strains that might result in implant failure. The feasibility of the algorithm in terms of computational effort is demonstrated. Numerical finite element simulations show that the minimal fusion area of a tibia pseudarthrosis can be less than 90% of the full circumferential area, given a defined maximal von Mises stress in the implant of 80% of the total stress arising in a complete pseudarthrosis of the tibia.

  6. Evaluation of effective-stress-function algorithm for nuclear fuel simulation

    SciTech Connect

    Kim, H. C.; Yang, Y. S.; Koo, Y. H.

    2013-07-01

In a pressurized water reactor (PWR), the mechanical integrity of nuclear fuel is the most critical issue as it is an important barrier against the release of fission products into the environment. The integrity of the zirconium cladding that surrounds the uranium oxide can be threatened during off-normal operation owing to pellet-cladding mechanical interaction (PCMI). To analyze fuel and cladding behavior during off-normal operation, the fuel performance code should perform an inelastic analysis in two- or three-dimensional calculations. In this paper, the effective stress function (ESF) algorithm based on a two-dimensional FE module has been implemented to simulate the inelastic behavior of the cladding with stability and accuracy. The ESF algorithm solves the governing equations of the inelastic constitutive behavior by calculating the zero of the appropriate effective-stress function. To verify the accuracy of the ESF algorithm for an inelastic analysis, a code-to-code benchmark was performed using the commercial FE code ANSYS 13.0. To demonstrate the stability and convergence of the implemented algorithm, the number of iterations in the ESF algorithm was compared with that in a sequential algorithm for an inelastic problem. Consequently, the evaluation results demonstrate that the implemented ESF algorithm improves the efficiency of the computation without a loss of accuracy for an inelastic analysis. (authors)

  7. An Algorithm for Interactive Modeling of Space-Transportation Engine Simulations: A Constraint Satisfaction Approach

    NASA Technical Reports Server (NTRS)

    Mitra, Debasis; Thomas, Ajai; Hemminger, Joseph; Sakowski, Barbara

    2001-01-01

In this research we have developed an algorithm for constraint processing that utilizes relational algebraic operators. Van Beek and others have previously investigated this type of constraint processing within a relational algebraic framework, producing some unique results. Apart from providing new theoretical angles, this approach also gives the opportunity to use existing efficient implementations of relational database management systems as the underlying data structures for any relevant algorithm. Our algorithm enhances that framework. The algorithm is quite general in its current form. Weak heuristics (like forward checking) developed within the constraint-satisfaction problem (CSP) area could also be plugged easily into this algorithm for further enhancements of efficiency. The algorithm as developed here is targeted toward a component-oriented modeling problem that we are currently working on, namely, the problem of interactive modeling for batch-simulation of engineering systems (IMBSES). However, it could be adopted for many other CSP problems as well. The research addresses the algorithm and many aspects of the problem IMBSES that we are currently handling.

  8. Hybrid Algorithms for Fuzzy Reverse Supply Chain Network Design

    PubMed Central

    Che, Z. H.; Chiang, Tzu-An; Kuo, Y. C.

    2014-01-01

    In consideration of capacity constraints, fuzzy defect ratio, and fuzzy transport loss ratio, this paper attempted to establish an optimized decision model for production planning and distribution of a multiphase, multiproduct reverse supply chain, which addresses defects returned to original manufacturers, and in addition, develops hybrid algorithms such as Particle Swarm Optimization-Genetic Algorithm (PSO-GA), Genetic Algorithm-Simulated Annealing (GA-SA), and Particle Swarm Optimization-Simulated Annealing (PSO-SA) for solving the optimized model. During a case study of a multi-phase, multi-product reverse supply chain network, this paper explained the suitability of the optimized decision model and the applicability of the algorithms. Finally, the hybrid algorithms showed excellent solving capability when compared with original GA and PSO methods. PMID:24892057
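
    The hybridization pattern behind GA-SA, genetic recombination combined with a simulated-annealing acceptance rule, can be sketched on a toy bit-string problem as follows (all parameters and the fitness callable are hypothetical and unrelated to the supply-chain model):

```python
import math
import random

def ga_sa_hybrid(pop_size, genome_len, fitness, generations=50,
                 t_start=1.0, cooling=0.9, rng=random.Random(4)):
    """Toy GA-SA hybrid on bit-string genomes.

    Offspring are produced by tournament selection, one-point crossover, and a
    bit-flip mutation (the GA part); each offspring then replaces its parent
    only if it passes a simulated-annealing acceptance test (the SA part).
    `fitness(genome)` is a score to maximize.
    """
    pop = [[rng.randint(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    t = t_start
    for _ in range(generations):
        new_pop = []
        for parent in pop:
            mate = max(rng.sample(pop, 3), key=fitness)        # tournament selection
            cut = rng.randrange(1, genome_len)
            child = parent[:cut] + mate[cut:]                  # one-point crossover
            child[rng.randrange(genome_len)] ^= 1              # bit-flip mutation
            delta = fitness(child) - fitness(parent)
            # SA acceptance: keep worse offspring with probability exp(delta / t).
            if delta >= 0 or rng.random() < math.exp(delta / t):
                new_pop.append(child)
            else:
                new_pop.append(parent)
        pop = new_pop
        t *= cooling
    return max(pop, key=fitness)
```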

  9. Hybrid algorithms for fuzzy reverse supply chain network design.

    PubMed

    Che, Z H; Chiang, Tzu-An; Kuo, Y C; Cui, Zhihua

    2014-01-01

    In consideration of capacity constraints, fuzzy defect ratio, and fuzzy transport loss ratio, this paper attempted to establish an optimized decision model for production planning and distribution of a multiphase, multiproduct reverse supply chain, which addresses defects returned to original manufacturers, and in addition, develops hybrid algorithms such as Particle Swarm Optimization-Genetic Algorithm (PSO-GA), Genetic Algorithm-Simulated Annealing (GA-SA), and Particle Swarm Optimization-Simulated Annealing (PSO-SA) for solving the optimized model. During a case study of a multi-phase, multi-product reverse supply chain network, this paper explained the suitability of the optimized decision model and the applicability of the algorithms. Finally, the hybrid algorithms showed excellent solving capability when compared with original GA and PSO methods.

  10. Determination of three-dimensional structures of proteins by simulated annealing with interproton distance restraints. Application to crambin, potato carboxypeptidase inhibitor and barley serine proteinase inhibitor 2.

    PubMed

    Nilges, M; Gronenborn, A M; Brünger, A T; Clore, G M

    1988-04-01

An automated method, based on the principle of simulated annealing, is presented for determining the three-dimensional structures of proteins on the basis of short (less than 5 Å) interproton distance data derived from nuclear Overhauser enhancement (NOE) measurements. The method makes use of Newton's equations of motion to increase temporarily the temperature of the system in order to search for the global minimum region of a target function comprising purely geometric restraints. These consist of interproton distances supplemented by bond lengths, bond angles, planes and soft van der Waals repulsion terms. The latter replace the dihedral, van der Waals, electrostatic and hydrogen-bonding potentials of the empirical energy function used in molecular dynamics simulations. The method presented involves the implementation of a number of innovations over our previous restrained molecular dynamics approach [Clore, G.M., Brünger, A.T., Karplus, M. and Gronenborn, A.M. (1986) J. Mol. Biol., 191, 523-551]. These include the development of a new effective potential for the interproton distance restraints whose functional form is dependent on the magnitude of the difference between calculated and target values, and the design and implementation of a robust and fully automatic protocol. The method is tested on three systems: the model system crambin (46 residues) using X-ray structure derived interproton distance restraints, and potato carboxypeptidase inhibitor (CPI; 39 residues) and barley serine proteinase inhibitor 2 (BSPI-2; 64 residues) using experimentally derived interproton distance restraints. Calculations were carried out starting from the extended strands which had atomic r.m.s. differences of 57, 38 and 33 Å with respect to the crystal structures of BSPI-2, crambin and CPI respectively. Unbiased sampling of the conformational space consistent with the restraints was achieved by varying the random number seed used to assign the initial velocities. This ensures
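
    The kind of violation-dependent distance-restraint potential referred to above, harmonic for small violations and switching to a softer linear form for large ones so that badly violated restraints do not dominate the annealing, can be sketched as follows (a generic flat-bottom form, not the exact CONGEN potential):

```python
def noe_restraint_energy(d, lower, upper, k=1.0, switch=1.0):
    """Flat-bottom NOE distance restraint with a softened long-range tail.

    Zero inside [lower, upper]; harmonic for small violations; beyond `switch`
    the potential continues linearly (matched in value and slope), so badly
    violated restraints exert bounded forces during annealing.
    """
    if d < lower:
        viol = lower - d
        return k * viol * viol
    if d <= upper:
        return 0.0
    viol = d - upper
    if viol <= switch:
        return k * viol * viol
    return k * switch * switch + 2.0 * k * switch * (viol - switch)

def total_restraint_energy(restraints):
    """Sum the pseudo-energy over (distance, lower, upper) restraint triples."""
    return sum(noe_restraint_energy(d, lo, hi) for d, lo, hi in restraints)
```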

  11. Algorithm for Building a Spectrum for NREL's One-Sun Multi-Source Simulator: Preprint

    SciTech Connect

    Moriarty, T.; Emery, K.; Jablonski, J.

    2012-06-01

    Historically, the tools used at NREL to compensate for the difference between a reference spectrum and a simulator spectrum have been well-matched reference cells and the application of a calculated spectral mismatch correction factor, M. This paper describes the algorithm for adjusting the spectrum of a 9-channel fiber-optic-based solar simulator with a uniform beam size of 9 cm square at 1-sun. The combination of this algorithm and the One-Sun Multi-Source Simulator (OSMSS) hardware reduces NREL's current vs. voltage measurement time for a typical three-junction device from man-days to man-minutes. These time savings may be significantly greater for devices with more junctions.

  12. Space-based Doppler lidar sampling strategies: Algorithm development and simulated observation experiments

    NASA Technical Reports Server (NTRS)

    Emmitt, G. D.; Wood, S. A.; Morris, M.

    1990-01-01

    Lidar Atmospheric Wind Sounder (LAWS) Simulation Models (LSM) were developed to evaluate the potential impact of global wind observations on the basic understanding of the Earth's atmosphere and on the predictive skills of current forecast models (GCM and regional scale). Fully integrated top to bottom LAWS Simulation Models for global and regional scale simulations were developed. The algorithm development incorporated the effects of aerosols, water vapor, clouds, terrain, and atmospheric turbulence into the models. Other additions include a new satellite orbiter, signal processor, line of sight uncertainty model, new Multi-Paired Algorithm and wind error analysis code. An atmospheric wind field library containing control fields, meteorological fields, phenomena fields, and new European Center for Medium Range Weather Forecasting (ECMWF) data was also added. The LSM was used to address some key LAWS issues and trades such as accuracy and interpretation of LAWS information, data density, signal strength, cloud obscuration, and temporal data resolution.

  13. Direct Dynamics Simulations using Hessian-based Predictor-corrector Integration Algorithms

    SciTech Connect

    Lourderaj, Upakarasamy; Song, Kihyung; Windus, Theresa L; Zhuang, Yu; Hase, William L

    2007-01-29

    The research described in this product was performed in part in the Environmental Molecular Sciences Laboratory, a national scientific user facility sponsored by the Department of Energy's Office of Biological and Environmental Research and located at Pacific Northwest National Laboratory. In previous research (J. Chem. Phys. 111, 3800 (1999)) a Hessian-based integration algorithm was derived for performing direct dynamics simulations. In the work presented here, improvements to this algorithm are described. The algorithm has a predictor step based on a local second-order Taylor expansion of the potential in Cartesian coordinates, within a trust radius, and a fifth-order correction to this predicted trajectory. The current algorithm determines the predicted trajectory in Cartesian coordinates, instead of the instantaneous normal mode coordinates used previously, to ensure angular momentum conservation. For the previous algorithm the corrected step was evaluated in rotated Cartesian coordinates. Since the local potential expanded in Cartesian coordinates is not invariant to rotation, the constants of motion are not necessarily conserved during the corrector step. An approximate correction to this shortcoming was made by projecting translation and rotation out of the rotated coordinates. For the current algorithm unrotated Cartesian coordinates are used for the corrected step to assure the constants of motion are conserved. An algorithm is proposed for updating the trust radius to enhance the accuracy and efficiency of the numerical integration. This modified Hessian-based integration algorithm, with its new components, has been implemented into the VENUS/NWChem software package and compared with the velocity-Verlet algorithm for the H₂CO→H₂+CO, O₃+C₃H₆, and F-+CH₃OOH chemical reactions.

  14. High-Dose Rate Brachytherapy Using Inverse Planning Simulated Annealing for Locoregionally Advanced Cervical Cancer: A Clinical Report With 2-Year Follow-Up

    SciTech Connect

    Kim, Daniel H.; Wang-Chesebro, Alice; Weinberg, Vivian; Pouliot, Jean; Chen, Lee-May; Speight, Joycelyn; Littell, Ramey; Hsu, I.-Chow

    2009-12-01

    Purpose: We present clinical outcomes of image-guided brachytherapy using inverse planning simulated annealing (IPSA) planned high-dose rate (HDR) brachytherapy boost for locoregionally advanced cervical cancer. Methods and Materials: From February 2004 through December 2006, 51 patients were treated at the University of California, San Francisco with HDR brachytherapy boost as part of definitive radiation for International Federation of Gynecology and Obstetrics Stage IB1 to Stage IVA cervical cancer. Of the patients, 46 received concurrent chemotherapy, 43 with cisplatin alone and 3 with cisplatin/5-fluorouracil. All patients had IPSA-planned HDR brachytherapy boost after whole-pelvis external radiation to a total tumor dose of 85 Gy or greater (for alpha/beta = 10). Toxicities are reported according to National Cancer Institute CTCAE v3.0 (Common Terminology Criteria for Adverse Events version 3.0) guidelines. Results: At a median follow-up of 24.3 months, there were no toxicities of Grade 4 or greater and the frequencies of Grade 3 acute and late toxicities were 4% and 2%, respectively. The proportion of patients having Grade 1 or 2 gastrointestinal and genitourinary acute toxicities was 48% and 52%, respectively. Low-grade late toxicities included Grade 1 or 2 vaginal, gastrointestinal, and hormonal toxicities in 31%, 18%, and 4% of patients, respectively. During the follow-up period, local recurrence developed in 2 patients, regional recurrence developed in 2, and new distant metastases developed in 15. The rates of locoregional control of disease and overall survival at 24 months were 91% and 86%, respectively. Conclusions: Definitive radiation by use of inverse planned HDR brachytherapy boost for locoregionally advanced cervical cancer is well tolerated and achieves excellent local control of disease.

  15. Non-linear modeling of 1H NMR metabonomic data using kernel-based orthogonal projections to latent structures optimized by simulated annealing.

    PubMed

    Fonville, Judith M; Bylesjö, Max; Coen, Muireann; Nicholson, Jeremy K; Holmes, Elaine; Lindon, John C; Rantalainen, Mattias

    2011-10-31

    Linear multivariate projection methods are frequently applied for predictive modeling of spectroscopic data in metabonomic studies. The OPLS method is a commonly used computational procedure for characterizing spectral metabonomic data, largely due to its favorable model interpretation properties providing separate descriptions of predictive variation and response-orthogonal structured noise. However, when the relationship between descriptor variables and the response is non-linear, conventional linear models will perform sub-optimally. In this study we have evaluated to what extent a non-linear model, kernel-based orthogonal projections to latent structures (K-OPLS), can provide enhanced predictive performance compared to the linear OPLS model. Just like its linear counterpart, K-OPLS provides separate model components for predictive variation and response-orthogonal structured noise. The improved model interpretation by this separate modeling is a property unique to K-OPLS in comparison to other kernel-based models. Simulated annealing (SA) was used for effective and automated optimization of the kernel-function parameter in K-OPLS (SA-K-OPLS). Our results reveal that the non-linear K-OPLS model provides improved prediction performance in three separate metabonomic data sets compared to the linear OPLS model. We also demonstrate how response-orthogonal K-OPLS components provide valuable biological interpretation of model and data. The metabonomic data sets were acquired using proton Nuclear Magnetic Resonance (NMR) spectroscopy, and include a study of the liver toxin galactosamine, a study of the nephrotoxin mercuric chloride and a study of Trypanosoma brucei brucei infection. Automated and user-friendly procedures for the kernel-optimization have been incorporated into version 1.1.1 of the freely available K-OPLS software package for both R and Matlab to enable easy application of K-OPLS for non-linear prediction modeling.
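
    To illustrate the kernel-parameter optimization step in isolation, the following sketch anneals a single Gaussian kernel width against a user-supplied cross-validation error function. The function cv_error, the move size, and the cooling schedule are assumptions for illustration; they stand in for the K-OPLS cross-validation used in the paper.

```python
import math, random

def anneal_kernel_width(cv_error, sigma0=1.0, t0=1.0, alpha=0.95, n_steps=200):
    """Simulated annealing over a single kernel parameter (log-scale moves).

    cv_error(sigma) is assumed to return a cross-validated prediction error.
    """
    sigma, err = sigma0, cv_error(sigma0)
    best_sigma, best_err = sigma, err
    t = t0
    for _ in range(n_steps):
        cand = sigma * math.exp(random.gauss(0.0, 0.3))    # multiplicative move
        cand_err = cv_error(cand)
        if cand_err < err or random.random() < math.exp(-(cand_err - err) / t):
            sigma, err = cand, cand_err                    # accept (possibly uphill)
            if err < best_err:
                best_sigma, best_err = sigma, err
        t *= alpha                                         # cooling schedule
    return best_sigma, best_err
```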

  16. Automated 3D-2D registration of X-ray microcomputed tomography with histological sections for dental implants in bone using chamfer matching and simulated annealing.

    PubMed

    Becker, Kathrin; Stauber, Martin; Schwarz, Frank; Beißbarth, Tim

    2015-09-01

    We propose a novel 3D-2D registration approach for micro-computed tomography (μCT) and histology (HI), constructed for dental implant biopsies, that finds the position and normal vector of the oblique slice from μCT that corresponds to HI. During image pre-processing, the implants and the bone tissue are segmented using a combination of thresholding, morphological filters and component labeling. After this, chamfer matching is employed to register the implant edges, and fine registration of the bone tissues is achieved using simulated annealing. The method was tested on n=10 biopsies, obtained at 20 weeks after non-submerged healing in the canine mandible. The specimens were scanned with a μCT 100 and processed for hard tissue sectioning. After registration, we assessed the agreement of bone-to-implant contact (BIC) using automated and manual measurements. Statistical analysis was conducted to test the agreement of the BIC measurements in the registered samples. Registration was successful for all specimens and agreement of the respective binary images was high (median: 0.90, 1st-3rd quartile: 0.89-0.91). Direct comparison showed that both automated (median 0.82, 1st-3rd quartile: 0.75-0.85) and manual (median 0.61, 1st-3rd quartile: 0.52-0.67) BIC measures from μCT were significantly and positively correlated with the HI measures (median 0.65, 1st-3rd quartile: 0.59-0.72) (manual: R² = 0.87, automated: R² = 0.75, p < 0.001). These results are promising and suggest that μCT may become a valid alternative for assessing osseointegration in three dimensions.
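
    The chamfer-matching cost used for the coarse (implant-edge) registration can be illustrated with a distance-transform-based sketch like the one below; the simulated annealing stage would then minimize such a cost over the oblique-slice parameters. The function and argument names are hypothetical, and the 3-D slice extraction is not shown.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def chamfer_cost(fixed_edges, moving_points):
    """Mean distance from transformed edge points to the nearest fixed edge pixel.

    fixed_edges  : 2-D boolean edge map of the fixed slice
    moving_points: (N, 2) pixel coordinates of edge points from the candidate
                   oblique slice, already resampled into the fixed frame.
    Hypothetical helper used as the objective of a registration search.
    """
    moving_points = np.asarray(moving_points)
    # distance of every pixel to the nearest edge pixel, computed once
    dist_map = distance_transform_edt(~fixed_edges)
    rows = moving_points[:, 0].astype(int)
    cols = moving_points[:, 1].astype(int)
    inside = (rows >= 0) & (rows < dist_map.shape[0]) & \
             (cols >= 0) & (cols < dist_map.shape[1])
    cost = np.full(len(moving_points), float(dist_map.max()))  # penalize misses
    cost[inside] = dist_map[rows[inside], cols[inside]]
    return float(cost.mean())
```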

  17. Mesoscale Benchmark Demonstration Problem 1: Mesoscale Simulations of Intra-granular Fission Gas Bubbles in UO2 under Post-irradiation Thermal Annealing

    SciTech Connect

    Li, Yulan; Hu, Shenyang Y.; Montgomery, Robert; Gao, Fei; Sun, Xin; Tonks, Michael; Biner, Bullent; Millet, Paul; Tikare, Veena; Radhakrishnan, Balasubramaniam; Andersson , David

    2012-04-11

    A study was conducted to evaluate the capabilities of different numerical methods used to represent microstructure behavior at the mesoscale for irradiated material using an idealized benchmark problem. The purpose of the mesoscale benchmark problem was to provide a common basis to assess several mesoscale methods with the objective of identifying the strengths and areas of improvement in the predictive modeling of microstructure evolution. In this work, mesoscale models (phase-field, Potts, and kinetic Monte Carlo) developed by PNNL, INL, SNL, and ORNL were used to calculate the evolution kinetics of intra-granular fission gas bubbles in UO2 fuel under post-irradiation thermal annealing conditions. The benchmark problem was constructed to include important microstructural evolution mechanisms on the kinetics of intra-granular fission gas bubble behavior such as the atomic diffusion of Xe atoms, U vacancies, and O vacancies, the effect of vacancy capture and emission from defects, and the elastic interaction of non-equilibrium gas bubbles. An idealized set of assumptions was imposed on the benchmark problem to simplify the mechanisms considered. The capability and numerical efficiency of different models are compared against selected experimental and simulation results. These comparisons find that the phase-field methods, by the nature of the free energy formulation, are able to represent a larger subset of the mechanisms influencing the intra-granular bubble growth and coarsening mechanisms in the idealized benchmark problem as compared to the Potts and kinetic Monte Carlo methods. It is recognized that the mesoscale benchmark problem as formulated does not specifically highlight the strengths of the discrete particle modeling used in the Potts and kinetic Monte Carlo methods. Future efforts are recommended to construct increasingly more complex mesoscale benchmark problems to further verify and validate the predictive capabilities of the mesoscale modeling

  18. The Geometric Cluster Algorithm: Rejection-Free Monte Carlo Simulation of Complex Fluids

    NASA Astrophysics Data System (ADS)

    Luijten, Erik

    2005-03-01

    The study of complex fluids is an area of intense research activity, in which exciting and counter-intuitive behavior continues to be uncovered. Ironically, one of the very factors responsible for such interesting properties, namely the presence of multiple relevant time and length scales, often greatly complicates accurate theoretical calculations and computer simulations that could explain the observations. We have recently developed a new Monte Carlo simulation method [J. Liu and E. Luijten, Phys. Rev. Lett. 92, 035504 (2004); see also Physics Today, March 2004, pp. 25-27] that overcomes this problem for several classes of complex fluids. Our approach can accelerate simulations by orders of magnitude by introducing nonlocal, collective moves of the constituents. Strikingly, these cluster Monte Carlo moves are proposed in such a manner that the algorithm is rejection-free. The identification of the clusters is based upon geometric symmetries and can be considered as the off-lattice generalization of the widely used Swendsen-Wang and Wolff algorithms for lattice spin models. While phrased originally for complex fluids that are governed by the Boltzmann distribution, the geometric cluster algorithm can be used to efficiently sample configurations from an arbitrary underlying distribution function and may thus be applied in a variety of other areas. In addition, I will briefly discuss various extensions of the original algorithm, including methods to influence the size of the clusters that are generated and ways to introduce density fluctuations.

  19. Comparison of Reconstruction and Control algorithms on the ESO end-to-end simulator OCTOPUS

    NASA Astrophysics Data System (ADS)

    Montilla, I.; Béchet, C.; Lelouarn, M.; Correia, C.; Tallon, M.; Reyes, M.; Thiébaut, É.

    Extremely Large Telescopes are very challenging with respect to their Adaptive Optics requirements. Their diameters, the specifications demanded by the science for which they are being designed, and the planned use of Extreme Adaptive Optics systems imply a huge increase in the number of degrees of freedom in the deformable mirrors. It is necessary to study new reconstruction algorithms to implement real-time control in Adaptive Optics at the required speed. We have studied the performance, applied to the case of the European ELT, of three different algorithms: the matrix-vector multiplication (MVM) algorithm, considered as a reference; the Fractal Iterative Method (FrIM); and the Fourier Transform Reconstructor (FTR). The algorithms have been tested on ESO's OCTOPUS software, which simulates the atmosphere, the deformable mirror, the sensor and the closed-loop control. The MVM is the default reconstruction and control method implemented in OCTOPUS, but it scales as O(N²) operations per loop, so it is not considered a fast algorithm for wave-front reconstruction and control on an Extremely Large Telescope. The two other methods are the fast algorithms studied in the E-ELT Design Study. The performance, as well as the response in the presence of noise and under various atmospheric conditions, has been compared using a Single Conjugate Adaptive Optics configuration for a 42 m diameter ELT with a total of 5402 actuators. These comparisons, made on a common simulator, highlight the pros and cons of the various methods and give a better understanding of the type of reconstruction algorithm that an ELT demands.
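
    As a point of reference for the scaling argument above, a minimal sketch of a matrix-vector-multiplication reconstructor is shown below; the pseudo-inverse reconstructor, loop gain, and function names are illustrative assumptions rather than the OCTOPUS implementation.

```python
import numpy as np

def build_reconstructor(interaction_matrix, cond=1e-2):
    """Least-squares reconstructor from a measured interaction (poke) matrix.

    interaction_matrix maps actuator commands to sensor slopes; rcond truncates
    poorly sensed modes.  Values are placeholders, not the OCTOPUS settings.
    """
    return np.linalg.pinv(interaction_matrix, rcond=cond)

def mvm_step(reconstructor, slopes, commands, gain=0.5):
    """One closed-loop MVM update: a single matrix-vector product per frame."""
    residual = reconstructor @ slopes     # the O(N^2) cost that limits ELT scaling
    return commands - gain * residual
```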

  20. The small-voxel tracking algorithm for simulating chemical reactions among diffusing molecules

    NASA Astrophysics Data System (ADS)

    Gillespie, Daniel T.; Seitaridou, Effrosyni; Gillespie, Carol A.

    2014-12-01

    Simulating the evolution of a chemically reacting system using the bimolecular propensity function, as is done by the stochastic simulation algorithm and its reaction-diffusion extension, entails making statistically inspired guesses as to where the reactant molecules are at any given time. Those guesses will be physically justified if the system is dilute and well-mixed in the reactant molecules. Otherwise, an accurate simulation will require the extra effort and expense of keeping track of the positions of the reactant molecules as the system evolves. One molecule-tracking algorithm that pays careful attention to the physics of molecular diffusion is the enhanced Green's function reaction dynamics (eGFRD) of Takahashi, Tănase-Nicola, and ten Wolde [Proc. Natl. Acad. Sci. U.S.A. 107, 2473 (2010)]. We introduce here a molecule-tracking algorithm that has the same theoretical underpinnings and strategic aims as eGFRD, but a different implementation procedure. Called the small-voxel tracking algorithm (SVTA), it combines the well known voxel-hopping method for simulating molecular diffusion with a novel procedure for rectifying the unphysical predictions of the diffusion equation on the small spatiotemporal scale of molecular collisions. Indications are that the SVTA might be more computationally efficient than eGFRD for the problematic class of non-dilute systems. A widely applicable, user-friendly software implementation of the SVTA has yet to be developed, but we exhibit some simple examples which show that the algorithm is computationally feasible and gives plausible results.
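
    The voxel-hopping picture of diffusion that the SVTA builds on can be sketched as follows for a single molecule on a cubic grid; the hop rate D/h² per neighbor voxel is the standard discretization, while the SVTA's small-voxel collision correction is not attempted here.

```python
import random

def voxel_hop_walk(start, n_voxels, D, h, t_end):
    """Random walk of one molecule on a cubic voxel grid (reflecting walls).

    Each of the 6 neighbor hops occurs with rate D / h**2, so the total hop
    rate is 6*D/h**2 and waiting times are exponential.  Sketch of the
    diffusion part only.
    """
    x, t = list(start), 0.0
    rate = 6.0 * D / h**2
    while True:
        t += random.expovariate(rate)
        if t > t_end:
            return tuple(x)
        axis = random.randrange(3)
        step = random.choice((-1, 1))
        new = x[axis] + step
        if 0 <= new < n_voxels[axis]:   # reflect at boundaries by rejecting the hop
            x[axis] = new
```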

  1. Algorithm for loading shot noise microbunching in multi-dimensional, free-electron laser simulation codes

    SciTech Connect

    Fawley, William M.

    2002-03-25

    We discuss the underlying reasoning behind and the details of the numerical algorithm used in the GINGER free-electron laser (FEL) simulation code to load the initial shot noise microbunching on the electron beam. In particular, we point out that there are some additional subtleties which must be followed for multi-dimensional codes which are not necessary for one-dimensional formulations. Moreover, requiring that the higher harmonics of the microbunching also be properly initialized with the correct statistics leads to additional complexities. We present some numerical results including the predicted incoherent, spontaneous emission as tests of the shot noise algorithm's correctness.

  2. Algorithm for loading shot noise microbunching in multidimensional, free-electron laser simulation codes

    NASA Astrophysics Data System (ADS)

    Fawley, William M.

    2002-07-01

    We discuss the underlying reasoning behind and the details of the numerical algorithm used in the GINGER free-electron laser simulation code to load the initial shot noise microbunching on the electron beam. In particular, we point out that there are some additional subtleties which must be followed for multidimensional codes which are not necessary for one-dimensional formulations. Moreover, requiring that the higher harmonics of the microbunching also be properly initialized with the correct statistics leads to additional complexities. We present some numerical results including the predicted incoherent, spontaneous emission as tests of the shot noise algorithm's correctness.

  3. Monte Carlo algorithm for efficient simulation of time-resolved fluorescence in layered turbid media.

    PubMed

    Liebert, A; Wabnitz, H; Zołek, N; Macdonald, R

    2008-08-18

    We present an efficient Monte Carlo algorithm for simulation of time-resolved fluorescence in a layered turbid medium. It is based on the propagation of excitation and fluorescence photon bundles and the assumption of equal reduced scattering coefficients at the excitation and emission wavelengths. In addition to distributions of times of arrival of fluorescence photons at the detector, 3-D spatial generation probabilities were calculated. The algorithm was validated by comparison with the analytical solution of the diffusion equation for time-resolved fluorescence from a homogeneous semi-infinite turbid medium. It was applied to a two-layered model mimicking intra- and extracerebral compartments of the adult human head.

  4. A clustering method of Chinese medicine prescriptions based on modified firefly algorithm.

    PubMed

    Yuan, Feng; Liu, Hong; Chen, Shou-Qiang; Xu, Liang

    2016-12-01

    This paper studies a clustering method for Chinese medicine (CM) medical cases. When processing prescriptions from CM medical cases, the traditional K-means clustering algorithm has shortcomings such as dependence of the results on the selection of initial values and trapping in local optima. Therefore, a new clustering method based on the collaboration of the firefly algorithm and the simulated annealing algorithm was proposed. This algorithm dynamically determines the firefly iterations and the simulated annealing sampling according to fitness changes, and increases the diversity of the swarm by expanding the scope of the sudden jump, thereby effectively avoiding premature convergence. Confirmatory experiments on CM medical cases suggested that, compared with the traditional K-means clustering algorithm, this method greatly improves individual diversity and the quality of the clustering results, and that its output provides a useful reference for cluster analysis of CM prescriptions.
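
    A generic single step of a firefly move combined with a simulated-annealing acceptance test, in the spirit of (but not identical to) the hybrid described above, might look like the following sketch; the move rule, parameters, and objective are illustrative assumptions.

```python
import math, random

def hybrid_step(position, brightness, best_pos, objective, temp,
                beta=1.0, alpha=0.2):
    """One firefly-style move with a simulated-annealing acceptance test.

    position / best_pos are lists of floats (e.g. flattened cluster centroids);
    objective is minimized; brightness = -objective(position).
    """
    candidate = [p + beta * (b - p) + alpha * random.uniform(-1, 1)
                 for p, b in zip(position, best_pos)]
    new_cost, old_cost = objective(candidate), -brightness
    if new_cost < old_cost or random.random() < math.exp(-(new_cost - old_cost) / temp):
        return candidate, -new_cost     # accepted (possibly uphill)
    return position, brightness         # rejected
```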

  5. Simulated tempering based on global balance or detailed balance conditions: Suwa-Todo, heat bath, and Metropolis algorithms.

    PubMed

    Mori, Yoshiharu; Okumura, Hisashi

    2015-12-05

    Simulated tempering (ST) is a useful method to enhance sampling of molecular simulations. When ST is used, the Metropolis algorithm, which satisfies the detailed balance condition, is usually applied to calculate the transition probability. Recently, an alternative method that satisfies the global balance condition instead of the detailed balance condition has been proposed by Suwa and Todo. In this study, an ST method with the Suwa-Todo algorithm is proposed. Molecular dynamics simulations with ST are performed with three algorithms (the Metropolis, heat bath, and Suwa-Todo algorithms) to calculate the transition probability. Among the three algorithms, the Suwa-Todo algorithm yields the highest acceptance ratio and the shortest autocorrelation time. These results suggest that sampling by an ST simulation with the Suwa-Todo algorithm is the most efficient. In addition, because the acceptance ratio of the Suwa-Todo algorithm is higher than that of the Metropolis algorithm, the number of temperature states can be reduced by 25% for the Suwa-Todo algorithm when compared with the Metropolis algorithm.
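
    For context, the conventional Metropolis temperature update that the Suwa-Todo scheme replaces can be sketched as follows; the weight factors f and the neighbor-only proposal are standard simulated-tempering ingredients, and the exact setup of the paper is not reproduced.

```python
import math, random

def st_temperature_update(E, m, betas, f):
    """Metropolis temperature move for simulated tempering.

    E     : current potential energy
    m     : current temperature index
    betas : inverse temperatures beta_m = 1/(kB*T_m)
    f     : dimensionless weight factors (ideally close to beta_m * free energy)
    Proposes a neighboring temperature and accepts with the usual Metropolis
    criterion; the Suwa-Todo variant replaces this acceptance step.
    """
    n = m + random.choice((-1, 1))
    if not 0 <= n < len(betas):
        return m
    log_p = -(betas[n] - betas[m]) * E + (f[n] - f[m])
    if log_p >= 0 or random.random() < math.exp(log_p):
        return n
    return m
```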

  6. Three-dimensional Stochastic Estimation of Porosity Distribution: Benefits of Using Ground-penetrating Radar Velocity Tomograms in Simulated-annealing-based or Bayesian Sequential Simulation Approaches

    DTIC Science & Technology

    2012-05-30

    crosshole seismic tomography and borehole logging information. Bayesian approaches [e.g., Gelman et al., 2003] have been applied to integrate diverse ... simulation [e.g., Deutsch and Journel, 1998] with the added use of the Bayesian formula [e.g., Chen et al., 2001; Gelman et al., 2003].

  7. Algorithm for simulation of quantum many-body dynamics using dynamical coarse-graining

    SciTech Connect

    Khasin, M.; Kosloff, R.

    2010-04-15

    An algorithm for simulation of quantum many-body dynamics having su(2) spectrum-generating algebra is developed. The algorithm is based on the idea of dynamical coarse-graining. The original unitary dynamics of the target observables (the elements of the spectrum-generating algebra) is simulated by a surrogate open-system dynamics, which can be interpreted as weak measurement of the target observables, performed on the evolving system. The open-system state can be represented by a mixture of pure states, localized in the phase space. The localization reduces the scaling of the computational resources with the Hilbert-space dimension n by a factor of n^(3/2)(ln n)^(-1) compared to conventional sparse-matrix methods. The guidelines for the choice of parameters for the simulation are presented and the scaling of the computational resources with the Hilbert-space dimension of the system is estimated. The algorithm is applied to the simulation of the dynamics of systems of 2×10⁴ and 2×10⁶ cold atoms in a double-well trap, described by the two-site Bose-Hubbard model.

  8. Accelerating the Gillespie Exact Stochastic Simulation Algorithm using hybrid parallel execution on graphics processing units.

    PubMed

    Komarov, Ivan; D'Souza, Roshan M

    2012-01-01

    The Gillespie Stochastic Simulation Algorithm (GSSA) and its variants are cornerstone techniques to simulate reaction kinetics in situations where the concentration of the reactant is too low to allow deterministic techniques such as differential equations. The inherent limitations of the GSSA include the time required for executing a single run and the need for multiple runs for parameter sweep exercises due to the stochastic nature of the simulation. Even very efficient variants of GSSA are prohibitively expensive to compute and perform parameter sweeps. Here we present a novel variant of the exact GSSA that is amenable to acceleration by using graphics processing units (GPUs). We parallelize the execution of a single realization across threads in a warp (fine-grained parallelism). A warp is a collection of threads that are executed synchronously on a single multi-processor. Warps executing in parallel on different multi-processors (coarse-grained parallelism) simultaneously generate multiple trajectories. Novel data-structures and algorithms reduce memory traffic, which is the bottleneck in computing the GSSA. Our benchmarks show an 8×-120× performance gain over various state-of-the-art serial algorithms when simulating different types of models.
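
    The serial kernel that such GPU variants parallelize is the classic direct-method GSSA, sketched below as a reference implementation; the data layout and the propensity interface are illustrative choices, not the authors' code.

```python
import math, random

def gillespie_direct(x, stoich, propensity, t_end):
    """Serial Gillespie direct method (reference version of the GSSA kernel).

    x          : list of species counts
    stoich     : list of state-change vectors, one per reaction
    propensity : function(x) -> list of reaction propensities a_j(x)
    """
    t = 0.0
    while t < t_end:
        a = propensity(x)
        a0 = sum(a)
        if a0 == 0.0:
            break                                   # no reaction can fire
        t += -math.log(random.random()) / a0        # exponential waiting time
        r, acc = random.random() * a0, 0.0
        for j, aj in enumerate(a):                  # pick reaction j with prob a_j/a0
            acc += aj
            if r < acc:
                x = [xi + s for xi, s in zip(x, stoich[j])]
                break
    return x, t
```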

  9. A smooth particle-mesh Ewald algorithm for Stokes suspension simulations: The sedimentation of fibers

    NASA Astrophysics Data System (ADS)

    Saintillan, David; Darve, Eric; Shaqfeh, Eric S. G.

    2005-03-01

    Large-scale simulations of non-Brownian rigid fibers sedimenting under gravity at zero Reynolds number have been performed using a fast algorithm. The mathematical formulation follows the previous simulations by Butler and Shaqfeh ["Dynamic simulations of the inhomogeneous sedimentation of rigid fibres," J. Fluid Mech. 468, 205 (2002)]. The motion of the fibers is described using slender-body theory, and the line distribution of point forces along their lengths is approximated by a Legendre polynomial in which only the total force, torque, and particle stresslet are retained. Periodic boundary conditions are used to simulate an infinite suspension, and both far-field hydrodynamic interactions and short-range lubrication forces are considered in all simulations. The calculation of the hydrodynamic interactions, which is typically the bottleneck for large systems with periodic boundary conditions, is accelerated using a smooth particle-mesh Ewald (SPME) algorithm previously used in molecular dynamics simulations. In SPME the slowly decaying Green's function is split into two fast-converging sums: the first involves the distribution of point forces and accounts for the singular short-range part of the interactions, while the second is expressed in terms of the Fourier transform of the force distribution and accounts for the smooth and long-range part. Because of its smoothness, the second sum can be computed efficiently on an underlying grid using the fast Fourier transform algorithm, resulting in a significant speed-up of the calculations. Systems of up to 512 fibers were simulated on a single-processor workstation, providing a different insight into the formation, structure, and dynamics of the inhomogeneities that occur in sedimenting fiber suspensions.

  10. A new deadlock resolution protocol and message matching algorithm for the extreme-scale simulator

    DOE PAGES

    Engelmann, Christian; Naughton, III, Thomas J.

    2016-03-22

    Investigating the performance of parallel applications at scale on future high-performance computing (HPC) architectures and the performance impact of different HPC architecture choices is an important component of HPC hardware/software co-design. The Extreme-scale Simulator (xSim) is a simulation toolkit for investigating the performance of parallel applications at scale. xSim scales to millions of simulated Message Passing Interface (MPI) processes. The overhead introduced by a simulation tool is an important performance and productivity aspect. This paper documents two improvements to xSim: (1) a new deadlock resolution protocol to reduce the parallel discrete event simulation overhead and (2) a new simulated MPI message matching algorithm to reduce the oversubscription management overhead. The results clearly show a significant performance improvement. The simulation overhead for running the NAS Parallel Benchmark suite was reduced from 102% to 0% for the embarrassingly parallel (EP) benchmark and from 1,020% to 238% for the conjugate gradient (CG) benchmark. xSim offers a highly accurate simulation mode for better tracking of injected MPI process failures. Furthermore, with highly accurate simulation, the overhead was reduced from 3,332% to 204% for EP and from 37,511% to 13,808% for CG.

  11. A new deadlock resolution protocol and message matching algorithm for the extreme-scale simulator

    SciTech Connect

    Engelmann, Christian; Naughton, III, Thomas J.

    2016-03-22

    Investigating the performance of parallel applications at scale on future high-performance computing (HPC) architectures and the performance impact of different HPC architecture choices is an important component of HPC hardware/software co-design. The Extreme-scale Simulator (xSim) is a simulation toolkit for investigating the performance of parallel applications at scale. xSim scales to millions of simulated Message Passing Interface (MPI) processes. The overhead introduced by a simulation tool is an important performance and productivity aspect. This paper documents two improvements to xSim: (1) a new deadlock resolution protocol to reduce the parallel discrete event simulation overhead and (2) a new simulated MPI message matching algorithm to reduce the oversubscription management overhead. The results clearly show a significant performance improvement. The simulation overhead for running the NAS Parallel Benchmark suite was reduced from 102% to 0% for the embarrassingly parallel (EP) benchmark and from 1,020% to 238% for the conjugate gradient (CG) benchmark. xSim offers a highly accurate simulation mode for better tracking of injected MPI process failures. Furthermore, with highly accurate simulation, the overhead was reduced from 3,332% to 204% for EP and from 37,511% to 13,808% for CG.

  12. Multi-frequency Imaging Algorithms and Simulation of Space VLBI Using the VLA

    NASA Astrophysics Data System (ADS)

    Likhachev, S.; Kogan, L.; Fomalont, E.; Owen, F.

    2009-08-01

    New Multi-Frequency Synthesis (MFS) algorithms were developed and implemented in the Astro Space Locator (ASL) software, which operates under the MS Windows system. In November 2005, multi-frequency VLA observations of the radio source M87 were carried out at the following frequencies: 14.7, 15.2, 21.3, 22.2, 23.0, and 23.4 GHz. We used the new MFS algorithms to determine the structure of M87 at the central frequency (19 GHz) and obtained both the image and the spectral index map of the source. Comparison with more straightforward imaging techniques (with single-frequency images) shows that the new MFS algorithms increase the fidelity of the image by at least a factor of two and provide accurate spectral indices across the emission. Application to simulated Radioastron data is also shown.

  13. Implementation of a combined algorithm designed to increase the reliability of information systems: simulation modeling

    NASA Astrophysics Data System (ADS)

    Popov, A.; Zolotarev, V.; Bychkov, S.

    2016-11-01

    This paper examines the results of experimental studies of a previously submitted combined algorithm designed to increase the reliability of information systems. Data illustrating the organization and conduct of the studies are provided. As part of the study, the experimental data from simulation modeling were compared with data from the operation of a real information system. A hypothesis of the homogeneity of the logical structure of information systems was formulated, which makes it possible to reconfigure the presented algorithm, more specifically, to transform it into a model for the analysis and prediction of arbitrary information systems. The results presented can be used for further research in this direction. The ability to predict the functioning of information systems can be used for strategic and economic planning. The algorithm can also be used as a means of providing information security.

  14. GPU-based single-cluster algorithm for the simulation of the Ising model

    NASA Astrophysics Data System (ADS)

    Komura, Yukihiro; Okabe, Yutaka

    2012-02-01

    We present the GPU calculation with the common unified device architecture (CUDA) for the Wolff single-cluster algorithm of the Ising model. Proposing an algorithm for a quasi-block synchronization, we realize the Wolff single-cluster Monte Carlo simulation with CUDA. We perform parallel computations for the newly added spins in the growing cluster. As a result, the GPU calculation speed for the two-dimensional Ising model at the critical temperature with the linear size L = 4096 is 5.60 times as fast as the calculation speed on a current CPU core. For the three-dimensional Ising model with the linear size L = 256, the GPU calculation speed is 7.90 times as fast as the CPU calculation speed. The idea of quasi-block synchronization can be used not only in the cluster algorithm but also in many fields where the synchronization of all threads is required.
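
    For orientation, a serial reference version of the Wolff single-cluster update that the GPU implementation accelerates is sketched below for the two-dimensional Ising model; the data layout is an illustrative choice.

```python
import math, random

def wolff_update(spin, L, beta, J=1.0):
    """One Wolff single-cluster flip for the 2-D Ising model (serial reference).

    spin is an L*L list of +/-1 with periodic boundaries; the bond-addition
    probability is p = 1 - exp(-2*beta*J).
    """
    p_add = 1.0 - math.exp(-2.0 * beta * J)
    seed = random.randrange(L * L)
    s_old = spin[seed]
    cluster, stack = {seed}, [seed]
    while stack:
        i = stack.pop()
        x, y = i % L, i // L
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            j = (nx % L) + (ny % L) * L
            if j not in cluster and spin[j] == s_old and random.random() < p_add:
                cluster.add(j)
                stack.append(j)
    for i in cluster:                 # flip the whole cluster at once
        spin[i] = -s_old
    return len(cluster)
```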

  15. A parallel algorithm for channel routing on a hypercube

    NASA Technical Reports Server (NTRS)

    Brouwer, Randall; Banerjee, Prithviraj

    1987-01-01

    A new parallel simulated annealing algorithm for channel routing on a P processor hypercube is presented. The basic idea used is to partition a set of tracks equally among processors in the hypercube. In parallel, P/2 pairs of processors perform displacements and exchanges of nets between tracks, compute the changes in cost functions, and accept moves using a parallel annealing criterion. Through the use of a unique distributed data structure, it is possible to minimize message traffic and add versatility and efficiency to a parallel routing tool. The algorithm has been implemented and is being tested on some of the popular channel problems from the literature.

  16. Superheating, melting, and annealing of copper surfaces

    SciTech Connect

    Hakkinen, H.; Landman, U. )

    1993-08-16

    Dynamics of superheating, melting, and annealing processes at Cu(111) and Cu(110) surfaces, induced by laser-pulse irradiation, are investigated using molecular dynamics simulations, incorporating energy transfer from the electronic to the ionic degrees of freedom. Superheating occurs at Cu(111) for conditions that lead to melting of the Cu(110) surface. Highly damaged Cu(111) surfaces structurally anneal under the influence of a superheating pulse.

  17. A comparison of various algorithms to extract Magic Formula tyre model coefficients for vehicle dynamics simulations

    NASA Astrophysics Data System (ADS)

    Vijay Alagappan, A.; Narasimha Rao, K. V.; Krishna Kumar, R.

    2015-02-01

    Tyre models are a prerequisite for any vehicle dynamics simulation. Tyre models range from the simplest mathematical models that consider only the cornering stiffness to a complex set of formulae. Among all the steady-state tyre models that are in use today, the Magic Formula tyre model is unique and most popular. Though the Magic Formula tyre model is widely used, obtaining the model coefficients from either the experimental or the simulation data is not straightforward due to its nonlinear nature and the presence of a large number of coefficients. A common procedure used for this extraction is the least-squares minimisation that requires considerable experience for initial guesses. Various researchers have tried different algorithms, namely, gradient and Newton-based methods, differential evolution, artificial neural networks, etc. The issues involved in all these algorithms are setting bounds or constraints, sensitivity of the parameters, the features of the input data such as the number of points, noisy data, experimental procedure used such as slip angle sweep or tyre measurement (TIME) procedure, etc. The extracted Magic Formula coefficients are affected by these variants. This paper highlights the issues that are commonly encountered in obtaining these coefficients with different algorithms, namely, least-squares minimisation using trust region algorithms, Nelder-Mead simplex, pattern search, differential evolution, particle swarm optimisation, cuckoo search, etc. A key observation is that not all the algorithms give the same Magic Formula coefficients for a given data. The nature of the input data and the type of the algorithm decide the set of the Magic Formula tyre model coefficients.
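
    As a concrete example of the extraction problem discussed above, the sketch below fits the basic (shift-free) lateral-force Magic Formula to measured data with a standard least-squares routine; the initial guess p0 is an illustrative assumption, and, as the paper emphasizes, the recovered coefficients can depend strongly on it and on the chosen optimizer.

```python
import numpy as np
from scipy.optimize import curve_fit

def magic_formula(alpha, B, C, D, E):
    """Basic Pacejka Magic Formula (lateral force vs. slip angle, no shifts)."""
    Ba = B * alpha
    return D * np.sin(C * np.arctan(Ba - E * (Ba - np.arctan(Ba))))

def fit_coefficients(alpha, fy, p0=(10.0, 1.5, 3000.0, 1.0)):
    """Least-squares extraction of B, C, D, E from measured (alpha, Fy) data.

    p0 is an illustrative initial guess; the result is sensitive to this choice.
    """
    popt, _ = curve_fit(magic_formula, alpha, fy, p0=p0, maxfev=20000)
    return popt
```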

  18. Optimized simulations of Olami-Feder-Christensen systems using parallel algorithms

    NASA Astrophysics Data System (ADS)

    Dominguez, Rachele; Necaise, Rance; Montag, Eric

    The sequential nature of the Olami-Feder-Christensen (OFC) model for earthquake simulations limits the benefits of parallel computing approaches because of the frequent communication required between processors. We developed a parallel version of the OFC algorithm for multi-core processors. Our data, even for relatively small system sizes and low numbers of processors, indicates that increasing the number of processors provides significantly faster simulations; producing more efficient results than previous attempts that used network-based Beowulf clusters. Our algorithm optimizes performance by exploiting the multi-core processor architecture, minimizing communication time in contrast to the networked Beowulf-cluster approaches. Our multi-core algorithm is the basis for a new algorithm using GPUs that will drastically increase the number of processors available. Previous studies incorporating realistic structural features of faults into OFC models have revealed spatial and temporal patterns observed in real earthquake systems. The computational advances presented here will allow for studying interacting networks of faults, rather than individual faults, further enhancing our understanding of the relationship between the earth's structure and the triggering process. Support for this project comes from the Chenery Research Fund, the Rashkind Family Endowment, the Walter Williams Craigie Teaching Endowment, and the Schapiro Undergraduate Research Fellowship.

  19. The Separatrix Algorithm for synthesis and analysis of stochastic simulations with applications in disease modeling.

    PubMed

    Klein, Daniel J; Baym, Michael; Eckhoff, Philip

    2014-01-01

    Decision makers in epidemiology and other disciplines are faced with the daunting challenge of designing interventions that will be successful with high probability and robust against a multitude of uncertainties. To facilitate the decision making process in the context of a goal-oriented objective (e.g., eradicate polio by a specified target date), stochastic models can be used to map the probability of achieving the goal as a function of parameters. Each run of a stochastic model can be viewed as a Bernoulli trial in which "success" is returned if and only if the goal is achieved in simulation. However, each run can take a significant amount of time to complete, and many replicates are required to characterize each point in parameter space, so specialized algorithms are required to locate desirable interventions. To address this need, we present the Separatrix Algorithm, which strategically locates parameter combinations that are expected to achieve the goal with a user-specified probability of success (e.g. 95%). Technically, the algorithm iteratively combines density-corrected binary kernel regression with a novel information-gathering experiment design to produce results that are asymptotically correct and work well in practice. The Separatrix Algorithm is demonstrated on several test problems, and on a detailed individual-based simulation of malaria.

  20. New exclusive CHIPS-TPT algorithms for simulation of neutron-nuclear reactions

    NASA Astrophysics Data System (ADS)

    Kosov, M.; Savin, D.

    2015-05-01

    The CHIPS-TPT physics library for simulation of neutron-nuclear reactions on the new exclusive level is being developed in CFAR VNIIA. The exclusive modeling conserves energy, momentum and quantum numbers in each neutron-nuclear interaction. The CHIPS-TPT algorithms are based on the exclusive CHIPS library, which is compatible with Geant4. Special CHIPS-TPT physics lists in the Geant4 format are provided. The calculation time for an exclusive CHIPS-TPT simulation is comparable to the time of the corresponding Geant4-HP simulation. In addition to the reduction of the deposited energy fluctuations, which is a consequence of the energy conservation, the CHIPS-TPT libraries make it possible to simulate correlations among secondary particles, e.g. secondary gammas, and the Doppler broadening of gamma lines in the spectrum, which can be measured by germanium detectors.

  1. A Fourier analysis for a fast simulation algorithm. [for switching converters

    NASA Technical Reports Server (NTRS)

    King, Roger J.

    1988-01-01

    This paper presents a derivation of compact expressions for the Fourier series analysis of the steady-state solution of a typical switching converter. The modeling procedure for the simulation and the steady-state solution is described, and some desirable traits for its matrix exponential subroutine are discussed. The Fourier analysis algorithm was tested on a phase-controlled parallel-loaded resonant converter, providing an experimental confirmation.

  2. Effective algorithm for ray-tracing simulations of lobster eye and similar reflective optical systems

    NASA Astrophysics Data System (ADS)

    Tichý, Vladimír; Hudec, René; Němcová, Šárka

    2016-06-01

    The algorithm presented is intended mainly for lobster eye optics. This type of optics (and some similar types) allows for a simplification of the classical ray-tracing procedure, which requires a great many rays to be simulated. The method presented simulates only a few rays and is therefore extremely efficient. Moreover, a specific mathematical formalism is used to simplify the equations. Only a few simple equations are needed, so the program code can be correspondingly simple. The paper also outlines how to apply the method to some other reflective optical systems.

  3. A novel Monte Carlo algorithm for simulating crystals with McStas

    NASA Astrophysics Data System (ADS)

    Alianelli, L.; Sánchez del Río, M.; Felici, R.; Andersen, K. H.; Farhi, E.

    2004-07-01

    We developed an original Monte Carlo algorithm for the simulation of Bragg diffraction by mosaic, bent and gradient crystals. It has practical applications, as it can be used for simulating imperfect crystals (monochromators, analyzers and perhaps samples) in neutron ray-tracing packages such as McStas. The code we describe here provides a detailed description of the particle interaction with the microscopic homogeneous regions composing the crystal; therefore it can also be used for the calculation of quantities of conceptual interest, such as multiple scattering, or for the interpretation of experiments aimed at characterizing crystals, such as diffraction topographs.

  4. A Comprehensive Study of Three Delay Compensation Algorithms for Flight Simulators

    NASA Technical Reports Server (NTRS)

    Guo, Liwen; Cardullo, Frank M.; Houck, Jacob A.; Kelly, Lon C.; Wolters, Thomas E.

    2005-01-01

    This paper summarizes a comprehensive study of three predictors used for compensating the transport delay in a flight simulator: the McFarland, Adaptive, and State Space predictors. The paper presents proof that the stochastic approximation algorithm can achieve the best compensation among all four adaptive predictors, and it investigates in depth the relationship between the state space predictor's compensation quality and its reference model. Piloted simulation tests show that the adaptive predictor and the state space predictor achieve better compensation of transport delay than the McFarland predictor.

  5. Validation of an algorithm for delay stochastic simulation of transcription and translation in prokaryotic gene expression

    NASA Astrophysics Data System (ADS)

    Roussel, Marc R.; Zhu, Rui

    2006-12-01

    The quantitative modeling of gene transcription and translation requires a treatment of two key features: stochastic fluctuations due to the limited copy numbers of key molecules (genes, RNA polymerases, ribosomes), and delayed output due to the time required for biopolymer synthesis. Recently proposed algorithms allow for efficient simulations of such systems. However, it is critical to know whether the results of delay stochastic simulations agree with those from more detailed models of the transcription and translation processes. We present a generalization of previous delay stochastic simulation algorithms which allows both for multiple delays and for distributions of delay times. We show that delay stochastic simulations closely approximate simulations of a detailed transcription model except when two-body effects (e.g. collisions between polymerases on a template strand) are important. Finally, we study a delay stochastic model of prokaryotic transcription and translation which reproduces observations from a recent experimental study in which a single gene was expressed under the control of a repressed lac promoter in E. coli cells. This demonstrates our ability to quantitatively model gene expression using these new methods.
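
    A minimal sketch of a delay stochastic simulation step of the kind discussed above is given below: delayed products are queued and released at their completion times. The queue-interruption handling shown is one simple variant and is not claimed to be the authors' exact algorithm.

```python
import heapq, math, random

def delayed_ssa(x, stoich_init, stoich_delayed, delay, propensity, t_end):
    """Minimal delay stochastic simulation (illustrative sketch).

    When reaction j fires, its instantaneous changes (stoich_init[j]) are applied
    at once, while its delayed changes (stoich_delayed[j]) are scheduled to be
    applied delay[j]() time units later (delay[j] may sample a distribution).
    """
    t, pending = 0.0, []                              # heap of (time, changes)
    while t < t_end:
        a = propensity(x)
        a0 = sum(a)
        dt = -math.log(random.random()) / a0 if a0 > 0 else float('inf')
        if pending and pending[0][0] <= t + dt:       # a delayed event comes first
            t, changes = heapq.heappop(pending)
            x = [xi + s for xi, s in zip(x, changes)]
            continue                                  # resample after applying it
        if a0 == 0:
            break
        t += dt
        r, acc = random.random() * a0, 0.0
        for j, aj in enumerate(a):
            acc += aj
            if r < acc:
                x = [xi + s for xi, s in zip(x, stoich_init[j])]
                heapq.heappush(pending, (t + delay[j](), stoich_delayed[j]))
                break
    return x, t
```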

  6. Simulation and optimization of a pulsating heat pipe using artificial neural network and genetic algorithm

    NASA Astrophysics Data System (ADS)

    Jokar, Ali; Godarzi, Ali Abbasi; Saber, Mohammad; Shafii, Mohammad Behshad

    2016-11-01

    In this paper, a novel approach has been presented to simulate and optimize the pulsating heat pipes (PHPs). The used pulsating heat pipe setup was designed and constructed for this study. Due to the lack of a general mathematical model for exact analysis of the PHPs, a method has been applied for simulation and optimization using the natural algorithms. In this way, the simulator consists of a kind of multilayer perceptron neural network, which is trained by experimental results obtained from our PHP setup. The results show that the complex behavior of PHPs can be successfully described by the non-linear structure of this simulator. The input variables of the neural network are input heat flux to evaporator (q″), filling ratio (FR) and inclined angle (IA) and its output is thermal resistance of PHP. Finally, based upon the simulation results and considering the heat pipe's operating constraints, the optimum operating point of the system is obtained by using genetic algorithm (GA). The experimental results show that the optimum FR (38.25 %), input heat flux to evaporator (39.93 W) and IA (55°) that obtained from GA are acceptable.

  7. Blocking Moving Window algorithm: Conditioning multiple-point simulations to hydrogeological data

    NASA Astrophysics Data System (ADS)

    Alcolea, Andres; Renard, Philippe

    2010-08-01

    Connectivity constraints and measurements of state variables contain valuable information on aquifer architecture. Multiple-point (MP) geostatistics allow one to simulate aquifer architectures, presenting a predefined degree of global connectivity. In this context, connectivity data are often disregarded. The conditioning to state variables is usually carried out by minimizing a suitable objective function (i.e., solving an inverse problem). However, the discontinuous nature of lithofacies distributions and of the corresponding objective function discourages the use of traditional sensitivity-based inversion techniques. This work presents the Blocking Moving Window algorithm (BMW), aimed at overcoming these limitations by conditioning MP simulations to hydrogeological data such as connectivity and heads. The BMW evolves iteratively until convergence: (1) MP simulation of lithofacies from geological/geophysical data and connectivity constraints, where only a random portion of the domain is simulated at every iteration (i.e., the blocking moving window, whose size is user-defined); (2) population of hydraulic properties at the intrafacies; (3) simulation of state variables; and (4) acceptance or rejection of the MP simulation depending on the quality of the fit of measured state variables. The outcome is a stack of MP simulations that (1) resemble a prior geological model depicted by a training image, (2) honor lithological data and connectivity constraints, (3) correlate with geophysical data, and (4) fit available measurements of state variables well. We analyze the performance of the algorithm on a 2-D synthetic example. Results show that (1) the size of the blocking moving window controls the behavior of the BMW, (2) conditioning to state variable data enhances dramatically the initial simulation (which accounts for geological/geophysical data only), and (3) connectivity constraints speed up the convergence but do not enhance the stack if the number of iterations

  8. Real-time dynamics simulation of the Cassini spacecraft using DARTS. Part 1: Functional capabilities and the spatial algebra algorithm

    NASA Technical Reports Server (NTRS)

    Jain, A.; Man, G. K.

    1993-01-01

    This paper describes the Dynamics Algorithms for Real-Time Simulation (DARTS) real-time hardware-in-the-loop dynamics simulator for the National Aeronautics and Space Administration's Cassini spacecraft. The spacecraft model consists of a central flexible body with a number of articulated rigid-body appendages. The demanding performance requirements from the spacecraft control system require the use of a high fidelity simulator for control system design and testing. The DARTS algorithm provides a new algorithmic and hardware approach to the solution of this hardware-in-the-loop simulation problem. It is based upon the efficient spatial algebra dynamics for flexible multibody systems. A parallel and vectorized version of this algorithm is implemented on a low-cost, multiprocessor computer to meet the simulation timing requirements.

  9. A fast algorithm for voxel-based deterministic simulation of X-ray imaging

    NASA Astrophysics Data System (ADS)

    Li, Ning; Zhao, Hua-Xia; Cho, Sang-Hyun; Choi, Jung-Gil; Kim, Myoung-Hee

    2008-04-01

    The deterministic method based on the ray tracing technique is known as a powerful alternative to the Monte Carlo approach for virtual X-ray imaging. Algorithm speed is a critical issue when simulating hundreds of images, notably to simulate a tomographic acquisition or, even more demanding, X-ray radiographic video recordings. We present an algorithm for voxel-based deterministic simulation of X-ray imaging using voxel-driven forward and backward perspective projection operations and minimum bounding rectangles (MBRs). The algorithm is fast, easy to implement, and creates high-quality simulated radiographs. As a result, simulated radiographs can typically be obtained in split seconds with a simple personal computer.
    Program summary
    Program title: X-ray
    Catalogue identifier: AEAD_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAD_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 416 257
    No. of bytes in distributed program, including test data, etc.: 6 018 263
    Distribution format: tar.gz
    Programming language: C (Visual C++)
    Computer: Any PC. Tested on DELL Precision 380 based on a Pentium D 3.20 GHz processor with 3.50 GB of RAM
    Operating system: Windows XP
    Classification: 14, 21.1
    Nature of problem: Radiographic simulation of voxelized objects based on ray tracing technique.
    Solution method: The core of the simulation is a fast routine for the calculation of ray-box intersections and minimum bounding rectangles, together with voxel-driven forward and backward perspective projection operations.
    Restrictions: Memory constraints. There are three programs in all. A. Program for test 3.1(1): Object and detector have axis-aligned orientation; B. Program for test 3.1(2): Object in arbitrary orientation; C. Program for test 3.2: Simulation of X-ray video.
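
    The ray-box intersection test at the heart of such voxel-driven projectors is the standard slab method, sketched below; this is an illustrative reference, not code from the distributed program.

```python
def ray_box_intersect(origin, direction, box_min, box_max, eps=1e-12):
    """Slab-method intersection of a ray with an axis-aligned box.

    Returns (t_near, t_far) parameter values along the ray, or None if the ray
    misses the box.
    """
    t_near, t_far = float('-inf'), float('inf')
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if abs(d) < eps:                      # ray parallel to this pair of slabs
            if o < lo or o > hi:
                return None
            continue
        t1, t2 = (lo - o) / d, (hi - o) / d
        if t1 > t2:
            t1, t2 = t2, t1
        t_near, t_far = max(t_near, t1), min(t_far, t2)
        if t_near > t_far:
            return None
    return t_near, t_far
```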

  10. Effects of Temperature Control Algorithms on Transport Properties and Kinetics in Molecular Dynamics Simulations.

    PubMed

    Basconi, Joseph E; Shirts, Michael R

    2013-07-09

    Temperature control algorithms in molecular dynamics (MD) simulations are necessary to study isothermal systems. However, these thermostatting algorithms alter the velocities of the particles and thus modify the dynamics of the system with respect to the microcanonical ensemble, which could potentially lead to thermostat-dependent dynamical artifacts. In this study, we investigate how six well-established thermostat algorithms applied with different coupling strengths and to different degrees of freedom affect the dynamics of various molecular systems. We consider dynamic processes occurring on different times scales by measuring translational and rotational self-diffusion as well as the shear viscosity of water, diffusion of a small molecule solvated in water, and diffusion and the dynamic structure factor of a polymer chain in water. All of these properties are significantly dampened by thermostat algorithms which randomize particle velocities, such as the Andersen thermostat and Langevin dynamics, when strong coupling is used. For the solvated small molecule and polymer, these dampening effects are reduced somewhat if the thermostats are applied to the solvent alone, such that the solute's temperature is maintained only through thermal contact with solvent particles. Algorithms which operate by scaling the velocities, such as the Berendsen thermostat, the stochastic velocity rescaling approach of Bussi and co-workers, and the Nosé-Hoover thermostat, yield transport properties that are statistically indistinguishable from those of the microcanonical ensemble, provided they are applied globally, i.e. coupled to the system's kinetic energy. When coupled to local kinetic energies, a velocity scaling thermostat can have dampening effects comparable to a velocity randomizing method, as we observe when a massive Nose-Hoover coupling scheme is used to simulate water. Correct dynamical properties, at least those studied in this paper, are obtained with the Berendsen
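
    As an illustration of the velocity-scaling family discussed above, the Berendsen weak-coupling scheme applies a single scale factor lambda = sqrt(1 + (dt/tau)(T0/T - 1)) to the velocities each step; applying it to all particles at once corresponds to the 'global' coupling mode. A minimal sketch with generic argument names:

```python
import math

def berendsen_scale(velocities, T_inst, T_target, dt, tau):
    """Global Berendsen velocity rescaling for one MD step.

    lambda = sqrt(1 + (dt/tau) * (T_target/T_inst - 1)); applying it to every
    velocity couples the thermostat to the total kinetic energy.
    """
    lam = math.sqrt(1.0 + (dt / tau) * (T_target / T_inst - 1.0))
    return [[lam * c for c in v] for v in velocities]
```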

  11. The diffusive finite state projection algorithm for efficient simulation of the stochastic reaction-diffusion master equation

    PubMed Central

    Drawert, Brian; Lawson, Michael J.; Petzold, Linda; Khammash, Mustafa

    2010-01-01

    We have developed a computational framework for accurate and efficient simulation of stochastic spatially inhomogeneous biochemical systems. The new computational method employs a fractional step hybrid strategy. A novel formulation of the finite state projection (FSP) method, called the diffusive FSP method, is introduced for the efficient and accurate simulation of diffusive transport. Reactions are handled by the stochastic simulation algorithm. PMID:20170209

  12. Comparison of simulated quenching algorithms for design of diffractive optical elements.

    PubMed

    Liu, J S; Caley, A J; Waddie, A J; Taghizadeh, M R

    2008-02-20

    We compare the performance of very fast simulated quenching; generalized simulated quenching, which unifies classical Boltzmann simulated quenching and Cauchy fast simulated quenching; and variable step size simulated quenching. The comparison is carried out by applying these algorithms to the design of diffractive optical elements for beam shaping of monochromatic, spatially incoherent light to a tightly focused image spot, whose central lobe should be smaller than the geometrical-optics limit. For generalized simulated quenching we choose values of visiting and acceptance shape parameters recommended by other investigators and use both a one-dimensional and a multidimensional Tsallis random number generator. We find that, under our test conditions, variable step size simulated quenching, which generates each parameter's new states based on the acceptance ratio instead of a certain theoretical probability distribution, produces the best results. Finally, we demonstrate experimentally a tightly focused image spot, with a central lobe 0.22-0.68 times the geometrical-optics limit and a relative sidelobe intensity 55%-60% that of the central maximum intensity.

  13. Parallel-vector algorithms for particle simulations on shared-memory multiprocessors

    SciTech Connect

    Nishiura, Daisuke; Sakaguchi, Hide

    2011-03-01

    Over the last few decades, the computational demands of massive particle-based simulations for both scientific and industrial purposes have been continuously increasing. Hence, considerable efforts are being made to develop parallel computing techniques on various platforms. In such simulations, particles freely move within a given space, and so on a distributed-memory system, load balancing, i.e., assigning an equal number of particles to each processor, is not guaranteed. Shared-memory systems, by contrast, achieve better load balancing for particle models, but suffer from the intrinsic drawback of memory access competition, particularly during (1) pairing of contact candidates from among neighboring particles and (2) force summation for each particle. Here, novel algorithms are proposed to overcome these two problems. For the first problem, the key is a pre-conditioning process during which particle labels are sorted by a cell label in the domain to which the particles belong. Then, a list of contact candidates is constructed by pairing the sorted particle labels. For the latter problem, a table comprising the list indexes of the contact candidate pairs is created and used to sum the contact forces acting on each particle for all contacts according to Newton's third law. With just these methods, memory access competition is avoided without additional redundant procedures. The parallel efficiency and compatibility of these two algorithms were evaluated in discrete element method (DEM) simulations on four types of shared-memory parallel computers: a multicore multiprocessor computer, scalar supercomputer, vector supercomputer, and graphics processing unit. The computational efficiency of a DEM code was found to be drastically improved with our algorithms on all but the scalar supercomputer. Thus, the developed parallel algorithms are useful on shared-memory parallel computers with sufficient memory bandwidth.
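
    The pre-conditioning idea, binning particles by cell label and pairing members of the same or neighboring cells, can be sketched serially as follows; the sketch ignores the memory-access-competition issues that the paper's parallel algorithms are designed to avoid.

```python
from collections import defaultdict
from itertools import combinations

def contact_candidates(positions, cell_size):
    """Cell-binned contact-candidate list (serial illustration of the idea).

    Particles are labeled by the cell they fall in; candidate pairs are taken
    from the same cell and from half of the 26 neighboring cells so that each
    pair appears only once.
    """
    cells = defaultdict(list)
    for i, (x, y, z) in enumerate(positions):
        cells[(int(x // cell_size), int(y // cell_size), int(z // cell_size))].append(i)

    # half of the 26 neighbor offsets, so each neighboring cell pair is visited once
    half_offsets = [(dx, dy, dz)
                    for dx in (0, 1) for dy in (-1, 0, 1) for dz in (-1, 0, 1)
                    if (dx, dy, dz) > (0, 0, 0)]
    pairs = []
    for (cx, cy, cz), members in cells.items():
        pairs.extend(combinations(members, 2))                 # within the same cell
        for dx, dy, dz in half_offsets:
            for j in cells.get((cx + dx, cy + dy, cz + dz), ()):
                pairs.extend((i, j) for i in members)
    return pairs
```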

  14. An assessment of coupling algorithms for nuclear reactor core physics simulations

    SciTech Connect

    Hamilton, Steven; Berrill, Mark; Clarno, Kevin; Pawlowski, Roger; Toth, Alex; Kelley, C.T.; Evans, Thomas; Philip, Bobby

    2016-04-15

    This paper evaluates the performance of multiphysics coupling algorithms applied to a light water nuclear reactor core simulation. The simulation couples the k-eigenvalue form of the neutron transport equation with heat conduction and subchannel flow equations. We compare Picard iteration (block Gauss–Seidel) to Anderson acceleration and multiple variants of preconditioned Jacobian-free Newton–Krylov (JFNK). The performance of the methods is evaluated over a range of energy group structures and core power levels. A novel physics-based approximation to a Jacobian-vector product has been developed to mitigate the impact of expensive on-line cross section processing steps. Numerical simulations demonstrating the efficiency of JFNK and Anderson acceleration relative to standard Picard iteration are performed on a 3D model of a nuclear fuel assembly. Both criticality (k-eigenvalue) and critical boron search problems are considered.
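
    As a rough point of reference for the coupling strategies compared in this record, the sketch below contrasts plain Picard (fixed-point) iteration with a depth-one Anderson acceleration step for an abstract coupled update x = G(x). The two-variable toy problem and tolerances are invented for illustration and have nothing to do with the reactor-physics operators in the paper.

      import numpy as np

      def picard(G, x0, tol=1e-10, max_iter=200):
          """Plain Picard (fixed-point) iteration: x <- G(x)."""
          x = np.asarray(x0, dtype=float)
          for k in range(max_iter):
              x_new = G(x)
              if np.linalg.norm(x_new - x) < tol:
                  return x_new, k + 1
              x = x_new
          return x, max_iter

      def anderson_m1(G, x0, tol=1e-10, max_iter=200):
          """Anderson acceleration with window m = 1 (a secant-like mixing of the last two evaluations)."""
          x_prev = np.asarray(x0, dtype=float)
          g_prev = G(x_prev)
          x = g_prev.copy()                      # first step is a plain Picard step
          for k in range(max_iter):
              g = G(x)
              f, f_prev = g - x, g_prev - x_prev
              df = f - f_prev
              denom = float(df @ df)
              alpha = float(f @ df) / denom if denom > 0 else 0.0
              x_new = g - alpha * (g - g_prev)   # mix the two latest evaluations
              if np.linalg.norm(x_new - x) < tol:
                  return x_new, k + 2
              x_prev, g_prev, x = x, g, x_new
          return x, max_iter

      # Toy coupled problem (assumed): two "physics" fields updated from each other's state.
      G = lambda v: np.array([0.5 * np.cos(v[1]) + 1.0, 0.3 * v[0] ** 0.5 + 0.2])
      print(picard(G, [1.0, 1.0]))
      print(anderson_m1(G, [1.0, 1.0]))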

  15. An assessment of coupling algorithms for nuclear reactor core physics simulations

    DOE PAGES

    Hamilton, Steven; Berrill, Mark; Clarno, Kevin; ...

    2016-02-06

    This paper evaluates the performance of multiphysics coupling algorithms applied to a light water nuclear reactor core simulation. The simulation couples the k-eigenvalue form of the neutron transport equation with heat conduction and subchannel flow equations. We compare Picard iteration (block Gauss–Seidel) to Anderson acceleration and multiple variants of preconditioned Jacobian-free Newton–Krylov (JFNK). The performance of the methods is evaluated over a range of energy group structures and core power levels. A novel physics-based approximation to a Jacobian-vector product has been developed to mitigate the impact of expensive on-line cross section processing steps. Furthermore, numerical simulations demonstrating the efficiency of JFNK and Anderson acceleration relative to standard Picard iteration are performed on a 3D model of a nuclear fuel assembly. Both criticality (k-eigenvalue) and critical boron search problems are considered.

  16. [Research on optimization of lower limb parameters of cardiopulmonary resuscitation simulation model based on genetic algorithm].

    PubMed

    Xu, Lin

    2014-10-01

    Sudden cardiac arrest is one of the critical clinical syndromes in emergency situations, and cardiopulmonary resuscitation (CPR) is an essential treatment for patients in sudden cardiac arrest. To simulate effectively the hemodynamic effects of AEI-CPR (active compression-decompression CPR coupled with enhanced external counter-pulsation and an inspiratory impedance threshold valve), and to study the physiological parameters of each part of the lower limbs in more detail, a CPR simulation model established by Babbs was refined. The lower limbs were divided into iliac, thigh, and calf segments, described by 15 physiological parameters. These 15 parameters were then optimized with a genetic algorithm, and satisfactory simulation results were finally obtained.

  17. An assessment of coupling algorithms for nuclear reactor core physics simulations

    SciTech Connect

    Hamilton, Steven; Berrill, Mark; Clarno, Kevin; Pawlowski, Roger; Toth, Alex; Kelley, C. T.; Evans, Thomas; Philip, Bobby

    2016-04-01

    Here we evaluate the performance of multiphysics coupling algorithms applied to a light water nuclear reactor core simulation. The simulation couples the k-eigenvalue form of the neutron transport equation with heat conduction and subchannel flow equations. We compare Picard iteration (block Gauss–Seidel) to Anderson acceleration and multiple variants of preconditioned Jacobian-free Newton–Krylov (JFNK). The performance of the methods is evaluated over a range of energy group structures and core power levels. A novel physics-based approximation to a Jacobian-vector product was developed to mitigate the impact of expensive on-line cross section processing steps. Numerical simulations demonstrating the efficiency of JFNK and Anderson acceleration relative to standard Picard iteration are performed on a 3D model of a nuclear fuel assembly. Finally, both criticality (k-eigenvalue) and critical boron search problems are considered.

  18. An assessment of coupling algorithms for nuclear reactor core physics simulations

    DOE PAGES

    Hamilton, Steven; Berrill, Mark; Clarno, Kevin; ...

    2016-04-01

    Here we evaluate the performance of multiphysics coupling algorithms applied to a light water nuclear reactor core simulation. The simulation couples the k-eigenvalue form of the neutron transport equation with heat conduction and subchannel flow equations. We compare Picard iteration (block Gauss–Seidel) to Anderson acceleration and multiple variants of preconditioned Jacobian-free Newton–Krylov (JFNK). The performance of the methods is evaluated over a range of energy group structures and core power levels. A novel physics-based approximation to a Jacobian-vector product was developed to mitigate the impact of expensive on-line cross section processing steps. Numerical simulations demonstrating the efficiency of JFNK and Anderson acceleration relative to standard Picard iteration are performed on a 3D model of a nuclear fuel assembly. Finally, both criticality (k-eigenvalue) and critical boron search problems are considered.

  19. An assessment of coupling algorithms for nuclear reactor core physics simulations

    NASA Astrophysics Data System (ADS)

    Hamilton, Steven; Berrill, Mark; Clarno, Kevin; Pawlowski, Roger; Toth, Alex; Kelley, C. T.; Evans, Thomas; Philip, Bobby

    2016-04-01

    This paper evaluates the performance of multiphysics coupling algorithms applied to a light water nuclear reactor core simulation. The simulation couples the k-eigenvalue form of the neutron transport equation with heat conduction and subchannel flow equations. We compare Picard iteration (block Gauss-Seidel) to Anderson acceleration and multiple variants of preconditioned Jacobian-free Newton-Krylov (JFNK). The performance of the methods is evaluated over a range of energy group structures and core power levels. A novel physics-based approximation to a Jacobian-vector product has been developed to mitigate the impact of expensive on-line cross section processing steps. Numerical simulations demonstrating the efficiency of JFNK and Anderson acceleration relative to standard Picard iteration are performed on a 3D model of a nuclear fuel assembly. Both criticality (k-eigenvalue) and critical boron search problems are considered.

  20. An assessment of coupling algorithms for nuclear reactor core physics simulations

    SciTech Connect

    Hamilton, Steven; Berrill, Mark; Clarno, Kevin; Pawlowski, Roger; Toth, Alex; Kelley, C. T.; Evans, Thomas; Philip, Bobby

    2016-02-06

    This paper evaluates the performance of multiphysics coupling algorithms applied to a light water nuclear reactor core simulation. The simulation couples the k-eigenvalue form of the neutron transport equation with heat conduction and subchannel flow equations. We compare Picard iteration (block Gauss–Seidel) to Anderson acceleration and multiple variants of preconditioned Jacobian-free Newton–Krylov (JFNK). The performance of the methods is evaluated over a range of energy group structures and core power levels. A novel physics-based approximation to a Jacobian-vector product has been developed to mitigate the impact of expensive on-line cross section processing steps. Furthermore, numerical simulations demonstrating the efficiency of JFNK and Anderson acceleration relative to standard Picard iteration are performed on a 3D model of a nuclear fuel assembly. Both criticality (k-eigenvalue) and critical boron search problems are considered.

  1. Bayesian parameter inference and model selection by population annealing in systems biology.

    PubMed

    Murakami, Yohei

    2014-01-01

    Parameter inference and model selection are very important for mathematical modeling in systems biology. Bayesian statistics can be used to conduct both parameter inference and model selection. In particular, the framework named approximate Bayesian computation is often used for parameter inference and model selection in systems biology. However, Monte Carlo methods need to be used to compute Bayesian posterior distributions. In addition, the posterior distributions of parameters are sometimes almost uniform or very similar to their prior distributions. In such cases, it is difficult to choose one specific parameter value with high credibility as the representative value of the distribution. To overcome these problems, we introduced population annealing, one of the population Monte Carlo algorithms. Although population annealing is usually used in statistical mechanics, we showed that it can be used to compute Bayesian posterior distributions in the approximate Bayesian computation framework. To deal with the non-identifiability of representative parameter values, we proposed running the simulations with a parameter ensemble sampled from the posterior distribution, named the "posterior parameter ensemble". We showed that population annealing is an efficient and convenient algorithm for generating the posterior parameter ensemble. We also showed that simulations with the posterior parameter ensemble can not only reproduce the data used for parameter inference but also capture and predict data that were not used for parameter inference. Lastly, we introduced the marginal likelihood in the approximate Bayesian computation framework for Bayesian model selection. We showed that population annealing enables us to compute the marginal likelihood in the approximate Bayesian computation framework and to conduct model selection based on the Bayes factor.
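
    A compact sketch of population annealing in the ABC setting described above: a population of parameter sets is reweighted and resampled as a distance-based pseudo-likelihood is gradually annealed, with a short Metropolis perturbation after each resampling, yielding a "posterior parameter ensemble". The prior, distance function, annealing schedule, and kernel width below are placeholders invented for illustration, not quantities from the paper.

      import numpy as np

      rng = np.random.default_rng(0)

      def population_annealing_abc(simulate, distance, prior_sample, prior_logpdf,
                                   n_particles=500, betas=np.linspace(0.0, 50.0, 11),
                                   kernel_sd=0.1):
          """Population annealing for approximate Bayesian computation (illustrative sketch)."""
          theta = np.array([prior_sample() for _ in range(n_particles)])
          d = np.array([distance(simulate(t)) for t in theta])
          for b_old, b_new in zip(betas[:-1], betas[1:]):
              w = np.exp(-(b_new - b_old) * d)          # anneal a pseudo-likelihood exp(-beta * distance)
              w /= w.sum()
              idx = rng.choice(n_particles, size=n_particles, p=w)   # resample by weight
              theta, d = theta[idx], d[idx]
              for i in range(n_particles):              # one Metropolis step per particle
                  prop = theta[i] + rng.normal(0.0, kernel_sd, size=theta[i].shape)
                  d_prop = distance(simulate(prop))
                  log_acc = (-b_new * (d_prop - d[i])
                             + prior_logpdf(prop) - prior_logpdf(theta[i]))
                  if np.log(rng.random()) < log_acc:
                      theta[i], d[i] = prop, d_prop
          return theta                                   # the posterior parameter ensemble

      # Toy usage (assumed model): infer the mean of a Gaussian from its sample mean.
      data_mean = 2.0
      simulate = lambda t: rng.normal(t[0], 1.0, size=50).mean()
      distance = lambda s: abs(s - data_mean)
      prior_sample = lambda: rng.uniform(-5.0, 5.0, size=1)
      prior_logpdf = lambda t: 0.0 if -5.0 <= t[0] <= 5.0 else -np.inf
      ensemble = population_annealing_abc(simulate, distance, prior_sample, prior_logpdf)
      print(ensemble.mean(), ensemble.std())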

  2. Bayesian Parameter Inference and Model Selection by Population Annealing in Systems Biology

    PubMed Central

    Murakami, Yohei

    2014-01-01

    Parameter inference and model selection are very important for mathematical modeling in systems biology. Bayesian statistics can be used to conduct both parameter inference and model selection. In particular, the framework named approximate Bayesian computation is often used for parameter inference and model selection in systems biology. However, Monte Carlo methods need to be used to compute Bayesian posterior distributions. In addition, the posterior distributions of parameters are sometimes almost uniform or very similar to their prior distributions. In such cases, it is difficult to choose one specific parameter value with high credibility as the representative value of the distribution. To overcome these problems, we introduced population annealing, one of the population Monte Carlo algorithms. Although population annealing is usually used in statistical mechanics, we showed that it can be used to compute Bayesian posterior distributions in the approximate Bayesian computation framework. To deal with the non-identifiability of representative parameter values, we proposed running the simulations with a parameter ensemble sampled from the posterior distribution, named the "posterior parameter ensemble". We showed that population annealing is an efficient and convenient algorithm for generating the posterior parameter ensemble. We also showed that simulations with the posterior parameter ensemble can not only reproduce the data used for parameter inference but also capture and predict data that were not used for parameter inference. Lastly, we introduced the marginal likelihood in the approximate Bayesian computation framework for Bayesian model selection. We showed that population annealing enables us to compute the marginal likelihood in the approximate Bayesian computation framework and to conduct model selection based on the Bayes factor. PMID:25089832

  3. Use of a novel Hill-climbing genetic algorithm in protein folding simulations.

    PubMed

    Cooper, Lee R; Corne, David W; Crabbe, M James C

    2003-12-01

    We have developed a novel Hill-climbing genetic algorithm (GA) for simulation of protein folding. The program (written in C) builds a set of Cartesian points to represent an unfolded polypeptide's backbone. The dihedral angles determining the chain's configuration are stored in an array of chromosome structures that is copied and then mutated. The fitness of the mutated chain's configuration is determined by its radius of gyration. A four-helix bundle was used to optimise simulation conditions, and the program was compared with other, larger genetic algorithms on a variety of structures. The program ran 50% faster than other GA programs. Overall, tests on 100 non-redundant structures gave results comparable to other genetic algorithms, with the Hill-climbing program running between 20% and 50% faster. Examples including crambin, cytochrome c, cytochrome B, and hemerythrin gave good secondary structure fits, with overall alpha carbon atom rms deviations of 5 to 5.6 Å using an optimised hydrophobic term in the fitness function.
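
    A toy sketch loosely modelled on the description above: a population of dihedral-angle "chromosomes" is mutated, and a mutant replaces its parent hill-climbing style only when the radius of gyration of the resulting backbone improves. The planar chain builder, population size, and mutation rate below are invented for illustration; they are not the Cartesian construction or settings used in the paper.

      import math, random

      def backbone_coords(dihedrals, bond_length=3.8):
          """Very crude chain builder: treat each dihedral as an in-plane turn angle (assumption)."""
          x = y = heading = 0.0
          coords = [(x, y)]
          for phi in dihedrals:
              heading += phi
              x += bond_length * math.cos(heading)
              y += bond_length * math.sin(heading)
              coords.append((x, y))
          return coords

      def radius_of_gyration(coords):
          cx = sum(p[0] for p in coords) / len(coords)
          cy = sum(p[1] for p in coords) / len(coords)
          return math.sqrt(sum((p[0] - cx) ** 2 + (p[1] - cy) ** 2 for p in coords) / len(coords))

      def hill_climb_ga(n_residues=30, pop_size=20, generations=500, mut_rate=0.2):
          """Hill-climbing GA: a mutated chromosome replaces its parent only if fitness improves."""
          pop = [[random.uniform(-math.pi, math.pi) for _ in range(n_residues)]
                 for _ in range(pop_size)]
          fitness = [radius_of_gyration(backbone_coords(c)) for c in pop]
          for _ in range(generations):
              for i in range(pop_size):
                  child = [phi + random.gauss(0.0, 0.3) if random.random() < mut_rate else phi
                           for phi in pop[i]]
                  f = radius_of_gyration(backbone_coords(child))
                  if f < fitness[i]:        # lower radius of gyration = more compact = fitter
                      pop[i], fitness[i] = child, f
          best = min(range(pop_size), key=fitness.__getitem__)
          return pop[best], fitness[best]

      print(hill_climb_ga()[1])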

  4. Object-Oriented/Data-Oriented Design of a Direct Simulation Monte Carlo Algorithm

    NASA Technical Reports Server (NTRS)

    Liechty, Derek S.

    2014-01-01

    Over the past decade, there has been much progress towards improved phenomenological modeling and algorithmic updates for the direct simulation Monte Carlo (DSMC) method, which provides a probabilistic physical simulation of gas flows. These improvements have largely been based on the work of the originator of the DSMC method, Graeme Bird. Of primary importance are improved chemistry, internal energy, and physics modeling and a reduction in time to solution. These allow for an expanded range of possible solutions in altitude and velocity space. NASA's current production code, the DSMC Analysis Code (DAC), is well established, is based on Bird's 1994 algorithms written in Fortran 77, and has proven difficult to upgrade. A new DSMC code is being developed in the C++ programming language using object-oriented and data-oriented design paradigms to facilitate the inclusion of the recent improvements and future development activities. The development efforts on the new code, the Multiphysics Algorithm with Particles (MAP), are described, and performance comparisons are made with DAC.

  5. MODA: a new algorithm to compute optical depths in multidimensional hydrodynamic simulations

    NASA Astrophysics Data System (ADS)

    Perego, Albino; Gafton, Emanuel; Cabezón, Rubén; Rosswog, Stephan; Liebendörfer, Matthias

    2014-08-01

    Aims: We introduce the multidimensional optical depth algorithm (MODA) for the calculation of optical depths in approximate multidimensional radiative transport schemes, equally applicable to neutrinos and photons. Motivated by (but not limited to) neutrino transport in three-dimensional simulations of core-collapse supernovae and neutron star mergers, our method makes no assumptions about the geometry of the matter distribution, apart from expecting optically transparent boundaries. Methods: Based on local information about opacities, the algorithm determines an escape route that tends to minimize the optical depth without assuming any predefined paths for radiation. Its adaptivity makes it suitable for a variety of astrophysical settings with complicated geometry (e.g., core-collapse supernovae, compact binary mergers, tidal disruptions, star formation, etc.). We implement the MODA algorithm in both a Eulerian hydrodynamics code with a fixed, uniform grid and an SPH code, where we use a tree structure that is otherwise used for searching neighbors and calculating gravity. Results: In a series of numerical experiments, we compare the MODA results with analytically known solutions. We also use snapshots from actual 3D simulations and compare the results of MODA with those obtained with other methods, such as the global and local ray-by-ray method. It turns out that MODA achieves excellent accuracy at a moderate computational cost. In the appendix we also discuss implementation details and parallelization strategies.

  6. Different genetic algorithms and the evolution of specialization: a study with groups of simulated neural robots.

    PubMed

    Ferrauto, Tomassino; Parisi, Domenico; Di Stefano, Gabriele; Baldassarre, Gianluca

    2013-01-01

    Organisms that live in groups, from microbial symbionts to social insects and schooling fish, exhibit a number of highly efficient cooperative behaviors, often based on role taking and specialization. These behaviors are relevant not only for the biologist but also for the engineer interested in decentralized collective robotics. We address these phenomena by carrying out experiments with groups of two simulated robots controlled by neural networks whose connection weights are evolved by using genetic algorithms. These algorithms and controllers are well suited to autonomously find solutions for decentralized collective robotic tasks based on principles of self-organization. The article first presents a taxonomy of role-taking and specialization mechanisms related to evolved neural network controllers. Then it introduces two cooperation tasks, which can be accomplished by either role taking or specialization, and uses these tasks to compare four different genetic algorithms to evaluate their capacity to evolve a suitable behavioral strategy, which depends on the task demands. Interestingly, only one of the four algorithms, which appears to have more biological plausibility, is capable of evolving role taking or specialization when they are needed. The results are relevant for both collective robotics and biology, as they can provide useful hints on the different processes that can lead to the emergence of specialization in robots and organisms.

  7. Simulation and experiment research of face recognition with modified multi-method morphological correlation algorithm

    NASA Astrophysics Data System (ADS)

    Yang, Yu; Xuping, Zhang

    2007-03-01

    A morphological definition of the similarity degree of gray-scale images and a general definition of morphological correlation (GMC) are proposed. Hardware and software designs for a compact joint transform correlator are presented in order to implement GMC. Two kinds of modified general morphological correlation algorithms are proposed. The gray-scale image is decomposed into a set of binary image slices using a certain decomposition method. In the first algorithm, the edge of each binary joint image slice is detected (and the adjustability of its width is investigated), and the joint power spectra of the edges are summed. In the second algorithm, in one case the joint power spectrum of each pair is binarized or thinned and then summed, and in the other case the summation of the joint power spectra of these pairs is binarized or thinned. Computer-simulation results and real face image recognition results indicate that the modified algorithm can improve the discrimination capability for highly similar gray-scale face images.

  8. Design and simulation of imaging algorithm for Fresnel telescopy imaging system

    NASA Astrophysics Data System (ADS)

    Lv, Xiao-yu; Liu, Li-ren; Yan, Ai-min; Sun, Jian-feng; Dai, En-wen; Li, Bing

    2011-06-01

    Fresnel telescopy (short for Fresnel telescopy full-aperture synthesized imaging ladar) is a new high-resolution active laser imaging technique. The technique is a variant of Fourier telescopy and optical scanning holography that uses Fresnel zone plates to scan the target. Compared with synthetic aperture imaging ladar (SAIL), Fresnel telescopy avoids the problems of time and space synchronization, which decreases the technical difficulty. In the one-dimensional (1D) scanning operational mode for a moving target, the spatial distribution of the sampling data after the time-to-space transformation is non-uniform because of the relative motion between the target and the scanning beam. However, because the subsequent matched-filtering imaging algorithm uses the fast Fourier transform (FFT), the data distribution should be regular and uniform. We use resampling interpolation to transform the data into a uniform two-dimensional (2D) distribution, and the accuracy of the resampling interpolation process largely determines the reconstruction results. Imaging algorithms with different resampling interpolation methods are analyzed, and computer simulations are given. We obtain good reconstructions of the target, which shows that the designed imaging algorithm for the Fresnel telescopy imaging system is effective. This work has substantial practical value and offers significant benefit for high-resolution Fresnel telescopy laser imaging ladar systems.

  9. Exclusive CHIPS-TPT algorithms for simulation of neutron-nuclear reactions

    NASA Astrophysics Data System (ADS)

    Kosov, Mikhail; Savin, Dmitriy

    2016-09-01

    The CHIPS-TPT physics library for simulation of neutron-nuclear reactions at a new, exclusive level is being developed at CFAR VNIIA. The exclusive modeling conserves energy, momentum, and quantum numbers in each neutron-nuclear interaction. The CHIPS-TPT algorithms are based on the exclusive CHIPS library, which is compatible with Geant4. Special CHIPS-TPT physics lists in the Geant4 format are provided. The calculation time for an exclusive CHIPS-TPT simulation is comparable to the time of the corresponding inclusive Geant4-HP simulation and is much faster for mono-isotopic simulations. In addition to reducing the fluctuations of the deposited energy, a consequence of energy conservation, the CHIPS-TPT libraries make it possible to simulate correlations between secondary particles, e.g. secondary gammas or n-γ correlations, and the Doppler broadening of γ-lines in the simulated spectra, which can be measured by germanium detectors.

  10. New Algorithms for Computing the Time-to-Collision in Freeway Traffic Simulation Models

    PubMed Central

    Hou, Jia; List, George F.; Guo, Xiucheng

    2014-01-01

    Ways to estimate the time-to-collision are explored. In the context of traffic simulation models, classical lane-based notions of vehicle location are relaxed and new, fast, and efficient algorithms are examined. With trajectory conflicts being the main focus, computational procedures are explored which use a two-dimensional coordinate system to track the vehicle trajectories and assess conflicts. Vector-based kinematic variables are used to support the calculations. Algorithms based on boxes, circles, and ellipses are considered. Their performance is evaluated in the context of computational complexity and solution time. Results from these analyses suggest promise for effective and efficient analyses. A combined computation process is found to be very effective. PMID:25628650
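
    For the circle-based variant mentioned above, a time-to-collision can be obtained in closed form from the relative position and velocity of two vehicles approximated as circles: collision occurs when the relative distance first equals the sum of the radii, which reduces to the smallest non-negative root of a quadratic. The sketch below is a generic derivation of that quadratic with invented coordinates; it is not code from the paper.

      import math

      def time_to_collision(p1, v1, r1, p2, v2, r2):
          """Earliest time at which two circles (centres p, velocities v, radii r) touch, or None."""
          px, py = p2[0] - p1[0], p2[1] - p1[1]      # relative position
          vx, vy = v2[0] - v1[0], v2[1] - v1[1]      # relative velocity
          R = r1 + r2
          a = vx * vx + vy * vy
          b = 2.0 * (px * vx + py * vy)
          c = px * px + py * py - R * R
          if c <= 0.0:
              return 0.0                              # already overlapping
          if a == 0.0:
              return None                             # no relative motion
          disc = b * b - 4.0 * a * c
          if disc < 0.0:
              return None                             # trajectories never come within R
          t = (-b - math.sqrt(disc)) / (2.0 * a)      # first root = first contact
          return t if t >= 0.0 else None

      # Example: two vehicles on converging straight-line trajectories (coordinates assumed).
      print(time_to_collision((0, 0), (10, 0), 1.5, (50, 5), (-5, -1), 1.5))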

  11. Macro-micro interlocked simulation algorithm: an exemplification for aurora arc evolution

    NASA Astrophysics Data System (ADS)

    Sato, Tetsuya; Hasegawa, Hiroki; Ohno, Nobuaki

    2009-01-01

    Using an innovative holistic simulation algorithm that can self-consistently treat a system that evolves as cooperation between macroscopic and microscopic processes, the evolution of a colorful aurora arc is beautifully reproduced as the result of cooperation between the global field-aligned feedback instability of the coupled magnetosphere-ionosphere system and the ensuing microscopic ion-acoustic instability that generates electric double layers and accelerates aurora electrons. These results are in agreement with rocket and satellite observations. This shows that the proposed holistic algorithm could be a reliable tool to reveal complex real dramatic events and become, in the near future, a viable scientifically secure prediction tool for natural disasters such as earthquakes, landslides and floods caused by typhoons.

  12. Algorithmic scalability in globally constrained conservative parallel discrete event simulations of asynchronous systems.

    PubMed

    Kolakowska, A; Novotny, M A; Korniss, G

    2003-04-01

    We consider parallel simulations for asynchronous systems employing L processing elements that are arranged on a ring. Processors communicate only among the nearest neighbors and advance their local simulated time only if it is guaranteed that this does not violate causality. In simulations with no constraints, in the infinite-L limit the utilization scales [Korniss et al., Phys. Rev. Lett. 84, 1351 (2000)], but the width of the virtual time horizon diverges (i.e., the measurement phase of the algorithm does not scale). In this work, we introduce a moving Delta-window global constraint, which modifies the algorithm so that the measurement phase scales as well. We present results of systematic studies in which the system size (i.e., L and the volume load per processor) as well as the constraint are varied. The Delta constraint eliminates the extreme fluctuations in the virtual time horizon, provides a bound on its width, and controls the average progress rate. The width of the Delta window can serve as a tuning parameter that, for a given volume load per processor, can be adjusted to optimize the utilization, so as to maximize the efficiency. This result may find numerous applications in modeling the evolution of general spatially extended short-range interacting systems with asynchronous dynamics, including dynamic Monte Carlo studies.
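
    A serial toy model of the update rule described above: each processing element advances its local virtual time only if it is not ahead of its nearest neighbours (causality) and not more than Delta ahead of the globally slowest element (the window constraint). The exponential time increments and all parameter values are assumptions made for illustration, not the paper's measurements.

      import random

      def simulate_time_horizon(L=100, steps=20000, delta=10.0):
          """Toy conservative parallel-discrete-event model on a ring with a moving Delta-window."""
          tau = [0.0] * L                      # local virtual times of the L processing elements
          utilization = 0
          for _ in range(steps):
              k = random.randrange(L)          # pick a random PE to attempt an update
              left, right = tau[(k - 1) % L], tau[(k + 1) % L]
              causal_ok = tau[k] <= left and tau[k] <= right     # not ahead of its neighbours
              window_ok = tau[k] - min(tau) <= delta             # global Delta-window constraint
              if causal_ok and window_ok:
                  tau[k] += random.expovariate(1.0)              # advance by a random increment
                  utilization += 1
          mean = sum(tau) / L
          width = (sum((t - mean) ** 2 for t in tau) / L) ** 0.5  # width of the virtual time horizon
          return utilization / steps, width

      print(simulate_time_horizon())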

  13. A Linked Simulation-Optimization (LSO) Model for Conjunctive Irrigation Management using Clonal Selection Algorithm

    NASA Astrophysics Data System (ADS)

    Islam, Sirajul; Talukdar, Bipul

    2016-09-01

    A Linked Simulation-Optimization (LSO) model based on a Clonal Selection Algorithm (CSA) was formulated for application in conjunctive irrigation management. A series of measures were considered for reducing the computational burden associated with the LSO approach. Certain modifications were made to the formulated CSA so as to decrease the number of function evaluations. In addition, a simple problem-specific code for a two-dimensional groundwater flow simulation model was developed. The flow model was further simplified by a novel approach of area reduction, in order to save computational time in simulation. The LSO model was applied in the irrigation command of the Pagladiya Dam Project in Assam, India. With a view to evaluating the performance of the CSA, a Genetic Algorithm (GA) was used as a comparison base. The results from the CSA compared well with those from the GA. In fact, the CSA was found to consume less computational time than the GA while converging to the optimal solution, owing to the modifications made to it.

  14. Springback Simulation and Tool Surface Compensation Algorithm for Sheet Metal Forming

    SciTech Connect

    Shen Guozhe; Hu Ping; Zhang Xiangkui; Chen Xiaobin; Li Xiaoda

    2005-08-05

    Springback is an unavoidable forming defect in the sheet metal forming process, and calculating it accurately is a major challenge for much FEA software. Springback compensation makes the stamped final part conform to the designed part shape by modifying the tool surface, which depends on an accurate springback amount. However, the mesh data from numerical simulation are expressed as nodes and elements, and such data cannot be supplied directly as tool surface CAD data. In this paper, a tool surface compensation algorithm based on a numerical simulation technique for the springback process is proposed, in which the independently developed dynamic explicit springback algorithm (DESA) is used to simulate the springback amount. When performing the tool surface compensation, the springback amount of a projected point is obtained by interpolation of the springback amounts of the projected element nodes, so the modified values of the tool surface can be calculated inversely. After repeating the springback and compensation calculations 1 to 3 times, a reasonable tool surface mesh is obtained. Finally, the FEM data on the compensated tool surface are fitted into a surface by CAD modeling software. The examination of a real industrial part shows the validity of the present method.
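
    The compensation loop described above follows the familiar pattern of offsetting the tool surface opposite to the predicted springback deviation and repeating the forming/springback analysis until the sprung part matches the design within tolerance. The sketch below uses a fictitious springback_simulation callback, a scalar compensation factor, and a toy 1D profile; it is only a schematic of the iteration, not the DESA solver from the paper.

      import numpy as np

      def compensate_tool_surface(design_nodes, springback_simulation,
                                  factor=1.0, tol=0.05, max_loops=3):
          """Iteratively offset the tool surface opposite to the predicted springback deviation."""
          tool_nodes = design_nodes.copy()                   # start from the nominal (designed) geometry
          for loop in range(max_loops):
              sprung = springback_simulation(tool_nodes)     # forming + springback prediction (placeholder)
              deviation = sprung - design_nodes              # springback amount at each node
              if np.abs(deviation).max() < tol:
                  return tool_nodes, loop
              tool_nodes = tool_nodes - factor * deviation   # move the tool the other way
          return tool_nodes, max_loops

      # Toy stand-in for the FE springback prediction (assumed): the part springs up by 10%.
      design = np.linspace(0.0, 20.0, 50)
      fake_springback = lambda tool: tool * 1.10
      print(compensate_tool_surface(design, fake_springback))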

  15. Simulation of flow in the microcirculation using a hybrid Lattice-Boltzmann and Finite Element algorithm

    NASA Astrophysics Data System (ADS)

    Gonzalez-Mancera, Andres; Gonzalez Cardenas, Diego

    2014-11-01

    Flow in the microcirculation is highly dependent on the mechanical properties of the cells suspended in the plasma. Red blood cells have to deform in order to pass through the smaller sections of the microcirculation. Certain diseases change the mechanical properties of red blood cells, affecting their ability to deform and the rheological behaviour of blood. We developed a hybrid algorithm based on the Lattice-Boltzmann and Finite Element methods to simulate blood flow in small capillaries. Plasma was modeled as a Newtonian fluid and the red blood cells' membrane as a hyperelastic solid. The fluid-structure interaction was handled using the immersed boundary method. We simulated the flow of plasma with suspended red blood cells through cylindrical capillaries and measured the pressure drop as a function of the membrane's rigidity. We also simulated the flow through capillaries with a restriction and identified critical properties for which the suspended particles are unable to flow. The algorithm output was verified by reproducing certain common features of flow in the microcirculation, such as the Fahraeus-Lindqvist effect.

  16. Stochastic resource allocation in emergency departments with a multi-objective simulation optimization algorithm.

    PubMed

    Feng, Yen-Yi; Wu, I-Chin; Chen, Tzu-Li

    2017-03-01

    The number of emergency cases or emergency room visits rapidly increases annually, thus leading to an imbalance in supply and demand and to the long-term overcrowding of hospital emergency departments (EDs). However, current solutions to increase medical resources and improve the handling of patient needs are either impractical or infeasible in the Taiwanese environment. Therefore, EDs must optimize resource allocation given limited medical resources to minimize the average length of stay of patients and medical resource waste costs. This study constructs a multi-objective mathematical model for medical resource allocation in EDs in accordance with emergency flow or procedure. The proposed mathematical model is complex and difficult to solve because its performance value is stochastic; furthermore, the model considers both objectives simultaneously. Thus, this study develops a multi-objective simulation optimization algorithm by integrating a non-dominated sorting genetic algorithm II (NSGA II) with multi-objective computing budget allocation (MOCBA) to address the challenges of multi-objective medical resource allocation. NSGA II is used to investigate plausible solutions for medical resource allocation, and MOCBA identifies effective sets of feasible Pareto (non-dominated) medical resource allocation solutions in addition to effectively allocating simulation or computation budgets. The discrete event simulation model of ED flow is inspired by a Taiwan hospital case and is constructed to estimate the expected performance values of each medical allocation solution as obtained through NSGA II. Finally, computational experiments are performed to verify the effectiveness and performance of the integrated NSGA II and MOCBA method, as well as to derive non-dominated medical resource allocation solutions from the algorithms.

  17. Statistical analysis of piloted simulation of real time trajectory optimization algorithms

    NASA Technical Reports Server (NTRS)

    Price, D. B.

    1982-01-01

    A simulation of time-optimal intercept algorithms for on-board computation of control commands is described. The effects of three different display modes and two different computation modes on the pilots' ability to intercept a moving target in minimum time were tested. Both computation modes employed singular perturbation theory to help simplify the two-point boundary value problem associated with trajectory optimization. Target intercept time was affected by both the display and computation modes chosen, but the display mode chosen was the only significant influence on the miss distance.

  18. Effectiveness of Intravenous Infusion Algorithms for Glucose Control in Diabetic Patients Using Different Simulation Models

    PubMed Central

    Farmer, Terry G.; Edgar, Thomas F.

    2009-01-01

    The effectiveness of closed-loop insulin infusion algorithms is assessed for three different mathematical models describing insulin and glucose dynamics within a Type I diabetes patient. Simulations are performed to assess the effectiveness of proportional plus integral plus derivative (PID) control, feedforward control, and a physiologically-based control system with respect to maintaining normal glucose levels during a meal and during exercise. Control effectiveness is assessed by comparing the simulated response to a simulation of a healthy patient during both a meal and exercise and establishing maximum and minimum glucose levels and insulin infusion levels, as well as maximum duration of hyperglycemia. Controller effectiveness is assessed within the minimal model, the Sorensen model, and the Hovorka model. Results showed that no type of control was able to maintain normal conditions when simulations were performed using the minimal model. For both the Sorensen model and the Hovorka model, proportional control was sufficient to maintain normal glucose levels. Given published clinical data showing the ineffectiveness of PID control in patients, the work demonstrates that controller success based on simulation results can be misleading, and that future work should focus on addressing the model discrepancies. PMID:20161147
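
    As a point of reference for the PID strategy evaluated above, a discrete-time PID insulin-infusion law can be written in a few lines. The gains, sampling interval, setpoint, and basal rate below are placeholders for illustration only; tuning them against any of the three patient models mentioned in the record is outside the scope of this sketch, and none of these numbers are clinical values.

      class PIDInsulinController:
          """Discrete PID law: infusion = basal + Kp*e + Ki*integral(e) + Kd*de/dt, clipped at zero."""

          def __init__(self, kp, ki, kd, setpoint=100.0, basal=1.0, dt=5.0):
              self.kp, self.ki, self.kd = kp, ki, kd
              self.setpoint, self.basal, self.dt = setpoint, basal, dt
              self.integral = 0.0
              self.prev_error = None

          def update(self, glucose_mg_dl):
              error = glucose_mg_dl - self.setpoint          # positive error = hyperglycemia
              self.integral += error * self.dt
              derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / self.dt
              self.prev_error = error
              rate = self.basal + self.kp * error + self.ki * self.integral + self.kd * derivative
              return max(rate, 0.0)                          # infusion rate cannot be negative

      # Example: respond to a rising glucose reading (gains are illustrative, not clinical values).
      ctrl = PIDInsulinController(kp=0.02, ki=0.0005, kd=0.05)
      for g in (100, 120, 160, 180, 170):
          print(round(ctrl.update(g), 3))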

  19. Stochastic simulation of reaction subnetworks: Exploiting synergy between the chemical master equation and the Gillespie algorithm

    NASA Astrophysics Data System (ADS)

    Albert, J.

    2016-12-01

    Stochastic simulation of reaction networks is limited by two factors: accuracy and time. The Gillespie algorithm (GA) is a Monte Carlo-type method for constructing probability distribution functions (pdfs) from statistical ensembles. Its accuracy is therefore a function of the computing time. The chemical master equation (CME) is a more direct route to obtaining the pdfs; however, solving the CME is generally very difficult for large networks. We propose a method that combines both approaches in order to simulate stochastically a part of a network. The network is first divided into two parts: A and B. Part A is simulated using the GA, while the solution of the CME for part B, with initial conditions imposed by the simulation results of part A, is fed back into the GA. This cycle is then repeated a desired number of times. The advantage of this synergy between the two approaches is twofold: (1) the GA needs to simulate only a part of the whole network, and hence is faster, and (2) the CME is necessarily simpler to solve, as the part of the network it describes is smaller. We demonstrate the utility of this approach on two examples: a positive feedback (genetic switch) and oscillations driven by a negative feedback.
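
    For readers unfamiliar with the Gillespie algorithm referenced above, the core direct-method step is sketched below: draw the time to the next reaction from an exponential whose rate is the total propensity, then pick which reaction fires with probability proportional to its propensity. The two-reaction birth-death system in the example is an assumption chosen for illustration.

      import random

      def gillespie_direct(x0, propensities, stoichiometry, t_end):
          """Direct-method SSA: returns the trajectory [(t, state), ...] up to time t_end."""
          t, x = 0.0, list(x0)
          trajectory = [(t, list(x))]
          while t < t_end:
              a = [f(x) for f in propensities]
              a_total = sum(a)
              if a_total <= 0.0:
                  break                                   # no reaction can fire any more
              t += random.expovariate(a_total)            # time to the next reaction
              r = random.uniform(0.0, a_total)            # choose which reaction fires
              acc = 0.0
              for j, aj in enumerate(a):
                  acc += aj
                  if r < acc:
                      x = [xi + s for xi, s in zip(x, stoichiometry[j])]
                      break
              trajectory.append((t, list(x)))
          return trajectory

      # Toy birth-death system (assumed): 0 -> X at rate k1, X -> 0 at rate k2 * X.
      k1, k2 = 5.0, 0.1
      props = [lambda x: k1, lambda x: k2 * x[0]]
      stoich = [(+1,), (-1,)]
      print(gillespie_direct([0], props, stoich, t_end=50.0)[-1])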

  20. An efficient ray tracing algorithm for the simulation of light trapping effects in Si solar cells with textured surfaces.

    PubMed

    Byun, Seok Yong; Byun, Seok-Joo; Lee, Jang Kyo; Kim, Jae Wan; Lee, Taek Sung; Sheen, Dongwoo; Cho, Kyuman; Tark, Sung Ju; Kim, Donghwan; Kim, Won Mok

    2012-04-01

    Optimizing the design of the surface texture is an essential aspect of Si solar cell technology as it can maximize the light trapping efficiency of the cells. The proper simulation tools can provide efficient means of designing and analyzing the effects of the texture patterns on light confinement in an active medium. In this work, a newly devised algorithm termed Slab-Outline, based on a ray tracing technique, is reported. The details of the intersection searching logic adopted in Slab-Outline algorithm are also discussed. The efficiency of the logic was tested by comparing the computing time between the current algorithm and the Constructive Solid Geometry algorithm, and its superiority in computing speed was proved. The validity of the new algorithm was verified by comparing the simulated reflectance spectra with the measured spectra from a textured Si surface.

  1. Quantum mechanical NMR simulation algorithm for protein-size spin systems.

    PubMed

    Edwards, Luke J; Savostyanov, D V; Welderufael, Z T; Lee, Donghan; Kuprov, Ilya

    2014-06-01

    Nuclear magnetic resonance spectroscopy is one of the few remaining areas of physical chemistry for which polynomially scaling quantum mechanical simulation methods have not so far been available. In this communication we adapt the restricted state space approximation to protein NMR spectroscopy and illustrate its performance by simulating common 2D and 3D liquid state NMR experiments (including accurate description of relaxation processes using Bloch-Redfield-Wangsness theory) on isotopically enriched human ubiquitin - a protein containing over a thousand nuclear spins forming an irregular polycyclic three-dimensional coupling lattice. The algorithm uses careful tailoring of the density operator space to only include nuclear spin states that are populated to a significant extent. The reduced state space is generated by analysing spin connectivity and decoherence properties: rapidly relaxing states as well as correlations between topologically remote spins are dropped from the basis set.

  2. Variable timestep algorithm for molecular dynamics simulation of non-equilibrium processes

    NASA Astrophysics Data System (ADS)

    Marks, Nigel A.; Robinson, Marc

    2015-06-01

    A simple, yet robust variable timestep algorithm is developed for use in molecular dynamics simulations of energetic processes. Single-particle Kepler orbits are used to study the relationship between trajectory properties and the critical timestep for constant integration error. Over a wide variety of conditions the magnitude of the maximum force is found to correlate linearly with the inverse critical timestep. Other quantities used in the literature, such as the time derivative of the force and the product of the velocity and force, also show reasonable correlations, but not to the same extent. Application of the corresponding metric ||F_max|| Δt in molecular dynamics simulations of radiation damage in graphite shows that the scheme is both straightforward to implement and effective. In tests on a 1 keV cascade the timestep varies by over two orders of magnitude with minimal loss of energy conservation.
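
    The timestep rule summarised above, Δt inversely proportional to the magnitude of the maximum force, can be written in a few lines. The proportionality constant and the clamping bounds below are assumptions for illustration, not the values calibrated in the paper.

      import numpy as np

      def adaptive_timestep(forces, c=0.05, dt_min=1e-4, dt_max=1.0):
          """Choose dt ~ c / ||F_max||, clamped to a sensible range (constants are assumed)."""
          f_max = np.max(np.linalg.norm(forces, axis=1))   # magnitude of the largest per-atom force
          if f_max == 0.0:
              return dt_max
          return float(np.clip(c / f_max, dt_min, dt_max))

      # Example: one large force in an otherwise quiescent system shrinks the step.
      forces = np.zeros((100, 3))
      forces[0] = [250.0, 0.0, 0.0]
      print(adaptive_timestep(forces))               # small step during the energetic event
      print(adaptive_timestep(np.zeros((100, 3))))   # relaxed system -> dt_max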

  3. An explicit algorithm for fully flexible unit cell simulation with recursive thermostat chains.

    PubMed

    Jung, Kwangsub; Cho, Maenghyo

    2008-10-28

    Through the combination of the recursive multiple thermostat (RMT) Nose-Poincare and Parrinello-Rahman methods, the recursive multiple thermostat chained fully flexible unit cell (RMT-NsigmaT) molecular dynamics method is proposed for isothermal-isobaric simulation. The RMT method is known to have the advantage of achieving the ergodicity that is required for canonical sampling of the harmonic oscillator. Thus, an explicit time integration algorithm is developed for RMT-NsigmaT. We examine the ergodicity for various parameters of RMT-NsigmaT using bulk and thin film structures with different numbers of copper atoms and thicknesses in various environments. Through the numerical simulations, we conclude that the RMT-NsigmaT method is advantageous in the cases of lower temperatures.

  4. Exploring Scheduling Algorithms and Analysis Tools for the LSST Operations Simulations

    NASA Astrophysics Data System (ADS)

    Petry, Catherine E.; Miller, M.; Cook, K. H.; Ridgway, S.; Chandrasekharan, S.; Jones, R. L.; Krughoff, K. S.; Ivezic, Z.; Krabbendam, V.

    2012-01-01

    The LSST Operations Simulator models the telescope's design-specific opto-mechanical system performance and site-specific conditions to simulate how observations may be obtained during a 10-year survey. We have found that a remarkable range of science programs are compatible with a single feasible cadence. The current version, OpSim v2.5, incorporates detailed models of the telescope and dome, the camera, weather and a more realistic model for scheduled and unscheduled downtime, as well as a scheduling strategy based on ranking requests for observations from a small number of observing modes attempting to optimize the key science objectives. Each observing mode is driven by a specific algorithm which ranks field-filter combinations of target fields to observe next. The output of the simulator is a detailed record of the activity of the telescope - such as position on the sky, slew activities, weather and various types of downtime - stored in a mySQL database. Sophisticated tools are required to mine this database in order to assess the degree of success of any simulated survey in some detail. An analysis pipeline has been created (SSTAR) which generates a standard report describing the basic characteristics of a simulated survey; a new analysis framework is being designed to allow for the inter-comparison of one or more simulated surveys and to perform more complex analyses in a pipeline fashion. Proprietary software is being used to interactively explore the database and to prototype reports for the new analysis pipeline, and we are working with the ASCOT team (http://ascot.astro.washington.edu) to determine the feasibility of creating our own interactive tools. The next phase of simulator development is being planned to include look-ahead to continue investigating the trade-offs of addressing multiple science goals within a single LSST survey.

  5. Simulation of Long Lived Tracers Using an Improved Empirically Based Two-Dimensional Model Transport Algorithm

    NASA Technical Reports Server (NTRS)

    Fleming, E. L.; Jackman, C. H.; Stolarski, R. S.; Considine, D. B.

    1998-01-01

    We have developed a new empirically-based transport algorithm for use in our GSFC two-dimensional transport and chemistry model. The new algorithm contains planetary wave statistics, and parameterizations to account for the effects due to gravity waves and equatorial Kelvin waves. As such, this scheme utilizes significantly more information compared to our previous algorithm which was based only on zonal mean temperatures and heating rates. The new model transport captures much of the qualitative structure and seasonal variability observed in long lived tracers, such as: isolation of the tropics and the southern hemisphere winter polar vortex; the well mixed surf-zone region of the winter sub-tropics and mid-latitudes; the latitudinal and seasonal variations of total ozone; and the seasonal variations of mesospheric H2O. The model also indicates a double peaked structure in methane associated with the semiannual oscillation in the tropical upper stratosphere. This feature is similar in phase but is significantly weaker in amplitude compared to the observations. The model simulations of carbon-14 and strontium-90 are in good agreement with observations, both in simulating the peak in mixing ratio at 20-25 km, and the decrease with altitude in mixing ratio above 25 km. We also find mostly good agreement between modeled and observed age of air determined from SF6 outside of the northern hemisphere polar vortex. However, observations inside the vortex reveal significantly older air compared to the model. This is consistent with the model deficiencies in simulating CH4 in the northern hemisphere winter high latitudes and illustrates the limitations of the current climatological zonal mean model formulation. The propagation of seasonal signals in water vapor and CO2 in the lower stratosphere showed general agreement in phase, and the model qualitatively captured the observed amplitude decrease in CO2 from the tropics to midlatitudes. However, the simulated seasonal

  6. State-dependent doubly weighted stochastic simulation algorithm for automatic characterization of stochastic biochemical rare events

    NASA Astrophysics Data System (ADS)

    Roh, Min K.; Daigle, Bernie J.; Gillespie, Dan T.; Petzold, Linda R.

    2011-12-01

    In recent years there has been substantial growth in the development of algorithms for characterizing rare events in stochastic biochemical systems. Two such algorithms, the state-dependent weighted stochastic simulation algorithm (swSSA) and the doubly weighted SSA (dwSSA) are extensions of the weighted SSA (wSSA) by H. Kuwahara and I. Mura [J. Chem. Phys. 129, 165101 (2008)], 10.1063/1.2987701. The swSSA substantially reduces estimator variance by implementing system state-dependent importance sampling (IS) parameters, but lacks an automatic parameter identification strategy. In contrast, the dwSSA provides for the automatic determination of state-independent IS parameters, thus it is inefficient for systems whose states vary widely in time. We present a novel modification of the dwSSA—the state-dependent doubly weighted SSA (sdwSSA)—that combines the strengths of the swSSA and the dwSSA without inheriting their weaknesses. The sdwSSA automatically computes state-dependent IS parameters via the multilevel cross-entropy method. We apply the method to three examples: a reversible isomerization process, a yeast polarization model, and a lac operon model. Our results demonstrate that the sdwSSA offers substantial improvements over previous methods in terms of both accuracy and efficiency.

  7. A Multirate Variable-timestep Algorithm for N-body Solar System Simulations with Collisions

    NASA Astrophysics Data System (ADS)

    Sharp, P. W.; Newman, W. I.

    2016-03-01

    We present and analyze the performance of a new algorithm for performing accurate simulations of the solar system when collisions between massive bodies and test particles are permitted. The orbital motion of all bodies at all times is integrated using a high-order variable-timestep explicit Runge-Kutta Nyström (ERKN) method. The variation in the timestep ensures that the orbital motion of test particles on eccentric orbits or close to the Sun is calculated accurately. The test particles are divided into groups and each group is integrated using a different sequence of timesteps, giving a multirate algorithm. The ERKN method uses a high-order continuous approximation to the position and velocity when checking for collisions across a step. We give a summary of the extensive testing of our algorithm. In our largest simulation (the Sun, the planets Earth to Neptune, and 100,000 test particles over 100 million years), the relative error in the energy after 100 million years was of the order of 10^-11.

  8. An object localization optimization technique in medical images using plant growth simulation algorithm.

    PubMed

    Bhattacharjee, Deblina; Paul, Anand; Kim, Jeong Hong; Kim, Mucheol

    2016-01-01

    The analysis of leukocyte images has drawn interest from fields of both medicine and computer vision for quite some time where different techniques have been applied to automate the process of manual analysis and classification of such images. Manual analysis of blood samples to identify leukocytes is time-consuming and susceptible to error due to the different morphological features of the cells. In this article, the nature-inspired plant growth simulation algorithm has been applied to optimize the image processing technique of object localization of medical images of leukocytes. This paper presents a random bionic algorithm for the automated detection of white blood cells embedded in cluttered smear and stained images of blood samples that uses a fitness function that matches the resemblances of the generated candidate solution to an actual leukocyte. The set of candidate solutions evolves via successive iterations as the proposed algorithm proceeds, guaranteeing their fit with the actual leukocytes outlined in the edge map of the image. The higher precision and sensitivity of the proposed scheme from the existing methods is validated with the experimental results of blood cell images. The proposed method reduces the feasible sets of growth points in each iteration, thereby reducing the required run time of load flow, objective function evaluation, thus reaching the goal state in minimum time and within the desired constraints.

  9. A generalized prestressing algorithm for finite element simulations of preloaded geometries with application to the aorta.

    PubMed

    Weisbecker, Hannah; Pierce, David M; Holzapfel, Gerhard A

    2014-09-01

    Finite element models reconstructed from medical imaging data, for example, computed tomography or MRI scans, generally represent geometries under in vivo load. Classical finite element approaches start from an unloaded reference configuration. We present a generalized prestressing algorithm based on a concept introduced by Gee et al. (Int. J. Num. Meth. Biomed. Eng. 26:52-72, 2012) in which an incremental update of the displacement field in the classical approach is replaced by an incremental update of the deformation gradient field. Our generalized algorithm can be implemented in existing finite element codes with relatively low implementation effort on the element level and is suitable for material models formulated in the current or initial configurations. Applicable to any finite element simulations started from preloaded geometries, we demonstrate the algorithm and its convergence properties on an academic example and on a segment of a thoracic aorta meshed from MRI data. Furthermore, we present an example to discuss the influence of neglecting prestresses in geometries obtained from medical images, a topic on which conflicting statements are found in the literature.

  10. Simulation of the Predictive Control Algorithm for Container Crane Operation using Matlab Fuzzy Logic Tool Box

    NASA Technical Reports Server (NTRS)

    Richardson, Albert O.

    1997-01-01

    This research has investigated the use of fuzzy logic, via the Matlab Fuzzy Logic Tool Box, to design optimized controller systems. The engineering system for which the controller was designed and simulated was the container crane. The fuzzy logic algorithm that was investigated was the 'predictive control' algorithm. The plant dynamics of the container crane is representative of many important systems, including robotic arm movements. The container crane that was investigated had a trolley motor and a hoist motor. The total distance to be traveled by the trolley was 15 meters. The obstruction height was 5 meters. Crane height was 17.8 meters. Trolley mass was 7500 kilograms. Load mass was 6450 kilograms. Maximum trolley and rope velocities were 1.25 meters per sec. and 0.3 meters per sec., respectively. The fuzzy logic approach allowed the inclusion, in the controller model, of performance indices that are more effectively defined in linguistic terms. These include 'safety' and 'cargo swaying'. Two fuzzy inference systems were implemented using the Matlab simulation package, namely the Mamdani system (which relates fuzzy input variables to fuzzy output variables) and the Sugeno system (which relates fuzzy input variables to a crisp output variable). It is found that the Sugeno FIS is better suited to including aspects of those plant dynamics whose mathematical relationships can be determined.

  11. Physics and Algorithm Enhancements for a Validated MCNP/X Monte Carlo Simulation Tool, Phase VII

    SciTech Connect

    McKinney, Gregg W

    2012-07-17

    Currently the US lacks an end-to-end (i.e., source-to-detector) radiation transport simulation code with predictive capability for the broad range of DHS nuclear material detection applications. For example, gaps in the physics, along with inadequate analysis algorithms, make it difficult for Monte Carlo simulations to provide a comprehensive evaluation, design, and optimization of proposed interrogation systems. With the development and implementation of several key physics and algorithm enhancements, along with needed improvements in evaluated data and benchmark measurements, the MCNP/X Monte Carlo codes will provide designers, operators, and systems analysts with a validated tool for developing state-of-the-art active and passive detection systems. This project is currently in its seventh year (Phase VII). This presentation will review thirty enhancements that have been implemented in MCNPX over the last 3 years and were included in the 2011 release of version 2.7.0. These improvements include 12 physics enhancements, 4 source enhancements, 8 tally enhancements, and 6 other enhancements. Examples and results will be provided for each of these features. The presentation will also discuss the eight enhancements that will be migrated into MCNP6 over the upcoming year.

  12. Simulation of Propellant Loading System Senior Design Implement in Computer Algorithm

    NASA Technical Reports Server (NTRS)

    Bandyopadhyay, Alak

    2010-01-01

    Propellant loading from the Storage Tank to the External Tank is one of the most important and time-consuming pre-launch ground operations for the launch vehicle. The propellant loading system is a complex integrated system involving many physical components, such as the storage tank filled with cryogenic fluid at a very low temperature, the long pipeline connecting the storage tank with the external tank, the external tank along with the flare stack, and vent systems for releasing excess fuel. Some of the important parameters useful for design purposes are the predicted pre-chill time, loading time, amount of fuel lost, and maximum pressure rise. The physics involved in the mathematical modeling is quite complex: the process is unsteady, there is a phase change as some of the fuel changes from the liquid to the gas state, and there is conjugate heat transfer within the pipe walls as well as between the solid and fluid regions. The simulation is also tedious and time-consuming. Overall, this is a complex system, and the objective of the work is student involvement in the parametric study and optimization of the numerical modeling toward the design of such a system. The students first have to become familiar with and understand the physical process, the related mathematics, and the numerical algorithm. The work involves exploring (i) improved algorithms to make the transient simulation computationally efficient (reduced CPU time) and (ii) parametric studies to evaluate design parameters by changing the operational conditions.

  13. An Adaptive Multigrid Algorithm for Simulating Solid Tumor Growth Using Mixture Models

    PubMed Central

    Wise, S.M.; Lowengrub, J.S.; Cristini, V.

    2010-01-01

    In this paper we give the details of the numerical solution of a three-dimensional multispecies diffuse interface model of tumor growth, which was derived in (Wise et al., J. Theor. Biol. 253 (2008)) and used to study the development of glioma in (Frieboes et al., NeuroImage 37 (2007)) and tumor invasion in (Bearer et al., Cancer Research, 69 (2009)) and (Frieboes et al., J. Theor. Biol. 264 (2010)). The model has a thermodynamic basis, is related to recently developed mixture models, and is capable of providing a detailed description of tumor progression. It utilizes a diffuse interface approach, whereby sharp tumor boundaries are replaced by narrow transition layers that arise due to differential adhesive forces among the cell-species. The model consists of fourth-order nonlinear advection-reaction-diffusion equations (of Cahn-Hilliard-type) for the cell-species coupled with reaction-diffusion equations for the substrate components. Numerical solution of the model is challenging because the equations are coupled, highly nonlinear, and numerically stiff. In this paper we describe a fully adaptive, nonlinear multigrid/finite difference method for efficiently solving the equations. We demonstrate the convergence of the algorithm and we present simulations of tumor growth in 2D and 3D that demonstrate the capabilities of the algorithm in accurately and efficiently simulating the progression of tumors with complex morphologies. PMID:21076663

  14. Numerical stability of relativistic beam multidimensional PIC simulations employing the Esirkepov algorithm

    SciTech Connect

    Godfrey, Brendan B.; Vay, Jean-Luc

    2013-09-01

    Rapidly growing numerical instabilities routinely occur in multidimensional particle-in-cell computer simulations of plasma-based particle accelerators, astrophysical phenomena, and relativistic charged particle beams. Reducing instability growth to acceptable levels has necessitated higher resolution grids, high-order field solvers, current filtering, etc., except for certain ratios of the time step to the axial cell size, for which numerical growth rates and saturation levels are reduced substantially. This paper derives and solves the cold beam dispersion relation for numerical instabilities in multidimensional, relativistic, electromagnetic particle-in-cell programs employing either the standard or the Cole–Karkkainnen finite difference field solver on a staggered mesh and the common Esirkepov current-gathering algorithm. Good overall agreement is achieved with previously reported results of the WARP code. In particular, the existence of select time steps for which instabilities are minimized is explained. Additionally, an alternative field interpolation algorithm is proposed for which instabilities are almost completely eliminated for a particular time step in ultra-relativistic simulations.

  15. Recent progress of quantum annealing

    SciTech Connect

    Suzuki, Sei

    2015-03-10

    We review the recent progress of quantum annealing. Quantum annealing was proposed as a method to solve generic optimization problems. Recently a Canadian company has drawn a great deal of attention, as it has commercialized a quantum computer based on quantum annealing. Although the performance of quantum annealing is not sufficiently understood, it is likely that quantum annealing will be a practical method both on a conventional computer and on a quantum computer.
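
    As a concrete illustration of the annealing idea summarized above, the following sketch interpolates a transverse-field driver Hamiltonian into a random Ising problem Hamiltonian for a handful of spins and integrates the Schrödinger equation directly. It is a toy exact-diagonalization demonstration, not a model of any hardware annealer; the spin count, couplings, and annealing time are illustrative assumptions.

    import numpy as np
    from scipy.linalg import expm

    # Toy quantum-annealing sketch: interpolate H(s) = (1 - s) * H_driver + s * H_problem
    # for a few spins and integrate the Schrodinger equation with small time steps.
    # Spin count, random Ising couplings and annealing time are illustrative assumptions.
    rng = np.random.default_rng(5)
    n = 6                                            # number of spins (kept tiny on purpose)

    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sz = np.array([[1, 0], [0, -1]], dtype=complex)

    def op_on(site, op):
        """Tensor product placing `op` on one site and the identity elsewhere."""
        out = np.array([[1.0 + 0j]])
        for k in range(n):
            out = np.kron(out, op if k == site else np.eye(2, dtype=complex))
        return out

    H_driver = -sum(op_on(i, sx) for i in range(n))
    J = rng.normal(size=(n, n))
    H_problem = sum(J[i, j] * op_on(i, sz) @ op_on(j, sz)
                    for i in range(n) for j in range(i + 1, n))

    # ground state of the driver: uniform superposition over all spin configurations
    psi = np.ones(2 ** n, dtype=complex) / np.sqrt(2 ** n)

    T_anneal, nsteps = 50.0, 500
    dt = T_anneal / nsteps
    for step in range(nsteps):
        s = (step + 0.5) / nsteps
        H = (1.0 - s) * H_driver + s * H_problem
        psi = expm(-1j * H * dt) @ psi

    # H_problem is diagonal in the z basis, so its ground state is a classical configuration
    E = np.real(np.diag(H_problem))
    print("probability of ending in the classical ground state:",
          np.sum(np.abs(psi[E == E.min()]) ** 2))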

  16. A novel parallel-rotation algorithm for atomistic Monte Carlo simulation of dense polymer systems

    NASA Astrophysics Data System (ADS)

    Santos, S.; Suter, U. W.; Müller, M.; Nievergelt, J.

    2001-06-01

    We develop and test a new elementary Monte Carlo move for use in the off-lattice simulation of polymer systems. This novel Parallel-Rotation algorithm (ParRot) permits very efficient moves of torsion angles deep inside long chains in melts. The parallel-rotation move is extremely simple and is also demonstrated to be computationally efficient and appropriate for Monte Carlo simulation. The ParRot move does not affect the orientation of those parts of the chain outside the moving unit. The move consists of a concerted rotation around four adjacent skeletal bonds. No assumption is made concerning the backbone geometry other than that bond lengths and bond angles are held constant during the elementary move. Properly weighted sampling techniques are needed for ensuring detailed balance because the new move involves a correlated change in four degrees of freedom along the chain backbone. The ParRot move is supplemented with the classical Metropolis Monte Carlo, the Continuum-Configurational-Bias, and Reptation techniques in an isothermal-isobaric Monte Carlo simulation of melts of short and long chains. Comparisons are made with the capabilities of other Monte Carlo techniques to move the torsion angles in the middle of the chains. We demonstrate that ParRot constitutes a highly promising Monte Carlo move for the treatment of long polymer chains in the off-lattice simulation of realistic models of dense polymer systems.
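
    The detailed-balance point above can be made concrete with a generic weighted Metropolis acceptance test: when a concerted move has a configuration-dependent proposal weight (for example a Jacobian factor), that weight ratio multiplies the usual Boltzmann factor. The helper below only illustrates this bookkeeping; the actual ParRot geometry and its weights are not reproduced here.

    import math
    import random

    # Generic weighted Metropolis acceptance: for a correlated move whose proposal
    # density is not symmetric (e.g. a concerted rotation with a configuration-dependent
    # Jacobian), the acceptance ratio carries the proposal-weight ratio in addition to
    # the Boltzmann factor.
    def accept_move(delta_E, w_forward, w_reverse, kT, uniform=random.random):
        """delta_E: trial energy change; w_forward / w_reverse: proposal weights of the
        move and of its reverse; kT: thermal energy."""
        ratio = (w_reverse / w_forward) * math.exp(-delta_E / kT)
        return uniform() < min(1.0, ratio)

    # usage with made-up numbers
    random.seed(0)
    print(accept_move(delta_E=0.3, w_forward=1.2, w_reverse=0.9, kT=1.0))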

  17. Cell light scattering characteristic numerical simulation research based on FDTD algorithm

    NASA Astrophysics Data System (ADS)

    Lin, Xiaogang; Wan, Nan; Zhu, Hao; Weng, Lingdong

    2017-01-01

    In this study, the finite-difference time-domain (FDTD) algorithm is used to solve the cell light-scattering problem. Before the simulation comparison can begin, it is necessary to identify the changes or differences between normal cells and abnormal cells, which may be cancerous or maldeveloped. Preparing the simulation involves building a simple cell model consisting of organelles, nucleus, and cytoplasm, and choosing a suitable mesh resolution. Setting up a total-field/scattered-field source as the excitation and a far-field projection analysis group is also important. Each step is grounded in the underlying mathematics, such as numerical dispersion, the perfectly matched layer boundary condition, and near-to-far-field extrapolation. The simulation results indicate that a change in the position of the nucleus increases the backscattering intensity, and that significant differences in the peak scattering intensity can result from changes in the size of the cytoplasm. The study may help identify regularities in the simulated scattering signatures, which could be meaningful for the early diagnosis of cancers.
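
    For readers unfamiliar with FDTD, the core of such a simulation is a leapfrog update of the electric and magnetic fields on a staggered grid. The sketch below is a minimal one-dimensional example in Python; the grid size, source pulse, and permittivity contrast standing in for the "cell" are illustrative assumptions, not values from the study.

    import numpy as np

    # Minimal 1-D FDTD sketch: a Gaussian pulse crossing a higher-permittivity region
    # that stands in for a "cell".  No absorbing boundary is used, so the pulse
    # reflects at the grid edges.
    nz, nsteps = 400, 700
    ez = np.zeros(nz)                     # electric field
    hy = np.zeros(nz)                     # magnetic field, staggered half a cell
    eps_r = np.ones(nz)
    eps_r[180:220] = 1.9                  # assumed relative permittivity of the "cell"

    courant = 0.5                         # dt * c / dz in normalized units
    for t in range(nsteps):
        # update H from the spatial difference of E (leapfrog, half step ahead)
        hy[:-1] += courant * (ez[1:] - ez[:-1])
        # update E from the spatial difference of H, scaled by the local permittivity
        ez[1:] += courant / eps_r[1:] * (hy[1:] - hy[:-1])
        # soft Gaussian source injected near the left boundary
        ez[20] += np.exp(-((t - 60) / 20.0) ** 2)

    print("peak field on the grid after", nsteps, "steps:", ez.max())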

  18. An Implicit Algorithm for the Numerical Simulation of Shape-Memory Alloys

    SciTech Connect

    Becker, R; Stolken, J; Jannetti, C; Bassani, J

    2003-10-16

    Shape-memory alloys (SMA) have the potential to be used in a variety of interesting applications due to their unique properties of pseudoelasticity and the shape-memory effect. However, in order to design SMA devices efficiently, a physics-based constitutive model is required to accurately simulate the behavior of shape-memory alloys. The scope of this work is to extend the numerical capabilities of the SMA constitutive model developed by Jannetti et al. (2003), to handle large-scale polycrystalline simulations. The constitutive model is implemented within the finite-element software ABAQUS/Standard using a user-defined material subroutine, or UMAT. To improve the efficiency of the numerical simulations, so that polycrystalline specimens of shape-memory alloys can be modeled, a fully implicit algorithm has been implemented to integrate the constitutive equations. Using an implicit integration scheme increases the efficiency of the UMAT over the previously implemented explicit integration method by a factor of more than 100 for single crystal simulations.
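
    The efficiency gain reported above comes from replacing explicit time stepping with an implicit (backward Euler) update solved by Newton iteration at each increment. The sketch below shows that integration pattern on a generic stiff scalar evolution law; the actual SMA constitutive equations in the UMAT are far more involved and are not reproduced here.

    import numpy as np

    # Backward-Euler + Newton sketch for a stiff scalar evolution law dx/dt = f(x, t).
    def f(x, t):
        return -50.0 * (x - np.cos(t))        # assumed stiff test equation

    def dfdx(x, t):
        return -50.0

    def backward_euler(x0, t0, t1, nsteps, tol=1e-10, maxit=20):
        dt = (t1 - t0) / nsteps
        x, t = x0, t0
        for _ in range(nsteps):
            t_new = t + dt
            x_new = x                          # initial Newton guess
            for _ in range(maxit):
                r = x_new - x - dt * f(x_new, t_new)          # residual of the implicit equation
                if abs(r) < tol:
                    break
                x_new -= r / (1.0 - dt * dfdx(x_new, t_new))  # Newton update
            x, t = x_new, t_new
        return x

    print(backward_euler(x0=0.0, t0=0.0, t1=2.0, nsteps=20))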

  19. Conduct of an algorithm in quantifying simulated palatal surface tooth erosion.

    PubMed

    Chadwick, R G; Mitchell, H L

    2001-05-01

    In order to test the ability of an algorithm to quantify simulated palatal erosion, a total of 10 extracted permanent upper central incisors were mounted in brass blocks. Baseline impressions were recorded using an addition cured silicone impression material in a metal impression tray. Once set and removed from the teeth, the impressions were coated twice with a high silver content electroconductive paint, applied using a brush, before being backed up with die stone to form an electroconductive replica. Each tooth was then subjected to three treatments: application of phosphoric acid etchant gel for 60 s, application of etchant gel for 120 s and immersion for 3 h in Diet Coca-Cola*. After each treatment the replication process was repeated. Thereafter all replicas were mapped using a computer controlled electrical probe and the resultant digital terrain models (DTMs) compared using a surface matching and difference detection algorithm (SMADDA). Surface matching was unsuccessful in only one instance. As the duration of the insult increased, so did the proportion of the surface that underwent change, to a maximum of 33.3%. Anatomical site was significantly (P < 0.05) associated with the susceptibility to erosion. The cingulum periphery appeared most resistant to this. The algorithmic approach offers much scope for monitoring dental erosion as acid dissolution of the tooth's surface appears to occur gradually. The cingulum region appears relatively more resistant to this process than other tooth sites and thus facilitates the process of surface matching. Further testing is, however, required to determine precisely the algorithm's upper tolerance level.

  20. Building a LiDAR point cloud simulator: Testing algorithms for high resolution topographic change

    NASA Astrophysics Data System (ADS)

    Carrea, Dario; Abellán, Antonio; Derron, Marc-Henri; Jaboyedoff, Michel

    2014-05-01

    Terrestrial laser scanning (TLS) is becoming a common tool in the geosciences, with applications ranging from the generation of high-resolution 3D models to the monitoring of unstable slopes and the quantification of morphological changes. Nevertheless, like every measurement technique, TLS has limitations that are not clearly understood and that affect the accuracy of the dataset (point cloud). A challenge in LiDAR research is to understand the influence of instrumental parameters on measurement errors during LiDAR acquisition. Indeed, several critical parameters affect scan quality at different ranges: the existence of shadow areas, the spatial resolution (point density), the diameter of the laser beam, the incidence angle, and the single-point accuracy. The objective of this study is to test the main limitations of the algorithms usually applied to point cloud data treatment, from alignment to monitoring. To this end, we built a LiDAR point cloud simulator in the MATLAB environment that recreates the multiple sources of error related to instrumental settings that we normally observe in real datasets. In a first step we characterized the single-pulse error by modelling the influence of range and incidence angle on single-point accuracy. In a second step, we simulated the scanning part of the system in order to analyze the effects of shifting and angular errors. Other parameters, such as point spacing and acquisition window, were added to the point cloud simulator in order to create point clouds of simple and/or complex geometries. We tested the influence of point density and viewpoint on Iterative Closest Point (ICP) alignment, as well as on a deformation-tracking algorithm using the same point cloud geometry, in order to determine alignment and deformation detection thresholds. We also generated a series of high resolution point clouds in order to model small changes in different environments
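
    A minimal example of the "single-point error" ingredient of such a simulator is sketched below: a flat wall is scanned on a regular angular grid and each return is perturbed with a range- and incidence-angle-dependent noise term. The noise coefficients and scene are illustrative assumptions; the simulator described above is a fuller MATLAB implementation.

    import numpy as np

    # Scan a flat wall at x = 50 m on a regular angular grid and perturb every return
    # with a range- and incidence-angle-dependent error (coefficients are assumed).
    rng = np.random.default_rng(0)

    az = np.deg2rad(np.linspace(-20, 20, 200))
    el = np.deg2rad(np.linspace(-10, 10, 100))
    AZ, EL = np.meshgrid(az, el)

    r_true = 50.0 / (np.cos(AZ) * np.cos(EL))              # true range along each ray
    incidence = np.arccos(np.cos(AZ) * np.cos(EL))         # angle to the wall normal
    sigma = 0.005 + 1e-4 * r_true + 0.01 * np.tan(incidence)   # assumed error model (m)
    r_noisy = r_true + rng.normal(0.0, sigma)

    # back to Cartesian points to form the simulated cloud
    x = r_noisy * np.cos(EL) * np.cos(AZ)
    y = r_noisy * np.cos(EL) * np.sin(AZ)
    z = r_noisy * np.sin(EL)
    cloud = np.column_stack([x.ravel(), y.ravel(), z.ravel()])

    print(cloud.shape[0], "points, mean absolute range error %.4f m"
          % float(np.abs(r_noisy - r_true).mean()))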

  1. A divide-conquer-recombine algorithmic paradigm for large spatiotemporal quantum molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Shimojo, Fuyuki; Hattori, Shinnosuke; Kalia, Rajiv K.; Kunaseth, Manaschai; Mou, Weiwei; Nakano, Aiichiro; Nomura, Ken-ichi; Ohmura, Satoshi; Rajak, Pankaj; Shimamura, Kohei; Vashishta, Priya

    2014-05-01

    We introduce an extension of the divide-and-conquer (DC) algorithmic paradigm called divide-conquer-recombine (DCR) to perform large quantum molecular dynamics (QMD) simulations on massively parallel supercomputers, in which interatomic forces are computed quantum mechanically in the framework of density functional theory (DFT). In DCR, the DC phase constructs globally informed, overlapping local-domain solutions, which in the recombine phase are synthesized into a global solution encompassing large spatiotemporal scales. For the DC phase, we design a lean divide-and-conquer (LDC) DFT algorithm, which significantly reduces the prefactor of the O(N) computational cost for N electrons by applying a density-adaptive boundary condition at the peripheries of the DC domains. Our globally scalable and locally efficient solver is based on a hybrid real-reciprocal space approach that combines: (1) a highly scalable real-space multigrid to represent the global charge density; and (2) a numerically efficient plane-wave basis for local electronic wave functions and charge density within each domain. Hybrid space-band decomposition is used to implement the LDC-DFT algorithm on parallel computers. A benchmark test on an IBM Blue Gene/Q computer exhibits an isogranular parallel efficiency of 0.984 on 786 432 cores for a 50.3 × 106-atom SiC system. As a test of production runs, LDC-DFT-based QMD simulation involving 16 661 atoms is performed on the Blue Gene/Q to study on-demand production of hydrogen gas from water using LiAl alloy particles. As an example of the recombine phase, LDC-DFT electronic structures are used as a basis set to describe global photoexcitation dynamics with nonadiabatic QMD (NAQMD) and kinetic Monte Carlo (KMC) methods. The NAQMD simulations are based on the linear response time-dependent density functional theory to describe electronic excited states and a surface-hopping approach to describe transitions between the excited states. A series of techniques

  2. A divide-conquer-recombine algorithmic paradigm for large spatiotemporal quantum molecular dynamics simulations

    SciTech Connect

    Shimojo, Fuyuki; Hattori, Shinnosuke; Kalia, Rajiv K.; Mou, Weiwei; Nakano, Aiichiro; Nomura, Ken-ichi; Rajak, Pankaj; Vashishta, Priya; Kunaseth, Manaschai; Ohmura, Satoshi; Shimamura, Kohei

    2014-05-14

    We introduce an extension of the divide-and-conquer (DC) algorithmic paradigm called divide-conquer-recombine (DCR) to perform large quantum molecular dynamics (QMD) simulations on massively parallel supercomputers, in which interatomic forces are computed quantum mechanically in the framework of density functional theory (DFT). In DCR, the DC phase constructs globally informed, overlapping local-domain solutions, which in the recombine phase are synthesized into a global solution encompassing large spatiotemporal scales. For the DC phase, we design a lean divide-and-conquer (LDC) DFT algorithm, which significantly reduces the prefactor of the O(N) computational cost for N electrons by applying a density-adaptive boundary condition at the peripheries of the DC domains. Our globally scalable and locally efficient solver is based on a hybrid real-reciprocal space approach that combines: (1) a highly scalable real-space multigrid to represent the global charge density; and (2) a numerically efficient plane-wave basis for local electronic wave functions and charge density within each domain. Hybrid space-band decomposition is used to implement the LDC-DFT algorithm on parallel computers. A benchmark test on an IBM Blue Gene/Q computer exhibits an isogranular parallel efficiency of 0.984 on 786 432 cores for a 50.3 × 10{sup 6}-atom SiC system. As a test of production runs, LDC-DFT-based QMD simulation involving 16 661 atoms is performed on the Blue Gene/Q to study on-demand production of hydrogen gas from water using LiAl alloy particles. As an example of the recombine phase, LDC-DFT electronic structures are used as a basis set to describe global photoexcitation dynamics with nonadiabatic QMD (NAQMD) and kinetic Monte Carlo (KMC) methods. The NAQMD simulations are based on the linear response time-dependent density functional theory to describe electronic excited states and a surface-hopping approach to describe transitions between the excited states. A series of

  3. Unraveling Quantum Annealers using Classical Hardness

    NASA Astrophysics Data System (ADS)

    Martin-Mayor, Victor; Hen, Itay

    2015-10-01

    Recent advances in quantum technology have led to the development and manufacturing of experimental programmable quantum annealing optimizers that contain hundreds of quantum bits. These optimizers, commonly referred to as ‘D-Wave’ chips, promise to solve practical optimization problems potentially faster than conventional ‘classical’ computers. Attempts to quantify the quantum nature of these chips have been met with both excitement and skepticism but have also brought up numerous fundamental questions pertaining to the distinguishability of experimental quantum annealers from their classical thermal counterparts. Inspired by recent results in spin-glass theory that recognize ‘temperature chaos’ as the underlying mechanism responsible for the computational intractability of hard optimization problems, we devise a general method to quantify the performance of quantum annealers on optimization problems suffering from varying degrees of temperature chaos: A superior performance of quantum annealers over classical algorithms on these may allude to the role that quantum effects play in providing speedup. We utilize our method to experimentally study the D-Wave Two chip on different temperature-chaotic problems and find, surprisingly, that its performance scales unfavorably as compared to several analogous classical algorithms. We detect, quantify and discuss several purely classical effects that possibly mask the quantum behavior of the chip.

  4. Unraveling Quantum Annealers using Classical Hardness.

    PubMed

    Martin-Mayor, Victor; Hen, Itay

    2015-10-20

    Recent advances in quantum technology have led to the development and manufacturing of experimental programmable quantum annealing optimizers that contain hundreds of quantum bits. These optimizers, commonly referred to as 'D-Wave' chips, promise to solve practical optimization problems potentially faster than conventional 'classical' computers. Attempts to quantify the quantum nature of these chips have been met with both excitement and skepticism but have also brought up numerous fundamental questions pertaining to the distinguishability of experimental quantum annealers from their classical thermal counterparts. Inspired by recent results in spin-glass theory that recognize 'temperature chaos' as the underlying mechanism responsible for the computational intractability of hard optimization problems, we devise a general method to quantify the performance of quantum annealers on optimization problems suffering from varying degrees of temperature chaos: A superior performance of quantum annealers over classical algorithms on these may allude to the role that quantum effects play in providing speedup. We utilize our method to experimentally study the D-Wave Two chip on different temperature-chaotic problems and find, surprisingly, that its performance scales unfavorably as compared to several analogous classical algorithms. We detect, quantify and discuss several purely classical effects that possibly mask the quantum behavior of the chip.

  5. Unraveling Quantum Annealers using Classical Hardness

    PubMed Central

    Martin-Mayor, Victor; Hen, Itay

    2015-01-01

    Recent advances in quantum technology have led to the development and manufacturing of experimental programmable quantum annealing optimizers that contain hundreds of quantum bits. These optimizers, commonly referred to as ‘D-Wave’ chips, promise to solve practical optimization problems potentially faster than conventional ‘classical’ computers. Attempts to quantify the quantum nature of these chips have been met with both excitement and skepticism but have also brought up numerous fundamental questions pertaining to the distinguishability of experimental quantum annealers from their classical thermal counterparts. Inspired by recent results in spin-glass theory that recognize ‘temperature chaos’ as the underlying mechanism responsible for the computational intractability of hard optimization problems, we devise a general method to quantify the performance of quantum annealers on optimization problems suffering from varying degrees of temperature chaos: A superior performance of quantum annealers over classical algorithms on these may allude to the role that quantum effects play in providing speedup. We utilize our method to experimentally study the D-Wave Two chip on different temperature-chaotic problems and find, surprisingly, that its performance scales unfavorably as compared to several analogous classical algorithms. We detect, quantify and discuss several purely classical effects that possibly mask the quantum behavior of the chip. PMID:26483257

  6. Annealing macromolecular crystals.

    PubMed

    Hanson, B Leif; Bunick, Gerard J

    2007-01-01

    The process of crystal annealing has been used to improve the quality of diffraction from crystals that would otherwise be discarded for displaying unsatisfactory diffraction after flash cooling. Although techniques and protocols vary, macromolecular crystals are annealed by warming the flash-cooled crystal, then flash cooling it again. To apply macromolecular crystal annealing, a flash-cooled crystal displaying unacceptably high mosaicity or diffraction from ice is removed from the goniometer and immediately placed in cryoprotectant buffer. The crystal is incubated in the buffer at either room temperature or the temperature at which the crystal was grown. After about 3 min, the crystal is remounted in the loop and flash cooled. In situ annealing techniques, where the cold stream is diverted and the crystal allowed to warm on the loop prior to flash cooling, are variations of annealing that appear to work best when large solvent channels are not present in the crystal lattice or when the solvent content of the crystal is relatively low.

  7. Development and evaluation of a micro-macro algorithm for the simulation of polymer flow

    SciTech Connect

    Feigl, Kathleen . E-mail: feigl@mtu.edu; Tanner, Franz X.

    2006-07-20

    A micro-macro algorithm for the calculation of polymer flow is developed and numerically evaluated. The system being solved consists of the momentum and mass conservation equations from continuum mechanics coupled with a microscopic-based rheological model for polymer stress. Standard finite element techniques are used to solve the conservation equations for velocity and pressure, while stochastic simulation techniques are used to compute polymer stress from the simulated polymer dynamics in the rheological model. The rheological model considered combines aspects of reptation, network and continuum models. Two types of spatial approximation are considered for the configuration fields defining the dynamics in the model: piecewise constant and piecewise linear. The micro-macro algorithm is evaluated by simulating the abrupt planar die entry flow of a polyisobutylene solution described in the literature. The computed velocity and stress fields are found to be essentially independent of mesh size and ensemble size, while there is some dependence of the results on the order of spatial approximation to the configuration fields close to the die entry. Comparison with experimental data shows that the piecewise linear approximation leads to better predictions of the centerline first normal stress difference. Finally, the computational time associated with the piecewise constant spatial approximation is found to be about 2.5 times lower than that associated with the piecewise linear approximation. This is the result of the more efficient time integration scheme that is possible with the former type of approximation due to the pointwise incompressibility guaranteed by the choice of velocity-pressure finite element.
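
    The micro-macro coupling idea can be illustrated with the simplest possible microscopic model, a Hookean dumbbell ensemble in a prescribed shear flow whose stress is recovered by ensemble averaging (Kramers form). This is not the reptation/network model evaluated above; it is only a sketch of how a stochastic microscopic solver supplies the polymer stress to the macroscopic equations, with ensemble size, shear rate, and relaxation time chosen arbitrarily.

    import numpy as np

    # Hookean-dumbbell micro-macro sketch: evolve an ensemble of connector vectors in a
    # prescribed simple shear flow and recover the dimensionless polymer stress from
    # the ensemble average (Kramers form).
    rng = np.random.default_rng(1)

    n_dumbbells, lam, shear_rate = 10000, 1.0, 1.0
    dt, nsteps = 1e-3, 4000

    kappa = np.array([[0.0, shear_rate, 0.0],     # velocity-gradient tensor (simple shear)
                      [0.0, 0.0,        0.0],
                      [0.0, 0.0,        0.0]])

    Q = rng.normal(size=(n_dumbbells, 3))         # equilibrium connector vectors

    for _ in range(nsteps):
        drift = Q @ kappa.T - Q / (2.0 * lam)     # affine deformation + elastic retraction
        Q += drift * dt + np.sqrt(dt / lam) * rng.normal(size=Q.shape)

    tau = (Q[:, :, None] * Q[:, None, :]).mean(axis=0) - np.eye(3)   # <QQ> - I
    print("dimensionless shear stress tau_xy =", tau[0, 1])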

  8. Polynomial-time quantum algorithm for the simulation of chemical dynamics.

    PubMed

    Kassal, Ivan; Jordan, Stephen P; Love, Peter J; Mohseni, Masoud; Aspuru-Guzik, Alán

    2008-12-02

    The computational cost of exact methods for quantum simulation using classical computers grows exponentially with system size. As a consequence, these techniques can be applied only to small systems. By contrast, we demonstrate that quantum computers could exactly simulate chemical reactions in polynomial time. Our algorithm uses the split-operator approach and explicitly simulates all electron-nuclear and interelectronic interactions in quadratic time. Surprisingly, this treatment is not only more accurate than the Born-Oppenheimer approximation but faster and more efficient as well, for all reactions with more than about four atoms. This is the case even though the entire electronic wave function is propagated on a grid with appropriately short time steps. Although the preparation and measurement of arbitrary states on a quantum computer is inefficient, here we demonstrate how to prepare states of chemical interest efficiently. We also show how to efficiently obtain chemically relevant observables, such as state-to-state transition probabilities and thermal reaction rates. Quantum computers using these techniques could outperform current classical computers with 100 qubits.
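
    The split-operator idea referred to above alternates kinetic and potential propagators, switching between position and momentum representations with Fourier transforms. The sketch below carries out that splitting for a one-dimensional wave packet on a classical computer; the grid, mass, and harmonic potential are illustrative assumptions, and no quantum speedup is implied.

    import numpy as np

    # Split-operator (Strang) propagation of a 1-D wave packet in atomic units.
    n, L = 512, 20.0
    x = np.linspace(-L / 2, L / 2, n, endpoint=False)
    dx = x[1] - x[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)

    m, dt, nsteps = 1.0, 0.01, 500
    V = 0.5 * x ** 2                                 # harmonic potential (assumed)
    psi = np.exp(-(x - 2.0) ** 2).astype(complex)    # displaced Gaussian packet
    psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)

    expV = np.exp(-0.5j * V * dt)                    # half-step potential propagator
    expT = np.exp(-0.5j * k ** 2 * dt / m)           # full-step kinetic propagator

    for _ in range(nsteps):
        psi = expV * psi                             # half step in V
        psi = np.fft.ifft(expT * np.fft.fft(psi))    # full step in T (momentum space)
        psi = expV * psi                             # half step in V

    print("<x> after propagation:", float(np.sum(x * np.abs(psi) ** 2) * dx))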

  9. Adaptive particle-cell algorithm for Fokker-Planck based rarefied gas flow simulations

    NASA Astrophysics Data System (ADS)

    Pfeiffer, M.; Gorji, M. H.

    2017-04-01

    Recently, the Fokker-Planck (FP) kinetic model has been devised on the basis of the Boltzmann equation (Jenny et al., 2010; Gorji et al., 2011). Particle Monte-Carlo schemes are then introduced for simulations of rarefied gas flows based on the FP kinetics. Here the particles follow independent stochastic paths and thus a spatio-temporal resolution coarser than the collisional scales becomes possible. In contrast to the direct simulation Monte-Carlo (DSMC), the computational cost is independent of the Knudsen number, resulting in efficient simulations at moderate/low Knudsen flows. In order to further exploit the efficiency of the FP method, the required particle-cell resolutions should be found, and a cell refinement strategy has to be developed accordingly. In this study, an adaptive particle-cell scheme applicable to a general unstructured mesh is derived for the FP model. Virtual sub-cells are introduced for the adaptive mesh refinement. Moreover, a sub-cell merging algorithm is provided to honor the minimum required number of particles per cell. For assessment, the 70-degree blunted-cone reentry flow (Allègre et al., 1997) is studied. Excellent agreement between the introduced adaptive FP method and DSMC is achieved.

  10. An efficient algorithm for fully resolved simulation of freely swimming bodies

    NASA Astrophysics Data System (ADS)

    Shirgaonkar, Anup; Patankar, Neelesh; Maciver, Malcolm

    2007-11-01

    There is a need to better understand the physical principles underlying the extraordinary mobility of swimming and flying animals. To that end, we present a fully resolved simulation scheme for aquatic locomotion that is sufficiently general to potentially function for small flying animals as well. The method combines the rigid particulate scheme of Patankar et al. (IJMF, 2001) with a momentum redistribution scheme to consistently solve for fluid-body forces as well as the swimming velocity. The input to the algorithm is the deforming motion of the fish body or its fins in the frame of reference of the fish. The method is designed to be efficient, parallelizable, and can be easily implemented into existing fluid dynamics codes. We demonstrate that the new method is capable of simulating a variety of fish forms, including flexible bodies such as an eel, or bodies with flexible fins attached to them, such as the black ghost knifefish (Apteronotus albifrons). Insights into the hydrodynamics of aquatic locomotion based on our simulations will be summarized. The proposed technique is also applicable to a variety of problems such as designing underwater vehicles, neuromechanical modeling, understanding the role of hydrodynamics on the evolution of fish forms, and animation.

  11. Dissipative Particle Dynamics Simulations at Extreme Scale: GPU Algorithms, Implementation and Applications

    NASA Astrophysics Data System (ADS)

    Tang, Yu-Hang; Karniadakis, George; Crunch Team

    2014-03-01

    We present a scalable dissipative particle dynamics simulation code, fully implemented on the Graphics Processing Units (GPUs) using a hybrid CUDA/MPI programming model, which achieves 10-30 times speedup on a single GPU over 16 CPU cores and almost linear weak scaling across a thousand nodes. A unified framework is developed within which the efficient generation of the neighbor list and maintaining particle data locality are addressed. Our algorithm generates strictly ordered neighbor lists in parallel, while the construction is deterministic and makes no use of atomic operations or sorting. Such neighbor list leads to optimal data loading efficiency when combined with a two-level particle reordering scheme. A faster in situ generation scheme for Gaussian random numbers is proposed using precomputed binary signatures. We designed custom transcendental functions that are fast and accurate for evaluating the pairwise interaction. Computer benchmarks demonstrate the speedup of our implementation over the CPU implementation as well as strong and weak scalability. A large-scale simulation of spontaneous vesicle formation consisting of 128 million particles was conducted to illustrate the practicality of our code in real-world applications. This work was supported by the new Department of Energy Collaboratory on Mathematics for Mesoscopic Modeling of Materials (CM4). Simulations were carried out at the Oak Ridge Leadership Computing Facility through the INCITE program under project BIP017.
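
    The neighbor-list construction that dominates such particle codes can be illustrated with a plain serial cell-list search: bin particles into cells no smaller than the cutoff and test only the 27 surrounding cells. The sketch below shows that idea in Python with made-up box, cutoff, and particle numbers; the deterministic, sort-free GPU construction described above is not reproduced.

    import numpy as np

    # Serial cell-list neighbour search in a periodic box.
    rng = np.random.default_rng(2)
    box, rc, n = 10.0, 1.0, 2000
    pos = rng.uniform(0.0, box, size=(n, 3))

    ncell = int(box // rc)
    cell_of = np.floor(pos / (box / ncell)).astype(int) % ncell
    cells = {}
    for i, c in enumerate(map(tuple, cell_of)):
        cells.setdefault(c, []).append(i)

    offsets = [(dx, dy, dz) for dx in (-1, 0, 1) for dy in (-1, 0, 1) for dz in (-1, 0, 1)]
    neighbours = [[] for _ in range(n)]
    for i in range(n):
        cx, cy, cz = cell_of[i]
        for dx, dy, dz in offsets:
            key = ((cx + dx) % ncell, (cy + dy) % ncell, (cz + dz) % ncell)
            for j in cells.get(key, []):
                if j == i:
                    continue
                d = pos[j] - pos[i]
                d -= box * np.round(d / box)        # minimum-image convention
                if np.dot(d, d) < rc * rc:
                    neighbours[i].append(j)

    print("mean neighbours per particle:", np.mean([len(v) for v in neighbours]))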

  12. Efficient multiple-way graph partitioning algorithms

    SciTech Connect

    Dasdan, A.; Aykanat, C.

    1995-12-01

    Graph partitioning deals with evenly dividing a graph into two or more parts such that the total weight of the edges interconnecting these parts, i.e., the cutsize, is minimized. Graph partitioning has important applications in VLSI layout, mapping, and sparse Gaussian elimination. Since the graph partitioning problem is NP-hard, we must resort to polynomial-time heuristics to obtain a good, and hopefully near-optimal, solution. Kernighan and Lin (KL) proposed a 2-way partitioning algorithm. Fiduccia and Mattheyses (FM) introduced a faster version of the KL algorithm. Sanchis (FMS) generalized the FM algorithm to multiple-way partitioning. Simulated Annealing (SA) is one of the most successful approaches that are not KL-based.
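
    Since the record contrasts KL-based heuristics with simulated annealing, a minimal SA-based 2-way partitioner is sketched below: flip one vertex between parts, accept uphill moves with a temperature-dependent probability, and cool geometrically, with a soft penalty keeping the parts balanced. The random graph, penalty, and cooling schedule are illustrative assumptions, not taken from the paper.

    import math
    import random

    # Simulated-annealing 2-way graph partitioner sketch.
    random.seed(0)
    n_vertices, n_edges = 60, 180
    edges = set()
    while len(edges) < n_edges:
        u, v = random.sample(range(n_vertices), 2)
        edges.add((min(u, v), max(u, v)))
    edges = list(edges)

    part = [v % 2 for v in range(n_vertices)]        # balanced initial partition

    def cost(part):
        cut = sum(1 for u, v in edges if part[u] != part[v])
        imbalance = abs(sum(part) - n_vertices // 2)
        return cut, cut + imbalance                  # (cutsize, penalized objective)

    best_cut, current = cost(part)
    T, cooling = 5.0, 0.999
    while T > 0.01:
        v = random.randrange(n_vertices)
        part[v] ^= 1                                 # tentative move to the other part
        cut, new = cost(part)
        delta = new - current
        if delta <= 0 or random.random() < math.exp(-delta / T):
            current = new                            # accept
            best_cut = min(best_cut, cut)
        else:
            part[v] ^= 1                             # reject: undo the move
        T *= cooling

    print("best cutsize found:", best_cut)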

  13. Residual Elimination Algorithm Enhancements to Improve Foot Motion Tracking During Forward Dynamic Simulations of Gait.

    PubMed

    Jackson, Jennifer N; Hass, Chris J; Fregly, Benjamin J

    2015-11-01

    Patient-specific gait optimizations capable of predicting post-treatment changes in joint motions and loads could improve treatment design for gait-related disorders. To maximize potential clinical utility, such optimizations should utilize full-body three-dimensional patient-specific musculoskeletal models, generate dynamically consistent gait motions that reproduce pretreatment marker measurements closely, and achieve accurate foot motion tracking to permit deformable foot-ground contact modeling. This study enhances an existing residual elimination algorithm (REA) (Remy, C. D., and Thelen, D. G., 2009, “Optimal Estimation of Dynamically Consistent Kinematics and Kinetics for Forward Dynamic Simulation of Gait,” ASME J. Biomech. Eng., 131(3), p. 031005) to achieve all three requirements within a single gait optimization framework. We investigated four primary enhancements to the original REA: (1) manual modification of tracked marker weights, (2) automatic modification of tracked joint acceleration curves, (3) automatic modification of algorithm feedback gains, and (4) automatic calibration of model joint and inertial parameter values. We evaluated the enhanced REA using a full-body three-dimensional dynamic skeletal model and movement data collected from a subject who performed four distinct gait patterns: walking, marching, running, and bounding. When all four enhancements were implemented together, the enhanced REA achieved dynamic consistency with lower marker tracking errors for all segments, especially the feet (mean root-mean-square (RMS) errors of 3.1 versus 18.4 mm), compared to the original REA. When the enhancements were implemented separately and in combinations, the most important one was automatic modification of tracked joint acceleration curves, while the least important enhancement was automatic modification of algorithm feedback gains. The enhanced REA provides a framework for future gait optimization studies that seek to predict subject

  14. Mitigating Multipath Bias Using a Dual-Polarization Antenna: Theoretical Performance, Algorithm Design, and Simulation

    PubMed Central

    Xie, Lin; Cui, Xiaowei; Zhao, Sihao; Lu, Mingquan

    2017-01-01

    It is well known that the multipath effect remains a dominant error source that affects the positioning accuracy of Global Navigation Satellite System (GNSS) receivers. Significant efforts have been made by researchers and receiver manufacturers to mitigate multipath error in the past decades. Recently, a multipath mitigation technique using dual-polarization antennas has become a research hotspot because it provides another degree of freedom to distinguish the line-of-sight (LOS) signal from the LOS and multipath composite signal without extensively increasing the complexity of the receiver. A number of multipath mitigation techniques using dual-polarization antennas have been proposed, and all of them report performance improvements over single-polarization methods. However, due to the unpredictability of multipath, multipath mitigation techniques based on dual-polarization are not always effective, and few studies discuss the conditions under which multipath mitigation using a dual-polarization antenna can outperform that using a single-polarization antenna, which is a fundamental question for dual-polarization multipath mitigation (DPMM) and the design of multipath mitigation algorithms. In this paper we analyze the characteristics of the signal received by a dual-polarization antenna and use the maximum likelihood estimation (MLE) to assess the theoretical performance of DPMM in different received signal cases. Based on the assessment we answer this fundamental question and find that the dual-polarization antenna is capable of mitigating short-delay multipath, the most challenging type of multipath for the majority of multipath mitigation techniques. Considering these effective conditions, we propose a dual-polarization sequential iterative maximum likelihood estimation (DP-SIMLE) algorithm for DPMM. The simulation results verify our theory and show superior performance of the proposed DP-SIMLE algorithm over the traditional one using only an

  15. Mitigating Multipath Bias Using a Dual-Polarization Antenna: Theoretical Performance, Algorithm Design, and Simulation.

    PubMed

    Xie, Lin; Cui, Xiaowei; Zhao, Sihao; Lu, Mingquan

    2017-02-13

    It is well known that the multipath effect remains a dominant error source that affects the positioning accuracy of Global Navigation Satellite System (GNSS) receivers. Significant efforts have been made by researchers and receiver manufacturers to mitigate multipath error in the past decades. Recently, a multipath mitigation technique using dual-polarization antennas has become a research hotspot because it provides another degree of freedom to distinguish the line-of-sight (LOS) signal from the LOS and multipath composite signal without extensively increasing the complexity of the receiver. A number of multipath mitigation techniques using dual-polarization antennas have been proposed, and all of them report performance improvements over single-polarization methods. However, due to the unpredictability of multipath, multipath mitigation techniques based on dual-polarization are not always effective, and few studies discuss the conditions under which multipath mitigation using a dual-polarization antenna can outperform that using a single-polarization antenna, which is a fundamental question for dual-polarization multipath mitigation (DPMM) and the design of multipath mitigation algorithms. In this paper we analyze the characteristics of the signal received by a dual-polarization antenna and use the maximum likelihood estimation (MLE) to assess the theoretical performance of DPMM in different received signal cases. Based on the assessment we answer this fundamental question and find that the dual-polarization antenna is capable of mitigating short-delay multipath, the most challenging type of multipath for the majority of multipath mitigation techniques. Considering these effective conditions, we propose a dual-polarization sequential iterative maximum likelihood estimation (DP-SIMLE) algorithm for DPMM. The simulation results verify our theory and show superior performance of the proposed DP-SIMLE algorithm over the traditional one using only an RHCP

  16. SU-F-BRD-13: Quantum Annealing Applied to IMRT Beamlet Intensity Optimization

    SciTech Connect

    Nazareth, D; Spaans, J

    2014-06-15

    Purpose: We report on the first application of quantum annealing (QA) to the process of beamlet intensity optimization for IMRT. QA is a new technology, which employs novel hardware and software techniques to address various discrete optimization problems in many fields. Methods: We apply the D-Wave Inc. proprietary hardware, which natively exploits quantum mechanical effects for improved optimization. The new QA algorithm, running on this hardware, is most similar to simulated annealing, but relies on natural processes to directly minimize the free energy of a system. A simple quantum system is slowly evolved into a classical system, representing the objective function. To apply QA to IMRT-type optimization, two prostate cases were considered. A reduced number of beamlets were employed, due to the current QA hardware limitation of ∼500 binary variables. The beamlet dose matrices were computed using CERR, and an objective function was defined based on typical clinical constraints, including dose-volume objectives. The objective function was discretized, and the QA method was compared to two standard optimization methods: simulated annealing and Tabu search, run on a conventional computing cluster. Results: Based on several runs, the average final objective function value achieved by the QA was 16.9 for the first patient, compared with 10.0 for Tabu and 6.7 for the SA. For the second patient, the values were 70.7 for the QA, 120.0 for Tabu, and 22.9 for the SA. The QA algorithm required 27–38% of the time needed by the other two methods. Conclusion: In terms of objective function value, the QA performance was similar to Tabu but less effective than the SA. However, it was 3–4 times faster than the other two methods. This initial experiment suggests that QA-based heuristics may offer significant speedup over conventional clinical optimization methods, as quantum annealing hardware scales to larger sizes.
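
    The classical baseline used in the comparison above can be illustrated with a toy simulated-annealing optimizer over discretized beamlet intensities: propose a new intensity level for one beamlet, evaluate a quadratic dose objective, and accept or reject with the usual Metropolis rule. The dose-influence matrix, intensity levels, and prescription below are random stand-ins, not the CERR-derived clinical data.

    import numpy as np

    # Toy simulated annealing over discrete beamlet intensities with a quadratic
    # dose objective.  All data are random stand-ins.
    rng = np.random.default_rng(3)

    n_voxels, n_beamlets = 300, 40
    levels = np.arange(0, 8)                                 # allowed discrete intensities
    D = rng.uniform(0.0, 0.05, size=(n_voxels, n_beamlets))  # dose per unit intensity
    target = np.ones(n_voxels)                               # prescribed dose (arbitrary units)

    def objective(w):
        return float(np.mean((D @ w - target) ** 2))

    w = rng.choice(levels, size=n_beamlets)
    current = objective(w)
    T, cooling = 1.0, 0.999
    for _ in range(20000):
        i = rng.integers(n_beamlets)
        old = w[i]
        w[i] = rng.choice(levels)                            # propose a new intensity level
        new = objective(w)
        if new < current or rng.random() < np.exp(-(new - current) / T):
            current = new                                    # accept
        else:
            w[i] = old                                       # reject
        T *= cooling

    print("final objective value:", current)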

  17. A TR-induced algorithm for hot spots elimination through CT-scan HIFU simulations

    NASA Astrophysics Data System (ADS)

    Leduc, Nicolas; Okita, Kohei; Sugiyama, Kazuyasu; Takagi, Shu; Matsumoto, Yoichiro

    2011-09-01

    Although now widely used for imaging and treatment, HIFU techniques are still limited by distortion of the wavefront due to refraction and reflection in the inhomogeneous media inside the human body. The CT-scan-based Time Reversal (TR) procedure has emerged as a promising candidate for focus control. A parallelized finite-difference time-domain code is used to simulate TR-enhanced propagation through elements of the human body and to implement a simple algorithm addressing the issue of grating lobes, i.e., secondary pressure peaks caused by the natural diffraction of phased arrays and enhanced by medium heterogeneity. Using an iterative, progressive process combining secondary sound sources and independent signal summation, the primary peak is strengthened while secondary peaks are increasingly obliterated. This method supports the feasibility of precise modification and enhancement of the pressure profile in the targeted area through Time Reversal based solutions.

  18. Computer simulation and evaluation of edge detection algorithms and their application to automatic path selection

    NASA Technical Reports Server (NTRS)

    Longendorfer, B. A.

    1976-01-01

    The construction of an autonomous roving vehicle requires the development of complex data-acquisition and processing systems, which determine the path along which the vehicle travels. Thus, a vehicle must possess algorithms which can (1) reliably detect obstacles by processing sensor data, (2) maintain a constantly updated model of its surroundings, and (3) direct its immediate actions to further a long range plan. The first function consisted of obstacle recognition. Obstacles may be identified by the use of edge detection techniques. Therefore, the Kalman Filter was implemented as part of a large scale computer simulation of the Mars Rover. The second function consisted of modeling the environment. The obstacle must be reconstructed from its edges, and the vast amount of data must be organized in a readily retrievable form. Therefore, a Terrain Modeller was developed which assembled and maintained a rectangular grid map of the planet. The third function consisted of directing the vehicle's actions.

  19. Algorithmic Extensions of Low-Dispersion Scheme and Modeling Effects for Acoustic Wave Simulation. Revised

    NASA Technical Reports Server (NTRS)

    Kaushik, Dinesh K.; Baysal, Oktay

    1997-01-01

    Accurate computation of acoustic wave propagation may be performed more efficiently when the dispersion relations of the waves are considered. Consequently, computational algorithms which attempt to preserve these relations have been gaining popularity in recent years. In the present paper, the extensions to one such scheme are discussed. By solving the linearized, 2-D Euler and Navier-Stokes equations with such a method for the acoustic wave propagation, several issues were investigated. Among them were higher-order accuracy, choice of boundary conditions and differencing stencils, effects of viscosity, low-storage time integration, generalized curvilinear coordinates, periodic series, their reflections and interference patterns from a flat wall and scattering from a circular cylinder. The results were found to be promising en route to the aeroacoustic simulations of realistic engineering problems.

  20. Simulation of Anderson localization in a random fiber using a fast Fresnel diffraction algorithm

    NASA Astrophysics Data System (ADS)

    Davis, Jeffrey A.; Cottrell, Don M.

    2016-06-01

    Anderson localization has been previously demonstrated both theoretically and experimentally for transmission of a Gaussian beam through long distances in an optical fiber consisting of a random array of smaller fibers, each having either a higher or lower refractive index. However, the computational times were extremely long. We show how to simulate these results using a fast Fresnel diffraction algorithm. In each iteration of this approach, the light passes through a phase mask, undergoes Fresnel diffraction over a small distance, and then passes through the same phase mask. We also show results where we use a binary amplitude mask at the input that selectively illuminates either the higher or the lower index fibers. Additionally, we examine imaging of various sized objects through these fibers. In all cases, our results are consistent with other computational methods and experimental results, but with a much reduced computational time.
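
    The iteration described above (phase mask, Fresnel diffraction over a short distance, phase mask again) maps directly onto a split-step Fourier scheme. The sketch below implements that loop for a random binary index pattern; the grid, wavelength, step length, and index contrast are illustrative assumptions, and no absorbing boundary is included, so it is only a qualitative toy.

    import numpy as np

    # Split-step sketch of the mask / Fresnel-propagate / mask iteration: each half mask
    # carries half a step's worth of accumulated phase from a random binary index pattern.
    n, width = 256, 400e-6                       # grid points, transverse window (m)
    wavelength, dz, nsteps = 0.633e-6, 25e-6, 100
    dn = 5e-3                                    # assumed index contrast

    x = np.linspace(-width / 2, width / 2, n, endpoint=False)
    X, Y = np.meshgrid(x, x)
    fx = np.fft.fftfreq(n, d=x[1] - x[0])
    FX, FY = np.meshgrid(fx, fx)

    rng = np.random.default_rng(4)
    pattern = rng.integers(0, 2, size=(n, n))    # random binary "fiber" layout
    half_mask = np.exp(1j * 2 * np.pi / wavelength * dn * (dz / 2) * pattern)

    # Fresnel transfer function for one full step dz (constant phase factor dropped)
    H = np.exp(-1j * np.pi * wavelength * dz * (FX ** 2 + FY ** 2))

    field = np.exp(-(X ** 2 + Y ** 2) / (20e-6) ** 2)   # input Gaussian beam

    for _ in range(nsteps):
        field = half_mask * field                        # phase mask
        field = np.fft.ifft2(H * np.fft.fft2(field))     # Fresnel diffraction over dz
        field = half_mask * field                        # phase mask again

    intensity = np.abs(field) ** 2
    rms_width = np.sqrt(np.sum((X ** 2 + Y ** 2) * intensity) / np.sum(intensity))
    print("output beam rms width (m):", rms_width)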