Science.gov

Sample records for algorithms simulated annealing

  1. Recursive Branching Simulated Annealing Algorithm

    NASA Technical Reports Server (NTRS)

    Bolcar, Matthew; Smith, J. Scott; Aronstein, David

    2012-01-01

    This innovation is a variation of a simulated-annealing optimization algorithm that uses a recursive-branching structure to parallelize the search of a parameter space for the globally optimal solution to an objective function. The algorithm has been demonstrated to be more effective at searching a parameter space than traditional simulated-annealing methods for a particular problem of interest, and it can readily be applied to a wide variety of optimization problems, including those with a parameter space having both discrete-value parameters (combinatorial) and continuous-variable parameters. It can take the place of a conventional simulated-annealing, Monte-Carlo, or random-walk algorithm. In a conventional simulated-annealing (SA) algorithm, a starting configuration is randomly selected within the parameter space. The algorithm randomly selects another configuration from the parameter space and evaluates the objective function for that configuration. If the objective function value is better than the previous value, the new configuration is adopted as the new point of interest in the parameter space. If the objective function value is worse than the previous value, the new configuration may be adopted, with a probability determined by a temperature parameter, used in analogy to annealing in metals. As the optimization continues, the region of the parameter space from which new configurations can be selected shrinks, and in conjunction with lowering the annealing temperature (and thus lowering the probability for adopting configurations in parameter space with worse objective functions), the algorithm can converge on the globally optimal configuration. The Recursive Branching Simulated Annealing (RBSA) algorithm shares some features with the SA algorithm, notably including the basic principles that a starting configuration is randomly selected from within the parameter space, the algorithm tests other configurations with the goal of finding the globally optimal
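
    The conventional SA procedure summarized above can be made concrete with a short sketch. The following Python snippet is an illustrative baseline implementation (not the RBSA variant itself): a random starting configuration, random perturbation within a search region that shrinks over time, and temperature-controlled acceptance of worse configurations. The toy objective, parameter names, and cooling/shrink rates are assumptions made for this example, not values from the NASA report.

    ```python
    import math
    import random

    def simulated_annealing(objective, lower, upper, n_iter=5000,
                            t0=1.0, cooling=0.999, shrink=0.999):
        """Minimal conventional SA sketch: minimize `objective` over a box."""
        dim = len(lower)
        x = [random.uniform(lower[i], upper[i]) for i in range(dim)]   # random start
        fx = objective(x)
        best, fbest = x[:], fx
        temp = t0
        radius = [(upper[i] - lower[i]) / 2.0 for i in range(dim)]     # search region

        for _ in range(n_iter):
            # Propose a new configuration inside the current (shrinking) region.
            y = [min(upper[i], max(lower[i], x[i] + random.uniform(-radius[i], radius[i])))
                 for i in range(dim)]
            fy = objective(y)
            delta = fy - fx
            # Accept better points always; worse points with Boltzmann probability.
            if delta <= 0 or random.random() < math.exp(-delta / temp):
                x, fx = y, fy
                if fx < fbest:
                    best, fbest = x[:], fx
            temp *= cooling                        # lower the annealing temperature
            radius = [r * shrink for r in radius]  # shrink the selection region
        return best, fbest

    # Toy usage: minimize a multimodal 2-D function.
    if __name__ == "__main__":
        f = lambda v: (v[0] ** 2 + v[1] ** 2) + 10 * math.sin(3 * v[0]) * math.sin(3 * v[1])
        print(simulated_annealing(f, [-5, -5], [5, 5]))
    ```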

  2. Simulated annealing algorithm for optimal capital growth

    NASA Astrophysics Data System (ADS)

    Luo, Yong; Zhu, Bo; Tang, Yong

    2014-08-01

    We investigate the problem of dynamic optimal capital growth of a portfolio. A general framework was developed in which one strives to maximize the expected logarithmic utility of the long-term growth rate. Exact optimization algorithms run into difficulties in this framework, which motivates the investigation of applying a simulated annealing algorithm to optimize the capital growth of a given portfolio. Empirical results with real financial data indicate that the approach is promising for capital growth portfolios.
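
    As a rough illustration of the framework described above, the expected log growth rate can be estimated from historical gross returns and maximized with a simple SA search over long-only, rebalanced portfolio weights. This is only a sketch under assumed conventions (the `returns` layout, the simplex projection step, and all parameters are illustrative), not the authors' procedure.

    ```python
    import math
    import random

    def sa_log_growth_weights(returns, n_iter=5000, t0=0.1, cooling=0.999, step=0.05):
        """SA over long-only portfolio weights, maximizing mean log growth.

        `returns` is a list of periods, each a list of gross returns per asset
        (e.g. 1.02 for +2%).  All names and parameters are illustrative.
        """
        n_assets = len(returns[0])
        w = [1.0 / n_assets] * n_assets

        def log_growth(weights):
            return sum(math.log(sum(wi * ri for wi, ri in zip(weights, period)))
                       for period in returns) / len(returns)

        def perturb(weights):
            cand = [max(1e-9, wi + random.uniform(-step, step)) for wi in weights]
            total = sum(cand)
            return [wi / total for wi in cand]   # project back onto the simplex

        score = log_growth(w)
        temp = t0
        for _ in range(n_iter):
            cand = perturb(w)
            delta = log_growth(cand) - score
            if delta >= 0 or random.random() < math.exp(delta / temp):
                w, score = cand, score + delta
            temp *= cooling
        return w, score
    ```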

  3. List-Based Simulated Annealing Algorithm for Traveling Salesman Problem.

    PubMed

    Zhan, Shi-hua; Lin, Juan; Zhang, Ze-jun; Zhong, Yi-wen

    2016-01-01

    The simulated annealing (SA) algorithm is a popular intelligent optimization algorithm which has been successfully applied in many fields. Parameter setting is a key factor in its performance, but it is also tedious work. To simplify parameter setting, we present a list-based simulated annealing (LBSA) algorithm to solve the traveling salesman problem (TSP). The LBSA algorithm uses a novel list-based cooling schedule to control the decrease of temperature. Specifically, a list of temperatures is created first, and then the maximum temperature in the list is used in the Metropolis acceptance criterion to decide whether to accept a candidate solution. The temperature list is adapted iteratively according to the topology of the solution space of the problem. The effectiveness and the parameter sensitivity of the list-based cooling schedule are illustrated through benchmark TSP instances. The LBSA algorithm, whose performance is robust over a wide range of parameter values, shows competitive performance compared with some other state-of-the-art algorithms. PMID:27034650
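
    A schematic reading of the list-based cooling idea is sketched below: keep a list of temperatures, use its maximum in the Metropolis test, and, when an uphill move is accepted, replace that maximum by the lower temperature implied by the accepted move so the schedule adapts itself. The update rule and parameter names here are illustrative assumptions; the paper's exact LBSA scheme may differ in its details.

    ```python
    import math
    import random

    def lbsa_step(temps, current, propose, energy):
        """One outer iteration of a simplified list-based SA (illustrative only).

        temps   -- list of temperatures (the cooling "list")
        current -- current solution
        propose -- function returning a neighbour of a solution
        energy  -- objective to minimize
        """
        t_max = max(temps)                     # the maximum temperature drives acceptance
        candidate = propose(current)
        delta = energy(candidate) - energy(current)
        if delta <= 0:
            return candidate, temps            # better solution: accept, list unchanged
        r = max(random.random(), 1e-12)
        if r < math.exp(-delta / t_max):       # Metropolis acceptance at t_max
            # The accepted uphill move implies a temperature -delta/ln(r);
            # it is lower than t_max, so replacing the maximum cools the list adaptively.
            implied_t = -delta / math.log(r)
            temps.remove(t_max)
            temps.append(implied_t)
            return candidate, temps
        return current, temps
    ```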

  4. Rayleigh wave inversion using heat-bath simulated annealing algorithm

    NASA Astrophysics Data System (ADS)

    Lu, Yongxu; Peng, Suping; Du, Wenfeng; Zhang, Xiaoyang; Ma, Zhenyuan; Lin, Peng

    2016-11-01

    The dispersion of Rayleigh waves can be used to obtain near-surface shear (S)-wave velocity profiles. This is performed mainly by inversion of the phase velocity dispersion curves, which has been proven to be a highly nonlinear and multimodal problem, making local search methods (LSMs) unsuitable as the inversion algorithm. In this study, a new strategy is proposed based on a variant of the simulated annealing (SA) algorithm. SA, which simulates the annealing procedure of crystalline solids in nature, is one of the global search methods (GSMs). There are many variants of SA, most of which contain two steps: the perturbation of the model and the Metropolis-criterion-based acceptance of the new model. In this paper we propose a one-step SA variant known as heat-bath SA. To test the performance of the heat-bath SA, two models are created. Both noise-free and noisy synthetic data are generated. The Levenberg-Marquardt (LM) algorithm and another SA variant, the fast simulated annealing (FSA) algorithm, are also adopted for comparison. The inversion results for the synthetic data show that the heat-bath SA algorithm is a reasonable choice for Rayleigh wave dispersion curve inversion. Finally, a real-world inversion example from a coal mine in northwestern China is shown, demonstrating that the proposed scheme is applicable in practice.
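
    The distinguishing feature of the heat-bath variant is that it has no separate accept/reject test: a model parameter is redrawn directly from the Boltzmann distribution over a set of candidate values. The following Python sketch illustrates that one-step update under assumed names (`model`, `misfit`, a discrete candidate grid); it is a schematic illustration, not the authors' implementation.

    ```python
    import math
    import random

    def heat_bath_update(model, index, candidates, misfit, temperature):
        """Redraw model[index] from the Boltzmann distribution over `candidates`.

        Unlike a Metropolis step there is no separate accept/reject test: the new
        value is sampled with probability proportional to exp(-misfit / temperature).
        """
        weights = []
        for value in candidates:
            trial = list(model)
            trial[index] = value
            weights.append(math.exp(-misfit(trial) / temperature))
        total = sum(weights)
        r = random.uniform(0.0, total)
        acc = 0.0
        for value, w in zip(candidates, weights):
            acc += w
            if r <= acc:
                new_model = list(model)
                new_model[index] = value
                return new_model
        return list(model)  # numerical fallback if all weights underflow
    ```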

  5. Combined Simulated Annealing Algorithm for the Discrete Facility Location Problem

    PubMed Central

    Qin, Jin; Ni, Ling-lin; Shi, Feng

    2012-01-01

    The combined simulated annealing (CSA) algorithm was developed for the discrete facility location problem (DFLP) in this paper. The method is a two-layer algorithm, in which the external subalgorithm optimizes the facility location decision while the internal subalgorithm optimizes the allocation of customer demand under the determined location decision. The performance of the CSA is tested on 30 instances of different sizes. The computational results show that CSA works much better than the previous algorithm on the DFLP and offers a reasonable new alternative solution method for it. PMID:23049474
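
    The two-layer structure described above can be pictured as an outer SA loop over the set of open facilities, with an inner routine that allocates customer demand for each candidate location set. The sketch below is a schematic reading of that structure using a naive cheapest-open-facility allocation; the cost model, the open/close move, and all parameters are assumptions for illustration, not the paper's exact subalgorithms.

    ```python
    import math
    import random

    def allocate(open_sites, demand, assign_cost):
        """Inner layer: assign each customer's demand to its cheapest open facility."""
        total = 0.0
        for j, d in enumerate(demand):
            total += d * min(assign_cost[i][j] for i in open_sites)
        return total

    def csa_outer(n_sites, demand, fixed_cost, assign_cost,
                  n_iter=2000, t0=100.0, cooling=0.995):
        """Outer layer: SA over the set of open facilities."""
        open_sites = {random.randrange(n_sites)}          # start with one random site

        def cost(sites):
            return sum(fixed_cost[i] for i in sites) + allocate(sites, demand, assign_cost)

        current = cost(open_sites)
        temp = t0
        for _ in range(n_iter):
            # Neighbourhood move: open or close a random facility (keep at least one open).
            i = random.randrange(n_sites)
            neighbour = set(open_sites)
            if i in neighbour and len(neighbour) > 1:
                neighbour.remove(i)
            else:
                neighbour.add(i)
            new = cost(neighbour)
            if new <= current or random.random() < math.exp(-(new - current) / temp):
                open_sites, current = neighbour, new
            temp *= cooling
        return open_sites, current
    ```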

  6. Application of Simulated Annealing and Related Algorithms to TWTA Design

    NASA Technical Reports Server (NTRS)

    Radke, Eric M.

    2004-01-01

    Simulated Annealing (SA) is a stochastic optimization algorithm used to search for global minima in complex design surfaces where exhaustive searches are not computationally feasible. The algorithm is derived by simulating the annealing process, whereby a solid is heated to a liquid state and then cooled slowly to reach thermodynamic equilibrium at each temperature. The idea is that atoms in the solid continually bond and re-bond at various quantum energy levels, and with sufficient cooling time they will rearrange at the minimum energy state to form a perfect crystal. The distribution of energy levels is given by the Boltzmann distribution: as temperature drops, the probability of the presence of high-energy bonds decreases. In searching for an optimal design, local minima and discontinuities are often present in a design surface. SA presents a distinct advantage over other optimization algorithms in its ability to escape from these local minima. Just as high-energy atomic configurations are visited in the actual annealing process in order to eventually reach the minimum energy state, in SA highly non-optimal configurations are visited in order to find otherwise inaccessible global minima. The SA algorithm produces a Markov chain of points in the design space at each temperature, with a monotonically decreasing temperature. The algorithm starts at a random point, and the objective function is evaluated there. A stochastic perturbation is then made to the parameters of the point to arrive at a proposed new point in the design space, at which the objective function is evaluated as well. If the change in objective function value ΔE is negative, the proposed new point is accepted. If ΔE is positive, the proposed new point is accepted according to the Metropolis criterion: p(ΔE) = exp(-ΔE/T), where T is the temperature for the current Markov chain. The process then repeats for the remainder of the Markov chain, after which the temperature is

  7. Simulated annealing versus quantum annealing

    NASA Astrophysics Data System (ADS)

    Troyer, Matthias

    Based on simulated classical annealing and simulated quantum annealing using quantum Monte Carlo (QMC) simulations, I will explore the question of where physical or simulated quantum annealers may outperform classical optimization algorithms. Although the stochastic dynamics of QMC simulations is not the same as the unitary dynamics of a quantum system, I will first show that for the problem of quantum tunneling between two local minima both QMC simulations and a physical system exhibit the same scaling of tunneling times with barrier height. The scaling in both cases is O(Δ²), where Δ is the tunneling splitting. An important consequence is that QMC simulations can be used to predict the performance of a quantum annealer for tunneling through a barrier. Furthermore, by using open instead of periodic boundary conditions in imaginary time, equivalent to a projector QMC algorithm, one obtains a quadratic speedup for QMC and achieves linear scaling in Δ. I will then address the apparent contradiction between experiments on a D-Wave 2 system that failed to see evidence of quantum speedup and previous QMC results that indicated an advantage of quantum annealing over classical annealing for spin glasses. We find that this contradiction is resolved by taking the continuous time limit in the QMC simulations, which then agree with the experimentally observed behavior and show no speedup for 2D spin glasses. However, QMC simulations with large time steps gain further advantage: they "cheat" by ignoring what happens during a (large) time step, and can thus outperform both simulated quantum annealers and classical annealers. I will then address the question of how to optimally run a simulated or physical quantum annealer. Investigating the behavior of the tails of the distribution of runtimes for very hard instances, we find that adiabatically slow annealing is far from optimal. On the contrary, many repeated relatively fast annealing runs can be orders of magnitude faster for

  8. Parameter estimation for chaotic systems using a hybrid adaptive cuckoo search with simulated annealing algorithm

    NASA Astrophysics Data System (ADS)

    Sheng, Zheng; Wang, Jun; Zhou, Shudao; Zhou, Bihua

    2014-03-01

    This paper introduces a novel hybrid optimization algorithm to establish the parameters of chaotic systems. In order to deal with the weaknesses of the traditional cuckoo search algorithm, the proposed adaptive cuckoo search with simulated annealing algorithm is presented, which incorporates the adaptive parameter adjusting operation and the simulated annealing operation into the cuckoo search algorithm. Normally, the parameters of the cuckoo search algorithm are kept constant, which may decrease the efficiency of the algorithm. For the purpose of balancing and enhancing the accuracy and convergence rate of the cuckoo search algorithm, the adaptive operation is presented to tune the parameters properly. In addition, the local search capability of the cuckoo search algorithm is relatively weak, which may decrease the quality of optimization, so the simulated annealing operation is merged into the cuckoo search algorithm to enhance the local search ability and improve the accuracy and reliability of the results. The functionality of the proposed hybrid algorithm is investigated through the Lorenz chaotic system under noiseless and noisy conditions, respectively. The numerical results demonstrate that the method can estimate parameters efficiently and accurately under both conditions. Finally, the results are compared with the traditional cuckoo search algorithm, genetic algorithm, and particle swarm optimization algorithm. Simulation results demonstrate the effectiveness and superior performance of the proposed algorithm. PMID:24697395
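
    The general pattern the abstract describes, a population-based cuckoo-search-style update with an SA acceptance rule added to strengthen local search, can be sketched as follows. This is a deliberately simplified illustration: a plain Gaussian walk stands in for the Levy flight, the nest-abandonment step of full cuckoo search is omitted, and the adaptive step-size rule and all parameters are assumptions, not the authors' algorithm.

    ```python
    import math
    import random

    def hybrid_cuckoo_sa(objective, dim, lo, hi, n_nests=15, n_iter=500,
                         t0=1.0, cooling=0.99):
        """Sketch: cuckoo-search-style population update with SA acceptance."""
        nests = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_nests)]
        scores = [objective(n) for n in nests]
        temp = t0
        for it in range(n_iter):
            step = 0.5 * (hi - lo) * (1.0 - it / n_iter)   # adaptive step size (assumption)
            for k in range(n_nests):
                # A plain Gaussian walk stands in for the Levy flight of cuckoo search.
                cand = [min(hi, max(lo, x + step * random.gauss(0, 1))) for x in nests[k]]
                delta = objective(cand) - scores[k]
                # SA operation: occasionally accept worse nests to strengthen local search.
                if delta <= 0 or random.random() < math.exp(-delta / temp):
                    nests[k], scores[k] = cand, scores[k] + delta
            temp *= cooling
        best = min(range(n_nests), key=scores.__getitem__)
        return nests[best], scores[best]
    ```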

  9. Atmospheric compensation in free space optical communication with simulated annealing algorithm

    NASA Astrophysics Data System (ADS)

    Li, Zhaokun; Cao, Jingtai; Zhao, Xiaohui; Liu, Wei

    2015-03-01

    Conventional adaptive optics (AO) systems can compensate for atmospheric turbulence in free space optical (FSO) communication systems; in strong scintillation conditions, however, wave-front measurements based on the phase-conjugation principle are unreliable. A novel global-optimization simulated annealing (SA) algorithm is proposed in this paper to compensate wave-front aberration. Owing to its global optimization characteristics, the SA algorithm improves on stochastic parallel gradient descent (SPGD) and other existing algorithms. Related simulations are conducted, and the results show that the SA algorithm can significantly improve the performance of an FSO communication system and outperforms the SPGD algorithm in terms of coupling efficiency.

  10. Experiences with serial and parallel algorithms for channel routing using simulated annealing

    NASA Technical Reports Server (NTRS)

    Brouwer, Randall Jay

    1988-01-01

    Two algorithms for channel routing using simulated annealing are presented. Simulated annealing is an optimization methodology which allows the solution process to back up out of local minima that may be encountered through inappropriate selections. By properly controlling the annealing process, it is very likely that the optimal solution to an NP-complete problem such as channel routing may be found. The algorithm presented imposes very relaxed restrictions on the types of allowable transformations, including overlapping nets. By relaxing that restriction and controlling overlap situations with an appropriate cost function, the algorithm becomes very flexible and can be applied to many extensions of channel routing. The selection of the transformation utilizes a number of heuristics, still retaining the pseudorandom nature of simulated annealing. The algorithm was implemented as a serial program for a workstation and as a parallel program designed for a hypercube computer. The details of the serial implementation are presented, including many of the heuristics used and some of the resulting solutions.

  11. A parallel simulated annealing algorithm for standard cell placement on a hypercube computer

    NASA Technical Reports Server (NTRS)

    Jones, Mark Howard

    1987-01-01

    A parallel version of a simulated annealing algorithm is presented which is targeted to run on a hypercube computer. A strategy for mapping the cells in a two dimensional area of a chip onto processors in an n-dimensional hypercube is proposed such that both small and large distance moves can be applied. Two types of moves are allowed: cell exchanges and cell displacements. The computation of the cost function in parallel among all the processors in the hypercube is described along with a distributed data structure that needs to be stored in the hypercube to support parallel cost evaluation. A novel tree broadcasting strategy is used extensively in the algorithm for updating cell locations in the parallel environment. Studies on the performance of the algorithm on example industrial circuits show that it is faster and gives better final placement results than the uniprocessor simulated annealing algorithms. An improved uniprocessor algorithm is proposed which is based on the improved results obtained from parallelization of the simulated annealing algorithm.

  12. A Simulated Annealing Algorithm for the Optimization of Multistage Depressed Collector Efficiency

    NASA Technical Reports Server (NTRS)

    Vaden, Karl R.; Wilson, Jeffrey D.; Bulson, Brian A.

    2002-01-01

    The microwave traveling wave tube amplifier (TWTA) is widely used as a high-power transmitting source for space and airborne communications. One critical factor in designing a TWTA is the overall efficiency. However, overall efficiency is highly dependent upon collector efficiency; so collector design is critical to the performance of a TWTA. Therefore, NASA Glenn Research Center has developed an optimization algorithm based on Simulated Annealing to quickly design highly efficient multi-stage depressed collectors (MDC).

  13. The Research on Web-Based Testing Environment Using Simulated Annealing Algorithm

    PubMed Central

    2014-01-01

    Computerized evaluation is now one of the most important methods of diagnosing learning; with the application of artificial intelligence techniques in the field of evaluation, computerized adaptive testing has gradually become one of the most important evaluation methods. In this kind of test, the computer dynamically updates the learner's ability level and selects tailored items from the item pool. Meeting the needs of the test requires that the system be implemented with relatively high efficiency. To solve this problem, we propose a novel web-based testing environment based on the simulated annealing algorithm. In the development of the system, through a series of experiments, we compared the efficiency and efficacy of the simulated annealing method with those of other methods. The experimental results show that this method ensures choosing nearly optimal items from the item bank for learners, meets a variety of assessment needs, is reliable, and gives valid judgments of learners' ability. In addition, using the simulated annealing algorithm to manage the computational complexity of the system greatly improves the efficiency of item selection and yields near-optimal solutions. PMID:24959600
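
    One way to read the use of SA here is as a combinatorial search for a fixed-size set of items that is maximally informative at the learner's current ability estimate. The sketch below uses the standard two-parameter logistic (2PL) Fisher information and a swap neighbourhood; the item model, the subset-selection formulation, and all parameters are assumptions for illustration and are not taken from the paper.

    ```python
    import math
    import random

    def fisher_info(theta, a, b):
        """Fisher information of a 2PL item with discrimination a and difficulty b."""
        p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
        return a * a * p * (1.0 - p)

    def sa_select_items(items, theta, k, n_iter=2000, t0=1.0, cooling=0.998):
        """Pick k items (as an index set) maximizing total information at ability theta."""
        def score(subset):
            return sum(fisher_info(theta, *items[i]) for i in subset)

        pool = list(range(len(items)))
        current = set(random.sample(pool, k))
        cur_score = score(current)
        temp = t0
        for _ in range(n_iter):
            # Neighbourhood move: swap one selected item for one unselected item.
            out_item = random.choice(list(current))
            in_item = random.choice([i for i in pool if i not in current])
            cand = set(current); cand.remove(out_item); cand.add(in_item)
            delta = score(cand) - cur_score
            if delta >= 0 or random.random() < math.exp(delta / temp):
                current, cur_score = cand, cur_score + delta
            temp *= cooling
        return sorted(current), cur_score

    # `items` could be a list of (a, b) tuples, e.g. [(1.2, -0.3), (0.8, 0.5), ...]
    ```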

  14. Broadband diffusion metasurface based on a single anisotropic element and optimized by the Simulated Annealing algorithm.

    PubMed

    Zhao, Yi; Cao, Xiangyu; Gao, Jun; Sun, Yu; Yang, Huanhuan; Liu, Xiao; Zhou, Yulong; Han, Tong; Chen, Wei

    2016-01-01

    We propose a new strategy to design broadband and wide-angle diffusion metasurfaces. An anisotropic structure which has opposite phases under x- and y-polarized incidence is employed as the "0" and "1" elements, based on the concept of coding metamaterials. To obtain uniform backward scattering under normal incidence, the Simulated Annealing algorithm is utilized in this paper to calculate the optimal layout. The proposed method provides an efficient way to design a diffusion metasurface with a simple structure, which has been verified by both simulations and measurements. PMID:27034110
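
    The layout optimization described above amounts to SA over a binary matrix of "0"/"1" elements, scored by how uniformly the surface scatters. The sketch below uses a crude stand-in cost (the peak of the 2-D FFT of the ±1 phase pattern, a simple array-factor model) instead of a full electromagnetic simulation; this cost model, the array size, and all parameters are assumptions for illustration only.

    ```python
    import numpy as np

    def peak_scatter(code):
        """Toy diffusion metric: peak of the 2-D array factor for a 0/1 phase coding."""
        field = np.exp(1j * np.pi * code)          # '0' -> phase 0, '1' -> phase pi
        pattern = np.abs(np.fft.fft2(field, s=(64, 64)))
        return pattern.max()

    def sa_coding_layout(n=8, n_iter=3000, t0=5.0, cooling=0.998, seed=0):
        """SA over an n-by-n binary coding matrix, minimizing the peak scattering."""
        rng = np.random.default_rng(seed)
        code = rng.integers(0, 2, size=(n, n))
        cost = peak_scatter(code)
        temp = t0
        for _ in range(n_iter):
            i, j = rng.integers(0, n, size=2)      # flip one randomly chosen element
            code[i, j] ^= 1
            new_cost = peak_scatter(code)
            delta = new_cost - cost
            if delta <= 0 or rng.random() < np.exp(-delta / temp):
                cost = new_cost                    # keep the flip
            else:
                code[i, j] ^= 1                    # undo the flip
            temp *= cooling
        return code, cost
    ```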

  1. Comparison between simulated annealing algorithms and rapid chain delineation in the construction of genetic maps.

    PubMed

    Nascimento, Moysés; Cruz, Cosme Damião; Peternelli, Luiz Alexandre; Campana, Ana Carolina Mota

    2010-04-01

    The efficiency of simulated annealing algorithms and rapid chain delineation in establishing the best linkage order, when constructing genetic maps, was evaluated. Linkage refers to the phenomenon by which two or more genes, or even more molecular markers, can be present in the same chromosome or linkage group. In order to evaluate the capacity of the algorithms, four F2 co-dominant populations, 50, 100, 200 and 1000 in size, were simulated. For each population, a genome with four linkage groups (100 cM) was generated. The linkage groups possessed 51, 21, 11 and 6 marks, respectively, and a corresponding distance of 2, 5, 10 and 20 cM between adjacent marks, thereby causing various degrees of saturation. For very saturated groups, with an adjacent distance between marks of 2 cM and in greater number, i.e., 51, the method based upon stochastic simulation by simulated annealing presented orders with distances equivalent to or lower than rapid chain delineation. Otherwise, the two methods were comparable, presenting the same SARF distance. PMID:21637501
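
    Finding the best linkage order is a permutation problem much like the traveling salesman problem, so the SA template applies directly: explore marker orders with a segment-reversal move and score each order by the sum of adjacent recombination fractions (SARF) mentioned above. In the sketch below the `sarf` scoring function is assumed to be supplied, and all parameters are illustrative.

    ```python
    import math
    import random

    def sa_marker_order(n_markers, sarf, n_iter=20000, t0=1.0, cooling=0.9995):
        """SA over marker orders; `sarf(order)` returns the sum of adjacent
        recombination fractions for a candidate order (lower is better)."""
        order = list(range(n_markers))
        random.shuffle(order)
        cost = sarf(order)
        temp = t0
        for _ in range(n_iter):
            # Neighbourhood move: reverse a random segment of the order.
            i, j = sorted(random.sample(range(n_markers), 2))
            cand = order[:i] + order[i:j + 1][::-1] + order[j + 1:]
            delta = sarf(cand) - cost
            if delta <= 0 or random.random() < math.exp(-delta / temp):
                order, cost = cand, cost + delta
            temp *= cooling
        return order, cost
    ```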

  2. Simulated annealing algorithm for solving chambering student-case assignment problem

    NASA Astrophysics Data System (ADS)

    Ghazali, Saadiah; Abdul-Rahman, Syariza

    2015-12-01

    The project assignment problem is a popular practical problem that arises nowadays. The challenge of solving it grows as the complexity of preferences, the presence of real-world constraints, and the problem size increase. This study focuses on solving a chambering student-case assignment problem, which is classified as a project assignment problem, by using a simulated annealing algorithm. The project assignment problem is considered a hard combinatorial optimization problem, and solving it using a metaheuristic approach is advantageous because it can return a good solution in a reasonable time. The problem of assigning chambering students to cases has never been addressed in the literature before. In the proposed problem, it is essential for law graduates to undergo chambering before they are qualified to become legal counsel. Thus, assigning chambering students to cases is critically needed, especially when many preferences are involved. Hence, this study presents a preliminary study of the proposed project assignment problem. The objective of the study is to minimize the total completion time for all students in solving the given cases. This study employed a minimum-cost greedy heuristic in order to construct a feasible initial solution. The search then proceeds with a simulated annealing algorithm for further improvement of solution quality. The analysis of the obtained results shows that the proposed simulated annealing algorithm greatly improves the solution constructed by the minimum-cost greedy heuristic. Hence, this research demonstrates the advantages of solving the project assignment problem using metaheuristic techniques.

  3. A permutation based simulated annealing algorithm to predict pseudoknotted RNA secondary structures.

    PubMed

    Tsang, Herbert H; Wiese, Kay C

    2015-01-01

    Pseudoknots are RNA tertiary structures which perform essential biological functions. This paper discusses SARNA-Predict-pk, an RNA pseudoknotted secondary structure prediction algorithm based on Simulated Annealing (SA). The research presented here extends previous work on SARNA-Predict and further examines the ability of the new algorithm to predict RNA secondary structures with pseudoknots. An evaluation of the performance of SARNA-Predict-pk in terms of prediction accuracy is made via comparison with several state-of-the-art prediction algorithms using 20 individual known structures from seven RNA classes. We measured the sensitivity and specificity of nine prediction algorithms. Three of these are dynamic programming algorithms: Pseudoknot (pknotsRE), NUPACK, and pknotsRG-mfe. One uses a statistical clustering approach (Sfold), and the other five are heuristic algorithms: SARNA-Predict-pk, ILM, STAR, IPknot, and HotKnots. The results presented in this paper demonstrate that SARNA-Predict-pk can outperform other state-of-the-art algorithms in terms of prediction accuracy. This supports the use of the proposed method for pseudoknotted RNA secondary structure prediction of other known structures. PMID:26558299

  4. Optimal design of minimum mean-square error noise reduction algorithms using the simulated annealing technique.

    PubMed

    Bai, Mingsian R; Hsieh, Ping-Ju; Hur, Kur-Nan

    2009-02-01

    The performance of the minimum mean-square error noise reduction (MMSE-NR) algorithm in conjunction with time-recursive averaging (TRA) for noise estimation is found to be very sensitive to the choice of two recursion parameters. To address this problem in a more systematic manner, this paper proposes an optimization method to efficiently search for the optimal parameters of the MMSE-TRA-NR algorithms. The objective function is based on a regression model, whereas the optimization process is carried out with the simulated annealing algorithm, which is well suited for problems with many local optima. Another NR algorithm proposed in the paper employs linear prediction coding as a preprocessor for extracting the correlated portion of human speech. Objective and subjective tests were undertaken to compare the optimized MMSE-TRA-NR algorithm with several conventional NR algorithms. The results of the subjective tests were processed by using analysis of variance to justify the statistical significance. A post hoc test, Tukey's Honestly Significant Difference, was conducted to further assess the pairwise differences between the NR algorithms.

  5. Reconstruction of the vertical electron density profile based on vertical TEC using the simulated annealing algorithm

    NASA Astrophysics Data System (ADS)

    Jiang, Chunhua; Yang, Guobin; Zhu, Peng; Nishioka, Michi; Yokoyama, Tatsuhiro; Zhou, Chen; Song, Huan; Lan, Ting; Zhao, Zhengyu; Zhang, Yuannong

    2016-05-01

    This paper presents a new method to reconstruct the vertical electron density profile based on vertical Total Electron Content (TEC) using the simulated annealing algorithm. The present technique uses quasi-parabolic segments (QPS) to model the bottomside ionosphere. The initial parameters of the ionosphere model were determined from both the International Reference Ionosphere (IRI) (Bilitza et al., 2014) and vertical TEC (vTEC). Then, the simulated annealing algorithm was used to search for the best-fit parameters of the ionosphere model by comparison with the GPS-TEC. The performance and robustness of this technique were verified with ionosonde data. The critical frequency (foF2) and peak height (hmF2) of the F2 layer obtained from ionograms recorded at different locations and on different days were compared with those calculated by the proposed method. The analysis of the results shows that the present method is promising for obtaining foF2 from vTEC. However, the accuracy of hmF2 needs to be improved in future work.

  6. Solution of the optimal plant location and sizing problem using simulated annealing and genetic algorithms

    SciTech Connect

    Rao, R.; Buescher, K.L.; Hanagandi, V.

    1995-12-31

    In the optimal plant location and sizing problem it is desired to optimize a cost function involving plant sizes, locations, and production schedules in the face of supply-demand and plant capacity constraints. We will use simulated annealing (SA) and a genetic algorithm (GA) to solve this problem. We will compare these techniques with respect to computational expenses, constraint handling capabilities, and the quality of the solution obtained in general. Simulated annealing is a combinatorial stochastic optimization technique which has been shown to be effective in obtaining fast suboptimal solutions for computationally hard problems. The technique is especially attractive since solutions are obtained in polynomial time for problems where an exhaustive search for the global optimum would require exponential time. We propose a synergy between the cluster analysis technique, popular in classical stochastic global optimization, and the GA to accomplish global optimization. This synergy minimizes redundant searches around local optima and enhances the capability of the GA to explore new areas in the search space.

  7. PedMine – A simulated annealing algorithm to identify maximally unrelated individuals in population isolates

    PubMed Central

    Douglas, Julie A.; Sandefur, Conner I.

    2010-01-01

    In family-based genetic studies, it is often useful to identify a subset of unrelated individuals. When such studies are conducted in population isolates, however, most if not all individuals are often detectably related to each other. To identify a set of maximally unrelated (or equivalently, minimally related) individuals, we have implemented simulated annealing, a general-purpose algorithm for solving difficult combinatorial optimization problems. We illustrate our method on data from a genetic study in the Old Order Amish of Lancaster County, Pennsylvania, a population isolate derived from a modest number of founders. Given one or more pedigrees, our program automatically and rapidly extracts a fixed number of maximally unrelated individuals. PMID:18321883

  8. [The utility boiler low NOx combustion optimization based on ANN and simulated annealing algorithm].

    PubMed

    Zhou, Hao; Qian, Xinping; Zheng, Ligang; Weng, Anxin; Cen, Kefa

    2003-11-01

    With increasingly strict environmental protection requirements, more attention has been paid to low-NOx combustion optimization technology because it is inexpensive and easy to apply. In this work, field experiments on the NOx emission characteristics of a 600 MW coal-fired boiler were carried out. On the basis of artificial neural network (ANN) modeling, the simulated annealing (SA) algorithm was employed to optimize the boiler combustion to achieve a low NOx emission concentration, and the corresponding combustion scheme was obtained. Two sets of SA parameters were adopted to find a better SA scheme; the results show that the parameters T0 = 50 K and alpha = 0.6 lead to a better optimization process. This work can lay the foundation for on-line control technology for low-NOx boiler combustion.
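
    The workflow in this abstract follows a common surrogate-based pattern: fit a data-driven model of NOx emissions as a function of the controllable settings, then run SA on the surrogate rather than on the boiler itself. The generic sketch below assumes a trained `predict_nox` callable (any regressor) and illustrative bounds and parameters; it shows the pattern, not the authors' system.

    ```python
    import math
    import random

    def sa_minimize_surrogate(predict_nox, bounds, n_iter=3000, t0=1.0, cooling=0.997):
        """SA over operating settings using a trained surrogate model.

        predict_nox -- callable mapping a settings vector to predicted NOx
        bounds      -- list of (low, high) tuples for each controllable setting
        """
        x = [random.uniform(lo, hi) for lo, hi in bounds]
        nox = predict_nox(x)
        best, best_nox = list(x), nox
        temp = t0
        for _ in range(n_iter):
            k = random.randrange(len(bounds))
            lo, hi = bounds[k]
            trial = list(x)
            trial[k] = min(hi, max(lo, trial[k] + 0.1 * (hi - lo) * random.gauss(0, 1)))
            delta = predict_nox(trial) - nox
            if delta <= 0 or random.random() < math.exp(-delta / temp):
                x, nox = trial, nox + delta
                if nox < best_nox:
                    best, best_nox = list(x), nox
            temp *= cooling
        return best, best_nox
    ```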

  9. Forecasting nonlinear chaotic time series with function expression method based on an improved genetic-simulated annealing algorithm.

    PubMed

    Wang, Jun; Zhou, Bi-hua; Zhou, Shu-dao; Sheng, Zheng

    2015-01-01

    The paper proposes a novel function expression method to forecast chaotic time series, using an improved genetic-simulated annealing (IGSA) algorithm to establish the optimum function expression that describes the behavior of the time series. In order to deal with the weaknesses associated with the genetic algorithm, the proposed algorithm incorporates the simulated annealing operation, which has strong local search ability, into the genetic algorithm to enhance the performance of optimization; in addition, the fitness function and genetic operators are also improved. Finally, the method is applied to the chaotic time series of the Quadratic and Rossler maps for validation. The effect of noise in the chaotic time series is also studied numerically. The numerical results verify that the method can forecast chaotic time series with high precision and effectiveness, and the forecasting precision with certain noise is also satisfactory. It can be concluded that the IGSA algorithm is energy-efficient and superior.

  10. A memory structure adapted simulated annealing algorithm for a green vehicle routing problem.

    PubMed

    Küçükoğlu, İlker; Ene, Seval; Aksoy, Aslı; Öztürk, Nursel

    2015-03-01

    Currently, reduction of carbon dioxide (CO2) emissions and fuel consumption has become a critical environmental problem and has attracted the attention of both academia and the industrial sector. Government regulations and customer demands are making environmental responsibility an increasingly important factor in overall supply chain operations. Within these operations, transportation has the most hazardous effects on the environment, i.e., CO2 emissions, fuel consumption, noise and toxic effects on the ecosystem. This study aims to construct vehicle routes with time windows that minimize the total fuel consumption and CO2 emissions. The green vehicle routing problem with time windows (G-VRPTW) is formulated using a mixed integer linear programming model. A memory structure adapted simulated annealing (MSA-SA) meta-heuristic algorithm is constructed due to the high complexity of the proposed problem and long solution times for practical applications. The proposed models are integrated with a fuel consumption and CO2 emissions calculation algorithm that considers the vehicle technical specifications, vehicle load, and transportation distance in a green supply chain environment. The proposed models are validated using well-known instances with different numbers of customers. The computational results indicate that the MSA-SA heuristic is capable of obtaining good G-VRPTW solutions within a reasonable amount of time by providing reductions in fuel consumption and CO2 emissions. PMID:25056743

  11. Structural optimization and segregation behavior of quaternary alloy nanoparticles based on simulated annealing algorithm

    NASA Astrophysics Data System (ADS)

    Xin-Ze, Lu; Gui-Fang, Shao; Liang-You, Xu; Tun-Dong, Liu; Yu-Hua, Wen

    2016-05-01

    Alloy nanoparticles exhibit higher catalytic activity than monometallic nanoparticles, and their stable structures are of importance to their applications. We employ the simulated annealing algorithm to systematically explore the stable structure and segregation behavior of tetrahexahedral Pt–Pd–Cu–Au quaternary alloy nanoparticles. Three alloy nanoparticles consisting of 443 atoms, 1417 atoms, and 3285 atoms are considered and compared. The preferred positions of atoms in the nanoparticles are analyzed. The simulation results reveal that Cu and Au atoms tend to occupy the surface, Pt atoms preferentially occupy the middle layers, and Pd atoms tend to segregate to the inner layers. Furthermore, Au atoms present stronger surface segregation than Cu ones. This study provides a fundamental understanding of the structural features and segregation phenomena of multi-metallic nanoparticles. Project supported by the National Natural Science Foundation of China (Grant Nos. 51271156, 11474234, and 61403318) and the Natural Science Foundation of Fujian Province of China (Grant Nos. 2013J01255 and 2013J06002).

  12. QSAR modeling for quinoxaline derivatives using genetic algorithm and simulated annealing based feature selection.

    PubMed

    Ghosh, P; Bagchi, M C

    2009-01-01

    With a view to the rational design of selective quinoxaline derivatives, 2D- and 3D-QSAR models have been developed for the prediction of anti-tubercular activities. Successful implementation of a predictive QSAR model largely depends on the selection of a preferred set of molecular descriptors that can signify the chemico-biological interaction. Genetic algorithm (GA) and simulated annealing (SA) are applied as variable selection methods for model development. 2D-QSAR modeling using GA- or SA-based partial least squares (GA-PLS and SA-PLS) methods identified some topological and electrostatic descriptors as important factors for anti-tubercular activity. Kohonen networks and counter-propagation artificial neural networks (CP-ANN), with GA- and SA-based feature selection, have been applied for such QSAR modeling of quinoxaline compounds. Out of a variable pool of 380 molecular descriptors, predictive QSAR models are developed for the training set and validated on the test set compounds, and a comparative study of the relative effectiveness of linear and non-linear approaches has been carried out. Further analysis using the 3D-QSAR technique identifies two models, obtained by GA-PLS and SA-PLS methods, for anti-tubercular activity prediction. The influences of steric and electrostatic field effects generated by the contribution plots are discussed. The results indicate that SA is a very effective variable selection approach for such 3D-QSAR modeling.

  13. Adaptive MANET multipath routing algorithm based on the simulated annealing approach.

    PubMed

    Kim, Sungwook

    2014-01-01

    A mobile ad hoc network is a system of wireless mobile nodes that can freely and dynamically self-organize network topologies without any preexisting communication infrastructure. Due to characteristics like temporary topology and the absence of centralized authority, routing is one of the major issues in ad hoc networks. In this paper, a new multipath routing scheme is proposed by employing a simulated annealing approach. The proposed metaheuristic approach can achieve greater and reciprocal advantages in hostile, dynamic, real-world network situations. Therefore, the proposed routing scheme is a powerful method for finding an effective solution to the mobile ad hoc network routing problem. Simulation results indicate that the proposed paradigm adapts best to the variation of dynamic network situations. The average remaining energy, network throughput, packet loss probability, and traffic load distribution are improved by about 10%, 10%, 5%, and 10%, respectively, compared with the existing schemes.

  14. GenAnneal: Genetically modified Simulated Annealing

    NASA Astrophysics Data System (ADS)

    Tsoulos, Ioannis G.; Lagaris, Isaac E.

    2006-05-01

    A modification of the standard Simulated Annealing (SA) algorithm is presented for finding the global minimum of a continuous multidimensional, multimodal function. We report results of computational experiments with a set of test functions and we compare to methods of similar structure. The accompanying software accepts objective functions coded both in Fortran 77 and C++.

    Program summary
    Title of program: GenAnneal
    Catalogue identifier: ADXI_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADXI_v1_0
    Program available from: CPC Program Library, Queen's University of Belfast, N. Ireland
    Computer for which the program is designed and others on which it has been tested: the tool is designed to be portable to all systems running the GNU C++ compiler
    Installation: University of Ioannina, Greece, on Linux-based machines
    Programming languages used: GNU C++, GNU C, GNU Fortran 77
    Memory required to execute with typical data: 200 KB
    No. of bits in a word: 32
    No. of processors used: 1
    Has the code been vectorized or parallelized?: no
    No. of bytes in distributed program, including test data, etc.: 84 885
    No. of lines in distributed program, including test data, etc.: 14 896
    Distribution format: tar.gz
    Nature of physical problem: A multitude of problems in science and engineering are often reduced to minimizing a function of many variables. There are instances where a local optimum does not correspond to the desired physical solution and hence the search for a better solution is required. Local optimization techniques are frequently trapped in local minima. Global optimization is hence the appropriate tool. For example, solving a non-linear system of equations via optimization, employing a "least squares" type of objective, one may encounter many local minima that do not correspond to solutions (i.e. they are far from zero).
    Typical running time: depending on the objective function.
    Method of solution: We modified the process of step selection that the traditional Simulated

  15. A general Monte Carlo/simulated annealing algorithm for resonance assignment in NMR of uniformly labeled biopolymers

    PubMed Central

    Hu, Kan-Nian; Qiang, Wei; Tycko, Robert

    2011-01-01

    We describe a general computational approach to site-specific resonance assignments in multidimensional NMR studies of uniformly 15N,13C-labeled biopolymers, based on a simple Monte Carlo/simulated annealing (MCSA) algorithm contained in the program MCASSIGN2. Input to MCASSIGN2 includes lists of multidimensional signals in the NMR spectra with their possible residue-type assignments (which need not be unique), the biopolymer sequence, and a table that describes the connections that relate one signal list to another. As output, MCASSIGN2 produces a high-scoring sequential assignment of the multidimensional signals, using a score function that rewards good connections (i.e., agreement between relevant sets of chemical shifts in different signal lists) and penalizes bad connections, unassigned signals, and assignment gaps. Examination of a set of high-scoring assignments from a large number of independent runs allows one to determine whether a unique assignment exists for the entire sequence or parts thereof. We demonstrate the MCSA algorithm using two-dimensional (2D) and three-dimensional (3D) solid state NMR spectra of several model protein samples (α-spectrin SH3 domain and protein G/B1 microcrystals, HET-s(218–289) fibrils), obtained with magic-angle spinning and standard polarization transfer techniques. The MCSA algorithm and MCASSIGN2 program can accommodate arbitrary combinations of NMR spectra with arbitrary dimensionality, and can therefore be applied in many areas of solid state and solution NMR. PMID:21710190

  16. Annealed Importance Sampling Reversible Jump MCMC algorithms

    SciTech Connect

    Karagiannis, Georgios; Andrieu, Christophe

    2013-03-20

    It will soon be 20 years since reversible jump Markov chain Monte Carlo (RJ-MCMC) algorithms have been proposed. They have significantly extended the scope of Markov chain Monte Carlo simulation methods, offering the promise to be able to routinely tackle transdimensional sampling problems, as encountered in Bayesian model selection problems for example, in a principled and flexible fashion. Their practical efficient implementation, however, still remains a challenge. A particular difficulty encountered in practice is in the choice of the dimension matching variables (both their nature and their distribution) and the reversible transformations which allow one to define the one-to-one mappings underpinning the design of these algorithms. Indeed, even seemingly sensible choices can lead to algorithms with very poor performance. The focus of this paper is the development and performance evaluation of a method, annealed importance sampling RJ-MCMC (aisRJ), which addresses this problem by mitigating the sensitivity of RJ-MCMC algorithms to the aforementioned poor design. As we shall see the algorithm can be understood as being an “exact approximation” of an idealized MCMC algorithm that would sample from the model probabilities directly in a model selection set-up. Such an idealized algorithm may have good theoretical convergence properties, but typically cannot be implemented, and our algorithms can approximate the performance of such idealized algorithms to an arbitrary degree while not introducing any bias for any degree of approximation. Our approach combines the dimension matching ideas of RJ-MCMC with annealed importance sampling and its Markov chain Monte Carlo implementation. We illustrate the performance of the algorithm with numerical simulations which indicate that, although the approach may at first appear computationally involved, it is in fact competitive.

  17. Genetic Algorithm Based Simulated Annealing Method for Solving Unit Commitment Problem in Utility System

    NASA Astrophysics Data System (ADS)

    Rajan, C. Christober Asir

    2010-10-01

    The objective of this paper is to find a generation schedule such that the total operating cost is minimized when subjected to a variety of constraints. This also means that it is desirable to find the optimal generating unit commitment in the power system for the next H hours. Genetic algorithms (GAs) are general-purpose optimization techniques based on principles inspired by biological evolution, using metaphors of mechanisms such as natural selection, genetic recombination, and survival of the fittest. Here, the unit commitment schedule is coded as a string of symbols. An initial population of parent solutions is generated at random. Each schedule is formed by committing all the units according to their initial status ("flat start"). The parents are obtained from a pre-defined set of solutions, i.e., each and every solution is adjusted to meet the requirements. Then, a random recommitment is carried out with respect to the units' minimum down times, and SA improves the solution. A 66-bus utility power system with twelve generating units in India demonstrates the effectiveness of the proposed approach. Numerical results are shown comparing the cost solutions and computation time obtained by using the Genetic Algorithm method and other conventional methods.

  18. Experimental demonstration of a quantum annealing algorithm for the traveling salesman problem in a nuclear-magnetic-resonance quantum simulator

    SciTech Connect

    Chen Hongwei; Kong Xi; Qin Gan; Zhou Xianyi; Peng Xinhua; Du Jiangfeng; Chong Bo

    2011-03-15

    The method of quantum annealing (QA) is a promising way of solving many optimization problems in both classical and quantum information theory. The main advantage of this approach, compared with the gate model, is the robustness of the operations against errors originating from both external controls and the environment. In this work, we succeed in demonstrating experimentally an application of the method of QA to a simplified version of the traveling salesman problem by simulating the corresponding Schroedinger evolution with an NMR quantum simulator. The experimental results unambiguously yielded the optimal traveling route, in good agreement with the theoretical prediction.

  1. The Local Minima Problem in Hierarchical Classes Analysis: An Evaluation of a Simulated Annealing Algorithm and Various Multistart Procedures

    ERIC Educational Resources Information Center

    Ceulemans, Eva; Van Mechelen, Iven; Leenen, Iwin

    2007-01-01

    Hierarchical classes models are quasi-order retaining Boolean decomposition models for N-way N-mode binary data. To fit these models to data, rationally started alternating least squares (or, equivalently, alternating least absolute deviations) algorithms have been proposed. Extensive simulation studies showed that these algorithms succeed quite…

  2. An Introduction to Simulated Annealing

    ERIC Educational Resources Information Center

    Albright, Brian

    2007-01-01

    An attempt to model the physical process of annealing led to the development of a type of combinatorial optimization algorithm that addresses the problem of getting trapped in a local minimum. The author presents a Microsoft Excel spreadsheet that illustrates how this works.
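
    For readers who want the core acceptance rule in code rather than a spreadsheet, the following is a minimal Python sketch of simulated annealing on a toy one-dimensional objective; the objective, schedule and parameter values are illustrative assumptions, not taken from the article:

      import math, random

      def objective(x):
          # A bumpy 1-D function with several local minima.
          return 0.1 * x * x + math.sin(3.0 * x)

      def simulated_annealing(x0, t0=5.0, cooling=0.995, steps=5000, step_size=0.5):
          x, fx, t = x0, objective(x0), t0
          best_x, best_f = x, fx
          for _ in range(steps):
              cand = x + random.uniform(-step_size, step_size)
              f_cand = objective(cand)
              # Always accept downhill moves; accept uphill moves with Boltzmann probability.
              if f_cand < fx or random.random() < math.exp(-(f_cand - fx) / t):
                  x, fx = cand, f_cand
                  if fx < best_f:
                      best_x, best_f = x, fx
              t *= cooling      # geometric cooling: the uphill probability shrinks over time
          return best_x, best_f

      print(simulated_annealing(x0=8.0))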

  3. Alpha-plane based automatic general type-2 fuzzy clustering based on simulated annealing meta-heuristic algorithm for analyzing gene expression data.

    PubMed

    Doostparast Torshizi, Abolfazl; Fazel Zarandi, Mohammad Hossein

    2015-09-01

    This paper considers microarray gene expression data clustering using a novel two-stage meta-heuristic algorithm based on the concept of α-planes in general type-2 fuzzy sets. The main aim of this research is to present a powerful data clustering approach capable of dealing with highly uncertain environments. In this regard, first, a new objective function using α-planes for the general type-2 fuzzy c-means clustering algorithm is presented. Then, based on the philosophy of the meta-heuristic optimization framework 'Simulated Annealing', a two-stage optimization algorithm is proposed. The first stage of the proposed approach is devoted to the annealing process accompanied by its proposed perturbation mechanisms. After termination of the first stage, its output is passed to the second stage, where it is checked against other possible local optima through a heuristic algorithm. The output of this stage is then re-entered into the first stage until no better solution is obtained. The proposed approach has been evaluated using several synthesized datasets and three microarray gene expression datasets. Extensive experiments demonstrate the capabilities of the proposed approach compared with some of the state-of-the-art techniques in the literature.

  4. Simulated annealing model of acupuncture

    NASA Astrophysics Data System (ADS)

    Shang, Charles; Szu, Harold

    2015-05-01

    The growth control singularity model suggests that acupuncture points (acupoints) originate from organizers in embryogenesis. Organizers are singular points in growth control. Acupuncture can cause perturbation of a system with effects similar to simulated annealing. In a clinical trial, the goal of a treatment is to relieve a certain disorder, which corresponds to reaching a certain local optimum in simulated annealing. The self-organizing effect of the system is limited and related to the person's general health and age. Perturbation at acupoints can lead to a stronger local excitation (analogous to a higher annealing temperature) compared to perturbation at non-singular points (placebo control points). Such a difference diminishes as the number of perturbed points increases due to the wider distribution of the limited self-organizing activity. This model explains the following facts from systematic reviews of acupuncture trials: 1. Properly chosen single-acupoint treatment for certain disorders can lead to highly repeatable efficacy above placebo. 2. When multiple acupoints are used, the result can be highly repeatable if the patients are relatively healthy and young, but is usually mixed if the patients are old, frail and have multiple disorders at the same time, since the number of local optima or comorbidities increases. 3. As the number of acupoints used increases, the efficacy difference between sham and real acupuncture often diminishes. The model predicts that the efficacy of acupuncture is negatively correlated with disease chronicity, severity and the patient's age. This is the first biological-physical model of acupuncture which can predict and guide clinical acupuncture research.

  5. A hybrid genetic-simulated annealing algorithm for the location-inventory-routing problem considering returns under e-supply chain environment.

    PubMed

    Li, Yanhui; Guo, Hao; Wang, Lin; Fu, Jing

    2013-01-01

    Facility location, inventory control, and vehicle route scheduling are critical and highly related problems in the design of logistics systems for e-business. Meanwhile, the return ratio in Internet sales is significantly higher than in traditional business. Much of the returned merchandise has no quality defects and can reenter sales channels after a simple repackaging process. Focusing on this problem in e-commerce logistics systems, we formulate a location-inventory-routing problem model with no-quality-defect returns. To solve this NP-hard problem, an effective hybrid genetic simulated annealing algorithm (HGSAA) is proposed. Results of numerical examples show that HGSAA outperforms GA on computing time, optimal solution, and computing stability. The proposed model is very useful to help managers make the right decisions in an e-supply chain environment. PMID:24489489
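
    The abstract does not give the HGSAA details, but the general shape of a hybrid genetic-simulated annealing loop can be sketched as follows in Python; the bit encoding, the toy cost function and all parameters are hypothetical placeholders rather than the authors' formulation:

      import math, random

      def anneal(sol, cost, t0, steps):
          # SA refinement of a single offspring: flip one decision bit at a time.
          cur, c_cur, t = sol[:], cost(sol), t0
          for _ in range(steps):
              cand = cur[:]
              cand[random.randrange(len(cand))] ^= 1
              c_cand = cost(cand)
              if c_cand < c_cur or random.random() < math.exp(-(c_cand - c_cur) / t):
                  cur, c_cur = cand, c_cand
              t *= 0.95
          return cur

      def hybrid_ga_sa(cost, n_bits, pop_size=30, generations=100, t0=1.0, sa_steps=50):
          # GA outer loop (selection + one-point crossover) with SA-based local refinement.
          # `cost` maps a bit list (e.g. an encoded facility/route decision) to a scalar.
          pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
          for _ in range(generations):
              pop.sort(key=cost)
              parents = pop[: pop_size // 2]
              children = []
              while len(children) < pop_size - len(parents):
                  a, b = random.sample(parents, 2)
                  cut = random.randrange(1, n_bits)
                  child = a[:cut] + b[cut:]                  # one-point crossover
                  children.append(anneal(child, cost, t0, sa_steps))
              pop = parents + children
          return min(pop, key=cost)

      # Toy usage: minimize the number of 1-bits (stand-in for a real logistics cost model).
      print(hybrid_ga_sa(cost=sum, n_bits=20))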

  6. A Hybrid Genetic-Simulated Annealing Algorithm for the Location-Inventory-Routing Problem Considering Returns under E-Supply Chain Environment

    PubMed Central

    Guo, Hao; Fu, Jing

    2013-01-01

    Facility location, inventory control, and vehicle route scheduling are critical and highly related problems in the design of logistics systems for e-business. Meanwhile, the return ratio in Internet sales is significantly higher than in traditional business. Much of the returned merchandise has no quality defects and can reenter sales channels after a simple repackaging process. Focusing on this problem in e-commerce logistics systems, we formulate a location-inventory-routing problem model with no-quality-defect returns. To solve this NP-hard problem, an effective hybrid genetic simulated annealing algorithm (HGSAA) is proposed. Results of numerical examples show that HGSAA outperforms GA on computing time, optimal solution, and computing stability. The proposed model is very useful to help managers make the right decisions in an e-supply chain environment. PMID:24489489

  7. A hybrid genetic-simulated annealing algorithm for the location-inventory-routing problem considering returns under e-supply chain environment.

    PubMed

    Li, Yanhui; Guo, Hao; Wang, Lin; Fu, Jing

    2013-01-01

    Facility location, inventory control, and vehicle route scheduling are critical and highly related problems in the design of logistics systems for e-business. Meanwhile, the return ratio in Internet sales is significantly higher than in traditional business. Much of the returned merchandise has no quality defects and can reenter sales channels after a simple repackaging process. Focusing on this problem in e-commerce logistics systems, we formulate a location-inventory-routing problem model with no-quality-defect returns. To solve this NP-hard problem, an effective hybrid genetic simulated annealing algorithm (HGSAA) is proposed. Results of numerical examples show that HGSAA outperforms GA on computing time, optimal solution, and computing stability. The proposed model is very useful to help managers make the right decisions in an e-supply chain environment.

  8. A Monte Carlo/Simulated Annealing Algorithm for Sequential Resonance Assignment in Solid State NMR of Uniformly Labeled Proteins with Magic-Angle Spinning

    PubMed Central

    Tycko, Robert; Hu, Kan-Nian

    2010-01-01

    We describe a computational approach to sequential resonance assignment in solid state NMR studies of uniformly 15N,13C-labeled proteins with magic-angle spinning. As input, the algorithm uses only the protein sequence and lists of 15N/13Cα crosspeaks from 2D NCACX and NCOCX spectra that include possible residue-type assignments of each crosspeak. Assignment of crosspeaks to specific residues is carried out by a Monte Carlo/simulated annealing algorithm, implemented in the program MC_ASSIGN1. The algorithm tolerates substantial ambiguity in residue-type assignments and coexistence of visible and invisible segments in the protein sequence. We use MC_ASSIGN1 and our own 2D spectra to replicate and extend the sequential assignments for uniformly labeled HET-s(218-289) fibrils previously determined manually by Siemer et al. (J. Biomolec. NMR, vol. 34, pp. 75-87, 2006) from a more extensive set of 2D and 3D spectra. Accurate assignments by MC_ASSIGN1 do not require data that are of exceptionally high quality. Use of MC_ASSIGN1 (and its extensions to other types of 2D and 3D data) is likely to alleviate many of the difficulties and uncertainties associated with manual resonance assignments in solid state NMR studies of uniformly labeled proteins, where spectral resolution and signal-to-noise are often sub-optimal. PMID:20547467

  9. a New Multimodal Multi-Criteria Route Planning Model by Integrating a Fuzzy-Ahp Weighting Method and a Simulated Annealing Algorithm

    NASA Astrophysics Data System (ADS)

    Ghaderi, F.; Pahlavani, P.

    2015-12-01

    A multimodal multi-criteria route planning (MMRP) system provides an optimal multimodal route from an origin point to a destination point considering two or more criteria, in such a way that this route can be a combination of public and private transportation modes. In this paper, simulated annealing (SA) and the fuzzy analytical hierarchy process (fuzzy AHP) were combined in order to find this route. In this regard, firstly, the effective criteria that are significant for users in their trip were determined. Then the weight of each criterion was calculated using the fuzzy AHP weighting method. The most important characteristic of this weighting method is the use of fuzzy numbers, which helps users account for their uncertainty in the pairwise comparison of criteria. After determining the criteria weights, the proposed SA algorithm was used to determine an optimal route from an origin to a destination. One of the most important problems for a meta-heuristic algorithm is becoming trapped in local minima. In this study, five transportation modes, including subway, bus rapid transit (BRT), taxi, walking, and bus, were considered for moving between nodes. Also, the fare, the time, the user's bother, and the length of the path were considered as effective criteria for solving the problem. The proposed model was implemented for an area in the centre of Tehran in a MATLAB GUI. The results showed the high efficiency and speed of the proposed algorithm, supporting our analyses.

  10. Hybridisations Of Simulated Annealing And Modified Simplex Algorithms On A Path Of Steepest Ascent With Multi-Response For Optimal Parameter Settings Of ACO

    NASA Astrophysics Data System (ADS)

    Luangpaiboon, P.

    2009-10-01

    Many entrepreneurs face extreme pressures, for instance on costs, quality, sales and services. Moreover, technology has always been intertwined with our demands, so most manufacturers and assembly lines adopt it and inevitably end up with more complicated processes. At this stage, product and service improvement is needed to stay ahead of competitors in a sustainable way. Simulation-based process optimisation is therefore an alternative way of solving large and complex problems. Metaheuristics are sequential processes that perform exploration and exploitation in the solution space, aiming to efficiently find near-optimal solutions with natural intelligence as a source of inspiration. One of the most well-known metaheuristics is Ant Colony Optimisation, ACO. This paper is intended to help with the complexity of using ACO in terms of its parameters: the number of iterations, ants and moves. Proper levels of these parameters are analysed on eight noisy non-linear continuous response surfaces. Considering the solution space in a specified region, some surfaces contain a global optimum and multiple local optima and some have a curved ridge. ACO parameters are determined through hybridisations of Modified Simplex and Simulated Annealing methods on the path of Steepest Ascent, SAM. SAM was introduced to recommend preferable levels of ACO parameters via statistically significant regression analysis and Taguchi's signal-to-noise ratio. Other performance measures include minimax and mean squared error. A series of computational experiments using each algorithm was conducted. Experimental results were analysed in terms of mean, design points and best-so-far solutions. It was found that results obtained from hybridisation with the stochastic procedures of the Simulated Annealing method were better than those using the Modified Simplex algorithm. However, the average execution time of experimental runs and number of design points using hybridisations were

  11. The RBS data furnace: Simulated annealing

    NASA Astrophysics Data System (ADS)

    Barradas, N. P.; Marriott, P. K.; Jeynes, C.; Webb, R. P.

    1998-03-01

    A computer program was written which carries out an automatic analysis of Rutherford Backscattering (RBS) data with minimal human involvement. The inputs which are required are the system parameters (e.g. experimental geometry, energy calibration), and the elements present in the sample. Parameters such as the number of layers, layer thickness and layer composition, are determined automatically during the procedure. The global optimisation simulated annealing (SA) algorithm was used, due to its two main features: First, the solution is independent of the initial guess chosen, and therefore a human-input initial layer structure is not needed. Second, it tends asymptotically to the absolute global minimum rather than a local minimum as in conventional minimisation algorithms, and hence high quality solutions can be achieved.

  12. Parallel Simulated Annealing by Mixing of States

    NASA Astrophysics Data System (ADS)

    Chu, King-Wai; Deng, Yuefan; Reinitz, John

    1999-01-01

    We report the results of testing the performance of a new, efficient, and highly general-purpose parallel optimization method, based upon simulated annealing. This optimization algorithm was applied to analyze the network of interacting genes that control embryonic development and other fundamental biological processes. We found several sets of algorithmic parameters that lead to optimal parallel efficiency for up to 100 processors on distributed-memory MIMD architectures. Our strategy contains two major elements. First, we monitor and pool performance statistics obtained simultaneously on all processors. Second, we mix states at intervals to ensure a Boltzmann distribution of energies. The central scientific issue is the inverse problem, the determination of the parameters of a set of nonlinear ordinary differential equations by minimizing the total error between the model behavior and experimental observations.
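
    A rough serial Python sketch of the state-mixing idea (resampling states across workers with Boltzmann weights so that the ensemble of energies remains approximately Boltzmann-distributed); the actual implementation in the paper is parallel and more elaborate, so this is an illustration only:

      import math, random

      def mix_states(states, energies, temperature):
          # Pool the current states of all workers and reassign them by resampling with
          # Boltzmann weights exp(-E/T), so the energy ensemble stays near a Boltzmann
          # distribution at the current temperature.
          e_min = min(energies)
          weights = [math.exp(-(e - e_min) / temperature) for e in energies]
          total = sum(weights)
          picks = random.choices(range(len(states)), weights=[w / total for w in weights],
                                 k=len(states))
          return [list(states[i]) for i in picks], [energies[i] for i in picks]

      # Toy usage: four workers with scalar "states" and their energies at T = 1.0.
      print(mix_states([[0.1], [0.5], [1.2], [2.0]], [3.0, 1.0, 0.5, 4.0], 1.0))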

  13. Applications of an MPI Enhanced Simulated Annealing Algorithm on nuSTORM and 6D Muon Cooling

    SciTech Connect

    Liu, A.

    2015-06-01

    The nuSTORM decay ring is a compact racetrack storage ring with a circumference ~480 m using large aperture ($\phi$ = 60 cm) magnets. The design goal of the ring is to achieve a momentum acceptance of 3.8 $\pm$10% GeV/c and a phase space acceptance of 2000 $\mu$m·rad. The design has many challenges because the acceptance will be affected by many nonlinearity terms with large particle emittance and/or large momentum offset. In this paper, we present the application of a meta-heuristic optimization algorithm to the sextupole correction in the ring. The algorithm is capable of finding a balanced compromise among corrections of the nonlinearity terms, and finding the largest acceptance. This technique can be applied to the design of similar storage rings that store beams with wide transverse phase space and momentum spectra. We also present the recent study on the application of this algorithm to a part of the 6D muon cooling channel. The technique and the cooling concept will be applied to design a cooling channel for the extracted muon beam at nuSTORM in the future study.

  14. Comparing Monte Carlo methods for finding ground states of Ising spin glasses: Population annealing, simulated annealing, and parallel tempering.

    PubMed

    Wang, Wenlong; Machta, Jonathan; Katzgraber, Helmut G

    2015-07-01

    Population annealing is a Monte Carlo algorithm that marries features from simulated-annealing and parallel-tempering Monte Carlo. As such, it is ideal to overcome large energy barriers in the free-energy landscape while minimizing a Hamiltonian. Thus, population-annealing Monte Carlo can be used as a heuristic to solve combinatorial optimization problems. We illustrate the capabilities of population-annealing Monte Carlo by computing ground states of the three-dimensional Ising spin glass with Gaussian disorder, while comparing to simulated-annealing and parallel-tempering Monte Carlo. Our results suggest that population annealing Monte Carlo is significantly more efficient than simulated annealing but comparable to parallel-tempering Monte Carlo for finding spin-glass ground states.
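
    As a concrete illustration of the population annealing scheme compared here, below is a short Python sketch for a one-dimensional Ising chain with random couplings; the temperature ladder, population size and sweep counts are illustrative assumptions, and the paper's three-dimensional spin-glass setting is far larger:

      import math, random

      def energy(spins, J):
          # Energy of a 1-D Ising chain with periodic boundary and couplings J.
          n = len(spins)
          return -sum(J[i] * spins[i] * spins[(i + 1) % n] for i in range(n))

      def population_annealing(n=32, n_replicas=200, sweeps=5):
          J = [random.choice([-1.0, 1.0]) for _ in range(n)]       # random couplings
          betas = [0.1 * k for k in range(1, 31)]                  # inverse-temperature ladder
          pop = [[random.choice([-1, 1]) for _ in range(n)] for _ in range(n_replicas)]
          beta_prev = 0.0
          for beta in betas:
              # Resample replicas with weights exp(-(beta - beta_prev) * E), then equilibrate
              # each survivor with a few Metropolis sweeps at the new temperature.
              weights = [math.exp(-(beta - beta_prev) * energy(s, J)) for s in pop]
              total = sum(weights)
              pop = [s[:] for s in random.choices(pop, weights=[w / total for w in weights],
                                                  k=n_replicas)]
              for s in pop:
                  for _ in range(sweeps * n):
                      i = random.randrange(n)
                      dE = 2 * s[i] * (J[i - 1] * s[i - 1] + J[i] * s[(i + 1) % n])
                      if dE <= 0 or random.random() < math.exp(-beta * dE):
                          s[i] = -s[i]
              beta_prev = beta
          best = min(pop, key=lambda s: energy(s, J))
          return energy(best, J), best, J

      print(population_annealing()[0])     # lowest energy found in the final population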

  15. Averaging Tens to Hundreds of Icosahedral Particle Images to Resolve Protein Secondary Structure Elements using a Multi-Path Simulated Annealing Optimization Algorithm

    PubMed Central

    Liu, Xiangan; Jiang, Wen; Jakana, Joanita; Chiu, Wah

    2007-01-01

    Accurately determining a cryoEM particle’s alignment parameters is crucial to high resolution single particle 3-D reconstruction. We developed Multi-Path Simulated Annealing, a Monte Carlo type of optimization algorithm, for globally aligning the center and orientation of a particle simultaneously. A consistency criterion was developed to ensure the alignment parameters are correct and to remove some bad particles from a large pool of images of icosahedral particles. Without using any a priori model, this procedure is able to reconstruct a structure from a random initial model. Combining the procedure above with a new empirical double threshold particle selection method, we are able to pick tens of best quality particles to reconstruct a subnanometer resolution map from scratch. Using the best 62 particles of rice dwarf virus, the reconstruction reached 9.6Å resolution at which 4 helices of the P3A subunit of RDV are resolved. Furthermore, with the 284 best particles, the reconstruction is improved to 7.9Å resolution, and 21 of 22 helices and 6 of 7 beta sheets are resolved. PMID:17698370

  16. Quantum annealing speedup over simulated annealing on random Ising chains

    NASA Astrophysics Data System (ADS)

    Zanca, Tommaso; Santoro, Giuseppe E.

    2016-06-01

    We show clear evidence of a quadratic speedup of a quantum annealing (QA) Schrödinger dynamics over a Glauber master equation simulated annealing (SA) for a random Ising model in one dimension, via an equal-footing exact deterministic dynamics of the Jordan-Wigner fermionized problems. This is remarkable, in view of the arguments of H. G. Katzgraber et al. [Phys. Rev. X 4, 021008 (2014), 10.1103/PhysRevX.4.021008], since SA does not encounter any phase transition, while QA does. We also find a second remarkable result: that a "quantum-inspired" imaginary-time Schrödinger QA provides a further exponential speedup, i.e., an asymptotic residual error decreasing as a power law τ^{-μ} of the annealing time τ.

  17. Classical Simulated Annealing Using Quantum Analogues

    NASA Astrophysics Data System (ADS)

    La Cour, Brian R.; Troupe, James E.; Mark, Hans M.

    2016-08-01

    In this paper we consider the use of certain classical analogues to quantum tunneling behavior to improve the performance of simulated annealing on a discrete spin system of the general Ising form. Specifically, we consider the use of multiple simultaneous spin flips at each annealing step as an analogue to quantum spin coherence as well as modifications of the Boltzmann acceptance probability to mimic quantum tunneling. We find that the use of multiple spin flips can indeed be advantageous under certain annealing schedules, but only for long anneal times.
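
    A hedged Python sketch of the multi-spin-flip idea on a random one-dimensional Ising chain (the paper's spin systems, schedules and acceptance modifications are not reproduced; setting k=1 recovers ordinary single-flip simulated annealing):

      import math, random

      def ising_energy(spins, J):
          n = len(spins)
          return -sum(J[i] * spins[i] * spins[(i + 1) % n] for i in range(n))

      def multi_flip_sa(n=64, k=3, t0=3.0, t_final=0.01, steps=20000):
          # Simulated annealing that proposes k simultaneous spin flips per move, as a
          # classical stand-in for quantum spin coherence.
          J = [random.choice([-1.0, 1.0]) for _ in range(n)]
          spins = [random.choice([-1, 1]) for _ in range(n)]
          e = ising_energy(spins, J)
          for step in range(steps):
              t = t0 * (t_final / t0) ** (step / steps)   # geometric annealing schedule
              idx = random.sample(range(n), k)
              for i in idx:
                  spins[i] = -spins[i]
              e_new = ising_energy(spins, J)
              if e_new <= e or random.random() < math.exp(-(e_new - e) / t):
                  e = e_new                                # accept the k-flip move
              else:
                  for i in idx:                            # reject: undo all k flips
                      spins[i] = -spins[i]
          return e

      print(multi_flip_sa())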

  18. Remediation tradeoffs addressed with simulated annealing optimization

    SciTech Connect

    Rogers, L. L., LLNL

    1998-02-01

    Escalation of groundwater remediation costs has encouraged both advances in optimization techniques to balance remediation objectives and economics and development of innovative technologies to expedite source region clean-ups. We present an optimization application building on a pump-and-treat model, yet assuming a prior removal of different portions of the source area to address the evolving management issue of more aggressive source remediation. Separate economic estimates of in-situ thermal remediation are combined with the economic estimates of the subsequent optimal pump-and-treat remediation to observe tradeoff relationships of cost vs. highest remaining contamination levels (hot spot). The simulated annealing algorithm calls the flow and transport model to evaluate the success of a proposed remediation scenario at a U.S.A. Superfund site contaminated with volatile organic compounds (VOCs).

  19. Maneuver Optimization through Simulated Annealing

    NASA Astrophysics Data System (ADS)

    de Vries, W.

    2011-09-01

    We developed an efficient method for satellite maneuver optimization. It is based on a Monte Carlo (MC) approach in combination with Simulated Annealing. The former component enables us to consider all imaginable trajectories possible given the current satellite position and its available thrust, while the latter approach ensures that we reliably find the best global optimization solution. Furthermore, this optimization setup is eminently scalable. It runs efficiently on the current multi-core generation of desktop computers, but is equally at home on massively parallel high performance computers (HPC). The baseline method for desktops uses a modified two-body propagator that includes the lunar gravitational force, and corrects for nodal and apsidal precession. For the HPC environment, on the other hand, we can include all the necessary components for a full force-model propagation: higher gravitational moments, atmospheric drag, solar radiation pressure, etc. A typical optimization scenario involves an initial orbit and a destination orbit / trajectory, a time period under consideration, and an available amount of thrust. After selecting a particular optimization (e.g., least amount of fuel, shortest maneuver), the program will determine when and in what direction to burn by what amount. Since we are considering all possible trajectories, we are not constrained to any particular transfer method (e.g., Hohmann transfers). Indeed, in some cases gravitational slingshots around the Earth turn out to be the best result. The paper will describe our approach in detail, its complement of optimizations for single- and multi-burn sequences, and some in-depth examples. In particular, we highlight an example where it is used to analyze a sequence of maneuvers after the fact, as well as showcase its utility as a planning and analysis tool for future maneuvers.

  20. Application of Simulated Annealing to Clustering Tuples in Databases.

    ERIC Educational Resources Information Center

    Bell, D. A.; And Others

    1990-01-01

    Investigates the value of applying principles derived from simulated annealing to clustering tuples in database design, and compares this technique with a graph-collapsing clustering method. It is concluded that, while the new method does give superior results, the expense involved in algorithm run time is prohibitive. (24 references) (CLB)

  1. On simulated annealing phase transitions in phylogeny reconstruction.

    PubMed

    Strobl, Maximilian A R; Barker, Daniel

    2016-08-01

    Phylogeny reconstruction with global criteria is NP-complete or NP-hard, hence in general requires a heuristic search. We investigate the powerful, physically inspired, general-purpose heuristic simulated annealing, applied to phylogeny reconstruction. Simulated annealing mimics the physical process of annealing, where a liquid is gently cooled to form a crystal. During the search, periods of elevated specific heat occur, analogous to physical phase transitions. These simulated annealing phase transitions play a crucial role in the outcome of the search. Nevertheless, they have received comparatively little attention, for phylogeny or other optimisation problems. We analyse simulated annealing phase transitions during searches for the optimal phylogenetic tree for 34 real-world multiple alignments. In the same way in which melting temperatures differ between materials, we observe distinct specific heat profiles for each input file. We propose that this reflects differences in the search landscape and can serve as a measure for problem difficulty and for suitability of the algorithm's parameters. We discuss application in algorithmic optimisation and as a diagnostic to assess parameterisation before computationally costly, large phylogeny reconstructions are launched. Whilst the focus here lies on phylogeny reconstruction under maximum parsimony, it is plausible that our results are more widely applicable to optimisation procedures in science and industry.
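
    The specific heat profiles described here can be estimated during an annealing run from the variance of the cost at each temperature, C(T) = (⟨E²⟩ - ⟨E⟩²)/T². A minimal Python sketch of that bookkeeping (the energy samples would come from whatever simulated annealing search is being monitored):

      def specific_heat_profile(energies_by_temp):
          # energies_by_temp: {T: [cost samples recorded while the search was held at T]}
          # Returns {T: C(T)} with C(T) = Var(E) / T**2; peaks flag phase-transition-like regions.
          profile = {}
          for t, es in energies_by_temp.items():
              mean = sum(es) / len(es)
              var = sum((e - mean) ** 2 for e in es) / len(es)
              profile[t] = var / (t * t)
          return profile

      # Toy usage with illustrative samples at three temperatures.
      print(specific_heat_profile({2.0: [10, 12, 11, 13], 1.0: [8, 5, 9, 4], 0.5: [3, 3, 3, 3]}))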

  2. Adaptive multiple super fast simulated annealing for stochastic microstructure reconstruction

    SciTech Connect

    Ryu, Seun; Lin, Guang; Sun, Xin; Khaleel, Mohammad A.; Li, Dongsheng

    2013-04-01

    Fast image reconstruction from statistical information is critical in image fusion from multimodality chemical imaging instrumentation, where the aim is to create a high-resolution image over a large domain. Stochastic methods have been used widely in image reconstruction from the two-point correlation function. The main challenge is to increase the efficiency of the reconstruction. A novel simulated annealing method is proposed for fast solution of the image reconstruction problem. Combining the advantages of very fast cooling schedules, dynamic adaptation and parallelization, the new simulated annealing algorithm increases the efficiency by several orders of magnitude, making large-domain image fusion feasible.

  3. Stochastic annealing simulation of cascades in metals

    SciTech Connect

    Heinisch, H.L.

    1996-04-01

    The stochastic annealing simulation code ALSOME is used to investigate quantitatively the differential production of mobile vacancy and SIA defects as a function of temperature for isolated 25 keV cascades in copper generated by MD simulations. The ALSOME code and cascade annealing simulations are described. The annealing simulations indicate that above Stage V, where the cascade vacancy clusters are unstable, nearly 80% of the post-quench vacancies escape the cascade volume, while about half of the post-quench SIAs remain in clusters. The results are sensitive to the relative fractions of SIAs that occur in small, highly mobile clusters and large stable clusters, respectively, which may be dependent on the cascade energy.

  4. Estimation of the parameters of ETAS models by Simulated Annealing.

    PubMed

    Lombardi, Anna Maria

    2015-02-12

    This paper proposes a new algorithm to estimate the maximum likelihood parameters of an Epidemic Type Aftershock Sequences (ETAS) model. It is based on Simulated Annealing, a versatile method that solves problems of global optimization and ensures convergence to a global optimum. The procedure is tested on both simulated and real catalogs. The main conclusion is that the method performs poorly as the size of the catalog decreases because the effect of the correlation of the ETAS parameters is more significant. These results give new insights into the ETAS model and the efficiency of the maximum-likelihood method within this context.

  5. Estimation of the parameters of ETAS models by Simulated Annealing

    PubMed Central

    Lombardi, Anna Maria

    2015-01-01

    This paper proposes a new algorithm to estimate the maximum likelihood parameters of an Epidemic Type Aftershock Sequences (ETAS) model. It is based on Simulated Annealing, a versatile method that solves problems of global optimization and ensures convergence to a global optimum. The procedure is tested on both simulated and real catalogs. The main conclusion is that the method performs poorly as the size of the catalog decreases because the effect of the correlation of the ETAS parameters is more significant. These results give new insights into the ETAS model and the efficiency of the maximum-likelihood method within this context. PMID:25673036

  6. Simulated annealing approach to the max cut problem

    NASA Astrophysics Data System (ADS)

    Sen, Sandip

    1993-03-01

    In this paper we address the problem of partitioning the nodes of a random graph into two sets, so as to maximize the sum of the weights on the edges connecting nodes belonging to different sets. This problem has important real-life counterparts, but has been proven to be NP-complete. As such, a number of heuristic solution techniques have been proposed in literature to address this problem. We propose a stochastic optimization technique, simulated annealing, to find solutions for the max cut problem. Our experiments verify that good solutions to the problem can be found using this algorithm in a reasonable amount of time.
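
    A minimal Python sketch of the approach described here, on a tiny weighted graph; the move set (single-node relabelling), schedule and parameters are illustrative assumptions rather than the paper's experimental setup:

      import math, random

      def cut_weight(partition, edges):
          # Total weight of edges whose endpoints lie on different sides of the cut.
          return sum(w for (u, v, w) in edges if partition[u] != partition[v])

      def max_cut_sa(n_nodes, edges, t0=2.0, cooling=0.999, steps=20000):
          part = [random.randint(0, 1) for _ in range(n_nodes)]
          cur = cut_weight(part, edges)
          best, best_part, t = cur, part[:], t0
          for _ in range(steps):
              u = random.randrange(n_nodes)
              part[u] ^= 1                       # move one node to the other set
              cand = cut_weight(part, edges)
              # Maximization: always accept improvements, worse cuts with Boltzmann probability.
              if cand >= cur or random.random() < math.exp((cand - cur) / t):
                  cur = cand
                  if cur > best:
                      best, best_part = cur, part[:]
              else:
                  part[u] ^= 1                   # undo the move
              t *= cooling
          return best, best_part

      # Toy usage on a weighted 4-cycle; the optimal cut has weight 6.
      edges = [(0, 1, 1.0), (1, 2, 2.0), (2, 3, 1.0), (3, 0, 2.0)]
      print(max_cut_sa(4, edges))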

  7. Simulated Stochastic Approximation Annealing for Global Optimization with a Square-Root Cooling Schedule

    SciTech Connect

    Liang, Faming; Cheng, Yichen; Lin, Guang

    2014-06-13

    Simulated annealing has been widely used in the solution of optimization problems. As known by many researchers, the global optima cannot be guaranteed to be located by simulated annealing unless a logarithmic cooling schedule is used. However, the logarithmic cooling schedule is so slow that no one can afford to have such a long CPU time. This paper proposes a new stochastic optimization algorithm, the so-called simulated stochastic approximation annealing algorithm, which is a combination of simulated annealing and the stochastic approximation Monte Carlo algorithm. Under the framework of stochastic approximation Markov chain Monte Carlo, it is shown that the new algorithm can work with a cooling schedule in which the temperature can decrease much faster than in the logarithmic cooling schedule, e.g., a square-root cooling schedule, while guaranteeing the global optima to be reached when the temperature tends to zero. The new algorithm has been tested on a few benchmark optimization problems, including feed-forward neural network training and protein-folding. The numerical results indicate that the new algorithm can significantly outperform simulated annealing and other competitors.
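
    The contrast between the two cooling schedules discussed above can be made concrete in a few lines of Python; the exact functional forms used in the paper may differ (e.g. in offsets or prefactors), so these are indicative formulas only:

      import math

      def logarithmic_schedule(t0, k):
          # Classical schedule with a convergence guarantee: T_k = t0 / log(k + 1); very slow.
          return t0 / math.log(k + 1)

      def square_root_schedule(t0, k):
          # Much faster schedule of the kind studied here: T_k = t0 / sqrt(k).
          return t0 / math.sqrt(k)

      # After a million iterations the square-root schedule is far colder.
      print(logarithmic_schedule(10.0, 10**6))   # ~0.72
      print(square_root_schedule(10.0, 10**6))   # 0.01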

  8. SAGRAD: A Program for Neural Network Training with Simulated Annealing and the Conjugate Gradient Method.

    PubMed

    Bernal, Javier; Torres-Jimenez, Jose

    2015-01-01

    SAGRAD (Simulated Annealing GRADient), a Fortran 77 program for computing neural networks for classification using batch learning, is discussed. Neural network training in SAGRAD is based on a combination of simulated annealing and Møller's scaled conjugate gradient algorithm, the latter a variation of the traditional conjugate gradient method, better suited for the nonquadratic nature of neural networks. Different aspects of the implementation of the training process in SAGRAD are discussed, such as the efficient computation of gradients and multiplication of vectors by Hessian matrices that are required by Møller's algorithm; the (re)initialization of weights with simulated annealing required to (re)start Møller's algorithm the first time and each time thereafter that it shows insufficient progress in reaching a possibly local minimum; and the use of simulated annealing when Møller's algorithm, after possibly making considerable progress, becomes stuck at a local minimum or flat area of weight space. Outlines of the scaled conjugate gradient algorithm, the simulated annealing procedure and the training process used in SAGRAD are presented together with results from running SAGRAD on two examples of training data. PMID:26958442

  9. An Improved Simulated Annealing Technique for Enhanced Mobility in Smart Cities.

    PubMed

    Amer, Hayder; Salman, Naveed; Hawes, Matthew; Chaqfeh, Moumena; Mihaylova, Lyudmila; Mayfield, Martin

    2016-06-30

    Vehicular traffic congestion is a significant problem that arises in many cities. This is due to the increasing number of vehicles that are driving on city roads of limited capacity. The vehicular congestion significantly impacts travel distance, travel time, fuel consumption and air pollution. Avoidance of traffic congestion and providing drivers with optimal paths are not trivial tasks. The key contribution of this work consists of the developed approach for dynamic calculation of optimal traffic routes. Two attributes (the average travel speed of the traffic and the roads' length) are utilized by the proposed method to find the optimal paths. The average travel speed values can be obtained from the sensors deployed in smart cities and communicated to vehicles via the Internet of Vehicles and roadside communication units. The performance of the proposed algorithm is compared to three other algorithms: the simulated annealing weighted sum, the simulated annealing technique for order preference by similarity to the ideal solution and the Dijkstra algorithm. The weighted sum and technique for order preference by similarity to the ideal solution methods are used to formulate different attributes in the simulated annealing cost function. According to the Sheffield scenario, simulation results show that the improved simulated annealing technique for order preference by similarity to the ideal solution method improves the traffic performance in the presence of congestion by an overall average of 19.22% in terms of travel time, fuel consumption and CO₂ emissions as compared to other algorithms; also, similar performance patterns were achieved for the Birmingham test scenario.

  10. Improving Simulated Annealing by Recasting it as a Non-Cooperative Game

    NASA Technical Reports Server (NTRS)

    Wolpert, David; Bandari, Esfandiar; Tumer, Kagan

    2001-01-01

    The game-theoretic field of COllective INtelligence (COIN) concerns the design of computer-based players engaged in a non-cooperative game so that as those players pursue their self-interests, a pre-specified global goal for the collective computational system is achieved "as a side-effect". Previous implementations of COIN algorithms have outperformed conventional techniques by up to several orders of magnitude, on domains ranging from telecommunications control to optimization in congestion problems. Recent mathematical developments have revealed that these previously developed game-theory-motivated algorithms were based on only two of the three factors determining performance. Consideration of only the third factor would instead lead to conventional optimization techniques like simulated annealing that have little to do with non-cooperative games. In this paper we present an algorithm based on all three terms at once. This algorithm can be viewed as a way to modify simulated annealing by recasting it as a non-cooperative game, with each variable replaced by a player. This recasting allows us to leverage the intelligent behavior of the individual players to substantially improve the exploration step of the simulated annealing. Experiments are presented demonstrating that this recasting improves simulated annealing by several orders of magnitude for spin glass relaxation and bin-packing.

  11. Improving Simulated Annealing by Replacing Its Variables with Game-Theoretic Utility Maximizers

    NASA Technical Reports Server (NTRS)

    Wolpert, David H.; Bandari, Esfandiar; Tumer, Kagan

    2001-01-01

    The game-theory field of Collective INtelligence (COIN) concerns the design of computer-based players engaged in a non-cooperative game so that as those players pursue their self-interests, a pre-specified global goal for the collective computational system is achieved as a side-effect. Previous implementations of COIN algorithms have outperformed conventional techniques by up to several orders of magnitude, on domains ranging from telecommunications control to optimization in congestion problems. Recent mathematical developments have revealed that these previously developed algorithms were based on only two of the three factors determining performance. Consideration of only the third factor would instead lead to conventional optimization techniques like simulated annealing that have little to do with non-cooperative games. In this paper we present an algorithm based on all three terms at once. This algorithm can be viewed as a way to modify simulated annealing by recasting it as a non-cooperative game, with each variable replaced by a player. This recasting allows us to leverage the intelligent behavior of the individual players to substantially improve the exploration step of the simulated annealing. Experiments are presented demonstrating that this recasting significantly improves simulated annealing for a model of an economic process run over an underlying small-worlds topology. Furthermore, these experiments reveal novel small-worlds phenomena, and highlight the shortcomings of conventional mechanism design in bounded rationality domains.

  12. Automatic Clustering Using Multi-objective Particle Swarm and Simulated Annealing

    PubMed Central

    Abubaker, Ahmad; Baharum, Adam; Alrefaei, Mahmoud

    2015-01-01

    This paper puts forward a new automatic clustering algorithm based on Multi-Objective Particle Swarm Optimization and Simulated Annealing, “MOPSOSA”. The proposed algorithm is capable of automatic clustering which is appropriate for partitioning datasets to a suitable number of clusters. MOPSOSA combines the features of the multi-objective based particle swarm optimization (PSO) and the Multi-Objective Simulated Annealing (MOSA). Three cluster validity indices were optimized simultaneously to establish the suitable number of clusters and the appropriate clustering for a dataset. The first cluster validity index is centred on Euclidean distance, the second on the point symmetry distance, and the last cluster validity index is based on short distance. A number of algorithms have been compared with the MOPSOSA algorithm in resolving clustering problems by determining the actual number of clusters and optimal clustering. Computational experiments were carried out to study fourteen artificial and five real life datasets. PMID:26132309

  13. Automatic Clustering Using Multi-objective Particle Swarm and Simulated Annealing.

    PubMed

    Abubaker, Ahmad; Baharum, Adam; Alrefaei, Mahmoud

    2015-01-01

    This paper puts forward a new automatic clustering algorithm based on Multi-Objective Particle Swarm Optimization and Simulated Annealing, "MOPSOSA". The proposed algorithm is capable of automatic clustering which is appropriate for partitioning datasets to a suitable number of clusters. MOPSOSA combines the features of the multi-objective based particle swarm optimization (PSO) and the Multi-Objective Simulated Annealing (MOSA). Three cluster validity indices were optimized simultaneously to establish the suitable number of clusters and the appropriate clustering for a dataset. The first cluster validity index is centred on Euclidean distance, the second on the point symmetry distance, and the last cluster validity index is based on short distance. A number of algorithms have been compared with the MOPSOSA algorithm in resolving clustering problems by determining the actual number of clusters and optimal clustering. Computational experiments were carried out to study fourteen artificial and five real life datasets.

  14. spsann - optimization of sample patterns using spatial simulated annealing

    NASA Astrophysics Data System (ADS)

    Samuel-Rosa, Alessandro; Heuvelink, Gerard; Vasques, Gustavo; Anjos, Lúcia

    2015-04-01

    There are many algorithms and computer programs to optimize sample patterns, some private and others publicly available. A few have only been presented in scientific articles and text books. This dispersion and somewhat poor availability holds back their wider adoption and further development. We introduce spsann, a new R-package for the optimization of sample patterns using spatial simulated annealing. R is the most popular environment for data processing and analysis. Spatial simulated annealing is a well known method with widespread use to solve optimization problems in the soil and geo-sciences. This is mainly due to its robustness against local optima and its ease of implementation. spsann offers many optimizing criteria for sampling for variogram estimation (number of points or point-pairs per lag distance class - PPL), trend estimation (association/correlation and marginal distribution of the covariates - ACDC), and spatial interpolation (mean squared shortest distance - MSSD). spsann also includes the mean or maximum universal kriging variance (MUKV) as an optimizing criterion, which is used when the model of spatial variation is known. PPL, ACDC and MSSD were combined (PAN) for sampling when we are ignorant about the model of spatial variation. spsann solves this multi-objective optimization problem by scaling the objective function values using their maximum absolute value or the mean value computed over 1000 random samples. Scaled values are aggregated using the weighted sum method. A graphical display allows one to follow how the sample pattern is being perturbed during the optimization, as well as the evolution of its energy state. It is possible to start perturbing many points and exponentially reduce the number of perturbed points. The maximum perturbation distance reduces linearly with the number of iterations. The acceptance probability also reduces exponentially with the number of iterations. R is memory hungry and spatial simulated annealing is a
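
    To illustrate the MSSD criterion and the shrinking perturbation distance described above, here is a rough Python sketch of spatial simulated annealing on the unit square (this is not the spsann R API; the grid, schedule and parameter values are illustrative assumptions):

      import math, random

      def mssd(samples, grid):
          # Mean squared shortest distance from each prediction node to its nearest sample point.
          return sum(min((gx - sx) ** 2 + (gy - sy) ** 2 for (sx, sy) in samples)
                     for (gx, gy) in grid) / len(grid)

      def spatial_sa(n_samples=10, t0=0.05, cooling=0.995, steps=3000):
          grid = [(i / 10.0, j / 10.0) for i in range(11) for j in range(11)]
          samples = [(random.random(), random.random()) for _ in range(n_samples)]
          cur, t = mssd(samples, grid), t0
          for step in range(steps):
              # The maximum perturbation distance shrinks linearly with the iteration count.
              max_shift = 0.3 * (1.0 - step / steps) + 0.01
              k = random.randrange(n_samples)
              x, y = samples[k]
              cand_pt = (min(1.0, max(0.0, x + random.uniform(-max_shift, max_shift))),
                         min(1.0, max(0.0, y + random.uniform(-max_shift, max_shift))))
              cand_samples = samples[:k] + [cand_pt] + samples[k + 1:]
              cand = mssd(cand_samples, grid)
              if cand < cur or random.random() < math.exp(-(cand - cur) / t):
                  samples, cur = cand_samples, cand
              t *= cooling
          return samples, cur

      print(spatial_sa()[1])     # final (minimised) MSSD of the optimised sample pattern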

  15. Instantons in Quantum Annealing: Thermally Assisted Tunneling Vs Quantum Monte Carlo Simulations

    NASA Technical Reports Server (NTRS)

    Jiang, Zhang; Smelyanskiy, Vadim N.; Boixo, Sergio; Isakov, Sergei V.; Neven, Hartmut; Mazzola, Guglielmo; Troyer, Matthias

    2015-01-01

    A recent numerical result (arXiv:1512.02206) from Google suggested that the D-Wave quantum annealer may have an asymptotic speed-up over simulated annealing; however, the asymptotic advantage disappears when it is compared to quantum Monte Carlo (a classical algorithm despite its name). We show analytically that the asymptotic scaling of quantum tunneling is exactly the same as the escape rate in quantum Monte Carlo for a class of problems. Thus, the Google result might be explained in our framework. We also found that the transition state in quantum Monte Carlo corresponds to the instanton solution in quantum tunneling problems, which is observed in numerical simulations.

  16. Reconstruction of bremsstrahlung spectra from attenuation data using generalized simulated annealing.

    PubMed

    Menin, O H; Martinez, A S; Costa, A M

    2016-05-01

    A generalized simulated annealing algorithm, combined with a suitable smoothing regularization function, is used to solve the inverse problem of X-ray spectrum reconstruction from attenuation data. The approach is to set the initial acceptance and visitation temperatures and to standardize the terms of the objective function so as to automate the algorithm to accommodate different spectral ranges. Experiments with both numerical and measured attenuation data are presented. Results show that the algorithm reconstructs spectrum shapes accurately. It should be noted that in this algorithm the regularization function was formulated to guarantee a smooth spectrum; thus, the presented technique does not apply to X-ray spectra where characteristic radiation is present.

  17. Reconstruction of bremsstrahlung spectra from attenuation data using generalized simulated annealing.

    PubMed

    Menin, O H; Martinez, A S; Costa, A M

    2016-05-01

    A generalized simulated annealing algorithm, combined with a suitable smoothing regularization function, is used to solve the inverse problem of X-ray spectrum reconstruction from attenuation data. The approach is to set the initial acceptance and visitation temperatures and to standardize the terms of the objective function so as to automate the algorithm to accommodate different spectral ranges. Experiments with both numerical and measured attenuation data are presented. Results show that the algorithm reconstructs spectrum shapes accurately. It should be noted that in this algorithm the regularization function was formulated to guarantee a smooth spectrum; thus, the presented technique does not apply to X-ray spectra where characteristic radiation is present. PMID:26943902

  18. A hybrid hopfield network-simulated annealing approach for frequency assignment in satellite communications systems.

    PubMed

    Salcedo-Sanz, Sancho; Santiago-Mozos, Ricardo; Bousoño-Calzón, Carlos

    2004-04-01

    A hybrid Hopfield network-simulated annealing algorithm (HopSA) is presented for the frequency assignment problem (FAP) in satellite communications. The goal of this NP-complete problem is to minimize the cochannel interference between satellite communication systems by rearranging the frequency assignment, so that the systems can accommodate increasing demands. The HopSA algorithm consists of a fast digital Hopfield neural network, which manages the problem constraints, hybridized with a simulated annealing which improves the quality of the solutions obtained. We analyze the problem and its formulation, describing and discussing the HopSA algorithm and solving a set of benchmark problems. The results obtained are compared with other existing approaches in order to show the performance of the HopSA approach.

  19. Birefringence simulation of annealed ingot of calcium fluoride single crystal

    NASA Astrophysics Data System (ADS)

    Ogino, H.; Miyazaki, N.; Mabuchi, T.; Nawata, T.

    2008-01-01

    We developed a method for simulating birefringence of an annealed ingot of calcium fluoride single crystal caused by the residual stress after annealing process. The method comprises the heat conduction analysis that provides the temperature distribution during the ingot annealing, the elastic thermal stress analysis using the assumption of the stress-free temperature that provides the residual stress after annealing, and the birefringence analysis of an annealed ingot induced by the residual stress. The finite element method was applied to the heat conduction analysis and the elastic thermal stress analysis. In these analyses, the temperature dependence of material properties and the crystal anisotropy were taken into account. In the birefringence analysis, the photoelastic effect gives the change of refractive indices, from which the optical path difference in the annealed ingot is calculated by the Jones calculus. The relation between the Jones calculus and the approximate method using the stress components averaged along the optical path is discussed theoretically. It is found that the result of the approximate method agrees very well with that of the Jones calculus in birefringence analysis. The distribution pattern of the optical path difference in the annealed ingot obtained from the present birefringence calculation methods agrees reasonably well with that of the experiment. The calculated values also agree reasonably well with those of the experiment, when a stress-free temperature is adequately selected.

  20. A Simulated Annealing Procedure for the Joint Inversion of Spectroscopic and Compositional Data.

    NASA Astrophysics Data System (ADS)

    Seelos, F. P.; Arvidson, R. E.

    2001-12-01

    A simulated annealing algorithm capable of inverting thermal emission spectra and compositional data acquired from a common geologic target has been developed. The inversion allows for the identification and proportion estimation of low concentration mineral endmembers. This method will be especially applicable to the 2007 Mars Mobile Geobiology Explorer equipped with an emission spectrometer for mineralogical analyses and a Laser Induced Breakdown Spectrometer for the remote acquisition of elemental information. The coupled inversion is cast as a multidimensional minimization problem where the hyperspace volume to be investigated is defined by the library endmembers at the disposal of the algorithm. This is a vector space in which all possible combinations of the library endmembers exist, with the endmember suite serving as an orthogonal set of basis vectors that span the hyperspace. The goal of the minimization is to locate the hyperspace coordinate that has the lowest associated model error value. This will correspond to the best possible model composition and mineralogy that can be generated by linearly mixing members of the endmember mineral suite. As opposed to standard unmixing routines, the simulated annealing algorithm is flexible enough to minimize any type of model error function. This allows the algorithm to interpret elemental analyses at any level of rigor, including elemental presence, relative abundances, abundance ratios, or exact mole percent. A synthetic data set was developed and systematically degraded with noise of various forms and magnitudes prior to being inverted with the simulated annealing algorithm as well as two purely spectral unmixing procedures. The simulated annealing procedure outperformed both of the alternate algorithms with an overall factor of two improvement in the mean sum of squares of deviations in the solution parameters. The detailed results from synthetic data inversions as well as the analysis of laboratory data will be

  1. Quantum versus simulated annealing in wireless interference network optimization.

    PubMed

    Wang, Chi; Chen, Huo; Jonckheere, Edmond

    2016-01-01

    Quantum annealing (QA) serves as a specialized optimizer that is able to solve many NP-hard problems and that is believed to have a theoretical advantage over simulated annealing (SA) via quantum tunneling. With the introduction of the D-Wave programmable quantum annealer, a considerable amount of effort has been devoted to detecting and quantifying quantum speedup. While the debate over speedup remains inconclusive as of now, instead of attempting to show general quantum advantage, here we focus on a novel real-world application of D-Wave in wireless networking, more specifically, the scheduling of the activation of the air-links for maximum throughput subject to interference avoidance near network nodes. In addition, the D-Wave implementation is made error insensitive by a novel Hamiltonian extra penalty weight adjustment that enlarges the gap and substantially reduces the occurrence of interference violations resulting from inevitable spin bias and coupling errors. The major result of this paper is that quantum annealing benefits more than simulated annealing from this gap expansion process, both in terms of ST99 speedup and network queue occupancy. It is our hope that this could become a real-world application niche where potential benefits of quantum annealing could be objectively assessed. PMID:27181056

  2. Quantum versus simulated annealing in wireless interference network optimization

    PubMed Central

    Wang, Chi; Chen, Huo; Jonckheere, Edmond

    2016-01-01

    Quantum annealing (QA) serves as a specialized optimizer that is able to solve many NP-hard problems and that is believed to have a theoretical advantage over simulated annealing (SA) via quantum tunneling. With the introduction of the D-Wave programmable quantum annealer, a considerable amount of effort has been devoted to detecting and quantifying quantum speedup. While the debate over speedup remains inconclusive as of now, instead of attempting to show general quantum advantage, here we focus on a novel real-world application of D-Wave in wireless networking, more specifically, the scheduling of the activation of the air-links for maximum throughput subject to interference avoidance near network nodes. In addition, the D-Wave implementation is made error insensitive by a novel Hamiltonian extra penalty weight adjustment that enlarges the gap and substantially reduces the occurrence of interference violations resulting from inevitable spin bias and coupling errors. The major result of this paper is that quantum annealing benefits more than simulated annealing from this gap expansion process, both in terms of ST99 speedup and network queue occupancy. It is our hope that this could become a real-world application niche where potential benefits of quantum annealing could be objectively assessed. PMID:27181056

  3. Quantum versus simulated annealing in wireless interference network optimization.

    PubMed

    Wang, Chi; Chen, Huo; Jonckheere, Edmond

    2016-01-01

    Quantum annealing (QA) serves as a specialized optimizer that is able to solve many NP-hard problems and that is believed to have a theoretical advantage over simulated annealing (SA) via quantum tunneling. With the introduction of the D-Wave programmable quantum annealer, a considerable amount of effort has been devoted to detecting and quantifying quantum speedup. While the debate over speedup remains inconclusive as of now, instead of attempting to show general quantum advantage, here we focus on a novel real-world application of D-Wave in wireless networking, more specifically, the scheduling of the activation of the air-links for maximum throughput subject to interference avoidance near network nodes. In addition, the D-Wave implementation is made error insensitive by a novel Hamiltonian extra penalty weight adjustment that enlarges the gap and substantially reduces the occurrence of interference violations resulting from inevitable spin bias and coupling errors. The major result of this paper is that quantum annealing benefits more than simulated annealing from this gap expansion process, both in terms of ST99 speedup and network queue occupancy. It is our hope that this could become a real-world application niche where potential benefits of quantum annealing could be objectively assessed.

  4. Molecular dynamic simulation of non-melt laser annealing process

    NASA Astrophysics Data System (ADS)

    Liren, Yan; Dai, Li; Wei, Zhang; Zhihong, Liu; Wei, Zhou; Quan, Wang

    2016-03-01

    Molecular dynamics simulation is performed to study the process of material annealing caused by a 266 nm pulsed laser. A micro-mechanism describing the behavior of silicon and impurity atoms during laser annealing in the non-melt regime is proposed. After ion implantation, the surface of the Si wafer is acted on by a high-energy laser pulse, which loosens the material and partially frees both Si and impurity atoms. As the residual laser energy is absorbed by valence electrons, these atoms recoil and relocate to finally form a crystal. This energy-related movement behavior is observed using the molecular dynamics method. Non-melt laser annealing appears to be quite sensitive to the energy density of the laser, as a small excess of energy may cause significant impurity diffusion. Such a result is also supported by our laser annealing experiment.

  5. Neutronic optimization in high conversion Th-233U fuel assembly with simulated annealing

    SciTech Connect

    Kotlyar, D.; Shwageraus, E.

    2012-07-01

    This paper reports on fuel design optimization of a PWR operating in a self-sustainable Th-233U fuel cycle. A Monte Carlo simulated annealing method was used in order to identify the fuel assembly configuration with the most attractive breeding performance. In previous studies, it was shown that breeding may be achieved by employing a heterogeneous Seed-Blanket fuel geometry. The arrangement of seed and blanket pins within the assemblies may be determined by varying the design parameters based on the basic reactor physics phenomena which affect breeding. However, the number of free parameters may still prove prohibitively large for a systematic exploration of the design space for the optimal solution. Therefore, the Monte Carlo annealing algorithm for neutronic optimization is applied in order to identify the most favorable design. The objective of simulated annealing optimization is to find a set of design parameters which maximizes some given performance function (such as relative period of net breeding) under specified constraints (such as fuel cycle length). The first objective of the study was to demonstrate that the simulated annealing optimization algorithm leads to the same fuel pin arrangement as was obtained in the previous studies, which used only basic physics phenomena as guidance for optimization. In the second part of this work, the simulated annealing method was used to optimize the fuel pin arrangement in a much larger fuel assembly, where basic physics intuition does not yield a clearly optimal configuration. The simulated annealing method was found to be very efficient in selecting the optimal design in both cases. In the future, this method will be used for optimization of fuel assembly designs with a larger number of free parameters in order to determine the most favorable trade-off between breeding performance and core average power density. (authors)
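
    The kind of discrete arrangement search described above can be sketched in a few lines of Python; the swap-move simulated annealing below is generic, and the scoring function is a hypothetical placeholder for the Monte Carlo neutronics evaluation used in the paper:

      import math, random

      def anneal_pin_layout(score, n_pins=100, n_seed=36, t0=1.0, cooling=0.999, steps=20000):
          # Layout is a list of 'S' (seed) and 'B' (blanket) pin labels; each move swaps one
          # seed pin with one blanket pin, so the seed fraction stays fixed. `score` is a
          # user-supplied performance estimate to be maximized.
          layout = ['S'] * n_seed + ['B'] * (n_pins - n_seed)
          random.shuffle(layout)
          cur, t = score(layout), t0
          best, best_layout = cur, layout[:]
          for _ in range(steps):
              i = random.choice([k for k, v in enumerate(layout) if v == 'S'])
              j = random.choice([k for k, v in enumerate(layout) if v == 'B'])
              layout[i], layout[j] = layout[j], layout[i]
              cand = score(layout)
              if cand >= cur or random.random() < math.exp((cand - cur) / t):
                  cur = cand
                  if cur > best:
                      best, best_layout = cur, layout[:]
              else:
                  layout[i], layout[j] = layout[j], layout[i]   # undo the swap
              t *= cooling
          return best, best_layout

      # Toy usage: reward seed/blanket alternation along a linearized layout.
      print(anneal_pin_layout(lambda L: sum(1 for a, b in zip(L, L[1:]) if a != b))[0])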

  6. Coordination Hydrothermal Interconnection Java-Bali Using Simulated Annealing

    NASA Astrophysics Data System (ADS)

    Wicaksono, B.; Abdullah, A. G.; Saputra, W. S.

    2016-04-01

    Hydrothermal power plant coordination aims to minimize the total operating cost of the system, represented by the fuel cost, subject to constraints during optimization. Several methods can be used to perform this optimization; Simulated Annealing (SA) is one of them. The method was inspired by the annealing or cooling process in the manufacture of crystalline materials. The basic principle of hydrothermal power plant coordination is to use hydro power plants to cover the base load while thermal power plants cover the remaining load. This study used two hydro power plant units and six thermal power plant units on a 25-bus system, calculating transmission losses and considering the power limits of each plant unit, aided by MATLAB software. Hydrothermal power plant coordination using simulated annealing showed that the total generation cost for 24 hours is 13,288,508.01.

  7. Multiphase Simulated Annealing Based on Boltzmann and Bose-Einstein Distribution Applied to Protein Folding Problem.

    PubMed

    Frausto-Solis, Juan; Liñán-García, Ernesto; Sánchez-Hernández, Juan Paulo; González-Barbosa, J Javier; González-Flores, Carlos; Castilla-Valdez, Guadalupe

    2016-01-01

    A new hybrid Multiphase Simulated Annealing Algorithm using Boltzmann and Bose-Einstein distributions (MPSABBE) is proposed. MPSABBE was designed for solving the Protein Folding Problem (PFP) instances. This new approach has four phases: (i) Multiquenching Phase (MQP), (ii) Boltzmann Annealing Phase (BAP), (iii) Bose-Einstein Annealing Phase (BEAP), and (iv) Dynamical Equilibrium Phase (DEP). BAP and BEAP are simulated annealing searching procedures based on Boltzmann and Bose-Einstein distributions, respectively. DEP is also a simulated annealing search procedure, which is applied at the final temperature of the fourth phase, which can be seen as a second Bose-Einstein phase. MQP is a search process that ranges from extremely high to high temperatures, applying a very fast cooling process, and is not very restrictive to accept new solutions. However, BAP and BEAP range from high to low and from low to very low temperatures, respectively. They are more restrictive for accepting new solutions. DEP uses a particular heuristic to detect the stochastic equilibrium by applying a least squares method during its execution. MPSABBE parameters are tuned with an analytical method, which considers the maximal and minimal deterioration of problem instances. MPSABBE was tested with several instances of PFP, showing that the use of both distributions is better than using only the Boltzmann distribution on the classical SA.
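
    As background for the two acceptance distributions named above, the following is a hedged sketch contrasting the classical Boltzmann acceptance rule with one plausible Bose-Einstein-style variant for a deterioration delta > 0 at temperature t; the Bose-Einstein form shown is an illustrative reading, not the exact rule used in MPSABBE.

        import math
        import random

        # Boltzmann acceptance is the classical SA rule; the Bose-Einstein form
        # below is one plausible illustration, not the exact MPSABBE criterion.
        def accept_boltzmann(delta, t):
            if delta <= 0:
                return True
            return random.random() < math.exp(-delta / t)

        def accept_bose_einstein(delta, t):
            if delta <= 0:
                return True
            x = delta / t
            if x > 700.0:                 # exp(x) would overflow; effectively reject
                return False
            return random.random() < min(1.0, 1.0 / (math.exp(x) - 1.0))

        print(accept_boltzmann(0.5, 1.0), accept_bose_einstein(0.5, 1.0))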

  8. Multiphase Simulated Annealing Based on Boltzmann and Bose-Einstein Distribution Applied to Protein Folding Problem

    PubMed Central

    Liñán-García, Ernesto; Sánchez-Hernández, Juan Paulo; González-Barbosa, J. Javier; González-Flores, Carlos

    2016-01-01

    A new hybrid Multiphase Simulated Annealing Algorithm using Boltzmann and Bose-Einstein distributions (MPSABBE) is proposed. MPSABBE was designed for solving the Protein Folding Problem (PFP) instances. This new approach has four phases: (i) Multiquenching Phase (MQP), (ii) Boltzmann Annealing Phase (BAP), (iii) Bose-Einstein Annealing Phase (BEAP), and (iv) Dynamical Equilibrium Phase (DEP). BAP and BEAP are simulated annealing searching procedures based on Boltzmann and Bose-Einstein distributions, respectively. DEP is also a simulated annealing search procedure, which is applied at the final temperature of the fourth phase, which can be seen as a second Bose-Einstein phase. MQP is a search process that ranges from extremely high to high temperatures, applying a very fast cooling process, and is not very restrictive to accept new solutions. However, BAP and BEAP range from high to low and from low to very low temperatures, respectively. They are more restrictive for accepting new solutions. DEP uses a particular heuristic to detect the stochastic equilibrium by applying a least squares method during its execution. MPSABBE parameters are tuned with an analytical method, which considers the maximal and minimal deterioration of problem instances. MPSABBE was tested with several instances of PFP, showing that the use of both distributions is better than using only the Boltzmann distribution on the classical SA. PMID:27413369

  10. An Improved Simulated Annealing Technique for Enhanced Mobility in Smart Cities

    PubMed Central

    Amer, Hayder; Salman, Naveed; Hawes, Matthew; Chaqfeh, Moumena; Mihaylova, Lyudmila; Mayfield, Martin

    2016-01-01

    Vehicular traffic congestion is a significant problem that arises in many cities. This is due to the increasing number of vehicles that are driving on city roads of limited capacity. The vehicular congestion significantly impacts travel distance, travel time, fuel consumption and air pollution. Avoidance of traffic congestion and providing drivers with optimal paths are not trivial tasks. The key contribution of this work consists of the developed approach for dynamic calculation of optimal traffic routes. Two attributes (the average travel speed of the traffic and the roads’ length) are utilized by the proposed method to find the optimal paths. The average travel speed values can be obtained from the sensors deployed in smart cities and communicated to vehicles via the Internet of Vehicles and roadside communication units. The performance of the proposed algorithm is compared to three other algorithms: the simulated annealing weighted sum, the simulated annealing technique for order preference by similarity to the ideal solution and the Dijkstra algorithm. The weighted sum and technique for order preference by similarity to the ideal solution methods are used to formulate different attributes in the simulated annealing cost function. According to the Sheffield scenario, simulation results show that the improved simulated annealing technique for order preference by similarity to the ideal solution method improves the traffic performance in the presence of congestion by an overall average of 19.22% in terms of travel time, fuel consumption and CO2 emissions as compared to other algorithms; also, similar performance patterns were achieved for the Birmingham test scenario. PMID:27376289

  11. An Improved Simulated Annealing Technique for Enhanced Mobility in Smart Cities.

    PubMed

    Amer, Hayder; Salman, Naveed; Hawes, Matthew; Chaqfeh, Moumena; Mihaylova, Lyudmila; Mayfield, Martin

    2016-01-01

    Vehicular traffic congestion is a significant problem that arises in many cities. This is due to the increasing number of vehicles that are driving on city roads of limited capacity. The vehicular congestion significantly impacts travel distance, travel time, fuel consumption and air pollution. Avoidance of traffic congestion and providing drivers with optimal paths are not trivial tasks. The key contribution of this work consists of the developed approach for dynamic calculation of optimal traffic routes. Two attributes (the average travel speed of the traffic and the roads' length) are utilized by the proposed method to find the optimal paths. The average travel speed values can be obtained from the sensors deployed in smart cities and communicated to vehicles via the Internet of Vehicles and roadside communication units. The performance of the proposed algorithm is compared to three other algorithms: the simulated annealing weighted sum, the simulated annealing technique for order preference by similarity to the ideal solution and the Dijkstra algorithm. The weighted sum and technique for order preference by similarity to the ideal solution methods are used to formulate different attributes in the simulated annealing cost function. According to the Sheffield scenario, simulation results show that the improved simulated annealing technique for order preference by similarity to the ideal solution method improves the traffic performance in the presence of congestion by an overall average of 19.22% in terms of travel time, fuel consumption and CO₂ emissions as compared to other algorithms; also, similar performance patterns were achieved for the Birmingham test scenario. PMID:27376289

  12. Metric optimisation for analogue forecasting by simulated annealing

    NASA Astrophysics Data System (ADS)

    Bliefernicht, J.; Bárdossy, A.

    2009-04-01

    It is well known that weather patterns tend to recur from time to time. This property of the atmosphere is exploited by analogue forecasting techniques. They have a long history in weather forecasting, and there are many applications predicting hydrological variables at the local scale for different lead times. The basic idea of the technique is to identify past weather situations which are similar (analogue) to the predicted one and to take the local conditions of the analogues as the forecast. However, the forecast performance of the analogue method depends on user-defined criteria such as the choice of the distance function and the size of the predictor domain. In this study we propose a new methodology that optimises both criteria by minimising the forecast error with simulated annealing. The performance of the methodology is demonstrated for the probability forecast of daily areal precipitation. It is compared with a traditional analogue forecasting algorithm, which is used operationally as an element of a hydrological forecasting system. The study is performed for several meso-scale catchments located in the Rhine basin in Germany. The methodology is validated by a jack-knife method in a perfect prognosis framework for a period of 48 years (1958-2005). The predictor variables are derived from the NCEP/NCAR reanalysis data set. The Brier skill score and the economic value are determined to evaluate the forecast skill and value of the technique. In this presentation we present the concept of the optimisation algorithm and the outcome of the comparison. It is also demonstrated how a decision maker should apply a probability forecast to maximise the economic benefit from it.

  13. Molecular dynamics simulation of annealed ZnO surfaces

    SciTech Connect

    Min, Tjun Kit; Yoon, Tiem Leong; Lim, Thong Leng

    2015-04-24

    The effect of thermally annealing a slab of wurtzite ZnO, terminated by two surfaces, (0001) (which is oxygen-terminated) and (000-1) (which is Zn-terminated), is investigated via molecular dynamics simulation using the reactive force field (ReaxFF). We found that upon heating beyond a threshold temperature of ∼700 K, surface oxygen atoms begin to sublimate from the (0001) surface. The fraction of oxygen leaving the surface at a given temperature increases as the heating temperature increases. A range of phenomena occurring at the atomic level on the (0001) surface has also been explored, such as the formation of oxygen dimers on the surface and the evolution of the partial charge distribution in the slab during the annealing process. It was found that the partial charge distribution as a function of depth from the surface undergoes a qualitative change when the annealing temperature is above the threshold temperature.

  14. Emergence of species in evolutionary "simulated annealing".

    PubMed

    Heo, Muyoung; Kang, Louis; Shakhnovich, Eugene I

    2009-02-10

    Which factors govern the evolution of mutation rates and the emergence of species? Here, we address this question by using a first-principles model of life where the population dynamics of asexual organisms is coupled to the molecular properties and interactions of the proteins encoded in their genomes. Simulating the evolution of populations, we found that fitness increases in punctuated steps via epistatic events, leading to the formation of stable and functionally interacting proteins. At low mutation rates, species form populations of organisms tightly localized in sequence space, whereas at higher mutation rates, species are lost without an apparent loss of fitness. However, when the mutation rate was a selectable trait, the population initially maintained a high mutation rate until a high fitness level was reached, after which organisms with low mutation rates were gradually selected, with the population eventually reaching mutation rates comparable with those of modern DNA-based organisms. This study shows that the fitness landscape of a biophysically realistic system is extremely complex, with a huge number of local peaks rendering the adaptation dynamics a glass-like process. On a more practical level, our results provide a rationale for experimental observations of the effect of mutation rate on the fitness of populations of asexual organisms.

  15. Total lineshape analysis of high-resolution NMR spectra powered by simulated annealing

    NASA Astrophysics Data System (ADS)

    Cheshkov, D. A.; Sinitsyn, D. O.; Sheberstov, K. F.; Chertkov, V. A.

    2016-11-01

    A novel algorithm for total lineshape analysis of high-resolution NMR spectra has been developed. Global optimization by simulated annealing is applied, which makes it possible to overcome the main drawback of common approaches, which frequently return solutions corresponding to local rather than global minima. The algorithm has been verified on the four-spin test system ABCD and has been successfully used for the analysis of experimental NMR spectra of proline. The approach avoids a laborious manual setup of initial parameters and allows the analysis of complicated high-resolution NMR spectra to be conducted nearly automatically.
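
    The optimization shell described above can be illustrated with a toy example: fitting the peak positions of a two-line Lorentzian spectrum to a reference spectrum by global annealing of the summed squared residual. The lineshape model, parameter bounds and use of SciPy's dual_annealing are assumptions for illustration; the actual method simulates the full quantum-mechanical spin system and fits chemical shifts and couplings.

        import numpy as np
        from scipy.optimize import dual_annealing

        # Toy total-lineshape fit: recover two Lorentzian peak positions by
        # minimizing the summed squared residual against a reference spectrum.
        freq = np.linspace(0.0, 10.0, 500)

        def lorentzian(f0, width=0.1):
            return width ** 2 / ((freq - f0) ** 2 + width ** 2)

        def spectrum(positions):
            return sum(lorentzian(f0) for f0 in positions)

        measured = spectrum([3.2, 6.8])        # stand-in for an experimental spectrum

        def residual(positions):
            return float(np.sum((spectrum(positions) - measured) ** 2))

        result = dual_annealing(residual, bounds=[(0.0, 10.0), (0.0, 10.0)], seed=1)
        print(result.x)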

  16. Simulated-quantum-annealing comparison between all-to-all connectivity schemes

    NASA Astrophysics Data System (ADS)

    Albash, Tameem; Vinci, Walter; Lidar, Daniel A.

    2016-08-01

    Quantum annealing aims to exploit quantum mechanics to speed up the search for the solution to optimization problems. Most problems exhibit complete connectivity between the logical spin variables after they are mapped to the Ising spin Hamiltonian of quantum annealing. To account for hardware constraints of current and future physical quantum annealers, methods enabling the embedding of fully connected graphs of logical spins into a constant-degree graph of physical spins are therefore essential. Here, we compare the recently proposed embedding scheme for quantum annealing with all-to-all connectivity by Lechner, Hauke, and Zoller (LHZ) [Sci. Adv. 1, e1500838 (2015), 10.1126/sciadv.1500838] to the commonly used minor embedding (ME) scheme. Using both simulated quantum annealing and parallel tempering simulations, we find that for a set of instances randomly chosen from a class of fully connected, random Ising problems, the ME scheme outperforms the LHZ scheme when using identical simulation parameters, despite the fault tolerance of the latter to weakly correlated spin-flip noise. This result persists even after we introduce several decoding strategies for the LHZ scheme, including a minimum-weight decoding algorithm that results in substantially improved performance over the original LHZ scheme. We explain the better performance of the ME scheme in terms of more efficient spin updates, which allows it to better tolerate the correlated spin-flip errors that arise in our model of quantum annealing. Our results leave open the question of whether the performance of the two embedding schemes can be improved using scheme-specific parameters and new error correction approaches.

  17. Multigrid hierarchical simulated annealing method for reconstructing heterogeneous media

    NASA Astrophysics Data System (ADS)

    Pant, Lalit M.; Mitra, Sushanta K.; Secanell, Marc

    2015-12-01

    A reconstruction methodology based on different-phase-neighbor (DPN) pixel swapping and multigrid hierarchical annealing is presented. The method performs reconstructions by starting at a coarse image and successively refining it. The DPN information is used at each refinement stage to freeze interior pixels of preformed structures. This preserves the large-scale structures in refined images and also reduces the number of pixels to be swapped, thereby resulting in a decrease in the necessary computational time to reach a solution. Compared to conventional single-grid simulated annealing, this method was found to reduce the required computation time to achieve a reconstruction by around a factor of 70-90, with the potential of even higher speedups for larger reconstructions. The method is able to perform medium-sized (up to 300³ voxels) three-dimensional reconstructions with multiple correlation functions in 36-47 h.
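
    The core of the DPN restriction can be sketched as follows: only pixels adjacent to the opposite phase are swap candidates, so interior pixels of well-formed structures stay frozen. The array layout and neighborhood definition below are illustrative; the cost function (target correlation functions) and the multigrid refinement of the actual method are omitted.

        import numpy as np

        # Different-phase-neighbor (DPN) move restriction: only pixels with at
        # least one 4-neighbor of the other phase are swap candidates, so the
        # interiors of already-formed structures stay frozen.
        def dpn_candidates(img):
            mask = np.zeros_like(img, dtype=bool)
            mask[:-1, :] |= img[:-1, :] != img[1:, :]
            mask[1:, :] |= img[1:, :] != img[:-1, :]
            mask[:, :-1] |= img[:, :-1] != img[:, 1:]
            mask[:, 1:] |= img[:, 1:] != img[:, :-1]
            return np.argwhere(mask)

        rng = np.random.default_rng(0)
        image = rng.integers(0, 2, size=(32, 32))
        print(len(dpn_candidates(image)), "swap candidates out of", image.size, "pixels")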

  18. Stochastic annealing simulations of defect interactions among subcascades

    SciTech Connect

    Heinisch, H.L.; Singh, B.N.

    1997-04-01

    The effects of the subcascade structure of high energy cascades on the temperature dependencies of annihilation, clustering and free defect production are investigated. The subcascade structure is simulated by closely spaced groups of lower energy MD cascades. The simulation results illustrate the strong influence of the defect configuration existing in the primary damage state on subsequent intracascade evolution. Other significant factors affecting the evolution of the defect distribution are the large differences in mobility and stability of vacancy and interstitial defects and the rapid one-dimensional diffusion of small, glissile interstitial loops produced directly in cascades. Annealing simulations are also performed on high-energy, subcascade-producing cascades generated with the binary collision approximation and calibrated to MD results.

  19. High-dose-rate prostate brachytherapy inverse planning on dose-volume criteria by simulated annealing

    NASA Astrophysics Data System (ADS)

    Deist, T. M.; Gorissen, B. L.

    2016-02-01

    High-dose-rate brachytherapy is a tumor treatment method where a highly radioactive source is brought in close proximity to the tumor. In this paper we develop a simulated annealing algorithm to optimize the dwell times at preselected dwell positions to maximize tumor coverage under dose-volume constraints on the organs at risk. Compared to existing algorithms, our algorithm has advantages in terms of speed and objective value and does not require an expensive general purpose solver. Its success mainly depends on exploiting the efficiency of matrix multiplication and a careful selection of the neighboring states. In this paper we outline its details and make an in-depth comparison with existing methods using real patient data.

  20. High-dose-rate prostate brachytherapy inverse planning on dose-volume criteria by simulated annealing.

    PubMed

    Deist, T M; Gorissen, B L

    2016-02-01

    High-dose-rate brachytherapy is a tumor treatment method where a highly radioactive source is brought in close proximity to the tumor. In this paper we develop a simulated annealing algorithm to optimize the dwell times at preselected dwell positions to maximize tumor coverage under dose-volume constraints on the organs at risk. Compared to existing algorithms, our algorithm has advantages in terms of speed and objective value and does not require an expensive general purpose solver. Its success mainly depends on exploiting the efficiency of matrix multiplication and a careful selection of the neighboring states. In this paper we outline its details and make an in-depth comparison with existing methods using real patient data. PMID:26760757
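
    A minimal sketch of the kind of dwell-time annealing described above is given below, assuming the dose at the points of interest is a linear map dose = D t of non-negative dwell times, so a single-dwell move can be evaluated incrementally from one column of D. The objective and the kernel are illustrative stand-ins, not the paper's clinical dose-volume criteria.

        import numpy as np

        # Dose is modelled as dose = D @ t for non-negative dwell times t, so
        # changing one dwell time only needs one column of D to update the dose
        # (this is the matrix-multiplication efficiency the abstract alludes to).
        rng = np.random.default_rng(0)
        n_points, n_dwells = 200, 20
        D = rng.random((n_points, n_dwells))      # dose-rate kernel (illustrative)
        target = np.ones(n_points)                # prescribed dose (illustrative)

        t = np.zeros(n_dwells)
        dose = D @ t
        temp = 1.0
        for step in range(5000):
            j = rng.integers(n_dwells)
            new_tj = max(0.0, t[j] + rng.normal(scale=0.05))
            new_dose = dose + D[:, j] * (new_tj - t[j])     # incremental dose update
            d_obj = np.sum((new_dose - target) ** 2) - np.sum((dose - target) ** 2)
            if d_obj <= 0 or rng.random() < np.exp(-d_obj / temp):
                t[j], dose = new_tj, new_dose
            temp *= 0.999
        print(float(np.sum((dose - target) ** 2)))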

  1. Comparison of particle swarm optimization and simulated annealing for locating additional boreholes considering combined variance minimization

    NASA Astrophysics Data System (ADS)

    Soltani-Mohammadi, Saeed; Safa, Mohammad; Mokhtari, Hadi

    2016-10-01

    One of the most important stages in complementary exploration is optimally designing the additional drilling pattern, i.e., defining the optimum number and location of additional boreholes. A great deal of research has been carried out in this regard, in which, for most of the proposed algorithms, kriging variance minimization as a criterion for uncertainty assessment is defined as the objective function and the problem is solved through optimization methods. Although the use of the kriging variance in the objective function is known to have many advantages, it is not sensitive to local variability. As a result, the only factors evaluated for locating the additional boreholes are the initial data configuration and the variogram model parameters, and the effects of local variability are omitted. In this paper, with the goal of considering local variability in the assessment of boundary uncertainty, the application of the combined variance is investigated to define the objective function. To verify the applicability of the proposed objective function, it is used to locate additional boreholes in the Esfordi phosphate mine through the implementation of metaheuristic optimization methods such as simulated annealing and particle swarm optimization. Comparison of results from the proposed objective function and conventional methods indicates that the new changes imposed on the objective function have made the algorithm output sensitive to variations of grade, domain boundaries and the thickness of the mineralization domain. The comparison between the results of the different optimization algorithms shows that for the presented case the application of particle swarm optimization is more appropriate than simulated annealing.

  2. Optimization of blade arrangement in a randomly mistuned cascade using simulated annealing

    NASA Technical Reports Server (NTRS)

    Thompson, Edward A.; Becus, Georges A.

    1993-01-01

    This paper presents preliminary results of an investigation on mistuning of bladed-disk assemblies aimed at capturing the benefits of mistuning on stability, while at the same time, minimizing the adverse effects on response by solving the following problem: given a set of N turbine blades, each being a small random perturbation of the same nominal blade, determine the best arrangement of the N blades in a mistuned cascade with regard to aeroelastic response. In the studies reported here, mistuning of the blades is restricted to small differences in torsional stiffness. The large combinatorial optimization problem of seeking the best arrangement by blade exchanges is solved using a simulated annealing algorithm.

  3. Simulated annealing and stochastic learning in optical neural nets: An optical Boltzmann machine

    SciTech Connect

    Shae, Zonyin.

    1989-01-01

    This dissertation deals with the study of stochastic learning and neural computation in opto-electronic hardware. It presents the first demonstration of a fully operational optical learning machine. Learning in the machine is stochastic, taking place in a self-organized multi-layered opto-electronic neural net with plastic connectivity weights that are formed in a programmable non-volatile spatial light modulator. Operation of the machine is made possible by two developments in this work: (a) Fast annealing by optically induced tremors in the energy landscape of the net. The objective of this scheme is to exploit the parallelism of the optical noise pattern so as to speed up the simulated annealing process. The procedure can be viewed as generating controlled, gradually decreasing deformations or tremors in the energy landscape of the net that prevent entrapment in a local minimum energy state. Both the random drawing of neurons and the state update of the net are now done in parallel at the same time, without having to compute explicitly the change in the energy of the net and the associated Boltzmann factor as ordinarily required in the Metropolis-Kirkpatrick simulated annealing algorithm. This leads to a significant acceleration of the annealing process. (b) Stochastic learning with binary weights. Learning in opto-electronic neural nets can be simplified greatly if binary weights can be used. A third development, that of schemes for driving and enhancing the frame rate of magneto-optic spatial light modulators, can make the machine's learning speed potentially fast. Details of these developments together with the principle, architecture, structure, and performance evaluation of this machine are given.

  4. Parameter discovery in stochastic biological models using simulated annealing and statistical model checking.

    PubMed

    Hussain, Faraz; Jha, Sumit K; Jha, Susmit; Langmead, Christopher J

    2014-01-01

    Stochastic models are increasingly used to study the behaviour of biochemical systems. While the structure of such models is often readily available from first principles, unknown quantitative features of the model are incorporated into the model as parameters. Algorithmic discovery of parameter values from experimentally observed facts remains a challenge for the computational systems biology community. We present a new parameter discovery algorithm that uses simulated annealing, sequential hypothesis testing, and statistical model checking to learn the parameters in a stochastic model. We apply our technique to a model of glucose and insulin metabolism used for in-silico validation of artificial pancreata and demonstrate its effectiveness by developing a parallel CUDA-based implementation for parameter synthesis in this model. PMID:24989866

  5. Comparative Analysis of Simulated Annealing (SA) and Simplified Generalized SA (SGSA) for Estimation Optimal of Parametric Functional in CATIVIC

    SciTech Connect

    Freitez, Juan A.; Sanchez, Morella; Ruette, Fernando

    2009-08-13

    Application of simulated annealing (SA) and simplified GSA (SGSA) techniques to parameter optimization of the parametric quantum chemistry method CATIVIC was performed. A set of organic molecules was selected to test these techniques. Comparison of the algorithms was carried out for error function minimization with respect to experimental values. Results show that SGSA is more efficient than SA with respect to computer time. Accuracy is similar for both methods; however, there are important differences in the final set of parameters.

  6. Quantum Algorithms for Fermionic Simulations

    NASA Astrophysics Data System (ADS)

    Ortiz, Gerardo

    2001-06-01

    The probabilistic simulation of quantum systems in classical computers is known to be limited by the so-called sign or phase problem, a problem believed to be of exponential complexity. This "disease" manifests itself by the exponentially hard task of estimating the expectation value of an observable with a given error. Therefore, probabilistic simulations on a classical computer do not seem to qualify as a practical computational scheme for general quantum many-body problems. The limiting factors, for whatever reasons, are negative or complex-valued probabilities whether the simulations are done in real or imaginary time. In 1981 Richard Feynman raised some provocative questions in connection to the "exact imitation" of such systems using a special device named a "quantum computer." Feynman hesitated about the possibility of imitating fermion systems using such a device. Here we address some of his concerns and, in particular, investigate the simulation of fermionic systems. We show how quantum algorithms avoid the sign problem by reducing the complexity from exponential to polynomial. Our demonstration is based upon the use of isomorphisms of *-algebras (spin-particle transformations) which connect different models of quantum computation. In particular, we present fermionic models (the fabled "Grassmann Chip"); but, of course, these models are not the only ones since our spin-particle connections allow us to introduce more "esoteric" models of computation. We present specific quantum algorithms that illustrate the main points of our algebraic approach.

  7. Retrieval of Surface and Subsurface Moisture of Bare Soil Using Simulated Annealing

    NASA Astrophysics Data System (ADS)

    Tabatabaeenejad, A.; Moghaddam, M.

    2009-12-01

    Soil moisture is of fundamental importance to many hydrological and biological processes. Soil moisture information is vital to understanding the cycling of water, energy, and carbon in the Earth system. Knowledge of soil moisture is critical to agencies concerned with weather and climate, runoff potential and flood control, soil erosion, reservoir management, water quality, agricultural productivity, drought monitoring, and human health. The need to monitor soil moisture on a global scale has motivated missions such as Soil Moisture Active and Passive (SMAP) [1]. Rough surface scattering models and remote sensing retrieval algorithms are essential in the study of soil moisture, because soil can be represented as a rough surface structure. Effects of soil moisture on the backscattered field have been studied since the 1960s, but soil moisture estimation remains a challenging problem and there is still a need for more accurate and more efficient inversion algorithms. It has been shown that the simulated annealing method is a powerful tool for inversion of the model parameters of rough surface structures [2]. The sensitivity of this method to measurement noise has also been investigated assuming a two-layer structure characterized by the layers' dielectric constants, layer thickness, and statistical properties of the rough interfaces [2]. However, since the moisture profile varies with depth, it is sometimes necessary to model the rough surface as a layered structure with a rough interface on top and a stratified structure below, where each layer is assumed to have a constant volumetric moisture content. In this work, we discretize the soil structure into several layers of constant moisture content to examine the effect of the subsurface profile on the backscattering coefficient. We will show that while the moisture profile could vary in deeper layers, these layers do not affect the scattered electromagnetic field significantly. Therefore, we can use just a few layers

  8. Redesigning rain gauges network in Johor using geostatistics and simulated annealing

    SciTech Connect

    Aziz, Mohd Khairul Bazli Mohd; Yusof, Fadhilah; Daud, Zalina Mohd; Yusop, Zulkifli; Kasno, Mohammad Afif

    2015-02-03

    Recently, many rainfall network design techniques have been developed, discussed and compared by many researchers. Present-day hydrological studies require higher levels of accuracy from collected data. In numerous basins, the rain gauge stations are located without a clear scientific rationale. In this study, an attempt is made to redesign the rain gauge network for Johor, Malaysia in order to meet the required level of accuracy preset by rainfall data users. The existing network of 84 rain gauges in Johor is optimized and redesigned into new locations by using rainfall, humidity, solar radiation, temperature and wind speed data collected during the monsoon season (November - February) of 1975 until 2008. This study used the combination of a geostatistics method (variance-reduction method) and simulated annealing as the optimization algorithm during the redesign process. The result shows that the new rain gauge locations provide the minimum value of estimated variance. This shows that the combination of the geostatistics method (variance-reduction method) and simulated annealing is successful in the development of the new optimum rain gauge system.

  9. Redesigning rain gauges network in Johor using geostatistics and simulated annealing

    NASA Astrophysics Data System (ADS)

    Aziz, Mohd Khairul Bazli Mohd; Yusof, Fadhilah; Daud, Zalina Mohd; Yusop, Zulkifli; Kasno, Mohammad Afif

    2015-02-01

    Recently, many rainfall network design techniques have been developed, discussed and compared by many researchers. Present-day hydrological studies require higher levels of accuracy from collected data. In numerous basins, the rain gauge stations are located without a clear scientific rationale. In this study, an attempt is made to redesign the rain gauge network for Johor, Malaysia in order to meet the required level of accuracy preset by rainfall data users. The existing network of 84 rain gauges in Johor is optimized and redesigned into new locations by using rainfall, humidity, solar radiation, temperature and wind speed data collected during the monsoon season (November - February) of 1975 until 2008. This study used the combination of a geostatistics method (variance-reduction method) and simulated annealing as the optimization algorithm during the redesign process. The result shows that the new rain gauge locations provide the minimum value of estimated variance. This shows that the combination of the geostatistics method (variance-reduction method) and simulated annealing is successful in the development of the new optimum rain gauge system.

  10. An archived multi-objective simulated annealing for a dynamic cellular manufacturing system

    NASA Astrophysics Data System (ADS)

    Shirazi, Hossein; Kia, Reza; Javadian, Nikbakhsh; Tavakkoli-Moghaddam, Reza

    2014-05-01

    To design a group layout of a cellular manufacturing system (CMS) in a dynamic environment, a multi-objective mixed-integer non-linear programming model is developed. The model integrates cell formation, group layout and production planning (PP) as three interrelated decisions involved in the design of a CMS. This paper provides an extensive coverage of important manufacturing features used in the design of CMSs and enhances the flexibility of an existing model in handling the fluctuations of part demands more economically by adding machine depot and PP decisions. Two conflicting objectives to be minimized are the total costs and the imbalance of workload among cells. As the considered objectives in this model are in conflict with each other, an archived multi-objective simulated annealing (AMOSA) algorithm is designed to find Pareto-optimal solutions. Matrix-based solution representation, a heuristic procedure generating an initial and feasible solution and efficient mutation operators are the advantages of the designed AMOSA. To demonstrate the efficiency of the proposed algorithm, the performance of AMOSA is compared with an exact algorithm (i.e., ε-constraint method) solved by the GAMS software and a well-known evolutionary algorithm, namely NSGA-II for some randomly generated problems based on some comparison metrics. The obtained results show that the designed AMOSA can obtain satisfactory solutions for the multi-objective model.

  11. OBJECT KINETIC MONTE CARLO SIMULATIONS OF CASCADE ANNEALING IN TUNGSTEN

    SciTech Connect

    Nandipati, Giridhar; Setyawan, Wahyu; Heinisch, Howard L.; Roche, Kenneth J.; Kurtz, Richard J.; Wirth, Brian D.

    2014-03-31

    The objective of this work is to study the annealing of primary cascade damage created by primary knock-on atoms (PKAs) of various energies, at various temperatures in bulk tungsten using the object kinetic Monte Carlo (OKMC) method.

  12. Constant thermodynamic speed for minimizing entropy production in thermodynamic processes and simulated annealing

    NASA Astrophysics Data System (ADS)

    Andresen, Bjarne; Gordon, J. M.

    1994-12-01

    For an arbitrary finite-time thermodynamic or information-based process we derive a lower bound on cumulative entropy production, as well as the associated optimal operating strategy for minimizing entropy production. The optimal path corresponds to a fixed rate of entropy production in the system, provided the rate of change is calculated in terms of the natural dimensionless time scale of the system. The constant thermodynamic speed algorithm for simulated annealing is derived from first principles and shown to be the leading term in a general expansion which represents the optimal solution. The results are valid for uniform systems (no spatial gradients) in which the involved intensive thermodynamic quantities are uniquely defined. The method and conclusions are easily extended to other objective functions, such as minimal loss of availability, and to assorted thermodynamic control variables.
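
    For reference, one commonly quoted closed form of a constant-thermodynamic-speed cooling schedule from the finite-time thermodynamics literature is sketched below; it is stated as background with standard notation, not as the exact expression derived in this paper.

        \frac{dT}{dt} \;=\; -\,\frac{v\,T(t)}{\varepsilon(T)\,\sqrt{C(T)}}

    where v is the constant thermodynamic speed, \varepsilon(T) the relaxation time of the annealing system, and C(T) its heat capacity; holding v fixed keeps the rate of entropy production constant when time is measured in the system's natural dimensionless units.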

  13. Shape optimization of road tunnel cross-section by simulated annealing

    NASA Astrophysics Data System (ADS)

    Sobótka, Maciej; Pachnicz, Michał

    2016-06-01

    The paper concerns shape optimization of a tunnel excavation cross-section. The study incorporates the simulated annealing (SA) optimization procedure. The form of the cost function derives from the energetic optimality condition formulated in the authors' previous papers. The utilized algorithm takes advantage of the optimization procedure already published by the authors. Unlike other approaches presented in the literature, the one introduced in this paper takes into consideration the practical requirement of preserving a fixed clearance gauge. Itasca Flac software is utilized in the numerical examples. The optimal excavation shapes are determined for five different in situ stress ratios. This factor significantly affects the optimal topology of the excavation. The resulting shapes are elongated in the direction of the greater principal stress value. Moreover, the obtained optimal shapes have smooth contours circumscribing the clearance gauge.

  14. Formation Algorithms and Simulation Testbed

    NASA Technical Reports Server (NTRS)

    Wette, Matthew; Sohl, Garett; Scharf, Daniel; Benowitz, Edward

    2004-01-01

    Formation flying for spacecraft is a rapidly developing field that will enable a new era of space science. For one of its missions, the Terrestrial Planet Finder (TPF) project has selected a formation flying interferometer design to detect earth-like planets orbiting distant stars. In order to advance technology needed for the TPF formation flying interferometer, the TPF project has been developing a distributed real-time testbed to demonstrate end-to-end operation of formation flying with TPF-like functionality and precision. This is the Formation Algorithms and Simulation Testbed (FAST). The FAST was conceived to bring out issues in timing, data fusion, inter-spacecraft communication, inter-spacecraft sensing and system-wide formation robustness. In this paper we describe the FAST and show results from a two-spacecraft formation scenario. The two-spacecraft simulation is the first time that precision end-to-end formation flying operation has been demonstrated in a distributed real-time simulation environment.

  15. Energy management of power-split plug-in hybrid electric vehicles based on simulated annealing and Pontryagin's minimum principle

    NASA Astrophysics Data System (ADS)

    Chen, Zheng; Mi, Chunting Chris; Xia, Bing; You, Chenwen

    2014-12-01

    In this paper, an energy management method is proposed for a power-split plug-in hybrid electric vehicle (PHEV). Through analysis of the PHEV powertrain, a series of quadratic equations is employed to approximate the vehicle's fuel rate, using battery current as the input. Pontryagin's Minimum Principle (PMP) is introduced to find the battery current commands by solving the Hamiltonian function. A Simulated Annealing (SA) algorithm is applied to calculate the engine-on power and the maximum current coefficient. Moreover, the battery state of health (SOH) is introduced to extend the application of the proposed algorithm. Simulation results verify that the proposed algorithm can reduce fuel consumption compared to charge-depleting (CD) and charge-sustaining (CS) modes.
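
    As background for the PMP step described above, one standard textbook form of the Hamiltonian for hybrid-vehicle energy management is sketched below; the symbols (fuel rate, costate, state of charge) are generic assumptions rather than the paper's exact notation.

        H\bigl(I_{\mathrm{batt}},\lambda,t\bigr) \;=\; \dot m_{\mathrm{fuel}}\bigl(I_{\mathrm{batt}}\bigr) \;+\; \lambda(t)\,\frac{d\,\mathrm{SOC}}{dt}\bigl(I_{\mathrm{batt}}\bigr), \qquad I_{\mathrm{batt}}^{*}(t) \;=\; \arg\min_{I_{\mathrm{batt}}} H

    so that at each instant the optimal battery current minimizes the Hamiltonian for the current value of the costate \lambda(t).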

  16. A deterministic annealing algorithm for approximating a solution of the min-bisection problem.

    PubMed

    Dang, Chuangyin; Ma, Wei; Liang, Jiye

    2009-01-01

    The min-bisection problem is an NP-hard combinatorial optimization problem. In this paper an equivalent linearly constrained continuous optimization problem is formulated and an algorithm is proposed for approximating its solution. The algorithm is derived from the introduction of a logarithmic-cosine barrier function, where the barrier parameter behaves as temperature in an annealing procedure and decreases from a sufficiently large positive number to zero. The algorithm searches for a better solution in a feasible descent direction, which has a desired property that lower and upper bounds are always satisfied automatically if the step length is a number between zero and one. We prove that the algorithm converges to at least a local minimum point of the problem if a local minimum point of the barrier problem is generated for a sequence of descending values of the barrier parameter with a limit of zero. Numerical results show that the algorithm is much more efficient than two of the best existing heuristic methods for the min-bisection problem, Kernighan-Lin method with multiple starting points (MSKL) and multilevel graph partitioning scheme (MLGP).

  17. Picosecond and nanosecond laser annealing and simulation of amorphous silicon thin films for solar cell applications

    NASA Astrophysics Data System (ADS)

    Theodorakos, I.; Zergioti, I.; Vamvakas, V.; Tsoukalas, D.; Raptis, Y. S.

    2014-01-01

    In this work, a picosecond diode-pumped solid state laser and a nanosecond Nd:YAG laser have been used for the annealing and partial nano-crystallization of an amorphous silicon layer. These experiments were conducted as an alternative or complement to the plasma-enhanced chemical vapor deposition method for the fabrication of micromorph tandem solar cells. The laser experimental work was combined with simulations of the annealing process, in terms of the evolution of the temperature distribution, in order to predetermine the optimum annealing conditions. The structural properties of the annealed material were studied as a function of several annealing parameters (wavelength, pulse duration, fluence) by X-ray diffraction, SEM, and micro-Raman techniques.

  18. Minimizing distortion and internal forces in truss structures by simulated annealing

    NASA Technical Reports Server (NTRS)

    Kincaid, Rex K.

    1989-01-01

    Inaccuracies in the length of members and the diameters of joints of large truss reflector backup structures may produce unacceptable levels of surface distortion and member forces. However, if the member lengths and joint diameters can be measured accurately, it is possible to configure the members and joints so that the root-mean-square (rms) surface error and/or rms member forces are minimized. Following Greene and Haftka (1989) it is assumed that the force vector f is linearly proportional to the member length errors e_M of dimension NMEMB (the number of members) and joint errors e_J of dimension NJOINT (the number of joints), and that the best-fit displacement vector d is a linear function of f. Let NNODES denote the number of positions on the surface of the truss where error influences are measured. The solution of the problem is discussed. To classify this problem, it was compared to a similar combinatorial optimization problem. In particular, when only the member length errors are considered, minimizing d^2_rms is equivalent to the quadratic assignment problem. The quadratic assignment problem is a well-known NP-complete problem in the operations research literature. Hence minimizing d^2_rms is also an NP-complete problem. The focus of the research is the development of a simulated annealing algorithm to reduce d^2_rms. The plausibility of this technique lies in its recent success on a variety of NP-complete combinatorial optimization problems, including the quadratic assignment problem. A physical analogy for simulated annealing is the way liquids freeze and crystallize. All computational experiments were done on a MicroVAX. The two-interchange heuristic is very fast but produces widely varying results. The two- and three-interchange heuristic provides less variability in the final objective function values but runs much more slowly. Simulated annealing produced the best objective function values for every starting configuration and

  19. Automatic Phase Picker for Local and Teleseismic Events Using Wavelet Transform and Simulated Annealing

    NASA Astrophysics Data System (ADS)

    Gaillot, P.; Bardaine, T.; Lyon-Caen, H.

    2004-12-01

    In recent years, various automatic phase pickers based on the wavelet transform have been developed. The main motivation for using the wavelet transform is that wavelets are excellent at finding the characteristics of transient signals, they have good time resolution at all periods, and they are easy to program for fast execution. Thus, the time-scale properties and flexibility of wavelets allow detection of P and S phases in a broad frequency range, making their utilization possible in various contexts. However, the direct application of an automatic picking program in a different context/network than the one for which it was initially developed quickly becomes tedious. In fact, independently of the strategy involved in automatic picking algorithms (window average, autoregressive, beamforming, optimization filtering, neural network), all developed algorithms use different parameters that depend on the objective of the seismological study, the region and the seismological network. Classically, these parameters are defined manually by trial and error or through a calibrated learning stage. In order to facilitate this laborious process, we have developed an automated method that provides optimal parameters for the picking programs. The set of parameters can be explored using simulated annealing, which is a generic name for a family of optimization algorithms based on the principle of stochastic relaxation. The optimization process amounts to systematically modifying an initial realization so as to decrease the value of the objective function, getting the realization acceptably close to the target statistics. Different formulations of the optimization problem (objective function) are discussed using (1) world seismicity data recorded by the French national seismic monitoring network (ReNass), (2) regional seismicity data recorded in the framework of the Corinth Rift Laboratory (CRL) experiment, (3) induced seismicity data from the gas field of Lacq (Western Pyrenees), and (4) micro

  20. Dynamical SCFT Simulations of Solvent Annealed Thin Films

    NASA Astrophysics Data System (ADS)

    Paradiso, Sean; Delaney, Kris; Ceniceros, Hector; Garcia-Cervera, Carlos; Fredrickson, Glenn

    2014-03-01

    Block copolymer thin films are ideal candidates for a broad range of technologies including rejection layers for ultrafiltration membranes, proton-exchange membranes in solar cells, optically active coatings, and lithographic masks for bit patterning storage media. Optimizing the performance of these materials often hinges on tuning the orientation and long-range order of the film's internal nanostructure. In response, solvent annealing techniques have been developed for their promise to afford additional flexibility in tuning thin film morphology, but pronounced processing history dependence and a dizzying parameter space have resulted in slow progress towards developing clear design rules for solvent annealing systems. In this talk, we will report recent theoretical progress in understanding the self assembly dynamics relevant to solvent-annealed and solution-cast block copolymer films. Emphasis will be placed on evaporation-induced ordering trends in both the slow and fast drying regimes for cylinder-forming block copolymers from initially ordered and disordered films, along with the role solvent selectivity plays in the ordering dynamics.

  1. Optimization of Sample Points for Monitoring Arable Land Quality by Simulated Annealing while Considering Spatial Variations

    PubMed Central

    Wang, Junxiao; Wang, Xiaorui; Zhou, Shenglu; Wu, Shaohua; Zhu, Yan; Lu, Chunfeng

    2016-01-01

    With China’s rapid economic development, the reduction in arable land has emerged as one of the most prominent problems in the nation. The long-term dynamic monitoring of arable land quality is important for protecting arable land resources. An efficient practice is to select optimal sample points while obtaining accurate predictions. To this end, the selection of effective points from a dense set of soil sample points is an urgent problem. In this study, data were collected from Donghai County, Jiangsu Province, China. The number and layout of soil sample points are optimized by considering the spatial variations in soil properties and by using an improved simulated annealing (SA) algorithm. The conclusions are as follows: (1) Optimization results in the retention of more sample points in the moderate- and high-variation partitions of the study area; (2) The number of optimal sample points obtained with the improved SA algorithm is markedly reduced, while the accuracy of the predicted soil properties is improved by approximately 5% compared with the raw data; (3) With regard to the monitoring of arable land quality, a dense distribution of sample points is needed to monitor the granularity. PMID:27706051

  2. Fast simulated annealing inversion of surface waves on pavement using phase-velocity spectra

    USGS Publications Warehouse

    Ryden, N.; Park, C.B.

    2006-01-01

    The conventional inversion of surface waves depends on modal identification of measured dispersion curves, which can be ambiguous. It is possible to avoid mode-number identification and extraction by inverting the complete phase-velocity spectrum obtained from a multichannel record. We use the fast simulated annealing (FSA) global search algorithm to minimize the difference between the measured phase-velocity spectrum and that calculated from a theoretical layer model, including the field setup geometry. Results show that this algorithm can help one avoid getting trapped in local minima while searching for the best-matching layer model. The entire procedure is demonstrated on synthetic and field data for asphalt pavement. The viscoelastic properties of the top asphalt layer are taken into account, and the inverted asphalt stiffness as a function of frequency compares well with laboratory tests on core samples. The thickness and shear-wave velocity of the deeper embedded layers are resolved within 10% deviation from those values measured separately during pavement construction. The proposed method may be equally applicable to normal soil site investigation and in the field of ultrasonic testing of materials. ?? 2006 Society of Exploration Geophysicists.
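
    The fast simulated annealing (FSA) search used above is, in the Szu-Hartley sense, ordinary SA with Cauchy-distributed moves and a temperature that decays as T0/(1 + k). A minimal sketch under that assumption is given below; the objective here is a stand-in, whereas the paper minimizes the misfit between measured and modelled phase-velocity spectra.

        import math
        import random

        # Fast simulated annealing: Cauchy-distributed moves, temperature T0/(1+k).
        def fsa(objective, x0, lower, upper, t0=1.0, iterations=5000):
            x, fx = list(x0), objective(x0)
            best, fbest = list(x), fx
            for k in range(1, iterations + 1):
                t = t0 / (1.0 + k)
                cand = [min(upper[i], max(lower[i],
                            x[i] + t * math.tan(math.pi * (random.random() - 0.5))))
                        for i in range(len(x))]
                fc = objective(cand)
                if fc < fx or random.random() < math.exp(-(fc - fx) / t):
                    x, fx = cand, fc
                    if fx < fbest:
                        best, fbest = list(x), fx
            return best, fbest

        def misfit(v):
            # Toy "layer velocity" misfit; the real objective compares measured
            # and modelled phase-velocity spectra.
            truth = [250.0, 900.0]
            return sum((vi - ti) ** 2 for vi, ti in zip(v, truth))

        print(fsa(misfit, [500.0, 500.0], [100.0, 100.0], [1500.0, 1500.0]))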

  3. Identifying fracture-zone geometry using simulated annealing and hydraulic-connection data

    USGS Publications Warehouse

    Day-Lewis, F. D.; Hsieh, P.A.; Gorelick, S.M.

    2000-01-01

    A new approach is presented to condition geostatistical simulation of high-permeability zones in fractured rock to hydraulic-connection data. A simulated-annealing algorithm generates three-dimensional (3-D) realizations conditioned to borehole data, inferred hydraulic connections between packer-isolated borehole intervals, and an indicator (fracture zone or background-K bedrock) variogram model of spatial variability. We apply the method to data from the U.S. Geological Survey Mirror Lake Site in New Hampshire, where connected high-permeability fracture zones exert a strong control on fluid flow at the hundred-meter scale. Single-well hydraulic-packer tests indicate where permeable fracture zones intersect boreholes, and multiple-well pumping tests indicate the degree of hydraulic connection between boreholes. Borehole intervals connected by a fracture zone exhibit similar hydraulic responses, whereas intervals not connected by a fracture zone exhibit different responses. Our approach yields valuable insights into the 3-D geometry of fracture zones at Mirror Lake. Statistical analysis of the realizations yields maps of the probabilities of intersecting specific fracture zones with additional wells. Inverse flow modeling based on the assumption of equivalent porous media is used to estimate hydraulic conductivity and specific storage and to identify those fracture-zone geometries that are consistent with hydraulic test data.

  4. Fractal Landscape Algorithms for Environmental Simulations

    NASA Astrophysics Data System (ADS)

    Mao, H.; Moran, S.

    2014-12-01

    Natural science and geographical research are now able to take advantage of environmental simulations that more accurately test experimental hypotheses, resulting in deeper understanding. Experiments affected by the natural environment can benefit from 3D landscape simulations capable of reproducing a variety of terrains and environmental phenomena. Such simulations can employ random terrain generation algorithms that dynamically simulate environments to test specific models against a variety of factors. Through the use of noise functions such as Perlin noise, Simplex noise, and the diamond-square algorithm, computers can generate simulations that model a variety of landscapes and ecosystems. This study shows how these algorithms work together to create realistic landscapes. By seeding values into the diamond-square algorithm, one can control the shape of the landscape. Perlin noise and Simplex noise are also used to simulate moisture and temperature. The smooth gradient created by coherent noise allows more realistic landscapes to be simulated. Terrain generation algorithms can be used in environmental studies and physics simulations. Potential studies that would benefit from such simulations include the geophysical impact of flash floods or drought on a particular region and regional impacts on low-lying areas due to global warming and rising sea levels. Furthermore, terrain generation algorithms also serve as aesthetic tools to display landscapes (Google Earth) and to simulate planetary landscapes. Hence, they can be used as tools to assist science education. The algorithms used to generate these natural phenomena provide scientists a different approach to analyzing our world. The random algorithms used in terrain generation not only contribute to generating the terrains themselves, but are also capable of simulating weather patterns.

  5. Optimization of pressurized water reactor shuffling by simulated annealing with heuristics

    SciTech Connect

    Stevens, J.G.; Smith, K.S.; Rempe, K.R.; Downar, T.J.

    1995-09-01

    Simulated-annealing optimization of reactor core loading patterns is implemented with support for design heuristics during candidate pattern generation. The SIMAN optimization module uses the advanced nodal method of SIMULATE-3 and the full cross-section detail of CASMO-3 to evaluate accurately the neutronic performance of each candidate, resulting in high-quality patterns. The use of heuristics within simulated annealing is explored. Heuristics improve the consistency of optimization results for both fast- and slow-annealing runs with no penalty from the exclusion of unusual candidates. Thus, the heuristic application of designer judgment during automated pattern generation is shown to be effective. The capability of the SIMAN module to find and evaluate families of loading patterns that satisfy design constraints and have good objective performance within practical run times is demonstrated. The use of automated evaluations of successive cycles to explore multicycle effects of design decisions is discussed.

  6. Satisfiability Test with Synchronous Simulated Annealing on the Fujitsu AP1000 Massively-Parallel Multiprocessor

    NASA Technical Reports Server (NTRS)

    Sohn, Andrew; Biswas, Rupak

    1996-01-01

    Solving the hard Satisfiability Problem is time consuming even for modest-sized problem instances. Solving the Random L-SAT Problem is especially difficult due to the ratio of clauses to variables. This report presents a parallel synchronous simulated annealing method for solving the Random L-SAT Problem on a large-scale distributed-memory multiprocessor. In particular, we use a parallel synchronous simulated annealing procedure, called Generalized Speculative Computation, which guarantees the same decision sequence as sequential simulated annealing. To demonstrate the performance of the parallel method, we have selected problem instances varying in size from 100-variables/425-clauses to 5000-variables/21,250-clauses. Experimental results on the AP1000 multiprocessor indicate that our approach can satisfy 99.9 percent of the clauses while giving almost a 70-fold speedup on 500 processors.
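
    The parallel GSC procedure itself is not reproduced here; as background, the sequential simulated-annealing baseline whose decision sequence it preserves can be sketched for a random 3-SAT instance, with the number of unsatisfied clauses as the energy. The instance size below matches the smallest case mentioned above, but the cooling schedule and step count are illustrative assumptions:

    import math, random

    def sa_sat(clauses, n_vars, t0=2.0, alpha=0.995, steps=20000):
        """Sequential simulated annealing for SAT: energy = number of unsatisfied clauses."""
        assign = [random.choice([False, True]) for _ in range(n_vars)]

        def unsat(a):
            # A clause is a tuple of signed literals, e.g. (3, -7, 12) means x3 or not-x7 or x12.
            return sum(1 for c in clauses
                       if not any((a[abs(l) - 1] if l > 0 else not a[abs(l) - 1]) for l in c))

        energy, temp = unsat(assign), t0
        for _ in range(steps):
            v = random.randrange(n_vars)          # propose flipping one variable
            assign[v] = not assign[v]
            new_energy = unsat(assign)
            delta = new_energy - energy
            if delta <= 0 or random.random() < math.exp(-delta / temp):
                energy = new_energy               # accept the move
            else:
                assign[v] = not assign[v]         # reject: undo the flip
            temp *= alpha                         # geometric cooling (illustrative schedule)
            if energy == 0:
                break
        return assign, energy

    # Random 3-SAT instance with clause/variable ratio 4.25, as in the 100/425 case above.
    n = 100
    cls = [tuple(random.choice([1, -1]) * v for v in random.sample(range(1, n + 1), 3))
           for _ in range(425)]
    print(sa_sat(cls, n)[1], "unsatisfied clauses remain")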

  7. A Simulated Annealing Methodology to Multiproduct Capacitated Facility Location with Stochastic Demand

    PubMed Central

    Xiang, Hui; Ye, Yong; Ni, Linglin

    2015-01-01

    A stochastic multiproduct capacitated facility location problem involving a single supplier and multiple customers is investigated. Due to the stochastic demands, a reasonable amount of safety stock must be kept in the facilities to achieve suitable service levels, which results in increased inventory cost. Based on the assumption that all stochastic demands are normally distributed, a nonlinear mixed-integer programming model is proposed, whose objective is to minimize the total cost, including transportation cost, inventory cost, operation cost, and setup cost. A combined simulated annealing (CSA) algorithm is presented to solve the model, in which the outer layer subalgorithm optimizes the facility location decision and the inner layer subalgorithm optimizes the demand allocation based on the determined facility location decision. The results obtained with this approach show that the CSA is a robust and practical approach for solving a multiple-product problem, generating suboptimal facility location decisions and inventory policies. Meanwhile, we also found that the transportation cost and the demand deviation have the strongest influence on the optimal decision compared to the other factors. PMID:25834839

  8. Vectorized algorithms for spiking neural network simulation.

    PubMed

    Brette, Romain; Goodman, Dan F M

    2011-06-01

    High-level languages (Matlab, Python) are popular in neuroscience because they are flexible and accelerate development. However, for simulating spiking neural networks, the cost of interpretation is a bottleneck. We describe a set of algorithms to simulate large spiking neural networks efficiently with high-level languages using vector-based operations. These algorithms constitute the core of Brian, a spiking neural network simulator written in the Python language. Vectorized simulation makes it possible to combine the flexibility of high-level languages with the computational efficiency usually associated with compiled languages. PMID:21395437
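
    Brian's internal algorithms are considerably richer than this, but the core vectorization idea can be illustrated with NumPy: the per-time-step state update, threshold test and reset are applied to all neurons at once rather than in a Python loop over neurons. The leaky integrate-and-fire model and all parameter values below are illustrative assumptions, not Brian's API:

    import numpy as np

    def simulate_lif(n=1000, steps=1000, dt=0.1, tau=10.0, v_th=1.0, v_reset=0.0):
        """Vectorized leaky integrate-and-fire network: all neurons updated per time step."""
        v = np.zeros(n)                                # membrane potentials
        spikes = []
        for _ in range(steps):
            i_ext = 1.2 + 0.1 * np.random.randn(n)     # noisy constant drive (illustrative)
            v += dt * (-v + i_ext) / tau               # leaky integration, one vector operation
            fired = np.nonzero(v >= v_th)[0]           # indices of neurons that spiked
            v[fired] = v_reset                         # vectorized reset
            spikes.append(fired)
        return spikes

    spike_trains = simulate_lif()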

  9. Stochastic annealing simulation of copper under neutron irradiation

    SciTech Connect

    Heinisch, H.L.; Singh, B.N.

    1998-03-01

    This report is a summary of a presentation made at ICFRM-8 on computer simulations of defect accumulation during irradiation of copper to low doses at room temperature. The simulation results are in good agreement with experimental data on defect cluster densities in copper irradiated in RTNS-II.

  10. Obstacle Bypassing in Optimal Ship Routing Using Simulated Annealing

    SciTech Connect

    Kosmas, O. T.; Vlachos, D. S.; Simos, T. E.

    2008-11-06

    In this paper we discuss a variation on the problem of finding the shortest path between two points in optimal ship routing, where obstacles are not allowed to be crossed by the path. Our main goal is the construction of an appropriate algorithm, based on earlier work, for computing the shortest path between two points in the plane that avoids a set of polygonal obstacles.

  11. Empirical study of parallel LRU simulation algorithms

    NASA Technical Reports Server (NTRS)

    Carr, Eric; Nicol, David M.

    1994-01-01

    This paper reports on the performance of five parallel algorithms for simulating a fully associative cache operating under the LRU (Least-Recently-Used) replacement policy. Three of the algorithms are SIMD, and are implemented on the MasPar MP-2 architecture. Two other algorithms are parallelizations of an efficient serial algorithm on the Intel Paragon. One SIMD algorithm is quite simple, but its cost is linear in the cache size. The two other SIMD algorithms are more complex, but have costs that are independent of the cache size. Both the second and third SIMD algorithms compute all stack distances; the second SIMD algorithm is completely general, whereas the third SIMD algorithm presumes and takes advantage of bounds on the range of reference tags. Both MIMD algorithms implemented on the Paragon are general and compute all stack distances; they differ in one step that may affect their respective scalability. We assess the strengths and weaknesses of these algorithms as a function of problem size and characteristics, and compare their performance on traces derived from execution of three SPEC benchmark programs.
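
    For readers unfamiliar with stack distances: a reference hits in a fully associative LRU cache of capacity C exactly when its stack distance is at most C, so one pass over the trace characterizes every cache size at once. A simple serial sketch follows (the paper's algorithms parallelize this computation; the trace below is made up):

    def stack_distances(trace):
        """For each reference, return its LRU stack distance (inf for first-time references)."""
        stack, dists = [], []
        for tag in trace:
            if tag in stack:
                d = stack.index(tag) + 1      # depth from the top of the LRU stack
                stack.remove(tag)
            else:
                d = float("inf")              # cold miss
            stack.insert(0, tag)              # most recently used goes on top
            dists.append(d)
        return dists

    trace = ["a", "b", "c", "a", "b", "d", "a"]
    print(stack_distances(trace))   # [inf, inf, inf, 3, 3, inf, 3]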

  12. Computer-Assisted Scheduling of Army Unit Training: An Application of Simulated Annealing.

    ERIC Educational Resources Information Center

    Hart, Roland J.; Goehring, Dwight J.

    This report of an ongoing research project intended to provide computer assistance to Army units for the scheduling of training focuses on the feasibility of simulated annealing, a heuristic approach for solving scheduling problems. Following an executive summary and brief introduction, the document is divided into three sections. First, the Army…

  13. Folding simulations of gramicidin A into the beta-helix conformations: Simulated annealing molecular dynamics study.

    PubMed

    Mori, Takaharu; Okamoto, Yuko

    2009-10-28

    Gramicidin A is a linear hydrophobic 15-residue peptide which consists of alternating D- and L-amino acids and forms a unique tertiary structure, called the beta(6.3)-helix, to act as a cation-selective ion channel under natural conditions. In order to investigate the intrinsic ability of the gramicidin A monomer to form secondary structures, we performed the folding simulation of gramicidin A using a simulated annealing molecular dynamics (MD) method in vacuum mimicking the low-dielectric, homogeneous membrane environment. The initial conformation was a fully extended one. From the 200 different MD runs, we obtained a right-handed beta(4.4)-helix as the lowest-potential-energy structure, and left-handed beta(4.4)-helix, right-handed and left-handed beta(6.3)-helix as local-minimum energy states. These results are in accord with those of experiments on gramicidin A in homogeneous organic solvents. Our simulations showed a slight right-hand sense in the lower-energy conformations and a pronounced beta-sheet-forming tendency throughout almost the entire sequence. In order to examine the stability of the obtained right-handed beta(6.3)-helix and beta(4.4)-helix structures in a more realistic membrane environment, we have also performed all-atom MD simulations in explicit water, ion, and lipid molecules, starting from these beta-helix structures. The results suggested that the beta(6.3)-helix is more stable than the beta(4.4)-helix in the inhomogeneous, explicit membrane environment, where the pore water and the hydrogen bonds between Trp side-chains and lipid-head groups play a role in further stabilizing the beta(6.3)-helix conformation.

  14. Folding simulations of gramicidin A into the β-helix conformations: Simulated annealing molecular dynamics study

    NASA Astrophysics Data System (ADS)

    Mori, Takaharu; Okamoto, Yuko

    2009-10-01

    Gramicidin A is a linear hydrophobic 15-residue peptide which consists of alternating D- and L-amino acids and forms a unique tertiary structure, called the β6.3-helix, to act as a cation-selective ion channel under natural conditions. In order to investigate the intrinsic ability of the gramicidin A monomer to form secondary structures, we performed the folding simulation of gramicidin A using a simulated annealing molecular dynamics (MD) method in vacuum mimicking the low-dielectric, homogeneous membrane environment. The initial conformation was a fully extended one. From the 200 different MD runs, we obtained a right-handed β4.4-helix as the lowest-potential-energy structure, and left-handed β4.4-helix, right-handed and left-handed β6.3-helix as local-minimum energy states. These results are in accord with those of experiments on gramicidin A in homogeneous organic solvents. Our simulations showed a slight right-hand sense in the lower-energy conformations and a pronounced β-sheet-forming tendency throughout almost the entire sequence. In order to examine the stability of the obtained right-handed β6.3-helix and β4.4-helix structures in a more realistic membrane environment, we have also performed all-atom MD simulations in explicit water, ion, and lipid molecules, starting from these β-helix structures. The results suggested that the β6.3-helix is more stable than the β4.4-helix in the inhomogeneous, explicit membrane environment, where the pore water and the hydrogen bonds between Trp side-chains and lipid-head groups play a role in further stabilizing the β6.3-helix conformation.

  15. Joint Optimization of Vertical Component Gravity and Seismic P-wave First Arrivals by Simulated Annealing

    NASA Astrophysics Data System (ADS)

    Louie, J. N.; Basler-Reeder, K.; Kent, G. M.; Pullammanappallil, S. K.

    2015-12-01

    Simultaneous joint seismic-gravity optimization improves P-wave velocity models in areas with sharp lateral velocity contrasts. Optimization is achieved using simulated annealing, a metaheuristic global optimization algorithm that does not require an accurate initial model. Balancing the seismic-gravity objective function is accomplished by a novel approach based on analysis of Pareto charts. Gravity modeling uses a newly developed convolution algorithm, while seismic modeling utilizes the highly efficient Vidale eikonal equation traveltime generation technique. Synthetic tests show that joint optimization improves velocity model accuracy and provides velocity control below the deepest headwave raypath. Detailed first arrival picking followed by trial velocity modeling remediates inconsistent data. We use a set of highly refined first arrival picks to compare results of a convergent joint seismic-gravity optimization to the Plotrefa™ and SeisOpt® Pro™ velocity modeling packages. Plotrefa™ uses a nonlinear least squares approach that is initial model dependent and produces shallow velocity artifacts. SeisOpt® Pro™ utilizes the simulated annealing algorithm and is limited to depths above the deepest raypath. Joint optimization increases the depth of constrained velocities, improving reflector coherency at depth. Kirchhoff prestack depth migrations reveal that joint optimization ameliorates shallow velocity artifacts caused by limitations in refraction ray coverage. Seismic and gravity data from the San Emidio Geothermal field of the northwest Basin and Range province demonstrate that joint optimization changes interpretation outcomes. The prior shallow-valley interpretation gives way to a deep valley model, while shallow antiformal reflectors that could have been interpreted as antiformal folds are flattened. Furthermore, joint optimization provides a clearer image of the rangefront fault. This technique can readily be applied to existing datasets and could

  16. Finding Low-Temperature States with Parallel Tempering, Simulated Annealing and Simple Monte Carlo

    NASA Astrophysics Data System (ADS)

    Moreno, J. J.; Katzgraber, Helmut G.; Hartmann, Alexander K.

    Monte Carlo simulation techniques, like simulated annealing and parallel tempering, are often used to evaluate low-temperature properties and find ground states of disordered systems. Here we compare these methods using direct calculations of ground states for three-dimensional Ising diluted antiferromagnets in a field (DAFF) and three-dimensional Ising spin glasses (ISG). For the DAFF, we find that, with respect to obtaining ground states, parallel tempering is superior to simple Monte Carlo and to simulated annealing. However, equilibration becomes more difficult with increasing magnitude of the externally applied field. For the ISG with bimodal couplings, which exhibits a high degeneracy, we conclude that finding true ground states is easy for small systems, as is already known. But finding each of the degenerate ground states with the same probability (or frequency), as required by Boltzmann statistics, is considerably harder and becomes almost impossible for larger systems.
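
    As background for the comparison above, a minimal generic parallel-tempering loop (Metropolis moves within each replica plus replica-exchange swaps between neighbouring temperatures) can be sketched as follows. The toy objective and temperature ladder are illustrative assumptions, not the DAFF or ISG models studied in the paper:

    import math, random

    def parallel_tempering(energy, propose, init, temps, sweeps=5000):
        """Minimal parallel tempering: one replica per temperature plus replica-exchange moves."""
        replicas = [init() for _ in temps]
        energies = [energy(s) for s in replicas]
        for _ in range(sweeps):
            # Metropolis move within each replica at its own temperature.
            for i, t in enumerate(temps):
                cand = propose(replicas[i])
                de = energy(cand) - energies[i]
                if de <= 0 or random.random() < math.exp(-de / t):
                    replicas[i], energies[i] = cand, energies[i] + de
            # Replica-exchange move between neighbouring temperatures.
            for i in range(len(temps) - 1):
                delta = (1.0 / temps[i] - 1.0 / temps[i + 1]) * (energies[i] - energies[i + 1])
                if delta >= 0 or random.random() < math.exp(delta):
                    replicas[i], replicas[i + 1] = replicas[i + 1], replicas[i]
                    energies[i], energies[i + 1] = energies[i + 1], energies[i]
        i_best = min(range(len(temps)), key=energies.__getitem__)
        return replicas[i_best], energies[i_best]

    # Toy usage: find the minimum of a rugged one-dimensional function over the integers 0..99.
    f = lambda x: (x - 70) ** 2 / 100.0 + 3.0 * math.sin(x)
    state, e = parallel_tempering(
        energy=f,
        propose=lambda s: max(0, min(99, s + random.choice([-3, -2, -1, 1, 2, 3]))),
        init=lambda: random.randrange(100),
        temps=[0.1, 0.3, 1.0, 3.0])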

  17. Genetic Algorithms for Digital Quantum Simulations

    NASA Astrophysics Data System (ADS)

    Las Heras, U.; Alvarez-Rodriguez, U.; Solano, E.; Sanz, M.

    2016-06-01

    We propose genetic algorithms, which are robust optimization techniques inspired by natural selection, to enhance the versatility of digital quantum simulations. In this sense, we show that genetic algorithms can be employed to increase the fidelity and optimize the resource requirements of digital quantum simulation protocols while adapting naturally to the experimental constraints. Furthermore, this method allows us to reduce not only digital errors but also experimental errors in quantum gates. Indeed, by adding ancillary qubits, we design a modular gate made out of imperfect gates, whose fidelity is larger than the fidelity of any of the constituent gates. Finally, we prove that the proposed modular gates are resilient against different gate errors.

  18. Partially linearized algorithms in gyrokinetic particle simulation

    SciTech Connect

    Dimits, A.M.; Lee, W.W.

    1990-10-01

    In this paper, particle simulation algorithms with time-varying weights for the gyrokinetic Vlasov-Poisson system have been developed. The primary purpose is to use them for the removal of the selected nonlinearities in the simulation of gradient-driven microturbulence so that the relative importance of the various nonlinear effects can be assessed. It is hoped that the use of these procedures will result in a better understanding of the transport mechanisms and scaling in tokamaks. Another application of these algorithms is for the improvement of the numerical properties of the simulation plasma. For instance, implementations of such algorithms (1) enable us to suppress the intrinsic numerical noise in the simulation, and (2) also make it possible to regulate the weights of the fast-moving particles and, in turn, to eliminate the associated high frequency oscillations. Examples of their application to drift-type instabilities in slab geometry are given. We note that the work reported here represents the first successful use of the weighted algorithms in particle codes for the nonlinear simulation of plasmas.

  19. Extending fragment-based free energy calculations with library Monte Carlo simulation: annealing in interaction space.

    PubMed

    Lettieri, Steven; Mamonov, Artem B; Zuckerman, Daniel M

    2011-04-30

    Pre-calculated libraries of molecular fragment configurations have previously been used as a basis for both equilibrium sampling (via library-based Monte Carlo) and for obtaining absolute free energies using a polymer-growth formalism. Here, we combine the two approaches to extend the size of systems for which free energies can be calculated. We study a series of all-atom poly-alanine systems in a simple dielectric solvent and find that precise free energies can be obtained rapidly. For instance, for 12 residues, less than an hour of single-processor time is required. The combined approach is formally equivalent to the annealed importance sampling algorithm; instead of annealing by decreasing temperature, however, interactions among fragments are gradually added as the molecule is grown. We discuss implications for future binding affinity calculations in which a ligand is grown into a binding site.

  20. Experimental and Numerical Simulations of Phase Transformations Occurring During Continuous Annealing of DP Steel Strips

    NASA Astrophysics Data System (ADS)

    Wrożyna, Andrzej; Pernach, Monika; Kuziak, Roman; Pietrzyk, Maciej

    2016-04-01

    Due to their exceptional strength properties combined with good workability, Advanced High-Strength Steels (AHSS) are commonly used in the automotive industry. Manufacturing of these steels is a complex process which requires precise control of technological parameters during thermo-mechanical treatment. Design of these processes can be significantly improved by numerical models of phase transformations. The objective of the paper was to evaluate the predictive capabilities of such models with respect to their applicability in simulating thermal cycles for AHSS. Two models were considered. The former was an upgrade of the JMAK equation while the latter was an upgrade of the Leblond model. The models can be applied to any AHSS though the examples quoted in the paper refer to the Dual Phase (DP) steel. Three series of experimental simulations were performed. The first included various thermal cycles going beyond limitations of the continuous annealing lines. The objective was to validate the models' behavior in more complex cooling conditions. The second set of tests included experimental simulations of the thermal cycle characteristic for the continuous annealing lines. The capability of the models to properly describe phase transformations in this process was evaluated. The third set included data from the industrial continuous annealing line. Validation and verification of the models confirmed their good predictive capabilities. Since it does not require application of the additivity rule, the upgrade of the Leblond model was selected as the better one for simulation of industrial processes in AHSS production.
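
    For reference, the classical (non-upgraded) JMAK relation underlying the first model gives the transformed fraction during isothermal holding as X(t) = 1 - exp(-k t^n). A tiny sketch with purely illustrative rate constants (the paper's upgraded, non-isothermal formulations are not reproduced here):

    import math

    def jmak_fraction(t, k, n):
        """Classical JMAK/Avrami transformed fraction X(t) = 1 - exp(-k * t**n), isothermal case."""
        return 1.0 - math.exp(-k * t ** n)

    # Illustrative values only: rate constant k, Avrami exponent n, holding times in seconds.
    for t in (1, 5, 10, 30, 60):
        print(t, round(jmak_fraction(t, k=0.05, n=2.5), 3))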

  1. A hierarchical exact accelerated stochastic simulation algorithm

    PubMed Central

    Orendorff, David; Mjolsness, Eric

    2012-01-01

    A new algorithm, “HiER-leap” (hierarchical exact reaction-leaping), is derived which improves on the computational properties of the ER-leap algorithm for exact accelerated simulation of stochastic chemical kinetics. Unlike ER-leap, HiER-leap utilizes a hierarchical or divide-and-conquer organization of reaction channels into tightly coupled “blocks” and is thereby able to speed up systems with many reaction channels. Like ER-leap, HiER-leap is based on the use of upper and lower bounds on the reaction propensities to define a rejection sampling algorithm with inexpensive early rejection and acceptance steps. But in HiER-leap, large portions of intra-block sampling may be done in parallel. An accept/reject step is used to synchronize across blocks. This method scales well when many reaction channels are present and has desirable asymptotic properties. The algorithm is exact, parallelizable and achieves a significant speedup over the stochastic simulation algorithm and ER-leap on certain problems. This algorithm offers a potentially important step towards efficient in silico modeling of entire organisms. PMID:23231214
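
    HiER-leap itself is not reproduced here; as background, the exact stochastic simulation algorithm (Gillespie's direct method) that ER-leap and HiER-leap accelerate can be sketched in a few lines, shown for a hypothetical two-reaction system:

    import random

    def gillespie_direct(x, reactions, t_end):
        """Gillespie's direct SSA. x: dict of species counts.
        reactions: list of (propensity_fn, stoichiometry dict) pairs."""
        t, history = 0.0, [(0.0, dict(x))]
        while True:
            props = [a(x) for a, _ in reactions]
            a0 = sum(props)
            if a0 == 0.0:
                break                               # no reaction can fire
            tau = random.expovariate(a0)            # exponential waiting time
            if t + tau > t_end:
                break
            t += tau
            r = random.uniform(0.0, a0)             # choose which reaction fires
            acc = 0.0
            for p, (_, stoich) in zip(props, reactions):
                acc += p
                if r <= acc:
                    for species, change in stoich.items():
                        x[species] += change
                    break
            history.append((t, dict(x)))
        return history

    # Hypothetical two-reaction system: A -> B (rate 1.0 per A), B -> A (rate 0.5 per B).
    state = {"A": 100, "B": 0}
    rxns = [(lambda s: 1.0 * s["A"], {"A": -1, "B": +1}),
            (lambda s: 0.5 * s["B"], {"A": +1, "B": -1})]
    traj = gillespie_direct(state, rxns, t_end=5.0)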

  2. Acoustic simulation in architecture with parallel algorithm

    NASA Astrophysics Data System (ADS)

    Li, Xiaohong; Zhang, Xinrong; Li, Dan

    2004-03-01

    To address the complexity of architectural environments and the need for real-time simulation of architectural acoustics, a parallel radiosity algorithm was developed. The distribution of sound energy in the scene is solved with this method. The impulse responses between sources and receivers are then calculated for each frequency segment using multiple processes and combined into the whole frequency response. Numerical experiments show that the parallel algorithm can improve the efficiency of acoustic simulation for complex scenes.

  3. Optimizing the natural connectivity of scale-free networks using simulated annealing

    NASA Astrophysics Data System (ADS)

    Duan, Boping; Liu, Jing; Tang, Xianglong

    2016-09-01

    In real-world networks, the path between two nodes always plays a significant role in the fields of communication or transportation. In some cases, when one path fails, the two nodes cannot communicate any more. Thus, it is necessary to increase the number of alternative paths between nodes. In recent work, Wu et al. (2011) proposed the natural connectivity as a novel robustness measure of complex networks. The natural connectivity considers the redundancy of alternative paths in a network by computing the number of closed paths of all lengths. To enhance the robustness of networks in terms of the natural connectivity, in this paper, we propose a simulated annealing method to optimize the natural connectivity of scale-free networks without changing the degree distribution. The experimental results show that the simulated annealing method clearly outperforms other local search methods.
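
    A toy sketch of the idea follows, assuming NumPy and NetworkX are available: the natural connectivity is computed from the adjacency-matrix eigenvalues, and degree-preserving double-edge swaps are accepted with a Metropolis rule so the degree distribution is left unchanged. The graph size, temperature schedule and step count are illustrative, not the paper's settings:

    import math, random
    import numpy as np
    import networkx as nx

    def natural_connectivity(g):
        """ln( (1/N) * sum_i exp(lambda_i) ) over the eigenvalues of the adjacency matrix."""
        lam = np.linalg.eigvalsh(nx.to_numpy_array(g))
        return float(np.log(np.mean(np.exp(lam))))

    def anneal_robustness(g, t0=0.05, alpha=0.999, steps=2000):
        """Degree-preserving double-edge swaps accepted by a simulated-annealing rule."""
        current, temp = natural_connectivity(g), t0
        for _ in range(steps):
            (a, b), (c, d) = random.sample(list(g.edges()), 2)
            if len({a, b, c, d}) < 4 or g.has_edge(a, d) or g.has_edge(c, b):
                continue                                  # swap would create a multi-edge or loop
            g.remove_edges_from([(a, b), (c, d)])
            g.add_edges_from([(a, d), (c, b)])
            new = natural_connectivity(g)
            if new >= current or random.random() < math.exp((new - current) / temp):
                current = new                             # accept (degree sequence unchanged)
            else:
                g.remove_edges_from([(a, d), (c, b)])     # reject: undo the swap
                g.add_edges_from([(a, b), (c, d)])
            temp *= alpha
        return g, current

    g0 = nx.barabasi_albert_graph(60, 2, seed=1)          # small scale-free test network
    print(anneal_robustness(g0)[1])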

  4. The systems biology simulation core algorithm

    PubMed Central

    2013-01-01

    Background With the increasing availability of high dimensional time course data for metabolites, genes, and fluxes, the mathematical description of dynamical systems has become an essential aspect of research in systems biology. Models are often encoded in formats such as SBML, whose structure is very complex and difficult to evaluate due to many special cases. Results This article describes an efficient algorithm to solve SBML models that are interpreted in terms of ordinary differential equations. We begin our consideration with a formal representation of the mathematical form of the models and explain all parts of the algorithm in detail, including several preprocessing steps. We provide a flexible reference implementation as part of the Systems Biology Simulation Core Library, a community-driven project providing a large collection of numerical solvers and a sophisticated interface hierarchy for the definition of custom differential equation systems. To demonstrate the capabilities of the new algorithm, it has been tested with the entire SBML Test Suite and all models of BioModels Database. Conclusions The formal description of the mathematics behind the SBML format facilitates the implementation of the algorithm within specifically tailored programs. The reference implementation can be used as a simulation backend for Java™-based programs. Source code, binaries, and documentation can be freely obtained under the terms of the LGPL version 3 from http://simulation-core.sourceforge.net. Feature requests, bug reports, contributions, or any further discussion can be directed to the mailing list simulation-core-development@lists.sourceforge.net. PMID:23826941

  5. Minimizing distortion and internal forces in truss structures by simulated annealing

    NASA Technical Reports Server (NTRS)

    Kincaid, Rex K.; Padula, Sharon L.

    1990-01-01

    Inaccuracies in the length of members and the diameters of joints of large space structures may produce unacceptable levels of surface distortion and internal forces. Here, two discrete optimization problems are formulated, one to minimize surface distortion (DSQRMS) and the other to minimize internal forces (FSQRMS). Both of these problems are based on the influence matrices generated by a small-deformation linear analysis. Good solutions are obtained for DSQRMS and FSQRMS through the use of a simulated annealing heuristic.

  6. The 23rd Optoelectronic Workshop: Optical System Assessment for Design and Simulated Annealing

    NASA Astrophysics Data System (ADS)

    Forbes, Gregory; Spande, Robert

    1990-08-01

    This workshop on Optical System Assessment for Design and Simulated Annealing represents the twenty-third of a series of intensive academic/government interactions in the field of advanced electro-optics, as part of the Army sponsored University Research Initiative. By documenting the associated technology status and dialogue it is hoped that this baseline will serve all interested parties towards providing a solution to high priority Army requirements.

  7. Annealing of ion irradiated high T{sub C} Josephson junctions studied by numerical simulations

    SciTech Connect

    Sirena, M.; Matzen, S.; Bergeal, N.; Lesueur, J.; Faini, G.; Bernard, R.; Briatico, J.; Crete, D. G.

    2009-01-15

    Recently, annealing of ion irradiated high T{sub c} Josephson junctions (JJs) has been studied experimentally with a view to improving their reproducibility. Here we present numerical simulations based on random walk and Monte Carlo calculations of the evolution of JJ characteristics such as the transition temperature T{sub c}{sup '} and its spread {delta}T{sub c}{sup '}, and compare them with experimental results on junctions irradiated with 100 and 150 keV oxygen ions, and annealed at low temperatures (below 80 deg. C). We have successfully used a vacancy-interstitial annihilation mechanism to describe the evolution of the T{sub c}{sup '} and the homogeneity of a JJ array, analyzing the evolution of the mean value of the defect density and its distribution width. The annealing first increases the spread in T{sub c}{sup '} for short annealing times due to the stochastic nature of the process, but then tends to reduce it for longer times, which is interesting for technological applications.

  8. Improve earthquake hypocenter using adaptive simulated annealing inversion in regional tectonic, volcano tectonic, and geothermal observation

    SciTech Connect

    Ry, Rexha Verdhora; Nugraha, Andri Dian

    2015-04-24

    Earthquake observation is widely used to monitor tectonic activity, and also at local scales such as volcano-tectonic and geothermal activity observation. Determining precise hypocenter locations is necessary; the process involves finding a hypocenter location that minimizes the error between the observed and calculated travel times. When solving this nonlinear inverse problem, the simulated annealing inversion method can be applied as a global optimization technique whose convergence is independent of the initial model. In this study, we developed our own program code by applying adaptive simulated annealing inversion in the Matlab environment. We applied this method to determine earthquake hypocenters using several data cases: regional tectonic, volcano tectonic, and a geothermal field. The travel times were calculated using a ray-tracing shooting method. We then compared the results with those of Geiger's method to analyze reliability. Our results show that the hypocenter locations have smaller RMS errors than Geiger's results, which can be statistically associated with better solutions. The earthquake hypocenters also correlate well with the geological structure in the study area. We recommend using adaptive simulated annealing inversion to relocate hypocenters in order to obtain precise and accurate earthquake locations.
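
    The sketch below illustrates the general approach in a deliberately simplified setting: a homogeneous velocity model with straight-ray travel times and a plain geometric cooling schedule, rather than the paper's ray-tracing shooting method and adaptive simulated annealing. The station geometry, velocity, bounds and annealing parameters are illustrative assumptions:

    import math, random

    def locate_hypocenter(stations, t_obs, v=5.0, t0=1.0, alpha=0.999, steps=20000,
                          bounds=((-50, 50), (-50, 50), (0, 30))):
        """Simulated-annealing hypocenter search with a homogeneous velocity model (km, km/s, s).
        stations: list of (x, y, z); t_obs: observed arrival times; the unknown origin time is
        handled by demeaning the residuals."""
        def rms(h):
            t_calc = [math.dist(h, s) / v for s in stations]
            shift = sum(o - c for o, c in zip(t_obs, t_calc)) / len(t_obs)   # best origin-time shift
            return math.sqrt(sum((o - c - shift) ** 2 for o, c in zip(t_obs, t_calc)) / len(t_obs))

        h = [random.uniform(lo, hi) for lo, hi in bounds]
        err, temp = rms(h), t0
        best, best_err = list(h), err
        for _ in range(steps):
            cand = [min(hi, max(lo, c + random.gauss(0, 2.0)))
                    for c, (lo, hi) in zip(h, bounds)]
            e = rms(cand)
            if e <= err or random.random() < math.exp((err - e) / temp):
                h, err = cand, e
                if err < best_err:
                    best, best_err = list(h), err
            temp *= alpha
        return best, best_err

    # Synthetic example: four stations and travel times from a made-up source at (10, -5, 8) km.
    sta = [(0, 0, 0), (30, 0, 0), (0, 30, 0), (30, 30, 0)]
    obs = [math.dist((10.0, -5.0, 8.0), s) / 5.0 for s in sta]
    print(locate_hypocenter(sta, obs))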

  9. Validation of Sensor-Directed Spatial Simulated Annealing Soil Sampling Strategy.

    PubMed

    Scudiero, Elia; Lesch, Scott M; Corwin, Dennis L

    2016-07-01

    Soil spatial variability has a profound influence on most agronomic and environmental processes at field and landscape scales, including site-specific management, vadose zone hydrology and transport, and soil quality. Mobile sensors are a practical means of mapping spatial variability because their measurements serve as a proxy for many soil properties, provided a sensor-soil calibration is conducted. A viable means of calibrating sensor measurements over soil properties is through linear regression modeling of sensor and target property data. In the present study, two sensor-directed, model-based, sampling scheme delineation methods were compared to validate recent applications of soil apparent electrical conductivity (EC)-directed spatial simulated annealing against the more established EC-directed response surface sampling design (RSSD) approach. A 6.8-ha study area near San Jacinto, CA, was surveyed for EC, and 30 soil sampling locations per sampling strategy were selected. Spatial simulated annealing and RSSD were compared for sensor calibration to a target soil property (i.e., salinity) and for evenness of spatial coverage of the study area, which is beneficial for mapping nontarget soil properties (i.e., those not correlated with EC). The results indicate that the linear modeling EC-salinity calibrations obtained from the two sampling schemes provided salinity maps characterized by similar errors. The maps of nontarget soil properties show similar errors across sampling strategies. The Spatial Simulated Annealing methodology is, therefore, validated, and its use in agronomic and environmental soil science applications is justified. PMID:27380070

  10. Image-Based Airborne Sensors: A Combined Approach for Spectral Signatures Classification through Deterministic Simulated Annealing

    PubMed Central

    Guijarro, María; Pajares, Gonzalo; Herrera, P. Javier

    2009-01-01

    The increasing capability of high-resolution airborne imaging sensors, including those on board Unmanned Aerial Vehicles, demands automatic solutions for processing, either on-line or off-line, the huge amounts of image data sensed during the flights. The classification of natural spectral signatures in images is one potential application. The current trend in classification is towards the combination of simple classifiers. In this paper we propose a combined strategy based on the Deterministic Simulated Annealing (DSA) framework. The simple classifiers used are the well tested supervised parametric Bayesian estimator and the Fuzzy Clustering. The DSA is an optimization approach, which minimizes an energy function. The main contribution of DSA is its ability to avoid local minima during the optimization process thanks to the annealing scheme. It outperforms the simple classifiers used in the combination and some combined strategies, including a scheme based on fuzzy cognitive maps and an optimization approach based on the Hopfield neural network paradigm. PMID:22399989

  11. Computer simulations of randomly branching polymers: annealed versus quenched branching structures

    NASA Astrophysics Data System (ADS)

    Rosa, Angelo; Everaers, Ralf

    2016-08-01

    We present computer simulations of three systems of randomly branching polymers in d = 3 dimensions: ideal trees and self-avoiding trees with annealed and quenched connectivities. In all cases, we performed a detailed analysis of tree connectivities, spatial conformations and statistical properties of linear paths on trees, and compare the results to the corresponding predictions of Flory theory. We confirm that, overall, the theory predicts correctly that trees with quenched ideal connectivity exhibit less overall swelling in good solvent than corresponding trees with annealed connectivity even though they are more strongly stretched on the path level. At the same time, we emphasize the inadequacy of the Flory theory in predicting the behaviour of other, and equally relevant, observables like contact probabilities between tree nodes. We show, then, that contact probabilities can be aptly characterized by introducing a novel critical exponent, θ_path, which accounts for how they decay as a function of the node-to-node path distance on the tree.

  12. Computational algorithms for simulations in atmospheric optics.

    PubMed

    Konyaev, P A; Lukin, V P

    2016-04-20

    A computer simulation technique for atmospheric and adaptive optics based on parallel programing is discussed. A parallel propagation algorithm is designed and a modified spectral-phase method for computer generation of 2D time-variant random fields is developed. Temporal power spectra of Laguerre-Gaussian beam fluctuations are considered as an example to illustrate the applications discussed. Implementation of the proposed algorithms using Intel MKL and IPP libraries and NVIDIA CUDA technology is shown to be very fast and accurate. The hardware system for the computer simulation is an off-the-shelf desktop with an Intel Core i7-4790K CPU operating at a turbo-speed frequency up to 5 GHz and an NVIDIA GeForce GTX-960 graphics accelerator with 1024 1.5 GHz processors.

  14. Genetic Algorithms for Digital Quantum Simulations.

    PubMed

    Las Heras, U; Alvarez-Rodriguez, U; Solano, E; Sanz, M

    2016-06-10

    We propose genetic algorithms, which are robust optimization techniques inspired by natural selection, to enhance the versatility of digital quantum simulations. In this sense, we show that genetic algorithms can be employed to increase the fidelity and optimize the resource requirements of digital quantum simulation protocols while adapting naturally to the experimental constraints. Furthermore, this method allows us to reduce not only digital errors but also experimental errors in quantum gates. Indeed, by adding ancillary qubits, we design a modular gate made out of imperfect gates, whose fidelity is larger than the fidelity of any of the constituent gates. Finally, we prove that the proposed modular gates are resilient against different gate errors. PMID:27341220

  15. Coupled and decoupled algorithms for semiconductor simulation

    NASA Astrophysics Data System (ADS)

    Kerkhoven, T.

    1985-12-01

    Algorithms for the numerical simulation of the steady-state behavior of MOSFETs are analyzed. The discretization and linearization of the nonlinear partial differential equations as well as the solution of the linearized systems are treated systematically. Thus we generate equations which do not exceed the floating-point representations of modern computers and for which charge is conserved while appropriate maximum principles are preserved. A typical decoupling algorithm for the solution of the system of PDEs is analyzed as a fixed-point mapping T. Bounds exist on the components of the solution and, for sufficiently regular boundary geometries, higher regularity of the derivatives as well. T is a contraction for sufficiently small variation of the boundary data. It therefore follows that under those conditions the decoupling algorithm converges to a unique fixed point which is the weak solution to the system of PDEs in divergence form. A discrete algorithm which corresponds to a possible computer code is shown to converge if the discretization of the PDEs preserves the regularity properties mentioned above. A stronger convergence result is obtained by employing the higher regularity for enforcing the weak formulations of the PDEs more strongly. The execution speed of a modification of Newton's method, two versions of a decoupling approach and a new mixed solution algorithm are compared for a range of problems. The asymptotic complexity of the solution of the linear systems is identical for these approaches in the context of sparse direct solvers if the ordering is done in an optimal way.

  16. Displacement cascades and defect annealing in tungsten, Part II: Object kinetic Monte Carlo Simulation of Tungsten Cascade Aging

    SciTech Connect

    Nandipati, Giridhar; Setyawan, Wahyu; Heinisch, Howard L.; Roche, Kenneth J.; Kurtz, Richard J.; Wirth, Brian D.

    2015-07-01

    The results of object kinetic Monte Carlo (OKMC) simulations of the annealing of primary cascade damage in bulk tungsten using a comprehensive database of cascades obtained from molecular dynamics (Setyawan et al.) are described as a function of primary knock-on atom (PKA) energy at temperatures of 300, 1025 and 2050 K. An increase in SIA clustering coupled with a decrease in vacancy clustering with increasing temperature, in addition to the disparate mobilities of SIAs versus vacancies, causes an interesting effect of temperature on cascade annealing. The annealing efficiency (the ratio of the number of defects after and before annealing) exhibits an inverse U-shape curve as a function of temperature. The capabilities of the newly developed OKMC code KSOME (kinetic simulations of microstructure evolution) used to carry out these simulations are described.

  17. Displacement cascades and defect annealing in tungsten, Part II: Object kinetic Monte Carlo simulation of tungsten cascade aging

    NASA Astrophysics Data System (ADS)

    Nandipati, Giridhar; Setyawan, Wahyu; Heinisch, Howard L.; Roche, Kenneth J.; Kurtz, Richard J.; Wirth, Brian D.

    2015-07-01

    The results of object kinetic Monte Carlo (OKMC) simulations of the annealing of primary cascade damage in bulk tungsten using a comprehensive database of cascades obtained from molecular dynamics (Setyawan et al.) are described as a function of primary knock-on atom (PKA) energy at temperatures of 300, 1025 and 2050 K. An increase in SIA clustering coupled with a decrease in vacancy clustering with increasing temperature, in addition to the disparate mobilities of SIAs versus vacancies, causes an interesting effect of temperature on cascade annealing. The annealing efficiency (the ratio of the number of defects after and before annealing) exhibits an inverse U-shape curve as a function of temperature. The capabilities of the newly developed OKMC code KSOME (kinetic simulations of microstructure evolution) used to carry out these simulations are described.

  18. Annealing effect on thermodynamic and physical properties of mesoporous silicon: A simulation and nitrogen sorption study

    NASA Astrophysics Data System (ADS)

    Kumar, Pushpendra; Huber, Patrick

    2016-04-01

    The discovery of porous silicon formation in silicon substrates in 1956, while electro-polishing crystalline Si in hydrofluoric acid (HF), triggered large-scale investigations of porous silicon formation and of the changes in its physical and chemical properties with thermal and chemical treatment. A nitrogen sorption study is used to investigate the effect of thermal annealing on electrochemically etched mesoporous silicon (PS). The PS was thermally annealed from 200˚C to 800˚C for 1 hr in the presence of air. It was shown that the pore diameter and porosity of PS vary with annealing temperature. The experimentally obtained adsorption / desorption isotherms show hysteresis typical for capillary condensation in porous materials. A simulation study based on the Saam and Cole model was performed and compared with the experimentally observed sorption isotherms to study the physics behind hysteresis formation. We discuss the shape of the hysteresis loops in the framework of the morphology of the layers. The different behavior of adsorption and desorption of nitrogen in PS with pore diameter is discussed in terms of concave menisci formation inside the pore space, which was shown to be related to the induced pressure as the pore diameter varies from 7.2 nm to 3.4 nm.

  19. Neighbourhood generation mechanism applied in simulated annealing to job shop scheduling problems

    NASA Astrophysics Data System (ADS)

    Cruz-Chávez, Marco Antonio

    2015-11-01

    This paper presents a neighbourhood generation mechanism for job shop scheduling problems (JSSPs). In order to obtain a feasible neighbour with the generation mechanism, it is only necessary to generate a permutation of an adjacent pair of operations in a schedule of the JSSP. If there is no slack time between the adjacent pair of operations that is permuted, then it is proven, through theory and experimentation, that the new neighbour (schedule) generated is feasible. It is demonstrated that the neighbourhood generation mechanism is very efficient and effective within a simulated annealing algorithm.
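
    The feasibility argument in the abstract is specific to job-shop schedules; as a simpler self-contained illustration of an adjacent-pair-swap neighbourhood inside simulated annealing, the sketch below applies it to a single-machine sequencing objective. The processing times, weights and annealing parameters are made up:

    import math, random

    def adjacent_swap_neighbour(seq):
        """Generate a neighbour by swapping one adjacent pair of jobs."""
        i = random.randrange(len(seq) - 1)
        nb = list(seq)
        nb[i], nb[i + 1] = nb[i + 1], nb[i]
        return nb

    def weighted_completion(seq, p, w):
        """Total weighted completion time on a single machine (a stand-in objective, not a JSSP)."""
        t, total = 0.0, 0.0
        for j in seq:
            t += p[j]
            total += w[j] * t
        return total

    def anneal_sequence(p, w, t0=50.0, alpha=0.995, steps=20000):
        seq = list(range(len(p)))
        random.shuffle(seq)
        cost, temp = weighted_completion(seq, p, w), t0
        for _ in range(steps):
            cand = adjacent_swap_neighbour(seq)
            c = weighted_completion(cand, p, w)
            if c <= cost or random.random() < math.exp((cost - c) / temp):
                seq, cost = cand, c
            temp *= alpha
        return seq, cost

    p = [random.randint(1, 10) for _ in range(20)]   # made-up processing times
    w = [random.randint(1, 5) for _ in range(20)]    # made-up job weights
    print(anneal_sequence(p, w))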

  20. Design of stapled DNA-minor-groove-binding molecules with a mutable atom simulated annealing method.

    PubMed

    Walker, W L; Kopka, M L; Dickerson, R E; Goodsell, D S

    1997-11-01

    We report the design of optimal linker geometries for the synthesis of stapled DNA-minor-groove-binding molecules. Netropsin, distamycin, and lexitropsins bind side-by-side to mixed-sequence DNA and offer an opportunity for the design of sequence-reading molecules. Stapled molecules, with two molecules covalently linked side-by-side, provide entropic gains and restrain the position of one molecule relative to its neighbor. Using a free-atom simulated annealing technique combined with a discrete mutable atom definition, optimal lengths and atomic composition for covalent linkages are determined, and a novel hydrogen bond 'zipper' is proposed to phase two molecules accurately side-by-side.

  1. Simulated annealing applied to two-dimensional low-beta reduced magnetohydrodynamics

    SciTech Connect

    Chikasue, Y.; Furukawa, M.

    2015-02-15

    The simulated annealing (SA) method is applied to two-dimensional (2D) low-beta reduced magnetohydrodynamics (R-MHD). We have successfully obtained stationary states of the system numerically by the SA method with Casimir invariants preserved. Since the 2D low-beta R-MHD has two fields, the relaxation process becomes complex compared to a single field system such as 2D Euler flow. The obtained stationary state can have fine structure. We have found that the fine structure appears because the relaxation processes are different between kinetic energy and magnetic energy.

  2. Utilizing microstructural characteristics to derive insights into deformation and annealing behaviour: Numerical simulations, experiments and nature

    NASA Astrophysics Data System (ADS)

    Piazolo, Sandra; Montagnat, Maurine; Prakash, Abhishek; Borthwick, Verity; Evans, Lynn; Griera, Albert; Bons, Paul D.; Svahnberg, Henrik; Prior, David J.

    2015-04-01

    Understanding the influence of the pre-existing microstructure on subsequent microstructural development is pivotal for the correct interpretation of rocks and ice that stayed at high homologous temperatures over a significant period of time. The microstructural behaviour of these materials through time has an important bearing on the interpretation of characteristics such as grain size, for example, using grain size statistics to detect former high strain zones that remain at high temperatures but low stress. We present a coupled experimental and modelling approach to better understand the evolution of recrystallization characteristics as a function of deformation-annealing time paths in materials with a high viscoplastic anisotropy, e.g., polycrystalline ice and magnesium alloys. Deformation microstructures such as crystal bending, subgrain boundaries, and grain size variation significantly influence the deformation and annealing behaviour of crystalline material. For numerical simulations we utilize the microdynamic modelling platform, Elle (www.elle.ws), taking local microstructural evolution into account to simulate the following processes: recovery within grains, rotational recrystallization, grain boundary migration and nucleation. We first test the validity of the numerical simulations against experiments, and then use the model to interpret microstructural features in natural examples. In-situ experiments are performed on laboratory-grown and deformed ice and magnesium alloy. Our natural example is a deformed then recrystallized anorthosite from SW Greenland. The presented approach can be applied to many other minerals and crystalline materials.

  3. Molecular Dynamics Simulated Annealing Study of Gramicidin A in Water and the Hydrophobic Environment

    NASA Astrophysics Data System (ADS)

    Mori, Takaharu; Okamoto, Yuko

    2008-03-01

    Gramicidin A is a hydrophobic 15-residue peptide with alternating D- and L-amino acids, and it forms various conformations depending on its environment. For example, gramicidin A adopts random-coil or helical conformations, such as the β4.4-helix, β6.3-helix, and double-stranded helix, in organic solvents. To investigate the structural and dynamical properties of gramicidin A in water and in a hydrophobic environment, we performed molecular dynamics simulated annealing simulations with implicit solvent based on a generalized Born model. From the simulations, it was found that gramicidin A has a strong tendency to form a random-coil structure in water, while in the hydrophobic environment it becomes compact and can fold into right- and left-handed conformations of β-helix structures. We discuss the folding mechanism of the β-helix conformation of gramicidin A.

  4. Simultaneous retrieval of the complex refractive indices of the core and shell of coated aerosol particles from extinction measurements using simulated annealing.

    PubMed

    Erlick, Carynelisa; Haspel, Mitch; Rudich, Yinon

    2011-08-01

    Simultaneously retrieving the complex refractive indices of the core and shell of coated aerosol particles given the measured extinction efficiency as a function of particle dimensions (core diameter and coated diameter) is much more difficult than retrieving the complex refractive index of homogeneous aerosol particles. Not only must the minimization be performed over a four-parameter space, making it less efficient, but in addition the absolute value of the difference between the measured extinction and the calculated extinction does not have an easily distinguished global minimum. Rather, there are a number of local minima to which almost all conventional retrieval algorithms converge. In this work, we develop a new (to our knowledge) retrieval algorithm that employs the numerical method known as simulated annealing with an innovative "temperature" schedule. This study is limited only to spherical particles with a concentric shell and to cases in which the diameter of both the core and the coated particle are known. We find that when the top ranking particle sizes according to their information content are combined from separate experiments to make up the particle size distribution, the simulated annealing retrieval algorithm is quite robust and by far superior to a greedy random perturbation approach often used.

  5. Parallel algorithm strategies for circuit simulation.

    SciTech Connect

    Thornquist, Heidi K.; Schiek, Richard Louis; Keiter, Eric Richard

    2010-01-01

    Circuit simulation tools (e.g., SPICE) have become invaluable in the development and design of electronic circuits. However, they have been pushed to their performance limits in addressing circuit design challenges that come from the technology drivers of smaller feature scales and higher integration. Improving the performance of circuit simulation tools through exploiting new opportunities in widely-available multi-processor architectures is a logical next step. Unfortunately, not all traditional simulation applications are inherently parallel, and quickly adapting mature application codes (even codes designed as parallel applications) to new parallel paradigms can be prohibitively difficult. In general, performance is influenced by many choices: hardware platform, runtime environment, languages and compilers used, algorithm choice and implementation, and more. In this complicated environment, the use of mini-applications (small, self-contained proxies for real applications) is an excellent approach for rapidly exploring the parameter space of all these choices. In this report we present a multi-core performance study of Xyce, a transistor-level circuit simulation tool, and describe the future development of a mini-application for circuit simulation.

  6. Experimental signature of programmable quantum annealing.

    PubMed

    Boixo, Sergio; Albash, Tameem; Spedalieri, Federico M; Chancellor, Nicholas; Lidar, Daniel A

    2013-01-01

    Quantum annealing is a general strategy for solving difficult optimization problems with the aid of quantum adiabatic evolution. Both analytical and numerical evidence suggests that under idealized, closed system conditions, quantum annealing can outperform classical thermalization-based algorithms such as simulated annealing. Current engineered quantum annealing devices have a decoherence timescale which is orders of magnitude shorter than the adiabatic evolution time. Do they effectively perform classical thermalization when coupled to a decohering thermal environment? Here we present an experimental signature which is consistent with quantum annealing, and at the same time inconsistent with classical thermalization. Our experiment uses groups of eight superconducting flux qubits with programmable spin-spin couplings, embedded on a commercially available chip with >100 functional qubits. This suggests that programmable quantum devices, scalable with current superconducting technology, implement quantum annealing with a surprising robustness against noise and imperfections.

  7. TOPICAL REVIEW: Elemental thin film depth profiles by ion beam analysis using simulated annealing - a new tool

    NASA Astrophysics Data System (ADS)

    Jeynes, C.; Barradas, N. P.; Marriott, P. K.; Boudreault, G.; Jenkin, M.; Wendler, E.; Webb, R. P.

    2003-04-01

    Rutherford backscattering spectrometry (RBS) and related techniques have long been used to determine the elemental depth profiles in films a few nanometres to a few microns thick. However, although obtaining spectra is very easy, solving the inverse problem of extracting the depth profiles from the spectra is not possible analytically except for special cases. It is because these special cases include important classes of samples, and because skilled analysts are adept at extracting useful qualitative information from the data, that ion beam analysis is still an important technique. We have recently solved this inverse problem using the simulated annealing algorithm. We have implemented the solution in the `IBA DataFurnace' code, which has been developed into a very versatile and general new software tool that analysts can now use to rapidly extract quantitative accurate depth profiles from real samples on an industrial scale. We review the features, applicability and validation of this new code together with other approaches to handling IBA (ion beam analysis) data, with particular attention being given to determining both the absolute accuracy of the depth profiles and statistically accurate error estimates. We include examples of analyses using RBS, non-Rutherford elastic scattering, elastic recoil detection and non-resonant nuclear reactions. High depth resolution and the use of multiple techniques simultaneously are both discussed. There is usually systematic ambiguity in IBA data and Butler's example of ambiguity (1990 Nucl. Instrum. Methods B 45 160-5) is reanalysed. Analyses are shown: of evaporated, sputtered, oxidized, ion implanted, ion beam mixed and annealed materials; of semiconductors, optical and magnetic multilayers, superconductors, tribological films and metals; and of oxides on Si, mixed metal silicides, boron nitride, GaN, SiC, mixed metal oxides, YBCO and polymers.

  8. Efficient algorithms for wildland fire simulation

    NASA Astrophysics Data System (ADS)

    Kondratenko, Volodymyr Y.

    In this dissertation, we develop multiple-source shortest path algorithms and examine their importance in real-world applications, such as wildfire modeling. The theoretical basis and its implementation in the Weather Research and Forecasting (WRF) model coupled with the fire spread code SFIRE (WRF-SFIRE model) are described. We present a data assimilation method that gives the fire spread model the ability to start the fire simulation from an observed fire perimeter instead of an ignition point. While the model is running, the fire state in the model changes in accordance with newly arriving data by data assimilation. As the fire state changes, the atmospheric state (which is strongly affected by heat flux) does not stay consistent with the fire state. The main difficulty of this methodology occurs in coupled fire-atmosphere models, because once the fire state is modified to match a given starting perimeter, the atmospheric circulation is no longer in sync with it. One possible solution to this problem is the formation of an artificial time-of-ignition history from an earlier fire state, which is later used to replay the fire progression to the new perimeter with the proper heat fluxes fed into the atmosphere, so that the fire-induced circulation is established. In this work, we develop efficient algorithms that start from the fire arrival times given at a set of points (called a perimeter) and create the artificial fire time of ignition and fire spread rate history. Different algorithms were developed in order to suit possible demands of the user, such as implementation in parallel programming, minimization of the required number of iterations and memory use, and use of the rate of spread as a time-dependent variable. For the algorithms that deal with a homogeneous rate of spread, it was proven that the values of fire arrival times they produce are optimal. It was also shown that starting from arbitrary initial state the algorithms have
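
    The WRF-SFIRE algorithms themselves are not reproduced here; a generic multiple-source shortest-path sketch (Dijkstra on a grid graph) shows how fire arrival times can be propagated outward from a set of cells with known ignition times, such as an observed perimeter. The grid, spread-rate field and 4-connectivity are illustrative assumptions:

    import heapq

    def fire_arrival_times(ignition, rate, nrows, ncols):
        """Multiple-source Dijkstra on a 4-connected grid.
        ignition: dict {(i, j): time} of cells with known arrival times (e.g. a perimeter);
        rate[i][j]: local fire spread rate (cells per unit time); returns an arrival-time grid."""
        inf = float("inf")
        t = [[inf] * ncols for _ in range(nrows)]
        heap = []
        for (i, j), t0 in ignition.items():
            t[i][j] = t0
            heapq.heappush(heap, (t0, i, j))
        while heap:
            ti, i, j = heapq.heappop(heap)
            if ti > t[i][j]:
                continue                       # stale heap entry
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < nrows and 0 <= nj < ncols and rate[ni][nj] > 0:
                    cand = ti + 1.0 / rate[ni][nj]       # time to spread into the neighbour
                    if cand < t[ni][nj]:
                        t[ni][nj] = cand
                        heapq.heappush(heap, (cand, ni, nj))
        return t

    # Uniform illustrative spread rate; two cells ignited at time zero.
    rate = [[1.0] * 20 for _ in range(20)]
    times = fire_arrival_times({(0, 0): 0.0, (19, 19): 0.0}, rate, 20, 20)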

  9. Clutter discrimination algorithm simulation in pulse laser radar imaging

    NASA Astrophysics Data System (ADS)

    Zhang, Yan-mei; Li, Huan; Guo, Hai-chao; Su, Xuan; Zhu, Fule

    2015-10-01

    Pulse laser radar imaging performance is greatly influenced by different kinds of clutter. Various algorithms have been developed to mitigate clutter. However, estimating the performance of a new algorithm is difficult. Here, a simulation model for evaluating clutter discrimination algorithms is presented. This model consists of laser pulse emission, clutter jamming, laser pulse reception and target image production. Additionally, a hardware platform is set up to gather clutter data reflected from the ground and trees. The logged data are used as the clutter-jamming input in the simulation model. The hardware platform includes a laser diode, a laser detector and a high-sample-rate data logging circuit. The laser diode transmits short laser pulses (40 ns FWHM) at a 12.5 kilohertz pulse rate and at 905 nm wavelength. An analog-to-digital converter chip integrated in the sampling circuit works at 250 megasamples per second. The simulation model and the hardware platform together constitute a clutter discrimination algorithm simulation system. Using this system, after analyzing the logged clutter data, a new compound pulse detection algorithm was developed. This new algorithm combines a matched filter algorithm and a constant fraction discrimination (CFD) algorithm. First, the laser echo pulse signal is processed by the matched filter algorithm; the CFD algorithm then follows. Finally, clutter jamming from the ground and trees is discriminated and the target image is produced. Laser radar images are simulated using the CFD algorithm, the matched filter algorithm and the new algorithm respectively. Simulation results demonstrate that the new algorithm achieves the best target-imaging performance in mitigating clutter reflected from the ground and trees.

  10. Feedback algorithm for simulation of multi-segmented cracks

    SciTech Connect

    Chady, T.; Napierala, L.

    2011-06-23

    In this paper, a method for obtaining a three-dimensional crack model from a radiographic image is discussed. A genetic algorithm aiming at a close simulation of the crack's shape is presented. Results obtained with the genetic algorithm are compared to those achieved in the authors' previous work. The described algorithm has been tested on both simulated and real-life cracks.

  11. A simulated annealing approach to schedule optimization for the SES facility

    NASA Technical Reports Server (NTRS)

    Mcmahon, Mary Beth; Dean, Jack

    1992-01-01

    The Shuttle Engineering Simulator (SES) is a facility which houses the software and hardware for a variety of simulation systems. The simulators include the Autonomous Remote Manipulator, the Manned Maneuvering Unit, Orbiter/Space Station docking, and shuttle entry and landing. The SES simulators are used by various groups throughout NASA. For example, astronauts use the SES to practice maneuvers with the shuttle equipment; programmers use the SES to test flight software; and engineers use the SES for design and analysis studies. Due to its high demand, the SES is busy twenty-four hours a day and seven days a week. Scheduling the facility is a problem that is constantly growing and changing with the addition of new equipment. Currently a number of small independent programs have been developed to help solve the problem, but the long-term answer lies in finding a flexible, integrated system that provides the user with the ability to create, optimize, and edit the schedule. COMPASS is an interactive and highly flexible scheduling system. However, until recently COMPASS did not provide any optimization features. This paper describes the simulated annealing extension to COMPASS. It now allows the user to interweave schedule creation, revision, and optimization. This practical approach was necessary in order to satisfy the operational requirements of the SES.
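
    As a rough illustration of how a simulated-annealing optimizer can be layered on top of an existing schedule representation, the sketch below searches over task orderings with swap moves, Metropolis acceptance, and geometric cooling. It is a generic SA skeleton under assumed names and parameters, not the COMPASS extension itself; the user-supplied cost function would encode facility-specific conflicts and priorities.

```python
import math
import random

def anneal_schedule(tasks, cost, t0=10.0, alpha=0.995, steps=20000, seed=0):
    """Simulated-annealing search over task orderings.

    tasks: list of task identifiers; cost: callable mapping an ordering to a
    scalar penalty (e.g. total lateness or unresolved resource conflicts)."""
    rng = random.Random(seed)
    order = tasks[:]
    best = order[:]
    cur_cost = best_cost = cost(order)
    temp = t0
    for _ in range(steps):
        i, j = rng.sample(range(len(order)), 2)
        order[i], order[j] = order[j], order[i]        # swap two tasks
        new_cost = cost(order)
        accept = new_cost <= cur_cost or rng.random() < math.exp((cur_cost - new_cost) / temp)
        if accept:
            cur_cost = new_cost
            if cur_cost < best_cost:
                best, best_cost = order[:], cur_cost
        else:
            order[i], order[j] = order[j], order[i]     # undo the swap
        temp *= alpha                                   # geometric cooling
    return best, best_cost
```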

  12. Epitaxial growth of graphene on 6H-silicon carbide substrate by simulated annealing method.

    PubMed

    Yoon, T L; Lim, T L; Min, T K; Hung, S H; Jakse, N; Lai, S K

    2013-11-28

    We grew graphene epitaxially on a 6H-SiC(0001) substrate by the simulated annealing method. The mechanisms that govern the growth process were investigated by testing two empirical potentials, namely, the widely used Tersoff potential [J. Tersoff, Phys. Rev. B 39, 5566 (1989)] and its more refined version published years later by Erhart and Albe [Phys. Rev. B 71, 035211 (2005)]. Upon contrasting the results obtained with these two potentials, we found that the potential proposed by Erhart and Albe is generally more physical and realistic, since the annealing temperature at which the graphene structure first comes into view, approximately 1200 K, is unambiguously predicted and is close to the experimentally observed pit formation at 1298 K within which the graphene nucleates. We evaluated the quality of our graphene layers by calculating the carbon-carbon (i) average bond length, (ii) binding energy, and (iii) pair correlation function. We also compared the various separation distances between the overlaid graphene layers and the substrate surface with related experiments. PMID:24289364

  13. Application of simulated annealing to solve multi-objectives for aggregate production planning

    NASA Astrophysics Data System (ADS)

    Atiya, Bayda; Bakheet, Abdul Jabbar Khudhur; Abbas, Iraq Tereq; Bakar, Mohd. Rizam Abu; Soon, Lee Lai; Monsi, Mansor Bin

    2016-06-01

    Aggregate production planning (APP) is one of the most significant and complicated problems in production planning. It aims to set overall production levels for each product category to meet fluctuating or uncertain future demand, and to make decisions concerning hiring, firing, overtime, subcontracting, and inventory carrying levels. In this paper, we present a simulated annealing (SA) approach for multi-objective linear programming to solve APP. SA is considered a good tool for imprecise optimization problems. The proposed model minimizes total production and workforce costs. In this study, the proposed SA is compared with particle swarm optimization (PSO). The results show that the proposed SA is effective in reducing total production costs and requires minimal computation time.

  14. Simulated Annealing Based Hybrid Forecast for Improving Daily Municipal Solid Waste Generation Prediction

    PubMed Central

    Song, Jingwei; He, Jiaying; Zhu, Menghua; Tan, Debao; Zhang, Yu; Ye, Song; Shen, Dingtao; Zou, Pengfei

    2014-01-01

    A simulated annealing (SA) based variable-weighted forecast model is proposed to combine and weight a local chaotic model, an artificial neural network (ANN), and a partial least square support vector machine (PLS-SVM) to build a more accurate forecast model. The hybrid model was built and its multistep-ahead prediction ability was tested on daily MSW generation data from Seattle, Washington, United States. The hybrid forecast model was shown to produce more accurate and reliable results, and to degrade less over longer prediction horizons, than the three individual models. The average one-week-ahead prediction error was reduced from 11.21% (chaotic model), 12.93% (ANN), and 12.94% (PLS-SVM) to 9.38%, and the five-week average was reduced from 13.02% (chaotic model), 15.69% (ANN), and 15.92% (PLS-SVM) to 11.27%. PMID:25301508

  15. Simulated annealing approach to vascular structure with application to the coronary arteries

    PubMed Central

    Keelan, Jonathan; Chung, Emma M. L.; Hague, James P.

    2016-01-01

    Do the complex processes of angiogenesis during organism development ultimately lead to a near optimal coronary vasculature in the organs of adult mammals? We examine this hypothesis using a powerful and universal method, built on physical and physiological principles, for the determination of globally energetically optimal arterial trees. The method is based on simulated annealing, and can be used to examine arteries in hollow organs with arbitrary tissue geometries. We demonstrate that the approach can generate in silico vasculatures which closely match porcine anatomical data for the coronary arteries on all length scales, and that the optimized arterial trees improve systematically as computational time increases. The method presented here is general, and could in principle be used to examine the arteries of other organs. Potential applications include improvement of medical imaging analysis and the design of vascular trees for artificial organs. PMID:26998317

  16. The application of simulated annealing to the conformational analysis of disaccharides

    NASA Astrophysics Data System (ADS)

    Naidoo, Kevin J.; Brady, J. W.

    1997-12-01

    Limitations in experimental measurements complicate the investigation of biologically important disaccharides. Previous computational studies of disaccharides have been very CPU intensive. The application of simulated annealing to conformational space studies of disaccharides is shown to decrease the computational effort by more than an order of magnitude. At the same time, this approach produces a more accurate description of the conformational space, particularly in the high-energy regions. The method is demonstrated on the biologically important GlcNAc-β-(1-4)-GlcNAc disaccharide and can easily be translated for application to other carbohydrates or related molecules. The energy surface E(φ, ψ) for the GlcNAc-β-(1-4)-GlcNAc disaccharide, which is a function of the glycosidic linkage, is analysed and compared to a similar surface generated by the standard, computationally more intensive exhaustive search method.

  17. Simulated annealing applied to IMRT beam angle optimization: A computational study.

    PubMed

    Dias, Joana; Rocha, Humberto; Ferreira, Brígida; Lopes, Maria do Carmo

    2015-11-01

    Selecting the irradiation directions to use in IMRT treatments is one of the first decisions to make in treatment planning. Beam angle optimization (BAO) is a difficult problem to tackle from the mathematical optimization point of view. It is highly non-convex, and optimization approaches based on gradient descent methods will probably get trapped in one of the many local minima. Simulated Annealing (SA) is a local search probabilistic procedure that is known to be able to deal with multimodal problems. SA for BAO was retrospectively applied to ten clinical examples of treated head-and-neck tumor cases flagged as complex cases where proper target coverage and organ sparing proved difficult to achieve. The number of directions to use was considered fixed and equal to 5 or 7. It is shown that SA can lead to solutions that significantly improve organ sparing, even with a reduced number of angles, without jeopardizing tumor coverage.

  18. Extended Information Ratio for Portfolio Optimization Using Simulated Annealing with Constrained Neighborhood

    NASA Astrophysics Data System (ADS)

    Orito, Yukiko; Yamamoto, Hisashi; Tsujimura, Yasuhiro; Kambayashi, Yasushi

    Portfolio optimization determines the proportion-weighted combination of assets in a portfolio in order to achieve investment targets. It is a multi-dimensional combinatorial optimization problem, and it is difficult for a portfolio constructed in a past period to maintain its performance in a future period. In order to maintain good portfolio performance, we propose in this paper the extended information ratio as an objective function, built from the information ratio, beta, prime beta, or correlation coefficient. We apply simulated annealing (SA) to optimize the portfolio employing the proposed ratio. For the SA, we generate neighboring solutions by an operation that changes the structure of the weights in the portfolio. In the numerical experiments, we show that our portfolios maintain good performance when the market trend in the future period differs from that of the past period.
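
    The neighborhood operation described above changes the structure of the portfolio weights. A minimal sketch of one such constrained move, assuming weights that are non-negative and sum to one, is shown below; the step size and function name are illustrative, and the paper's actual neighborhood may be constrained differently.

```python
import random

def neighbor_weights(w, step=0.02, rng=random):
    """Generate a neighboring portfolio: shift a small amount of weight from
    one asset to another so the weights stay non-negative and sum to one."""
    w = list(w)
    i, j = rng.sample(range(len(w)), 2)
    delta = min(step, w[i])     # cannot move more weight than asset i holds
    w[i] -= delta
    w[j] += delta
    return w
```

    Any Metropolis-style acceptance loop can consume these neighbors; the extended information ratio itself would be evaluated from the return series of the candidate portfolio.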

  19. On combining support vector machines and simulated annealing in stereovision matching.

    PubMed

    Pajares, Gonzalo; de la Cruz, Jesús M

    2004-08-01

    This paper outlines a method for solving the stereovision matching problem using edge segments as the primitives. In stereovision matching, the following constraints are commonly used: epipolar, similarity, smoothness, ordering, and uniqueness. We propose a new strategy in which these constraints are combined sequentially. The goal is to achieve a high rate of correct matches by combining several strategies. The contributions of this paper are the development of a similarity measure through a support vector machines classification approach; the transformation of the smoothness, ordering, and epipolar constraints into an energy function, minimized through a simulated annealing optimization approach, whose minimum corresponds to a good matching solution; and the introduction of specific conditions to overcome violations of the smoothness and ordering constraints. The performance of the proposed method is illustrated by comparative analysis against some recent global matching methods. PMID:15462432

  20. Fabrication of simulated plate fuel elements: Defining role of stress relief annealing

    NASA Astrophysics Data System (ADS)

    Kohli, D.; Rakesh, R.; Sinha, V. P.; Prasad, G. J.; Samajdar, I.

    2014-04-01

    This study involved fabrication of simulated plate fuel elements. Uranium silicide of actual fuel elements was replaced with yttria. The fabrication stages were otherwise identical. The final cold rolled and/or straightened plates, without stress relief, showed an inverse relationship between bond strength and out of plane residual shear stress (τ13). Stress relief of τ13 was conducted over a range of temperatures/times (200-500 °C and 15-240 min) and led to corresponding improvements in bond strength. Fastest τ13 relief was obtained through 300 °C annealing. Elimination of microscopic shear bands, through recovery and partial recrystallization, was clearly the most effective mechanism of relieving τ13.

  1. A parallel algorithm for implicit depletant simulations

    NASA Astrophysics Data System (ADS)

    Glaser, Jens; Karas, Andrew S.; Glotzer, Sharon C.

    2015-11-01

    We present an algorithm to simulate the many-body depletion interaction between anisotropic colloids in an implicit way, integrating out the degrees of freedom of the depletants, which we treat as an ideal gas. Because the depletant particles are statistically independent and the depletion interaction is short-ranged, depletants are randomly inserted in parallel into the excluded volume surrounding a single translated and/or rotated colloid. A configurational bias scheme is used to enhance the acceptance rate. The method is validated and benchmarked both on multi-core processors and graphics processing units for the case of hard spheres, hemispheres, and discoids. With depletants, we report novel cluster phases in which hemispheres first assemble into spheres, which then form ordered hcp/fcc lattices. The method is significantly faster than any method without cluster moves and that tracks depletants explicitly, for systems of colloid packing fraction ϕc < 0.50, and additionally enables simulation of the fluid-solid transition.

  2. Heavy Tails in the Distribution of Time to Solution for Classical and Quantum Annealing.

    PubMed

    Steiger, Damian S; Rønnow, Troels F; Troyer, Matthias

    2015-12-01

    For many optimization algorithms the time to solution depends not only on the problem size but also on the specific problem instance and may vary by many orders of magnitude. It is then necessary to investigate the full distribution, and especially its tail. Here, we analyze the distributions of annealing times for simulated annealing and simulated quantum annealing (by path integral quantum Monte Carlo simulation) for random Ising spin glass instances. We find power-law distributions with very heavy tails, corresponding to extremely hard instances, but far broader distributions, and thus worse performance for hard instances, for simulated quantum annealing than for simulated annealing. Fast, nonadiabatic annealing schedules can improve the performance of simulated quantum annealing for very hard instances by many orders of magnitude.

  3. 3D face recognition using simulated annealing and the surface interpenetration measure.

    PubMed

    Queirolo, Chauã C; Silva, Luciano; Bellon, Olga R P; Segundo, Maurício Pamplona

    2010-02-01

    This paper presents a novel automatic framework to perform 3D face recognition. The proposed method uses a Simulated Annealing-based approach (SA) for range image registration with the Surface Interpenetration Measure (SIM) as the similarity measure, in order to match two face images. The authentication score is obtained by combining the SIM values corresponding to the matching of four different face regions: circular and elliptical areas around the nose, the forehead, and the entire face region. Then, a modified SA approach is proposed that takes advantage of invariant face regions to better handle facial expressions. Comprehensive experiments were performed on the FRGC v2 database, the largest available database of 3D face images, composed of 4,007 images with different facial expressions. The experiments simulated both verification and identification systems, and the results were compared to those reported in state-of-the-art works. By using all of the images in the database, a verification rate of 96.5 percent was achieved at a False Acceptance Rate (FAR) of 0.1 percent. In the identification scenario, a rank-one accuracy of 98.4 percent was achieved. To the best of our knowledge, this is the highest rank-one score ever achieved for the FRGC v2 database when compared to results published in the literature. PMID:20075453

  4. Mathematical foundation of quantum annealing

    SciTech Connect

    Morita, Satoshi; Nishimori, Hidetoshi

    2008-12-15

    Quantum annealing is a generic name for quantum algorithms that use quantum-mechanical fluctuations to search for the solution of an optimization problem. It shares its basic idea with quantum adiabatic evolution, studied actively in quantum computation. The present paper reviews the mathematical and theoretical foundations of quantum annealing. In particular, theorems are presented for the convergence conditions of quantum annealing to the target optimal state after an infinite-time evolution following the Schroedinger or stochastic (Monte Carlo) dynamics. It is proved that the same asymptotic behavior of the control parameter guarantees convergence for both the Schroedinger dynamics and the stochastic dynamics, in spite of the essential difference between these two types of dynamics. Also described are prescriptions to reduce errors in the final approximate solution obtained after a long but finite dynamical evolution of quantum annealing. It is shown there that we can reduce errors significantly by an ingenious choice of annealing schedule (the time dependence of the control parameter) without qualitatively compromising computational complexity. A review is given of the derivation of the convergence condition for classical simulated annealing from the viewpoint of quantum adiabaticity using a classical-quantum mapping.
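
    For context, the classical counterpart of the convergence results reviewed here is the well-known logarithmic cooling condition for simulated annealing; a hedged summary in LaTeX is given below (the quantum-annealing condition derived in the paper replaces the temperature by a transverse-field schedule).

```latex
% Classical simulated annealing: convergence in probability to the global
% optimum is guaranteed (Geman & Geman) if the temperature obeys
T(t) \;\ge\; \frac{c}{\ln(t + 2)}, \qquad t = 0, 1, 2, \dots
% where c is a constant related to the depth of the deepest local minimum.
% Quantum annealing replaces T(t) by a transverse-field amplitude \Gamma(t),
% for which the paper derives an analogous (power-law) sufficient condition.
```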

  5. Integration of electromagnetic induction sensor data in soil sampling scheme optimization using simulated annealing.

    PubMed

    Barca, E; Castrignanò, A; Buttafuoco, G; De Benedetto, D; Passarella, G

    2015-07-01

    Soil survey is generally time-consuming, labor-intensive, and costly. Optimization of the sampling scheme allows one to reduce the number of sampling points without decreasing, or even while increasing, the accuracy of the investigated attribute. Maps of bulk soil electrical conductivity (ECa) recorded with electromagnetic induction (EMI) sensors could be effectively used to direct soil sampling design for assessing the spatial variability of soil moisture. A protocol using a field-scale bulk ECa survey has been applied in an agricultural field in the Apulia region (southeastern Italy). Spatial simulated annealing was used to optimize the spatial soil sampling scheme, taking into account sampling constraints, field boundaries, and preliminary observations. Three optimization criteria were used: the first criterion (minimization of the mean of the shortest distances, MMSD) optimizes the spreading of the point observations over the entire field by minimizing the expectation of the distance between an arbitrarily chosen point and its nearest observation; the second criterion (minimization of the weighted mean of the shortest distances, MWMSD) is a weighted version of the MMSD, which uses the digital gradient of the gridded ECa data as a weighting function; and the third criterion (mean of the average ordinary kriging variance, MAOKV) minimizes the mean kriging estimation variance of the target variable. The last criterion utilizes the variogram model of soil water content estimated in a previous trial. The procedures, or a combination of them, were tested and compared in a real case. Simulated annealing was implemented in the software MSANOS, which is able to define or redesign any sampling scheme by increasing or decreasing the original sampling locations. The output consists of the computed sampling scheme, the convergence time, and the cooling law, which can be an invaluable support to the process of sampling design. The proposed approach has found the optimal solution in a reasonable computation time. The
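
    A minimal sketch of the first two optimization criteria (MMSD and its weighted variant MWMSD), assuming sampling locations and evaluation points given as planar coordinate arrays, is shown below; the function names and the use of a regular evaluation grid are assumptions, not the MSANOS implementation.

```python
import numpy as np

def mean_shortest_distance(samples, eval_points):
    """MMSD objective: mean over evaluation points of the distance to the
    nearest sampling location (both arrays of shape (n, 2), in map units)."""
    d = np.linalg.norm(eval_points[:, None, :] - samples[None, :, :], axis=2)
    return float(d.min(axis=1).mean())

def weighted_mean_shortest_distance(samples, eval_points, weights):
    """MWMSD objective: as above but weighted, e.g. by the magnitude of the
    local gradient of the ECa map at each evaluation point."""
    d = np.linalg.norm(eval_points[:, None, :] - samples[None, :, :], axis=2)
    nearest = d.min(axis=1)
    return float(np.average(nearest, weights=weights))
```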

  6. Atmospheric channel for bistatic optical communication: simulation algorithms

    NASA Astrophysics Data System (ADS)

    Belov, V. V.; Tarasenkov, M. V.

    2015-11-01

    Three algorithms for statistical simulation of the impulse response (IR) of the atmospheric optical communication channel are considered: the local estimate algorithm, the double local estimate algorithm, and the algorithm suggested by us. Using the example of a homogeneous molecular atmosphere, it is demonstrated that the double local estimate algorithm and the suggested algorithm are more efficient than the local estimate algorithm. For small optical path lengths the proposed algorithm is more efficient, while for large optical path lengths the double local estimate algorithm is more efficient. Using the proposed algorithm, the communication quality is estimated for a particular case of the atmospheric channel under conditions of intermediate turbidity. The communication quality is characterized by the maximum IR, the time of the maximum IR, the integral IR, and the bandwidth of the communication channel. Calculations of these criteria demonstrated that communication is most efficient when the point of intersection of the directions toward the source and the receiver is closest to the source point.

  7. An interactive system for creating object models from range data based on simulated annealing

    SciTech Connect

    Hoff, W.A.; Hood, F.W.; King, R.H.

    1997-05-01

    In hazardous applications such as remediation of buried waste and dismantlement of radioactive facilities, robots are an attractive solution. Sensing to recognize and locate objects is a critical need for robotic operations in unstructured environments. An accurate 3-D model of objects in the scene is necessary for efficient high level control of robots. Drawing upon concepts from supervisory control, the authors have developed an interactive system for creating object models from range data, based on simulated annealing. Site modeling is a task that is typically performed using purely manual or autonomous techniques, each of which has inherent strengths and weaknesses. However, an interactive modeling system combines the advantages of both manual and autonomous methods, to create a system that has high operator productivity as well as high flexibility and robustness. The system is unique in that it can work with very sparse range data, tolerate occlusions, and tolerate cluttered scenes. The authors have performed an informal evaluation with four operators on 16 different scenes, and have shown that the interactive system is superior to either manual or automatic methods in terms of task time and accuracy.

  8. Equilibrium properties of transition-metal ion-argon clusters via simulated annealing

    NASA Technical Reports Server (NTRS)

    Asher, Robert L.; Micha, David A.; Brucat, Philip J.

    1992-01-01

    The geometrical structures of M(+) (Ar)n ions, with n = 1-14, have been studied by the minimization of a many-body potential surface with a simulated annealing procedure. The minimization method is justified for finite systems through the use of an information theory approach. It is carried out for eight potential-energy surfaces constructed with two- and three-body terms parametrized from experimental data and ab initio results. The potentials should be representative of clusters of argon atoms with first-row transition-metal monocations of varying size. The calculated geometries for M(+) = Co(+) and V(+) possess radial shells with small (ca. 4-8) first-shell coordination number. The inclusion of an ion-induced-dipole-ion-induced-dipole interaction between argon atoms raises the energy and generally lowers the symmetry of the cluster by promoting incomplete shell closure. Rotational constants as well as electric dipole and quadrupole moments are quoted for the Co(+) (Ar)n and V(+) (Ar)n predicted structures.

  9. Cascade annealing simulations of bcc iron using object kinetic Monte Carlo

    SciTech Connect

    Xu, Haixuan; Osetskiy, Yury N; Stoller, Roger E

    2012-01-01

    Simulations of displacement cascade annealing were carried out using object kinetic Monte Carlo based on an extensive MD database including various primary knock-on atom energies and directions. The sensitivity of the results to a broad range of material and model parameters was examined. The diffusion mechanism of interstitial clusters has been identified to have the most significant impact on the fraction of stable interstitials that escape the cascade region. The maximum level of recombination was observed for the limiting case in which all interstitial clusters exhibit 3D random walk diffusion. The OKMC model was parameterized using two alternative sets of defect migration and binding energies, one from ab initio calculations and the second from an empirical potential. The two sets of data predict essentially the same fraction of surviving defects but different times associated with the defect escape processes. This study provides a comprehensive picture of the first phase of long-term defect evolution in bcc iron and generates information that can be used as input data for mean field rate theory (MFRT) to predict the microstructure evolution of materials under irradiation. In addition, the limitations of the current OKMC model are discussed and a potential way to overcome these limitations is outlined.
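
    As background for how an object kinetic Monte Carlo model advances in time, the sketch below implements a single generic residence-time (BKL-type) step: an event is chosen with probability proportional to its rate and the clock advances by an exponentially distributed increment. It is not the parameterization or code used in this study; rates would come from the ab initio or empirical-potential migration and binding energies mentioned above.

```python
import math
import random

def kmc_step(rates, rng=random):
    """One residence-time (BKL) kinetic Monte Carlo step.

    rates: list of rates (1/s) of all currently possible events (defect hops,
    dissociations, ...). Returns (chosen event index, time increment)."""
    total = sum(rates)
    r = rng.random() * total
    acc = 0.0
    for k, rate in enumerate(rates):
        acc += rate
        if r < acc:
            break
    dt = -math.log(1.0 - rng.random()) / total   # exponential waiting time
    return k, dt
```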

  10. Molecular dynamics simulations of solid state recrystallization I: Observation of grain growth in annealed iron nanoparticles

    SciTech Connect

    Huang Jinfan; Bartell, Lawrence S.

    2012-01-15

    Molecular dynamics simulations of solid state recrystallization and grain growth in iron nanoparticles containing 1436 atoms were carried out. During the relaxation of supercooled liquid drops and during thermal annealing of the solids they froze to, changes in disorder were followed by monitoring changes in energy and the migration of grain boundaries. All 27 polycrystalline nanoparticles, which were generated with different grain boundaries, were observed to recrystallize into single crystals during annealing. Larger grains consumed the smaller ones. In particular, two sets of solid particles, designated A and B, each with two grains, were treated to generate 18 members of each set with different thermal histories. This provided small ensembles (of 18 members each) from which the rates at which the larger grain engulfed the smaller one could be determined. The rate was higher the smaller the degree of misorientation between the grains, a result contrary to the general rule based on published experiments, but the reason was clear: crystal A, which happened to have a somewhat lower angle of misorientation, also had a higher population of defects, as confirmed by its higher energy. Accordingly, its driving force to recrystallize was greater. Although the mechanism of recrystallization is commonly called nucleation, our results, which probe the system on an atomic scale, were not able to identify nuclei unequivocally. By contrast, our technique can and does reveal nuclei in the freezing of liquids and in transformations from one solid phase to another. An alternative rationale for a nucleation-like process in our results is proposed. Graphical Abstract: Time dependence of the energy per atom in the quenching of liquid iron nanoparticles A-C; nanoparticle C freezes directly into a single crystal, while A and B freeze to two-grain solids and eventually recrystallize into single crystals.

  11. A splitting algorithm for Vlasov simulation with filamentation filtration

    NASA Technical Reports Server (NTRS)

    Klimas, A. J.; Farrell, W. M.

    1994-01-01

    A Fourier-Fourier transformed version of the splitting algorithm for simulating solutions of the Vlasov-Poisson system of equations is introduced. It is shown that with the inclusion of filamentation filtration in this transformed algorithm it is both faster and more stable than the standard splitting algorithm. It is further shown that in a scalar computer environment this new algorithm is approximately equal in speed and far less noisy than its particle-in-cell counterpart. It is conjectured that in a multiprocessor environment the filtered splitting algorithm would be faster while producing more precise results.

  12. Performance of a parallel algorithm for standard cell placement on the Intel Hypercube

    NASA Technical Reports Server (NTRS)

    Jones, Mark; Banerjee, Prithviraj

    1987-01-01

    A parallel simulated annealing algorithm for standard cell placement on the Intel Hypercube is presented. A novel tree broadcasting strategy is used extensively for updating cell locations in the parallel environment. Studies on the performance of the algorithm on example industrial circuits show that it is faster and gives better final placement results than uniprocessor simulated annealing algorithms.

  13. Definition of general topological equivalence in protein structures. A procedure involving comparison of properties and relationships through simulated annealing and dynamic programming.

    PubMed

    Sali, A; Blundell, T L

    1990-03-20

    A protein is defined as an indexed string of elements at each level in the hierarchy of protein structure: sequence, secondary structure, super-secondary structure, etc. The elements, for example, residues or secondary structure segments such as helices or beta-strands, are associated with a series of properties and can be involved in a number of relationships with other elements. Element-by-element dissimilarity matrices are then computed and used in the alignment procedure based on the sequence alignment algorithm of Needleman & Wunsch, expanded by the simulated annealing technique to take into account relationships as well as properties. The utility of this method for exploring the variability of various aspects of protein structure and for comparing distantly related proteins is demonstrated by multiple alignment of serine proteinases, aspartic proteinase lobes and globins.
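
    The alignment step builds on the Needleman & Wunsch dynamic program. The sketch below shows the underlying global-alignment recursion with a simple match/mismatch score; in the method described above this score would be replaced by element-by-element dissimilarities over properties and relationships, refined by simulated annealing. The scoring constants here are illustrative.

```python
def needleman_wunsch(a, b, match=1.0, mismatch=-1.0, gap=-2.0):
    """Global alignment score by Needleman-Wunsch dynamic programming.

    a, b: sequences of comparable elements (residues, secondary-structure
    segments, ...); substitution here is a simple match/mismatch score."""
    n, m = len(a), len(b)
    F = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        F[i][0] = i * gap
    for j in range(1, m + 1):
        F[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            F[i][j] = max(F[i - 1][j - 1] + s,   # align a[i-1] with b[j-1]
                          F[i - 1][j] + gap,     # gap in b
                          F[i][j - 1] + gap)     # gap in a
    return F[n][m]
```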

  14. Duality quantum algorithm efficiently simulates open quantum systems

    NASA Astrophysics Data System (ADS)

    Wei, Shi-Jie; Ruan, Dong; Long, Gui-Lu

    2016-07-01

    Because of inevitable coupling with the environment, nearly all practical quantum systems are open systems, where the evolution is not necessarily unitary. In this paper, we propose a duality quantum algorithm for simulating the Hamiltonian evolution of an open quantum system. In contrast to the unitary evolution in a usual quantum computer, the evolution operator in a duality quantum computer is a linear combination of unitary operators. In this duality quantum algorithm, the time evolution of the open quantum system is realized by using Kraus operators, which are naturally implemented in a duality quantum computer. This duality quantum algorithm has two distinct advantages compared to existing quantum simulation algorithms with unitary evolution operations. Firstly, the query complexity of the algorithm is O(d^3), in contrast to O(d^4) for the existing unitary simulation algorithm, where d is the dimension of the open quantum system. Secondly, by using a truncated Taylor series of the evolution operators, this duality quantum algorithm provides an exponential improvement in precision compared with the previous unitary simulation algorithm.
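
    The Kraus-operator description of open-system evolution referred to above can be summarized as follows (standard textbook form, not specific to this paper's construction):

```latex
% Completely positive evolution of an open system expressed with Kraus
% operators E_k (trace preserving when \sum_k E_k^\dagger E_k = I):
\rho \;\longmapsto\; \mathcal{E}(\rho) \;=\; \sum_{k} E_k \,\rho\, E_k^{\dagger},
\qquad \sum_{k} E_k^{\dagger} E_k = I .
% In the duality-quantum-computing setting each E_k is expanded as a linear
% combination of unitaries, which the algorithm implements directly.
```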

  15. Duality quantum algorithm efficiently simulates open quantum systems.

    PubMed

    Wei, Shi-Jie; Ruan, Dong; Long, Gui-Lu

    2016-01-01

    Because of inevitable coupling with the environment, nearly all practical quantum systems are open systems, where the evolution is not necessarily unitary. In this paper, we propose a duality quantum algorithm for simulating the Hamiltonian evolution of an open quantum system. In contrast to the unitary evolution in a usual quantum computer, the evolution operator in a duality quantum computer is a linear combination of unitary operators. In this duality quantum algorithm, the time evolution of the open quantum system is realized by using Kraus operators, which are naturally implemented in a duality quantum computer. This duality quantum algorithm has two distinct advantages compared to existing quantum simulation algorithms with unitary evolution operations. Firstly, the query complexity of the algorithm is O(d^3), in contrast to O(d^4) for the existing unitary simulation algorithm, where d is the dimension of the open quantum system. Secondly, by using a truncated Taylor series of the evolution operators, this duality quantum algorithm provides an exponential improvement in precision compared with the previous unitary simulation algorithm. PMID:27464855

  16. Duality quantum algorithm efficiently simulates open quantum systems.

    PubMed

    Wei, Shi-Jie; Ruan, Dong; Long, Gui-Lu

    2016-07-28

    Because of inevitable coupling with the environment, nearly all practical quantum systems are open systems, where the evolution is not necessarily unitary. In this paper, we propose a duality quantum algorithm for simulating the Hamiltonian evolution of an open quantum system. In contrast to the unitary evolution in a usual quantum computer, the evolution operator in a duality quantum computer is a linear combination of unitary operators. In this duality quantum algorithm, the time evolution of the open quantum system is realized by using Kraus operators, which are naturally implemented in a duality quantum computer. This duality quantum algorithm has two distinct advantages compared to existing quantum simulation algorithms with unitary evolution operations. Firstly, the query complexity of the algorithm is O(d^3), in contrast to O(d^4) for the existing unitary simulation algorithm, where d is the dimension of the open quantum system. Secondly, by using a truncated Taylor series of the evolution operators, this duality quantum algorithm provides an exponential improvement in precision compared with the previous unitary simulation algorithm.

  17. Duality quantum algorithm efficiently simulates open quantum systems

    PubMed Central

    Wei, Shi-Jie; Ruan, Dong; Long, Gui-Lu

    2016-01-01

    Because of inevitable coupling with the environment, nearly all practical quantum systems are open systems, where the evolution is not necessarily unitary. In this paper, we propose a duality quantum algorithm for simulating the Hamiltonian evolution of an open quantum system. In contrast to the unitary evolution in a usual quantum computer, the evolution operator in a duality quantum computer is a linear combination of unitary operators. In this duality quantum algorithm, the time evolution of the open quantum system is realized by using Kraus operators, which are naturally implemented in a duality quantum computer. This duality quantum algorithm has two distinct advantages compared to existing quantum simulation algorithms with unitary evolution operations. Firstly, the query complexity of the algorithm is O(d^3), in contrast to O(d^4) for the existing unitary simulation algorithm, where d is the dimension of the open quantum system. Secondly, by using a truncated Taylor series of the evolution operators, this duality quantum algorithm provides an exponential improvement in precision compared with the previous unitary simulation algorithm. PMID:27464855

  18. Automated integration of genomic physical mapping data via parallel simulated annealing

    SciTech Connect

    Slezak, T.

    1994-06-01

    The Human Genome Center at the Lawrence Livermore National Laboratory (LLNL) is nearing closure on a high-resolution physical map of human chromosome 19. We have built automated tools to assemble 15,000 fingerprinted cosmid clones into 800 contigs with minimal spanning paths identified. These islands are being ordered, oriented, and spanned by a variety of other techniques, including fluorescence in situ hybridization (FISH) at three levels of resolution, ECO restriction fragment mapping across all contigs, and a multitude of different hybridization and PCR techniques to link cosmid, YAC, BAC, PAC, and P1 clones. The FISH data provide us with partial order and distance data as well as orientation. We made the observation that map builders need a much rougher presentation of data than do map readers; the former wish to see raw data, since these can expose errors or interesting biology. We further noted that by ignoring our length and distance data we could simplify our problem into one that could be readily attacked with optimization techniques. The data integration problem could then be seen as an M x N ordering of our N cosmid clones, which "intersect" M larger objects, by defining "intersection" to mean either contig/map membership or hybridization results. Clearly, the goal of making an integrated map is now to rearrange the N cosmid clone "columns" such that the number of gaps on the object "rows" is minimized. Our FISH partially-ordered cosmid clones provide us with a set of constraints that cannot be violated by the rearrangement process. We solved the optimization problem via simulated annealing performed on a network of 40+ Unix machines in parallel, using a server/client model built on explicit socket calls. For current maps we can create a map in about 4 hours on the parallel net versus 4+ days on a single workstation. Our biologists are now using this software on a daily basis to guide their efforts toward final closure.
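
    The integration objective sketched above (rearrange clone columns so that the rows for larger objects have as few internal gaps as possible) can be written down compactly. The snippet below counts row gaps for a candidate column order, assuming a binary membership matrix; a standard swap-move simulated-annealing loop over column permutations, rejecting moves that violate the FISH partial-order constraints, would then minimize it. The names and matrix encoding are assumptions.

```python
def row_gaps(matrix, order):
    """Count gaps: for each row, the number of maximal runs of absent clones
    lying between the first and last present clone under the column order."""
    gaps = 0
    for row in matrix:
        seq = [row[c] for c in order]
        first = next((k for k, v in enumerate(seq) if v), None)
        if first is None:
            continue  # object hit by no clone in this matrix
        last = len(seq) - 1 - next(k for k, v in enumerate(reversed(seq)) if v)
        in_gap = False
        for v in seq[first:last + 1]:
            if not v and not in_gap:
                gaps += 1
                in_gap = True
            elif v:
                in_gap = False
    return gaps
```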

  19. Architecture and algorithm of a circuit simulator

    NASA Astrophysics Data System (ADS)

    Marranghello, Norian; Damiani, Furio

    1990-11-01

    Software-based circuit simulators have achieved a ten-fold speed improvement over the last 15 years. Despite this, they are not fast enough to deal cost-effectively with current VLSI circuits. In this paper we describe the current status of the ABACUS circuit simulator project, which takes advantage of both dedicated hardware to speed up circuit simulation and a new methodology in which each parallel processor behaves like a circuit element.

  20. Electrode materials, thermal annealing sequences, and lateral/vertical phase separation of polymer solar cells from multiscale molecular simulations.

    PubMed

    Lee, Cheng-Kuang; Wodo, Olga; Ganapathysubramanian, Baskar; Pao, Chun-Wei

    2014-12-10

    The nanomorphologies of the bulk heterojunction (BHJ) layer of polymer solar cells are extremely sensitive to the electrode materials and thermal annealing conditions. In this work, the correlations between electrode materials, thermal annealing sequences, and the resultant BHJ nanomorphological details of a P3HT:PCBM BHJ polymer solar cell are studied by a series of large-scale, coarse-grained (CG) molecular simulations of a system comprising PEDOT:PSS/P3HT:PCBM/Al layers. Simulations are performed for various configurations of electrode materials as well as processing temperatures. The complex CG molecular data are characterized using a novel extension of our graph-based framework to quantify morphology and establish a link between morphology and processing conditions. Our analysis indicates that vertical phase segregation of the P3HT:PCBM blend depends strongly on the electrode material and the thermal annealing schedule. A thin P3HT-rich film forms on top, regardless of the bottom electrode material, when the BHJ layer is exposed to the free surface during thermal annealing. In addition, preferential segregation of P3HT chains and PCBM molecules toward the PEDOT:PSS and Al electrodes, respectively, is observed. Detailed morphology analysis indicated that, surprisingly, vertical phase segregation does not affect the connectivity of donor/acceptor domains with their respective electrodes. However, the formation of P3HT/PCBM depletion zones next to the P3HT/PCBM-rich zones can be a potential bottleneck for electron/hole transport due to the increase in transport pathway length. Analysis in terms of the fractions of intra- and interchain charge transport revealed that the processing schedule affects the average vertical orientation of the polymer chains, which may be crucial for enhanced charge transport, nongeminate recombination, and charge collection. The present study establishes a more detailed link between processing and morphology by combining a multiscale molecular simulation framework with an

  1. Fast simulated annealing and adaptive Monte Carlo sampling based parameter optimization for dense optical-flow deformable image registration of 4DCT lung anatomy

    NASA Astrophysics Data System (ADS)

    Dou, Tai H.; Min, Yugang; Neylon, John; Thomas, David; Kupelian, Patrick; Santhanam, Anand P.

    2016-03-01

    Deformable image registration (DIR) is an important step in radiotherapy treatment planning. An optimal input registration parameter set is critical to achieving the best registration performance with a specific algorithm. In this paper, we investigated a parameter optimization strategy for optical-flow based DIR of the 4DCT lung anatomy. A novel fast simulated annealing with adaptive Monte Carlo sampling algorithm (FSA-AMC) was investigated for solving the complex non-convex parameter optimization problem. The registration error metric for a given parameter set was computed as the landmark-based mean target registration error (mTRE) between a given volumetric image pair. To reduce the computational time of the parameter optimization process, a GPU-based 3D dense optical-flow algorithm was employed for registering the lung volumes. Numerical analyses of the parameter optimization for the DIR were performed using 4DCT datasets generated with breathing motion models and open-source 4DCT datasets. The results showed that the proposed method efficiently estimated the optimum parameters for optical flow and closely matched the best registration parameters obtained using an exhaustive parameter search method.
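
    The error metric mentioned above, the landmark-based mean target registration error (mTRE), reduces to an average of Euclidean distances between corresponding landmarks after the deformation is applied. A minimal sketch, with assumed array shapes and an assumed deformation callable, is:

```python
import numpy as np

def mean_target_registration_error(fixed_landmarks, moving_landmarks, deform):
    """Landmark-based mean target registration error (mTRE).

    fixed_landmarks, moving_landmarks: (n, 3) arrays of corresponding points;
    deform: callable mapping (n, 3) moving-image points to their positions
    after applying the displacement field produced by the DIR run."""
    mapped = deform(moving_landmarks)
    return float(np.linalg.norm(mapped - fixed_landmarks, axis=1).mean())
```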

  2. Extrapolated gradientlike algorithms for molecular dynamics and celestial mechanics simulations.

    PubMed

    Omelyan, I P

    2006-09-01

    A class of symplectic algorithms is introduced to integrate the equations of motion in many-body systems. The algorithms are derived on the basis of an advanced gradientlike decomposition approach. Its main advantage over the standard gradient scheme is the avoidance of time-consuming evaluations of force gradients by force extrapolation without any loss of precision. As a result, the efficiency of the integration improves significantly. The algorithms obtained are analyzed and optimized using an error-function theory. The best among them are tested in actual molecular dynamics and celestial mechanics simulations for comparison with well-known nongradient and gradient algorithms such as the Störmer-Verlet, Runge-Kutta, Cowell-Numerov, Forest-Ruth, Suzuki-Chin, and others. It is demonstrated that for moderate and high accuracy, the extrapolated algorithms should be considered as the most efficient for the integration of motion in molecular dynamics simulations. PMID:17025782
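
    For reference, the Störmer-Verlet scheme used as one of the comparison baselines can be written in its velocity form as below; this is the standard textbook integrator, not the extrapolated gradient-like algorithms introduced in the paper.

```python
import numpy as np

def velocity_verlet(x, v, force, mass, dt, n_steps):
    """Velocity (Stormer-)Verlet integration of Newton's equations.

    x, v: (n, 3) arrays of positions and velocities; force(x) returns (n, 3)
    forces; mass: (n, 1) array. The scheme is symplectic and time-reversible."""
    f = force(x)
    for _ in range(n_steps):
        v = v + 0.5 * dt * f / mass       # half kick
        x = x + dt * v                    # drift
        f = force(x)
        v = v + 0.5 * dt * f / mass       # half kick with the new forces
    return x, v
```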

  3. Extrapolated gradientlike algorithms for molecular dynamics and celestial mechanics simulations.

    PubMed

    Omelyan, I P

    2006-09-01

    A class of symplectic algorithms is introduced to integrate the equations of motion in many-body systems. The algorithms are derived on the basis of an advanced gradientlike decomposition approach. Its main advantage over the standard gradient scheme is the avoidance of time-consuming evaluations of force gradients by force extrapolation without any loss of precision. As a result, the efficiency of the integration improves significantly. The algorithms obtained are analyzed and optimized using an error-function theory. The best among them are tested in actual molecular dynamics and celestial mechanics simulations for comparison with well-known nongradient and gradient algorithms such as the Störmer-Verlet, Runge-Kutta, Cowell-Numerov, Forest-Ruth, Suzuki-Chin, and others. It is demonstrated that for moderate and high accuracy, the extrapolated algorithms should be considered as the most efficient for the integration of motion in molecular dynamics simulations.

  4. Fully explicit algorithms for fluid simulation

    NASA Astrophysics Data System (ADS)

    Clausen, Jonathan

    2011-11-01

    Computing hardware is trending towards distributed, massively parallel architectures in order to achieve high computational throughput. For example, Intrepid at Argonne uses 163,840 cores, and next generation machines, such as Sequoia at Lawrence Livermore, will use over one million cores. Harnessing the increasingly parallel nature of computational resources will require algorithms that scale efficiently on these architectures. The advent of GPU-based computation will serve to accelerate this behavior, as a single GPU contains hundreds of processor "cores." Explicit algorithms avoid the communication associated with a linear solve, thus the parallel scalability of these algorithms is typically high. This work will explore the efficiency and accuracy of three explicit solution methodologies for the Navier-Stokes equations: traditional artificial compressibility schemes, the lattice-Boltzmann method, and the recently proposed kinetically reduced local Navier-Stokes equations [Borok, Ansumali, and Karlin (2007)]. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  5. Daylighting simulation: methods, algorithms, and resources

    SciTech Connect

    Carroll, William L.

    1999-12-01

    This document presents work conducted as part of Subtask C, "Daylighting Design Tools", Subgroup C2, "New Daylight Algorithms", of the IEA SHC Task 21 and the ECBCS Program Annex 29 "Daylight in Buildings". The search for and collection of daylighting analysis methods and algorithms led to two important observations. First, there is a wide range of needs for different types of methods to produce a complete analysis tool. These include: geometry; light modeling; characterization of the natural illumination resource; materials and component properties and representations; and usability issues (interfaces, interoperability, representation of analysis results, etc.). Second, very advantageously, there have been rapid advances in many basic methods in these areas, due to other forces. They are driven in part by the commercial computer graphics community (commerce, entertainment), the lighting industry, architectural rendering and visualization for projects, and academia (course materials, research). This has led to a very rich set of information resources that have direct applicability to the small daylighting analysis community, and much of this information is in fact available online. Because much of the information about methods and algorithms is now online, an innovative reporting strategy was used: the core formats are electronic and are used to produce a printed form only secondarily. The electronic forms include both online WWW pages and a downloadable .PDF file with the same appearance and content. Both electronic forms include live primary and indirect links to actual information sources on the WWW. In most cases, little additional commentary is provided regarding the information links or citations that are provided. This in turn allows the report to be very concise. The links are expected to speak for themselves. The report consists of only about 10+ pages, with about 100+ primary links, but with potentially thousands of indirect links. For purposes of

  6. Automated medial axis seeding and guided evolutionary simulated annealing for optimization of gamma knife radiosurgery treatment plans

    NASA Astrophysics Data System (ADS)

    Zhang, Pengpeng

    The Leksell Gamma Knife (LGK) is a tool for providing accurate stereotactic radiosurgical treatment of brain lesions, especially tumors. Currently, the treatment planning team "forward" plans radiation treatment parameters while viewing a series of 2D MR scans. This primarily manual process is cumbersome and time-consuming because of the difficulty in visualizing the large search space for the radiation parameters (i.e., shot overlap, number, location, size, and weight). I hypothesize that a computer-aided "inverse" planning procedure that utilizes tumor geometry and treatment goals could significantly improve the planning process and therapeutic outcome of LGK radiosurgery. My basic observation is that the treatment team is best at identifying the location of the lesion and prescribing a lethal, yet safe, radiation dose. The treatment planning computer is best at determining both the 3D tumor geometry and the optimal LGK shot parameters necessary to deliver a desirable dose pattern to the tumor while sparing adjacent normal tissue. My treatment planning procedure asks the neurosurgeon to identify the tumor and critical structures in MR images and the oncologist to prescribe a tumoricidal radiation dose. Computer assistance begins with geometric modeling of the 3D tumor's medial axis properties, starting with a new algorithm, a Gradient-Phase Plot (G-P Plot) decomposition of the tumor object's medial axis. I have found that medial axis seeding, while insufficient in most cases to produce an acceptable treatment plan on its own, greatly reduces the solution space for Guided Evolutionary Simulated Annealing (GESA) treatment plan optimization by specifying an initial estimate for shot number, size, and location, but not weight. These estimates are used to generate multiple initial plans, which become seed plans for GESA. The shot location and weight parameters evolve and compete in the GESA procedure. The GESA objective function optimizes tumor irradiation (i.e., as close to

  7. Concluding Report: Quantitative Tomography Simulations and Reconstruction Algorithms

    SciTech Connect

    Aufderheide, M B; Martz, H E; Slone, D M; Jackson, J A; Schach von Wittenau, A E; Goodman, D M; Logan, C M; Hall, J M

    2002-02-01

    In this report we describe the original goals and final achievements of this Laboratory Directed Research and Development project. The Quantitative Tomography Simulations and Reconstruction Algorithms project (99-ERD-015) was funded as a multi-directorate, three-year effort to advance the state of the art in radiographic simulation and tomographic reconstruction by improving simulation and including this simulation in the tomographic reconstruction process. The goals were to improve the accuracy of radiographic simulation and to couple advanced radiographic simulation tools with a robust, many-variable optimization algorithm. In this project, we were able to demonstrate accuracy in X-ray simulation at the 2% level, an improvement of roughly a factor of 5 in accuracy, and we successfully coupled our simulation tools with the CCG (Constrained Conjugate Gradient) optimization algorithm, allowing reconstructions that include spectral effects and blurring. Another result of the project was the assembly of a low-scatter X-ray imaging facility for use in nondestructive evaluation applications. We conclude with a discussion of future work.

  8. Final Technical Report "Multiscale Simulation Algorithms for Biochemical Systems"

    SciTech Connect

    Petzold, Linda R.

    2012-10-25

    Biochemical systems are inherently multiscale and stochastic. In microscopic systems formed by living cells, the small numbers of reactant molecules can result in dynamical behavior that is discrete and stochastic rather than continuous and deterministic. An analysis tool that respects these dynamical characteristics is the stochastic simulation algorithm (SSA; Gillespie, 1976), a numerical simulation procedure that is essentially exact for chemical systems that are spatially homogeneous or well stirred. Despite recent improvements, as a procedure that simulates every reaction event, the SSA is necessarily inefficient for most realistic problems. There are two main reasons for this, both arising from the multiscale nature of the underlying problem: (1) stiffness, i.e., the presence of multiple timescales, the fastest of which are stable; and (2) the need to include in the simulation both species that are present in relatively small quantities and should be modeled by a discrete stochastic process, and species that are present in larger quantities and are more efficiently modeled by a deterministic differential equation (or at some scale in between). This project has focused on the development of fast and adaptive algorithms, and the fundamental theory upon which they must be based, for the multiscale simulation of biochemical systems. Areas addressed by this project include: (1) theoretical and practical foundations for accelerated discrete stochastic simulation (tau-leaping); (2) dealing with stiffness (fast reactions) in an efficient and well-justified manner in discrete stochastic simulation; (3) development of adaptive multiscale algorithms for spatially homogeneous discrete stochastic simulation; and (4) development of high-performance SSA algorithms.
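
    The stochastic simulation algorithm (SSA) referenced above, in its direct-method form, draws an exponentially distributed waiting time from the total propensity and then picks a reaction in proportion to its propensity. A minimal sketch, with assumed function and argument names, is shown below; it is the baseline exact method, not the accelerated tau-leaping or hybrid schemes developed in the project.

```python
import math
import random

def gillespie_ssa(x0, propensities, stoich, t_end, rng=random):
    """Gillespie's direct-method stochastic simulation algorithm (SSA).

    x0: list of initial molecule counts; propensities(x) returns the list of
    reaction propensities a_j(x); stoich[j] is the state-change vector of
    reaction j. Returns the trajectory as a list of (time, state) pairs."""
    t, x = 0.0, list(x0)
    traj = [(t, list(x))]
    while t < t_end:
        a = propensities(x)
        a0 = sum(a)
        if a0 == 0.0:
            break                                   # no reaction can fire
        t += -math.log(1.0 - rng.random()) / a0     # exponential waiting time
        r, acc, j = rng.random() * a0, 0.0, 0
        for j, aj in enumerate(a):
            acc += aj
            if r < acc:
                break                               # reaction j is chosen
        x = [xi + dx for xi, dx in zip(x, stoich[j])]
        traj.append((t, list(x)))
    return traj
```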

  9. An algorithm for simulating fracture of cohesive-frictional materials

    SciTech Connect

    Nukala, Phani K; Sampath, Rahul S; Barai, Pallab

    2010-01-01

    Fracture of disordered frictional granular materials is dominated by an interfacial failure response characterized by de-cohesion followed by frictional sliding. To capture such an interfacial failure response, we introduce a cohesive-friction random fuse model (CFRFM), wherein the cohesive response of the interface is represented by a linear stress-strain response until a failure threshold, followed by a constant response at a threshold lower than the initial failure threshold to represent the interfacial frictional sliding mechanism. This paper presents an efficient algorithm for simulating fracture of such disordered frictional granular materials using the CFRFM. We note that, when applied to perfectly plastic disordered materials, our algorithm is both theoretically and numerically equivalent to the traditional tangent algorithm (Roux and Hansen 1992 J. Physique II 2 1007) used for such simulations. However, the algorithm is general and is capable of modeling discontinuous interfacial response. Our numerical simulations using the algorithm indicate that the local and global roughness exponents (ζ_loc and ζ, respectively) of the fracture surface are equal to each other, and the two-dimensional crack roughness exponent is estimated to be ζ_loc = ζ = 0.69 ± 0.03.

  10. Multipole Algorithms for Molecular Dynamics Simulation on High Performance Computers.

    NASA Astrophysics Data System (ADS)

    Elliott, William Dewey

    1995-01-01

    A fundamental problem in modeling large molecular systems with molecular dynamics (MD) simulations is the underlying N-body problem of computing the interactions between all pairs of N atoms. The simplest algorithm to compute pair-wise atomic interactions scales in runtime as O(N^2), making it impractical for interesting biomolecular systems, which can contain millions of atoms. Recently, several algorithms have become available that solve the N-body problem by computing the effects of all pair-wise interactions while scaling in runtime less than O(N^2). One algorithm, which scales as O(N) for a uniform distribution of particles, is called the Greengard-Rokhlin Fast Multipole Algorithm (FMA). This work describes an FMA-like algorithm called the Molecular Dynamics Multipole Algorithm (MDMA). The algorithm contains several features that are new to N-body algorithms. MDMA uses new, efficient series expansion equations to compute general 1/r^n potentials to arbitrary accuracy. In particular, the 1/r Coulomb potential and the 1/r^6 portion of the Lennard-Jones potential are implemented. The new equations are based on multivariate Taylor series expansions. In addition, MDMA uses a cell-to-cell interaction region of cells that is closely tied to worst-case error bounds. The worst-case error bounds for MDMA are derived in this work also. These bounds apply to other multipole algorithms as well. Several implementation enhancements are described which apply to MDMA as well as other N-body algorithms such as FMA and tree codes. The mathematics of the cell-to-cell interactions are converted to the Fourier domain for reduced operation count and faster computation. A relative indexing scheme was devised to locate cells in the interaction region, which allows efficient pre-computation of redundant information and prestorage of much of the cell-to-cell interaction. Also, MDMA was integrated into the MD program SIgMA to demonstrate the performance of the program over

  11. 1-Dimensional simulation of thermal annealing in a commercial nuclear power plant reactor pressure vessel wall section

    SciTech Connect

    Nakos, J.T.; Rosinski, S.T.; Acton, R.U.

    1994-11-01

    The objective of this work was to provide experimental heat transfer boundary condition and reactor pressure vessel (RPV) section thermal response data that can be used to benchmark computer codes that simulate thermal annealing of RPVs. This specific project was designed to provide the Electric Power Research Institute (EPRI) with experimental data that could be used to support the development of a thermal annealing model. A secondary benefit is to provide additional experimental data (e.g., thermal response of the concrete reactor cavity wall) that could be of use in an annealing demonstration project. The setup comprised a heater assembly; a 1.2 m × 1.2 m × 17.1 cm thick [4 ft × 4 ft × 6.75 in] section of an RPV (A533B ferritic steel with stainless steel cladding); a mockup of the "mirror" insulation between the RPV and the concrete reactor cavity wall; and a 25.4 cm [10 in] thick concrete wall, 2.1 m × 2.1 m [10 ft × 10 ft] square. Experiments were performed at temperature heat-up/cooldown rates of 7, 14, and 28 °C/hr [12.5, 25, and 50 °F/hr] as measured on the heated face. A peak temperature of 454 °C [850 °F] was maintained on the heated face until the concrete wall temperature reached equilibrium. The results are most representative of those RPV locations where the heat transfer would be one-dimensional. Temperature was measured at multiple locations on the heated and unheated faces of the RPV section and the concrete wall. Incident heat flux was measured on the heated face, and absorbed heat flux estimates were generated from temperature measurements and an inverse heat conduction code. Through-wall temperature differences, the concrete wall temperature response, and the heat flux absorbed into and incident on the RPV surface are presented. All of these data are useful to modelers developing codes to simulate RPV annealing.
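
    As a simplified illustration of the kind of one-dimensional transient-conduction calculation such benchmark data support, the sketch below advances an explicit finite-difference model of a single wall layer with a prescribed heated-face temperature ramp and an adiabatic far face. It ignores the insulation gap and concrete wall present in the experiment, and the names and boundary conditions are assumptions.

```python
import numpy as np

def heat_1d_explicit(T0, alpha, dx, dt, n_steps, heated_face_temp):
    """Explicit (FTCS) 1-D transient conduction through a wall section.

    T0: initial temperature profile, node 0 = heated face; alpha: thermal
    diffusivity (m^2/s); heated_face_temp(t) prescribes the ramp/soak on the
    heated face; the far face is treated as adiabatic."""
    T = np.array(T0, dtype=float)
    r = alpha * dt / dx**2
    assert r <= 0.5, "reduce dt or coarsen dx less: FTCS stability requires r <= 0.5"
    for n in range(n_steps):
        T_new = T.copy()
        T_new[1:-1] = T[1:-1] + r * (T[2:] - 2.0 * T[1:-1] + T[:-2])
        T_new[0] = heated_face_temp((n + 1) * dt)   # prescribed heated face
        T_new[-1] = T_new[-2]                       # adiabatic far face
        T = T_new
    return T
```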

  12. Simulating and Synthesizing Substructures Using Neural Network and Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Liu, Youhua; Kapania, Rakesh K.; VanLandingham, Hugh F.

    1997-01-01

    The feasibility of simulating and synthesizing substructures by computational neural network models is illustrated by investigating a statically indeterminate beam, using both 1-D and 2-D plane-stress models. The beam can be decomposed into two cantilevers with free-end loads. By training neural networks to simulate the cantilever responses to different loads, the original beam problem can be solved as a match-up between two subsystems under compatible interface conditions. Genetic algorithms are successfully used to solve the match-up problem. Simulated results are found to be in good agreement with the analytical or FEM solutions.

  13. Computational algorithms to simulate the steel continuous casting

    NASA Astrophysics Data System (ADS)

    Ramírez-López, A.; Soto-Cortés, G.; Palomar-Pardavé, M.; Romero-Romo, M. A.; Aguilar-López, R.

    2010-10-01

    Computational simulation is a very powerful tool to analyze industrial processes to reduce operating risks and improve profits from equipment. The present work describes the development of computational algorithms based on numerical methods to create a simulator for the continuous casting process, which is the most popular method to produce steel products for metallurgical industries. The kinematics of industrial processing was computationally reproduced using logically programmed subroutines. The steel cast by each strand was calculated using an iterative method nested in the main loop. The process was repeated at each time step (Δt) to calculate the casting time; simultaneously, the steel billets produced were counted and stored. The subroutines were used to create a computational representation of a continuous casting plant (CCP) and to display the simulation of the steel displacement through the CCP. These algorithms were developed to create a simulator using the programming language C++. Algorithms for computer animation of the continuous casting process were created using a graphical user interface (GUI). Finally, the simulator functionality was shown and validated by comparison with industrial information on the steel production of three casters.
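
    As an illustration only (the paper's simulator is written in C++ with a GUI; nothing below is taken from it), a minimal time-stepped main loop of the kind described above might look as follows; names such as casting_speed and billet_length are hypothetical.

```python
# Hedged sketch: advance cast steel on each strand by casting_speed * dt and count billets.
def simulate_casting(n_strands=3, casting_speed=0.02, billet_length=6.0,
                     dt=1.0, total_time=3600.0):
    """Return billets produced per strand and the simulated casting time."""
    cast_length = [0.0] * n_strands      # steel cast so far on each strand (m)
    billets = [0] * n_strands
    t = 0.0
    while t < total_time:
        for s in range(n_strands):
            cast_length[s] += casting_speed * dt     # strand kinematics per time step
            while cast_length[s] >= billet_length:   # cut and count a billet
                cast_length[s] -= billet_length
                billets[s] += 1
        t += dt
    return billets, t

print(simulate_casting())
```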

  14. A spectral unaveraged algorithm for free electron laser simulations

    SciTech Connect

    Andriyash, I.A.; Lehe, R.; Malka, V.

    2015-02-01

    We propose and discuss a numerical method to model electromagnetic emission from the oscillating relativistic charged particles and its coherent amplification. The developed technique is well suited for free electron laser simulations, but it may also be useful for a wider range of physical problems involving resonant field-particle interactions. The algorithm integrates the unaveraged coupled equations for the particles and the electromagnetic fields in a discrete spectral domain. Using this algorithm, it is possible to perform full three-dimensional or axisymmetric simulations of short-wavelength amplification. In this paper we describe the method, its implementation, and we present examples of free electron laser simulations comparing the results with the ones provided by commonly known free electron laser codes.

  15. Stochastic simulation algorithm for the quantum linear Boltzmann equation.

    PubMed

    Busse, Marc; Pietrulewicz, Piotr; Breuer, Heinz-Peter; Hornberger, Klaus

    2010-08-01

    We develop a Monte Carlo wave function algorithm for the quantum linear Boltzmann equation, a Markovian master equation describing the quantum motion of a test particle interacting with the particles of an environmental background gas. The algorithm leads to a numerically efficient stochastic simulation procedure for the most general form of this integrodifferential equation, which involves a five-dimensional integral over microscopically defined scattering amplitudes that account for the gas interactions in a nonperturbative fashion. The simulation technique is used to assess various limiting forms of the quantum linear Boltzmann equation, such as the limits of pure collisional decoherence and quantum Brownian motion, the Born approximation, and the classical limit. Moreover, we extend the method to allow for the simulation of the dissipative and decohering dynamics of superpositions of spatially localized wave packets, which enables the study of many physically relevant quantum phenomena, occurring e.g., in the interferometry of massive particles.

  16. Simulation of Algorithms for Pulse Timing in FPGAs.

    PubMed

    Haselman, Michael D; Hauck, Scott; Lewellen, Thomas K; Miyaoka, Robert S

    2007-01-01

    Modern Field Programmable Gate Arrays (FPGAs) are capable of performing complex discrete signal processing algorithms with clock rates well above 100 MHz. This, combined with FPGAs' low expense and ease of use, makes them an ideal technology for pulse timing, and they are a central part of our next generation of electronics for our pre-clinical PET scanner systems. To that end, our laboratory has been developing a pulse timing technique that uses pulse fitting to achieve timing resolution well below the sampling period of the analog to digital converter (ADC). While ADCs with sampling rates in excess of 400 MS/s exist, we feel that using ADCs with lower sampling rates has many advantages for positron emission tomography (PET) scanners. It is with this premise that we have started simulating timing algorithms using MATLAB in order to optimize the parameters before implementing the algorithm in Verilog. MATLAB simulations allow us to quickly investigate filter designs, ADC sampling rates and algorithms with real data before implementation in hardware. We report our results for a least squares fitting algorithm and a new version of a leading edge detector of PMT pulses.
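
    A hedged sketch of the least-squares idea follows (written in Python rather than the authors' MATLAB; the bi-exponential pulse shape, sampling rate, and all parameters are assumptions, not values from the paper): fit a parameterized pulse model to coarsely sampled ADC data to recover an arrival time finer than the sample period.

```python
# Least-squares pulse timing on synthetic, coarsely sampled data.
import numpy as np
from scipy.optimize import curve_fit

def pulse(t, t0, amplitude, tau_rise=5.0, tau_fall=40.0):
    """Simple PMT-like bi-exponential pulse starting at time t0 (ns); illustrative model."""
    dt = np.clip(t - t0, 0.0, None)
    return amplitude * (np.exp(-dt / tau_fall) - np.exp(-dt / tau_rise)) * (t >= t0)

true_t0, true_amp = 17.3, 1.0
samples_t = np.arange(0, 200, 10.0)        # 100 MS/s sampling (10 ns period), illustrative
rng = np.random.default_rng(1)
data = pulse(samples_t, true_t0, true_amp) + 0.01 * rng.standard_normal(samples_t.size)

popt, _ = curve_fit(lambda t, t0, a: pulse(t, t0, a), samples_t, data, p0=[10.0, 0.5])
print("fitted arrival time:", popt[0])     # sub-sample timing estimate
```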

  17. A generic algorithm for Monte Carlo simulation of proton transport

    NASA Astrophysics Data System (ADS)

    Salvat, Francesc

    2013-12-01

    A mixed (class II) algorithm for Monte Carlo simulation of the transport of protons, and other heavy charged particles, in matter is presented. The emphasis is on the electromagnetic interactions (elastic and inelastic collisions) which are simulated using strategies similar to those employed in the electron-photon code PENELOPE. Elastic collisions are described in terms of numerical differential cross sections (DCSs) in the center-of-mass frame, calculated from the eikonal approximation with the Dirac-Hartree-Fock-Slater atomic potential. The polar scattering angle is sampled by employing an adaptive numerical algorithm which allows control of interpolation errors. The energy transferred to the recoiling target atoms (nuclear stopping) is consistently described by transformation to the laboratory frame. Inelastic collisions are simulated from DCSs based on the plane-wave Born approximation (PWBA), making use of the Sternheimer-Liljequist model of the generalized oscillator strength, with parameters adjusted to reproduce (1) the electronic stopping power read from the input file, and (2) the total cross sections for impact ionization of inner subshells. The latter were calculated from the PWBA including screening and Coulomb corrections. This approach provides quite a realistic description of the energy-loss distribution in single collisions, and of the emission of X-rays induced by proton impact. The simulation algorithm can be readily modified to include nuclear reactions, when the corresponding cross sections and emission probabilities are available, and bremsstrahlung emission.

  18. A performance comparison of integration algorithms in simulating flexible structures

    NASA Technical Reports Server (NTRS)

    Howe, R. M.

    1989-01-01

    Asymptotic formulas for the characteristic root errors as well as transfer function gain and phase errors are presented for a number of traditional and new integration methods. Normalized stability regions in the λh plane are compared for the various methods. In particular, it is shown that a modified form of Euler integration with root matching is an especially efficient method for simulating lightly-damped structural modes. The method has been used successfully for structural bending modes in the real-time simulation of missiles. Performance of this algorithm is compared with other special algorithms, including the state-transition method. A predictor-corrector version of the modified Euler algorithm permits it to be extended to the simulation of nonlinear models of the type likely to be obtained when using the discretized structure approach. Performance of the different integration methods is also compared for integration step sizes larger than those for which the asymptotic formulas are valid. It is concluded that many traditional integration methods, such as RK-4, are not competitive in the simulation of lightly damped structures.

  19. Performance of a parallel algorithm for standard cell placement on the Intel Hypercube

    NASA Technical Reports Server (NTRS)

    Jones, Mark; Banerjee, Prithviraj

    1987-01-01

    A parallel simulated annealing algorithm for standard cell placement that is targeted to run on the Intel Hypercube is presented. A tree broadcasting strategy that is used extensively in our algorithm for updating cell locations in the parallel environment is presented. Studies on the performance of our algorithm on example industrial circuits show that it is faster and gives better final placement results than the uniprocessor simulated annealing algorithms.
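
    A minimal serial sketch of the simulated-annealing core for cell placement follows (the parallel tree-broadcast machinery of the paper is omitted; the half-perimeter wirelength cost, net model, and schedule parameters are illustrative assumptions).

```python
# Serial simulated annealing for cell placement: swap two cells, score the wirelength
# change, and accept with the Metropolis criterion while the temperature decreases.
import math, random

def wirelength(placement, nets):
    """Half-perimeter wirelength of all nets; placement maps cell -> (x, y)."""
    total = 0.0
    for net in nets:
        xs = [placement[c][0] for c in net]
        ys = [placement[c][1] for c in net]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

def anneal(placement, nets, t_start=10.0, t_end=0.01, alpha=0.95, moves_per_t=200):
    temp, cost = t_start, wirelength(placement, nets)
    cells = list(placement)
    while temp > t_end:
        for _ in range(moves_per_t):
            a, b = random.sample(cells, 2)
            placement[a], placement[b] = placement[b], placement[a]
            new_cost = wirelength(placement, nets)
            if new_cost <= cost or random.random() < math.exp((cost - new_cost) / temp):
                cost = new_cost                                           # accept
            else:
                placement[a], placement[b] = placement[b], placement[a]  # undo
        temp *= alpha
    return cost

# Toy example: 16 cells on a 4x4 grid, random 3-cell nets (all illustrative).
random.seed(0)
cells = [f"c{i}" for i in range(16)]
placement = {c: (i % 4, i // 4) for i, c in enumerate(cells)}
nets = [random.sample(cells, 3) for _ in range(20)]
print("initial:", wirelength(placement, nets), "annealed:", anneal(placement, nets))
```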

  20. Coalescent simulation in continuous space: algorithms for large neighbourhood size.

    PubMed

    Kelleher, J; Etheridge, A M; Barton, N H

    2014-08-01

    Many species have an essentially continuous distribution in space, in which there are no natural divisions between randomly mating subpopulations. Yet, the standard approach to modelling these populations is to impose an arbitrary grid of demes, adjusting deme sizes and migration rates in an attempt to capture the important features of the population. Such indirect methods are required because of the failure of the classical models of isolation by distance, which have been shown to have major technical flaws. A recently introduced model of extinction and recolonisation in two dimensions solves these technical problems, and provides a rigorous technical foundation for the study of populations evolving in a spatial continuum. The coalescent process for this model is simply stated, but direct simulation is very inefficient for large neighbourhood sizes. We present efficient and exact algorithms to simulate this coalescent process for arbitrary sample sizes and numbers of loci, and analyse these algorithms in detail. PMID:24910324

  1. Potts-model grain growth simulations: Parallel algorithms and applications

    SciTech Connect

    Wright, S.A.; Plimpton, S.J.; Swiler, T.P.

    1997-08-01

    Microstructural morphology and grain boundary properties often control the service properties of engineered materials. This report uses the Potts-model to simulate the development of microstructures in realistic materials. Three areas of microstructural morphology simulations were studied. They include the development of massively parallel algorithms for Potts-model grain growth simulations, modeling of mass transport via diffusion in these simulated microstructures, and the development of a gradient-dependent Hamiltonian to simulate columnar grain growth. Potts grain growth models for massively parallel supercomputers were developed for the conventional Potts-model in both two and three dimensions. Simulations using these parallel codes showed self-similar grain growth and no finite-size effects for previously unapproachable large scale problems. In addition, new enhancements to the conventional Metropolis algorithm used in the Potts-model were developed to accelerate the calculations. These techniques enable both the sequential and parallel algorithms to run faster and use essentially an infinite number of grain orientation values to avoid non-physical grain coalescence events. Mass transport phenomena in polycrystalline materials were studied in two dimensions using numerical diffusion techniques on microstructures generated using the Potts-model. The results of the mass transport modeling showed excellent quantitative agreement with one-dimensional diffusion problems; however, the results also suggest that transient multi-dimensional diffusion effects cannot be parameterized as the product of the grain boundary diffusion coefficient and the grain boundary width. Instead, both properties are required. Gradient-dependent grain growth mechanisms were included in the Potts-model by adding an extra term to the Hamiltonian. Under normal grain growth, the primary driving term is the curvature of the grain boundary, which is included in the standard Potts-model Hamiltonian.
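
    A hedged serial sketch of the basic Potts-model Monte Carlo step behind grain growth simulations (the massively parallel decomposition and the report's accelerated algorithms are not reproduced; lattice size, number of orientations, and temperature are illustrative): pick a site, propose a neighbour's orientation, and accept with the Metropolis rule on the change in unlike-neighbour bonds.

```python
import numpy as np

def potts_sweep(spins, temperature, rng):
    """One Metropolis sweep of a 2-D Potts lattice with periodic boundaries."""
    n = spins.shape[0]
    for _ in range(n * n):
        i, j = rng.integers(n, size=2)
        neighbours = [spins[(i + 1) % n, j], spins[(i - 1) % n, j],
                      spins[i, (j + 1) % n], spins[i, (j - 1) % n]]
        old, new = spins[i, j], rng.choice(neighbours)   # propose a neighbour's orientation
        d_energy = (sum(s != new for s in neighbours)
                    - sum(s != old for s in neighbours))  # change in unlike-neighbour bonds
        if d_energy <= 0 or rng.random() < np.exp(-d_energy / max(temperature, 1e-12)):
            spins[i, j] = new
    return spins

rng = np.random.default_rng(2)
lattice = rng.integers(0, 50, size=(64, 64))              # 50 grain orientations
for sweep in range(10):
    potts_sweep(lattice, temperature=0.3, rng=rng)
```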

  2. Exploring photometric redshifts as an optimization problem: an ensemble MCMC and simulated annealing-driven template-fitting approach

    NASA Astrophysics Data System (ADS)

    Speagle, Joshua S.; Capak, Peter L.; Eisenstein, Daniel J.; Masters, Daniel C.; Steinhardt, Charles L.

    2016-10-01

    Using a 4D grid of ˜2 million model parameters (Δz = 0.005) adapted from Cosmological Origins Survey photometric redshift (photo-z) searches, we investigate the general properties of template-based photo-z likelihood surfaces. We find these surfaces are filled with numerous local minima and large degeneracies that generally confound simplistic gradient-descent optimization schemes. We combine ensemble Markov Chain Monte Carlo sampling with simulated annealing to robustly and efficiently explore these surfaces in approximately constant time. Using a mock catalogue of 384 662 objects, we show our approach samples ˜40 times more efficiently compared to a `brute-force' counterpart while maintaining similar levels of accuracy. Our results represent first steps towards designing template-fitting photo-z approaches limited mainly by memory constraints rather than computation time.

  3. SMMR Simulator radiative transfer calibration model. 2: Algorithm development

    NASA Technical Reports Server (NTRS)

    Link, S.; Calhoon, C.; Krupp, B.

    1980-01-01

    Passive microwave measurements performed from Earth orbit can be used to provide global data on a wide range of geophysical and meteorological phenomena. A Scanning Multichannel Microwave Radiometer (SMMR) is being flown on the Nimbus-G satellite. The SMMR Simulator duplicates the frequency bands utilized in the spacecraft instruments through an amalgam of radiometer systems. The algorithm developed utilizes data from the fall 1978 NASA CV-990 Nimbus-G underflight test series and subsequent laboratory testing.

  4. A robust hybrid fuzzy-simulated annealing-intelligent water drops approach for tuning a distribution static compensator nonlinear controller in a distribution system

    NASA Astrophysics Data System (ADS)

    Bagheri Tolabi, Hajar; Hosseini, Rahil; Shakarami, Mahmoud Reza

    2016-06-01

    This article presents a novel hybrid optimization approach for a nonlinear controller of a distribution static compensator (DSTATCOM). The DSTATCOM is connected to a distribution system with the distributed generation units. The nonlinear control is based on partial feedback linearization. Two proportional-integral-derivative (PID) controllers regulate the voltage and track the output in this control system. In the conventional scheme, the trial-and-error method is used to determine the PID controller coefficients. This article uses a combination of a fuzzy system, simulated annealing (SA) and intelligent water drops (IWD) algorithms to optimize the parameters of the controllers. The obtained results reveal that the response of the optimized controlled system is effectively improved by finding a high-quality solution. The results confirm that using the tuning method based on the fuzzy-SA-IWD can significantly decrease the settling and rising times, the maximum overshoot and the steady-state error of the voltage step response of the DSTATCOM. The proposed hybrid tuning method for the partial feedback linearizing (PFL) controller achieved better regulation of the direct current voltage for the capacitor within the DSTATCOM. Furthermore, in the event of a fault, the controller tuned by the fuzzy-SA-IWD method outperformed both the conventional controller and the PFL controller without fuzzy-SA-IWD optimization with regard to fault duration and clearing times.

  5. Auto-accumulation method using simulated annealing enables fully automatic particle pickup completely free from a matching template or learning data.

    PubMed

    Ogura, Toshihiko; Sato, Chikara

    2004-06-01

    Single-particle analysis is a 3-D structure determination method using electron microscopy (EM). In this method, a large number of projections is required to create a 3-D reconstruction. In order to enable completely automatic pickup without a matching template or a training data set, we established a new method in which the frames used to pick up particles are randomly shifted and rotated over the electron micrograph and, using the total average image of the framed images as an index, each frame reaches a particle. In this process, shifts are selected to increase the contrast of the average. By iterated shifts and further selection of the shifts, the frames are induced to shift so as to surround particles. In this algorithm, hundreds of frames are initially distributed randomly over the electron micrograph in which multi-particle images are dispersed. Starting with these frames, one of them is selected and shifted randomly, and acceptance or non-acceptance of its new position is judged using the simulated annealing (SA) method, in which the contrast score of the total average image is adopted as an index. After iteration of this process, the position of each frame converges so as to surround a particle and the framed images are picked up. This method is the first unsupervised, fully automatic particle picking method applicable to EM of various kinds of proteins, especially to low-contrast cryo-EM protein images.
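
    A hedged toy sketch of the core idea on a synthetic image follows (none of the numbers or names come from the paper): frames are shifted at random, and a move is accepted by simulated annealing when it raises the contrast, here the variance, of the average of all framed sub-images.

```python
import numpy as np

rng = np.random.default_rng(3)
image = rng.normal(0.0, 0.1, (256, 256))
for cx, cy in rng.integers(20, 236, size=(30, 2)):      # 30 synthetic "particles"
    image[cx - 5:cx + 5, cy - 5:cy + 5] += 1.0

def contrast(frames, size=16):
    """Variance of the average of all framed sub-images (the SA score)."""
    stack = [image[x:x + size, y:y + size] for x, y in frames]
    return np.var(np.mean(stack, axis=0))

frames = [tuple(p) for p in rng.integers(0, 240, size=(50, 2))]
temp, score = 1.0, contrast(frames)
for step in range(5000):
    k = rng.integers(len(frames))
    old = frames[k]
    frames[k] = (int(np.clip(old[0] + rng.integers(-3, 4), 0, 240)),
                 int(np.clip(old[1] + rng.integers(-3, 4), 0, 240)))
    new_score = contrast(frames)
    if new_score >= score or rng.random() < np.exp((new_score - score) / temp):
        score = new_score            # accept the shifted frame position
    else:
        frames[k] = old              # reject and restore
    temp *= 0.999                    # cooling schedule (illustrative)
```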

  6. Sampling of general correlators in worm-algorithm based simulations

    NASA Astrophysics Data System (ADS)

    Rindlisbacher, Tobias; Åkerlund, Oscar; de Forcrand, Philippe

    2016-08-01

    Using the complex ϕ⁴ model as a prototype for a system which is simulated by a worm algorithm, we show that not only the charged correlator ⟨ϕ*(x)ϕ(y)⟩, but also more general correlators such as ⟨|ϕ(x)||ϕ(y)|⟩ or ⟨arg(ϕ(x)) arg(ϕ(y))⟩, as well as condensates like ⟨|ϕ|⟩, can be measured at every step of the Monte Carlo evolution of the worm instead of on closed-worm configurations only. The method generalizes straightforwardly to other systems simulated by worms, such as spin or sigma models.

  7. Simulation of multicorrelated random processes using the FFT algorithm

    NASA Technical Reports Server (NTRS)

    Wittig, L. E.; Sinha, A. K.

    1975-01-01

    A technique for the digital simulation of multicorrelated Gaussian random processes is described. This technique is based upon generating discrete frequency functions which correspond to the Fourier transform of the desired random processes, and then using the fast Fourier transform (FFT) algorithm to obtain the actual random processes. The main advantage of this method of simulation over other methods is computation time; it appears to be more than an order of magnitude faster than present methods of simulation. One of the main uses of multicorrelated simulated random processes is in solving nonlinear random vibration problems by numerical integration of the governing differential equations. The response of a nonlinear string to a distributed noise input is presented as an example.
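
    A hedged sketch of the FFT idea for a single process follows (the paper handles multiple correlated processes; the one-pole target spectrum and all parameters here are assumptions): draw spectral amplitudes from the target power spectrum, attach random phases, and inverse-FFT to obtain an approximately Gaussian time series.

```python
import numpy as np

def simulate_process(n=4096, dt=0.01, corner_hz=5.0, rng=np.random.default_rng(4)):
    freqs = np.fft.rfftfreq(n, dt)
    psd = 1.0 / (1.0 + (freqs / corner_hz) ** 2)           # target power spectrum
    amplitude = np.sqrt(psd)
    phases = np.exp(1j * rng.uniform(0, 2 * np.pi, freqs.size))
    spectrum = amplitude * phases
    spectrum[0] = 0.0                                      # zero-mean process
    return np.fft.irfft(spectrum, n=n)                     # time series via inverse FFT

x = simulate_process()
print(x.shape, x.std())
```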

  8. Motion Cueing Algorithm Modification for Improved Turbulence Simulation

    NASA Technical Reports Server (NTRS)

    Ercole, Anthony V.; Cardullo, Frank M.; Zaychik, Kirill; Kelly, Lon C.; Houck, Jacob

    2009-01-01

    Atmospheric turbulence cueing produced by flight simulator motion systems has been less than satisfactory because the turbulence profiles have been attenuated by the motion cueing algorithms. Cardullo and Ellor initially addressed this problem by directly porting the turbulence model output to the motion system. Reid and Robinson addressed the problem by employing a parallel aircraft model, which is only stimulated by the turbulence inputs and adding a filter specially designed to pass the higher turbulence frequencies. There have been advances in motion cueing algorithm development at the Man-Machine Systems Laboratory, at SUNY Binghamton. In particular, the system used to generate turbulence cues has been studied. The Reid approach, implemented by Telban and Cardullo, was employed to augment the optimal motion cueing algorithm installed at the NASA LaRC Simulation Laboratory, driving the Visual Motion Simulator. In this implementation, the output of the primary flight channel was added to the output of the turbulence channel and then sent through a non-linear cueing filter. The cueing filter is an adaptive filter; therefore, it is not desirable for the output of the turbulence channel to be augmented by this type of filter. The likelihood of the signal becoming divergent was also an issue in this design. After testing on-site it became apparent that the architecture of the turbulence algorithm was generating unacceptable cues. As mentioned above, this cueing algorithm comprised a filter that was designed to operate at low bandwidth. Therefore, the turbulence was also filtered, augmenting the cues generated by the model. If any filtering is to be done to the turbulence, it will utilize a filter with a much higher bandwidth, above the frequencies produced by the aircraft response to turbulence. The authors have developed an implementation wherein only the signal from the primary flight channel passes through the nonlinear cueing filter. This paper discusses three

  9. Massively parallel algorithms for trace-driven cache simulations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Greenberg, Albert G.; Lubachevsky, Boris D.

    1991-01-01

    Trace driven cache simulation is central to computer design. A trace is a very long sequence of reference lines from main memory. At the t-th instant, reference x_t is hashed into a set of cache locations, the contents of which are then compared with x_t. If at the t-th instant x_t is not present in the cache, then it is said to be a miss, and is loaded into the cache set, possibly forcing the replacement of some other memory line, and making x_t present for the (t+1)-st instant. The problem of parallel simulation of a subtrace of N references directed to a C line cache set is considered, with the aim of determining which references are misses and related statistics. A simulation method is presented for the Least Recently Used (LRU) policy, which regardless of the set size C runs in time O(log N) using N processors on the exclusive read, exclusive write (EREW) parallel model. A simpler LRU simulation algorithm is given that runs in O(C log N) time using N/log N processors. Timings are presented of the second algorithm's implementation on the MasPar MP-1, a machine with 16384 processors. A broad class of reference based line replacement policies are considered, which includes LRU as well as the Least Frequently Used and Random replacement policies. A simulation method is presented for any such policy that on any trace of length N directed to a C line set runs in O(C log N) time with high probability using N processors on the EREW model. The algorithms are simple, have very little space overhead, and are well suited for SIMD implementation.

  10. Algorithm for Simulating Atmospheric Turbulence and Aeroelastic Effects on Simulator Motion Systems

    NASA Technical Reports Server (NTRS)

    Ercole, Anthony V.; Cardullo, Frank M.; Kelly, Lon C.; Houck, Jacob A.

    2012-01-01

    Atmospheric turbulence produces high frequency accelerations in aircraft, typically greater than the response to pilot input. Motion system equipped flight simulators must present cues representative of the aircraft response to turbulence in order to maintain the integrity of the simulation. Currently, turbulence motion cueing produced by flight simulator motion systems has been less than satisfactory because the turbulence profiles have been attenuated by the motion cueing algorithms. This report presents a new turbulence motion cueing algorithm, referred to as the augmented turbulence channel. Like the previous turbulence algorithms, the output of the channel only augments the vertical degree of freedom of motion. This algorithm employs a parallel aircraft model and an optional high bandwidth cueing filter. Simulation of aeroelastic effects is also an area where frequency content must be preserved by the cueing algorithm. The current aeroelastic implementation uses a similar secondary channel that supplements the primary motion cue. Two studies were conducted using the NASA Langley Visual Motion Simulator and Cockpit Motion Facility to evaluate the effect of the turbulence channel and aeroelastic model on pilot control input. Results indicate that the pilot is better correlated with the aircraft response, when the augmented channel is in place.

  11. An Initial Examination for Verifying Separation Algorithms by Simulation

    NASA Technical Reports Server (NTRS)

    White, Allan L.; Neogi, Natasha; Herencia-Zapana, Heber

    2012-01-01

    An open question in algorithms for aircraft is what can be validated by simulation where the simulation shows that the probability of undesirable events is below some given level at some confidence level. The problem is including enough realism to be convincing while retaining enough efficiency to run the large number of trials needed for high confidence. The paper first proposes a goal based on the number of flights per year in several regions. The paper examines the probabilistic interpretation of this goal and computes the number of trials needed to establish it at an equivalent confidence level. Since any simulation is likely to consider the algorithms for only one type of event and there are several types of events, the paper examines under what conditions this separate consideration is valid. This paper is an initial effort, and as such, it considers separation maneuvers, which are elementary but include numerous aspects of aircraft behavior. The scenario includes decisions under uncertainty since the position of each aircraft is only known to the other by broadcasting where GPS believes each aircraft to be (ADS-B). Each aircraft operates under feedback control with perturbations. It is shown that a scenario three or four orders of magnitude more complex is feasible. The question of what can be validated by simulation remains open, but there is reason to be optimistic.
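
    For the trial-count step mentioned above, one standard back-of-envelope calculation (an assumption about the intended statistics, not the paper's exact derivation) uses the zero-failure binomial bound: observing no undesirable event in n independent trials demonstrates P(event) < p at confidence C once (1 - p)^n ≤ 1 - C.

```python
import math

def trials_needed(p_target, confidence):
    """Smallest n with zero observed failures that demonstrates P(event) < p_target."""
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - p_target))

print(trials_needed(1e-7, 0.95))   # roughly 3.0e7 trials for p < 1e-7 at 95% confidence
```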

  12. Microwave annealing

    NASA Astrophysics Data System (ADS)

    Lee, Yao-Jen; Cho, T.-C.; Chuang, S.-S.; Hsueh, F.-K.; Lu, Y.-L.; Sung, P.-J.; Chen, S.-J.; Lo, C.-H.; Lai, C.-H.; Current, Michael I.; Tseng, T.-Y.; Chao, T.-S.; Yang, F.-L.

    2012-11-01

    Microwave annealing of dopants in Si has been reported to produce highly activated junctions at temperatures far below those needed for comparable results using conventional thermal processes. However, the details of the kinetics and mechanisms of microwave annealing are far from well understood. Comparisons between MWA and RTA of dopants in implanted Si have been carried out with the aim of producing highly activated junctions. First, As, 31P, and BF2 implants in a Si substrate were annealed by MWA at temperatures below 550 °C.

  13. A New Simulation Algorithm Combining Fluid and Kinetic Properties

    NASA Astrophysics Data System (ADS)

    Larson, David; Hewett, Dennis

    2007-11-01

    Complex Particle Kinetics (CPK) [1,2] uses particles with internal degrees of freedom in an effort to simulate the transition between continuum and kinetic dynamics. Recent work [3] has provided a new path towards extending the adaptive particle capabilities of CPK. The resulting algorithm bridges the gap between fluid and kinetic regimes. The method uses an ensemble of macro-particles with a Gaussian spatial profile and a Maxwellian velocity distribution to represent particle distributions in phase space. In addition to the standard PIC quantities of location, drift velocity, mass, and charge, the macro-particles also carry width, thermal velocity, and an internal velocity. The particle shape, internal velocity, and drift velocity respond to internal and external forces. The particles can contract, expand, rotate, and pass through one another. The algorithm allows arbitrary collisionality and functions effectively in the collision-dominated limit. We will present details of the algorithm as well as the results from several simulations. [1] D. W. Hewett, J. Comp. Phys. 189 (2003). [2] D. J. Larson, J. Comp. Phys. 188 (2003). [3] C. Gauger, et al., SIAM J. Numer. Anal. 37 (2000).

  14. Monte Carlo simulation algorithm for B-DNA.

    PubMed

    Howell, Steven C; Qiu, Xiangyun; Curtis, Joseph E

    2016-11-01

    Understanding the structure-function relationship of biomolecules containing DNA has motivated experiments aimed at determining molecular structure using methods such as small-angle X-ray and neutron scattering (SAXS and SANS). SAXS and SANS are useful for determining macromolecular shape in solution, a process which benefits from using atomistic models that reproduce the scattering data. The algorithms available for creating and modifying model DNA structures lack the ability to rapidly modify all-atom models to generate structure ensembles. This article describes a Monte Carlo algorithm for simulating DNA, not with the goal of predicting an equilibrium structure, but rather to generate an ensemble of plausible structures which can be filtered using experimental results to identify a sub-ensemble of conformations that reproduce the solution scattering of DNA macromolecules. The algorithm generates an ensemble of atomic structures through an iterative cycle in which B-DNA is represented using a wormlike bead-rod model, new configurations are generated by sampling bend and twist moves, then atomic detail is recovered by back mapping from the final coarse-grained configuration. Using this algorithm on commodity computing hardware, one can rapidly generate an ensemble of atomic level models, each model representing a physically realistic configuration that could be further studied using molecular dynamics. © 2016 Wiley Periodicals, Inc. PMID:27671358
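
    A hedged, much simplified sketch of the coarse-grained sampling step follows (a 2-D bead-rod chain with bend moves only; the paper's model also includes twist moves and back-mapping to full atomic detail; the stiffness and move sizes are illustrative assumptions).

```python
import numpy as np

rng = np.random.default_rng(5)
n_rods, stiffness = 100, 50.0          # bending stiffness in units of kT per rad^2
angles = np.zeros(n_rods)              # 2-D chain: rod k points in direction angles[k]

def bend_energy(a):
    return 0.5 * stiffness * np.sum(np.diff(a) ** 2)

energy = bend_energy(angles)
for step in range(20000):
    k = rng.integers(1, n_rods)        # keep the first rod fixed
    trial = angles.copy()
    trial[k] += rng.normal(0.0, 0.1)   # propose a small bend move
    e_new = bend_energy(trial)
    if e_new <= energy or rng.random() < np.exp(energy - e_new):
        angles, energy = trial, e_new  # Metropolis acceptance

# Recover bead coordinates from the sampled rod directions (back-mapping analogue).
x = np.concatenate([[0.0], np.cumsum(np.cos(angles))])
y = np.concatenate([[0.0], np.cumsum(np.sin(angles))])
```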

  15. To Propose an Algorithm for Team Forming: Simulated Annealing K Team-Forming Algorithm for Heterogeneous Grouping.

    ERIC Educational Resources Information Center

    Zhi-Feng Liu, Eric

    2005-01-01

    In recent studies, some researchers have sought an answer to how to form a perfect dream team. Various grouping methods have been proposed, e.g., random assignment, homogeneous grouping by personality or achievement, and heterogeneous grouping by personality or achievement. Some instructors could put some students in a team better…

  16. Simulations and measurements of annealed pyrolytic graphite-metal composite baseplates

    NASA Astrophysics Data System (ADS)

    Streb, F.; Ruhl, G.; Schubert, A.; Zeidler, H.; Penzel, M.; Flemmig, S.; Todaro, I.; Squatrito, R.; Lampke, T.

    2016-03-01

    We investigated the usability of anisotropic materials as inserts in aluminum-matrix-composite baseplates for typical high performance power semiconductor modules using finite-element simulations and transient plane source measurements. For simulations, several physical modules can be used, which are suitable for different thermal boundary conditions. By comparing different modules and options of heat transfer we found non-isothermal simulations to be closest to reality for temperature distribution at the surface of the heat sink. We optimized the geometry of the graphite inserts for best heat dissipation and based on these results evaluated the thermal resistance of a typical power module using calculation time optimized steady-state simulations. Here we investigated the influence of thermal contact conductance (TCC) between metal matrix and inserts on the heat dissipation. We found improved heat dissipation compared to the plain metal baseplate for a TCC of 200 kW/m²/K and above. To verify the simulations we evaluated cast composite baseplates with two different insert geometries and measured their averaged lateral thermal conductivity using a transient plane source (HotDisk) technique at room temperature. For the composite baseplate we achieved local improvements in heat dissipation compared to the plain metal baseplate.

  17. Simulating mesoscopic reaction-diffusion systems using the Gillespie algorithm

    SciTech Connect

    Bernstein, David

    2004-12-12

    We examine an application of the Gillespie algorithm to simulating spatially inhomogeneous reaction-diffusion systems in mesoscopic volumes such as cells and microchambers. The method involves discretizing the chamber into elements and modeling the diffusion of chemical species by the movement of molecules between neighboring elements. These transitions are expressed in the form of a set of reactions which are added to the chemical system. The derivation of the rates of these diffusion reactions is by comparison with a finite volume discretization of the heat equation on an unevenly spaced grid. The diffusion coefficient of each species is allowed to be inhomogeneous in space, including discontinuities. The resulting system is solved by the Gillespie algorithm using the fast direct method. We show that in an appropriate limit the method reproduces exact solutions of the heat equation for a purely diffusive system and the nonlinear reaction-rate equation describing the cubic autocatalytic reaction.
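
    For reference, a minimal sketch of Gillespie's direct method for a well-mixed system follows (the paper's reaction-diffusion variant adds "diffusion reactions" that hop molecules between neighbouring volume elements; the species and rates below are illustrative assumptions).

```python
import numpy as np

def gillespie(x, rates, stoich, t_end, rng=np.random.default_rng(6)):
    """x: initial counts; rates(x) -> propensities; stoich: state change per reaction."""
    t, traj = 0.0, [(0.0, x.copy())]
    while t < t_end:
        a = rates(x)
        a_total = a.sum()
        if a_total == 0.0:
            break
        t += rng.exponential(1.0 / a_total)          # time to next reaction
        r = rng.choice(len(a), p=a / a_total)        # which reaction fires
        x = x + stoich[r]
        traj.append((t, x.copy()))
    return traj

# Example: A -> B with rate c1*A and B -> A with rate c2*B (illustrative system).
c1, c2 = 0.5, 0.2
rates = lambda x: np.array([c1 * x[0], c2 * x[1]])
stoich = np.array([[-1, 1], [1, -1]])
print(len(gillespie(np.array([100, 0]), rates, stoich, t_end=20.0)))
```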

  18. An algorithm for protein engineering: simulations of recursive ensemble mutagenesis.

    PubMed Central

    Arkin, A P; Youvan, D C

    1992-01-01

    An algorithm for protein engineering, termed recursive ensemble mutagenesis, has been developed to produce diverse populations of phenotypically related mutants whose members differ in amino acid sequence. This method uses a feedback mechanism to control successive rounds of combinatorial cassette mutagenesis. Starting from partially randomized "wild-type" DNA sequences, a highly parallel search of sequence space for peptides fitting an experimenter's criteria is performed. Each iteration uses information gained from the previous rounds to search the space more efficiently. Simulations of the technique indicate that, under a variety of conditions, the algorithm can rapidly produce a diverse population of proteins fitting specific criteria. In the experimental analog, genetic selection or screening applied during recursive ensemble mutagenesis should force the evolution of an ensemble of mutants to a targeted cluster of related phenotypes. Images PMID:1502200

  19. A graph-based watershed merging using fuzzy C-means and simulated annealing for image segmentation

    NASA Astrophysics Data System (ADS)

    Vadiveloo, Mogana; Abdullah, Rosni; Rajeswari, Mandava

    2015-12-01

    In this paper, we have addressed the issue of over-segmented regions produced in watershed by merging the regions using a global feature. The global feature information is obtained from clustering the image in its feature space using Fuzzy C-Means (FCM) clustering. The over-segmented regions produced by performing watershed on the gradient of the image are then mapped to this global information in the feature space. Further to this, the global feature information is optimized using Simulated Annealing (SA). The optimal global feature information is used to derive the similarity criterion to merge the over-segmented watershed regions which are represented by the region adjacency graph (RAG). The proposed method has been tested on a digital brain phantom simulated dataset to segment white matter (WM), gray matter (GM) and cerebrospinal fluid (CSF) soft tissue regions. The experiments showed that the proposed method performs statistically better than immersion watershed, with an average of 95.242% of regions merged, and achieves an average accuracy improvement of 8.850% in comparison with RAG-based immersion watershed merging using global and local features.

  20. Concurrent Algorithm For Particle-In-Cell Simulations

    NASA Technical Reports Server (NTRS)

    Liewer, Paulett C.; Decyk, Viktor K.

    1990-01-01

    Separate decompositions used for particle-motion and field calculations. General Concurrent Particle-in-Cell (GCPIC) algorithm used to implement particle-in-cell (PIC) computer codes, which follow motions of individual plasma particles (ions and electrons), on concurrent processors. Simulates motions of individual plasma particles under influence of electromagnetic fields generated by particles themselves. Performed to study variety of nonlinear problems in plasma physics, including magnetic and inertial fusion, plasmas in outer space, propagation of electron and ion beams, free-electron lasers, and particle accelerators.

  1. Irreversible simulated tempering algorithm with skew detailed balance conditions: a learning method of weight factors in simulated tempering

    NASA Astrophysics Data System (ADS)

    Sakai, Yuji; Hukushima, Koji

    2016-09-01

    Recent numerical studies concerning the simulated tempering algorithm without the detailed balance condition are reviewed and an irreversible simulated tempering algorithm based on the skew detailed balance condition is described. A method to estimate weight factors in simulated tempering by sequentially implementing the irreversible simulated tempering algorithm is studied in comparison with the conventional simulated tempering algorithm satisfying the detailed balance condition. It is found that the total number of Monte Carlo steps required for estimating the weight factors is successfully reduced by applying the proposed method to a two-dimensional ferromagnetic Ising model.

  2. Hierarchical Stochastic Simulation Algorithm for SBML Models of Genetic Circuits.

    PubMed

    Watanabe, Leandro H; Myers, Chris J

    2014-01-01

    This paper describes a hierarchical stochastic simulation algorithm, which has been implemented within iBioSim, a tool used to model, analyze, and visualize genetic circuits. Many biological analysis tools flatten out hierarchy before simulation, but there are many disadvantages associated with this approach. First, the memory required to represent the model can quickly expand in the process. Second, the flattening process is computationally expensive. Finally, when modeling a dynamic cellular population within iBioSim, inlining the hierarchy of the model is inefficient since models must grow dynamically over time. This paper discusses a new approach to handle hierarchy on the fly to make the tool faster and more memory-efficient. This approach yields significant performance improvements as compared to the former flat analysis method.

  3. Direct simulation Monte Carlo method with a focal mechanism algorithm

    NASA Astrophysics Data System (ADS)

    Rachman, Asep Nur; Chung, Tae Woong; Yoshimoto, Kazuo; Yun, Sukyoung

    2015-01-01

    To simulate the observation of the radiation pattern of an earthquake, the direct simulation Monte Carlo (DSMC) method is modified by implanting a focal mechanism algorithm. We compare the results of the modified DSMC method (DSMC-2) with those of the original DSMC method (DSMC-1). DSMC-2 shows results that are more reliable than, or similar to, those of DSMC-1 for events with 12 or more recording stations, when hypocentral distances of less than 80 km are given double weight. Not only the number of stations, but also other factors such as rough topography, magnitude of the event, and the analysis method influence the reliability of DSMC-2. The most reliable result by DSMC-2 is obtained with the best azimuthal coverage by the largest number of stations. The DSMC-2 method requires shorter time steps and a larger number of particles than those of DSMC-1 to capture a sufficient number of arrived particles in the small-sized receiver.

  4. Experimental signatures of quantum annealing

    NASA Astrophysics Data System (ADS)

    Boixo, Sergio

    2013-03-01

    Quantum annealing is a general strategy for solving optimization problems with the aid of quantum adiabatic evolution. How effective is rapid decoherence in precluding quantum effects in a quantum annealing experiment, and will engineered quantum annealing devices effectively perform classical thermalization when coupled to a decohering thermal environment? Using the D-Wave machine, we report experimental results for a simple problem which takes advantage of the fact that for quantum annealing the measurement statistics are determined by the energy spectrum along the quantum evolution, while in classical thermalization they are determined by the spectrum of the final Hamiltonian only. We establish an experimental signature which is consistent with quantum annealing, and at the same time inconsistent with classical thermalization, in spite of a decoherence timescale which is orders of magnitude shorter than the adiabatic evolution time. For larger and more difficult problems, we compare the measurement statistics of the D-Wave machine to large-scale numerical simulations of simulated annealing and simulated quantum annealing, implemented through classical and quantum Monte Carlo simulations. For our test cases the statistics of the machine are - within calibration uncertainties - indistinguishable from a simulated quantum annealer with suitably chosen parameters, but significantly different from a classical annealer. Work in collaboration with T. Albash, N. Chancellor, S. Isakov, D. Lidar, T. Roennow, F. Spedalieri, M. Troyer and Z. Wang.

  5. Constant-complexity stochastic simulation algorithm with optimal binning

    SciTech Connect

    Sanft, Kevin R.; Othmer, Hans G.

    2015-08-21

    At the molecular level, biochemical processes are governed by random interactions between reactant molecules, and the dynamics of such systems are inherently stochastic. When the copy numbers of reactants are large, a deterministic description is adequate, but when they are small, such systems are often modeled as continuous-time Markov jump processes that can be described by the chemical master equation. Gillespie’s Stochastic Simulation Algorithm (SSA) generates exact trajectories of these systems, but the amount of computational work required for each step of the original SSA is proportional to the number of reaction channels, leading to computational complexity that scales linearly with the problem size. The original SSA is therefore inefficient for large problems, which has prompted the development of several alternative formulations with improved scaling properties. We describe an exact SSA that uses a table data structure with event time binning to achieve constant computational complexity with respect to the number of reaction channels for weakly coupled reaction networks. We present a novel adaptive binning strategy and discuss optimal algorithm parameters. We compare the computational efficiency of the algorithm to existing methods and demonstrate excellent scaling for large problems. This method is well suited for generating exact trajectories of large weakly coupled models, including those that can be described by the reaction-diffusion master equation that arises from spatially discretized reaction-diffusion processes.

  6. Constant-complexity stochastic simulation algorithm with optimal binning

    NASA Astrophysics Data System (ADS)

    Sanft, Kevin R.; Othmer, Hans G.

    2015-08-01

    At the molecular level, biochemical processes are governed by random interactions between reactant molecules, and the dynamics of such systems are inherently stochastic. When the copy numbers of reactants are large, a deterministic description is adequate, but when they are small, such systems are often modeled as continuous-time Markov jump processes that can be described by the chemical master equation. Gillespie's Stochastic Simulation Algorithm (SSA) generates exact trajectories of these systems, but the amount of computational work required for each step of the original SSA is proportional to the number of reaction channels, leading to computational complexity that scales linearly with the problem size. The original SSA is therefore inefficient for large problems, which has prompted the development of several alternative formulations with improved scaling properties. We describe an exact SSA that uses a table data structure with event time binning to achieve constant computational complexity with respect to the number of reaction channels for weakly coupled reaction networks. We present a novel adaptive binning strategy and discuss optimal algorithm parameters. We compare the computational efficiency of the algorithm to existing methods and demonstrate excellent scaling for large problems. This method is well suited for generating exact trajectories of large weakly coupled models, including those that can be described by the reaction-diffusion master equation that arises from spatially discretized reaction-diffusion processes.

  7. Simulations of high-Tc superconductors using the DCA+ algorithm

    NASA Astrophysics Data System (ADS)

    Staar, Peter

    2015-03-01

    For over three decades, the high-Tc cuprates have been a gigantic challenge for condensed matter theory. Even the simplest representation of these materials, i.e. the single-band Hubbard model, is hard to solve quantitatively, and its phase diagram is therefore elusive. In this talk, we present the recent algorithmic and implementation advances to the Dynamical Cluster Approximation (DCA). The algorithmic advances allow us to determine self-consistently a continuous self-energy in momentum space, which in turn reduces the cluster-shape dependency of the superconducting transition temperature and thus accelerates the convergence of the latter versus cluster-size. Furthermore, the introduction of the smooth self-energy suppresses artificial correlations and thus reduces the fermionic sign-problem, allowing us to simulate larger clusters at much lower temperatures. By combining these algorithmic improvements with a very efficient GPU accelerated QMC-solver, we are now able to determine the superconducting transition temperature accurately and show that the Cooper-pairs have indeed a d-wave structure, as was predicted by Zhang and Rice.

  8. Modifications to Axially Symmetric Simulations Using New DSMC (2007) Algorithms

    NASA Technical Reports Server (NTRS)

    Liechty, Derek S.

    2008-01-01

    Several modifications aimed at improving physical accuracy are proposed for solving axially symmetric problems building on the DSMC (2007) algorithms introduced by Bird. Originally developed to solve nonequilibrium, rarefied flows, the DSMC method is now regularly used to solve complex problems over a wide range of Knudsen numbers. These new algorithms include features such as nearest neighbor collisions excluding the previous collision partners, separate collision and sampling cells, automatically adaptive variable time steps, a modified no-time counter procedure for collisions, and discontinuous and event-driven physical processes. Axially symmetric solutions require radial weighting for the simulated molecules since the molecules near the axis represent fewer real molecules than those farther away from the axis due to the difference in volume of the cells. In the present methodology, these radial weighting factors are continuous, linear functions that vary with the radial position of each simulated molecule. It is shown that how one defines the number of tentative collisions greatly influences the mean collision time near the axis. The method by which the grid is treated for axially symmetric problems also plays an important role near the axis, especially for scalar pressure. A new method to treat how the molecules are traced through the grid is proposed to alleviate the decrease in scalar pressure at the axis near the surface. Also, a modification to the duplication buffer is proposed to vary the duplicated molecular velocities while retaining the molecular kinetic energy and axially symmetric nature of the problem.

  9. Annealing Simulations of Nano-Sized Amorphous Structures in SiC

    SciTech Connect

    Gao, Fei; Devanathan, Ram; Zhang, Yanwen; Weber, William J.

    2005-01-01

    A two-dimensional model of a nano-sized amorphous layer embedded in a perfect crystal has been developed, and the amorphous-to-crystalline (a-c) transition in 3C-SiC at 2000 K has been studied using molecular dynamics methods, with simulation times of up to 88 ns. Analysis of the a-c interfaces reveals that the recovery of the bond defects existing at the a-c interfaces plays an important role in recrystallization. During the recrystallization process, a second ordered phase, crystalline 2H-SiC, can be nucleated and grow, and is stable for long simulation times. The crystallization mechanism is a two-step process that is separated by a longer period of second-phase stability. The kink sites formed at the interfaces between 2H- and 3C-SiC provide a low energy path for 2H-SiC atoms to transfer to 3C-SiC atoms, which can be defined as a solid-phase epitaxial transformation (SPET). It is observed that the nano-sized amorphous structure can be fully recrystallized at 2000 K in SiC, which is in agreement with experimental observations.

  10. Verifying Algorithms for Autonomous Aircraft by Simulation Generalities and Example

    NASA Technical Reports Server (NTRS)

    White, Allan L.

    2010-01-01

    An open question in Air Traffic Management is what procedures can be validated by simulation where the simulation shows that the probability of undesirable events is below the required level at some confidence level. The problem is including enough realism to be convincing while retaining enough efficiency to run the large number of trials needed for high confidence. The paper first examines the probabilistic interpretation of a typical requirement by a regulatory agency and computes the number of trials needed to establish the requirement at an equivalent confidence level. Since any simulation is likely to consider only one type of event and there are several types of events, the paper examines under what conditions this separate consideration is valid. The paper establishes a separation algorithm at the required confidence level, where the aircraft operates under feedback control and is subject to perturbations. There is a discussion where it is shown that a scenario three or four orders of magnitude more complex is feasible. The question of what can be validated by simulation remains open, but there is reason to be optimistic.

  11. Displacement cascades and defects annealing in tungsten, Part I: Defect database from molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Setyawan, Wahyu; Nandipati, Giridhar; Roche, Kenneth J.; Heinisch, Howard L.; Wirth, Brian D.; Kurtz, Richard J.

    2015-07-01

    Molecular dynamics simulations have been used to generate a comprehensive database of surviving defects due to displacement cascades in bulk tungsten. Twenty-one data points of primary knock-on atom (PKA) energies ranging from 100 eV (sub-threshold energy) to 100 keV (∼780 × Ed, where Ed = 128 eV is the average displacement threshold energy) have been completed at 300 K, 1025 K and 2050 K. Within this range of PKA energies, two regimes of power-law energy-dependence of the defect production are observed. A distinct power-law exponent characterizes the number of Frenkel pairs produced within each regime. The two regimes intersect at a transition energy which occurs at approximately 250 × Ed. The transition energy also marks the onset of the formation of large self-interstitial atom (SIA) clusters (size 14 or more). The observed defect clustering behavior is asymmetric, with SIA clustering increasing with temperature, while the vacancy clustering decreases. This asymmetry increases with temperature such that at 2050 K (∼0.5Tm) practically no large vacancy clusters are formed, meanwhile large SIA clusters appear in all simulations. The implication of such asymmetry on the long-term defect survival and damage accumulation is discussed. In addition, <100>{110} SIA loops are observed to form directly in the highest energy cascades, while vacancy <100> loops are observed to form at the lowest temperature and highest PKA energies, although the appearance of both the vacancy and SIA loops with Burgers vector of <100> type is relatively rare.

  12. Displacement cascades and defects annealing in tungsten, Part I: Defect database from molecular dynamics simulations

    SciTech Connect

    Setyawan, Wahyu; Nandipati, Giridhar; Roche, Kenneth J.; Heinisch, Howard L.; Wirth, Brian D.; Kurtz, Richard J.

    2015-07-01

    Molecular dynamics simulations have been used to generate a comprehensive database of surviving defects due to displacement cascades in bulk tungsten. Twenty-one data points of primary knock-on atom (PKA) energies ranging from 100 eV (sub-threshold energy) to 100 keV (~780×Ed, where Ed = 128 eV is the average displacement threshold energy) have been completed at 300 K, 1025 K and 2050 K. Within this range of PKA energies, two regimes of power-law energy-dependence of the defect production are observed. A distinct power-law exponent characterizes the number of Frenkel pairs produced within each regime. The two regimes intersect at a transition energy which occurs at approximately 250×Ed. The transition energy also marks the onset of the formation of large self-interstitial atom (SIA) clusters (size 14 or more). The observed defect clustering behavior is asymmetric, with SIA clustering increasing with temperature, while the vacancy clustering decreases. This asymmetry increases with temperature such that at 2050 K (~0.5Tm) practically no large vacancy clusters are formed, meanwhile large SIA clusters appear in all simulations. The implication of such asymmetry on the long-term defect survival and damage accumulation is discussed. In addition, <100> {110} SIA loops are observed to form directly in the highest energy cascades, while vacancy <100> loops are observed to form at the lowest temperature and highest PKA energies, although the appearance of both the vacancy and SIA loops with Burgers vector of <100> type is relatively rare.

  13. Optoelectronic analogs of self-programming neural nets - Architecture and methodologies for implementing fast stochastic learning by simulated annealing

    NASA Technical Reports Server (NTRS)

    Farhat, Nabil H.

    1987-01-01

    Self-organization and learning is a distinctive feature of neural nets and processors that sets them apart from conventional approaches to signal processing. It leads to self-programmability which alleviates the problem of programming complexity in artificial neural nets. In this paper architectures for partitioning an optoelectronic analog of a neural net into distinct layers with prescribed interconnectivity pattern to enable stochastic learning by simulated annealing in the context of a Boltzmann machine are presented. Stochastic learning is of interest because of its relevance to the role of noise in biological neural nets. Practical considerations and methodologies for appreciably accelerating stochastic learning in such a multilayered net are described. These include the use of parallel optical computing of the global energy of the net, the use of fast nonvolatile programmable spatial light modulators to realize fast plasticity, optical generation of random number arrays, and an adaptive noisy thresholding scheme that also makes stochastic learning more biologically plausible. The findings reported predict optoelectronic chips that can be used in the realization of optical learning machines.

  14. Optoelectronic analogs of self-programming neural nets: architecture and methodologies for implementing fast stochastic learning by simulated annealing.

    PubMed

    Farhat, N H

    1987-12-01

    Self-organization and learning is a distinctive feature of neural nets and processors that sets them apart from conventional approaches to signal processing. It leads to self-programmability which alleviates the problem of programming complexity in artificial neural nets. In this paper architectures for partitioning an optoelectronic analog of a neural net into distinct layers with prescribed interconnectivity pattern to enable stochastic learning by simulated annealing in the context of a Boltzmann machine are presented. Stochastic learning is of interest because of its relevance to the role of noise in biological neural nets. Practical considerations and methodologies for appreciably accelerating stochastic learning in such a multilayered net are described. These include the use of parallel optical computing of the global energy of the net, the use of fast nonvolatile programmable spatial light modulators to realize fast plasticity, optical generation of random number arrays, and an adaptive noisy thresholding scheme that also makes stochastic learning more biologically plausible. The findings reported predict optoelectronic chips that can be used in the realization of optical learning machines.

  15. The solution conformation of the antibacterial peptide cecropin A: A nuclear magnetic resonance and dynamical simulated annealing study

    SciTech Connect

    Holak, T.A.; Gronenborn, A.M.; Clore, G.M.; Engstroem, A.; Kraulis, P.J.; Lindeberg, G.; Bennich, H.; Jones, T.A.

    1988-10-04

    The solution conformation of the antibacterial polypeptide cecropin A from the Cecropia moth is investigated by nuclear magnetic resonance (NMR) spectroscopy under conditions where it adopts a fully ordered structure, as judged by previous circular dichroism studies. By use of a combination of two-dimensional NMR techniques the ¹H NMR spectrum of cecropin A is completely assigned. A set of 243 approximate interproton distance restraints is derived from nuclear Overhauser enhancement (NOE) measurements. These, together with 32 restraints for the 16 intrahelical hydrogen bonds identified on the basis of the pattern of short-range NOEs, form the basis of a three-dimensional structure determination by dynamical simulated annealing. The calculations are carried out starting from three initial structures, an α-helix, an extended β-strand, and a mixed α/β structure. Seven independent structures are computed from each starting structure by using different random number seeds for the assignment of the initial velocities. Analysis of the 21 converged structures indicates that there are two helical regions extending from residues 5 to 21 and from residues 24 to 37 which are very well defined in terms of both atomic root mean square differences and backbone torsion angles. The long axes of the two helices lie in two planes, which are at an angle of 70-100° to each other. The orientation of the helices within these planes, however, cannot be determined due to the paucity of NOEs between the two helices.

  16. [Application of simulated annealing method and neural network on optimizing soil sampling schemes based on road distribution].

    PubMed

    Han, Zong-wei; Huang, Wei; Luo, Yun; Zhang, Chun-di; Qi, Da-cheng

    2015-03-01

    Taking the soil organic matter in eastern Zhongxiang County, Hubei Province, as a research object, thirteen sample sets from different regions were arranged surrounding the road network, the spatial configuration of which was optimized by the simulated annealing approach. The topographic factors of these thirteen sample sets, including slope, plane curvature, profile curvature, topographic wetness index, stream power index and sediment transport index, were extracted by terrain analysis. Based on the results of optimization, a multiple linear regression model with topographic factors as independent variables was built. At the same time, a multilayer perceptron model based on the neural network approach was implemented. A comparison between the two models was then carried out. The results revealed that the proposed approach was practicable for optimizing soil sampling schemes. The optimized configuration was capable of capturing soil-landscape knowledge accurately, and its accuracy was better than that of the original samples. This study designed a sampling configuration to study the soil attribute distribution by referring to the spatial layout of the road network, historical samples, and digital elevation data, which provided an effective means as well as a theoretical basis for determining the sampling configuration and displaying the spatial distribution of soil organic matter with low cost and high efficiency. PMID:26211074
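
    The spatial optimization step above can be pictured as a generic simulated-annealing swap loop over candidate sites. The sketch below is not the authors' implementation; the objective function, the candidate pool and the cooling schedule are illustrative placeholders.

      import math
      import random

      def anneal_sampling(candidates, n_samples, objective, t0=1.0, t_min=1e-3, alpha=0.95, sweeps=200):
          # Start from a random subset of the candidate sites.
          selected = random.sample(candidates, n_samples)
          rest = [c for c in candidates if c not in selected]
          cost = objective(selected)
          t = t0
          while t > t_min:
              for _ in range(sweeps):
                  i = random.randrange(len(selected))
                  j = random.randrange(len(rest))
                  selected[i], rest[j] = rest[j], selected[i]      # propose swapping one site
                  new_cost = objective(selected)
                  if new_cost < cost or random.random() < math.exp((cost - new_cost) / t):
                      cost = new_cost                              # accept the move
                  else:
                      selected[i], rest[j] = rest[j], selected[i]  # reject: swap back
              t *= alpha                                           # geometric cooling
          return selected, cost

      # Hypothetical objective: spread the retained samples out by maximizing
      # the minimum pairwise distance between sites given as (x, y) tuples.
      def spread(sites):
          return -min(math.dist(a, b) for k, a in enumerate(sites) for b in sites[k + 1:])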

  17. Analysis of optimisation method for a two-stroke piston ring using the Finite Element Method and the Simulated Annealing Method

    NASA Astrophysics Data System (ADS)

    Kaliszewski, M.; Mazuro, P.

    2016-09-01

    The Simulated Annealing Method is tested for optimisation of the sealing piston ring geometry. The aim of the optimisation is to develop a ring geometry which would exert the demanded pressure on the cylinder simply by being bent to fit the cylinder. A method of FEM analysis of an arbitrary piston ring geometry is implemented in ANSYS software. The demanded pressure function (based on formulae presented by A. Iskra) as well as the objective function are introduced. A geometry definition constructed from polynomials in a radial coordinate system is presented and discussed. A possible application of the Simulated Annealing Method to the piston ring optimisation task is proposed and visualised. Difficulties that can lead to a lack of convergence of the optimisation are presented. An example of an unsuccessful optimisation performed in APDL is discussed. A possible line of further improvement of the optimisation is proposed.

  18. Algorithm design for a gun simulator based on image processing

    NASA Astrophysics Data System (ADS)

    Liu, Yu; Wei, Ping; Ke, Jun

    2015-08-01

    In this paper, an algorithm is designed for shooting games under strong background light. Six LEDs are uniformly distributed along the edge of the game machine screen: at the four corners and in the middle of the top and bottom edges. Three LEDs are lit in the odd frames, and the other three are lit in the even frames. The simulator is furnished with one camera, which is used to obtain an image of the LEDs by applying an inter-frame difference between the even and odd frames. In the resulting images, the six LEDs appear as six bright spots. To obtain the LEDs' coordinates rapidly, we propose a method based on the area of the bright spots. After calibrating the camera with a pinhole model, four equations can be derived from the relationship between the image coordinate system and the world coordinate system under perspective transformation. The center point of the image of the LEDs is taken to be the virtual shooting point. The perspective transformation matrix is applied to the coordinate of the center point, which yields the virtual shooting point's coordinate in the world coordinate system. When a game player shoots a target about two meters away, the coordinate error calculated with the method discussed in this paper is less than 10 mm. We obtain 65 coordinate results per second, which meets the requirement of a real-time system. This shows that the algorithm is reliable and effective.
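
    A minimal sketch of the frame-differencing and spot-extraction step described above, using SciPy's labelling utilities; the threshold value and the assumption of aligned 8-bit grayscale frames are illustrative choices, not taken from the paper.

      import numpy as np
      from scipy import ndimage

      def led_spots(odd_frame, even_frame, threshold=40):
          # The inter-frame difference isolates the LEDs that change between odd and even frames.
          diff = np.abs(even_frame.astype(np.int16) - odd_frame.astype(np.int16))
          mask = diff > threshold
          labels, n = ndimage.label(mask)                  # connected bright spots
          idx = list(range(1, n + 1))
          areas = ndimage.sum(mask, labels, index=idx)     # spot areas, used to rank and identify the LEDs
          centers = ndimage.center_of_mass(mask, labels, index=idx)
          order = np.argsort(areas)[::-1]
          return [centers[i] for i in order[:3]]           # three LEDs are lit in each frame pair

    The returned centroids would then be fed to the calibrated perspective transformation to recover the virtual shooting point in world coordinates.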

  19. An algorithm to build mock galaxy catalogues using MICE simulations

    NASA Astrophysics Data System (ADS)

    Carretero, J.; Castander, F. J.; Gaztañaga, E.; Crocce, M.; Fosalba, P.

    2015-02-01

    We present a method to build mock galaxy catalogues starting from a halo catalogue that uses halo occupation distribution (HOD) recipes as well as the subhalo abundance matching (SHAM) technique. Combining both prescriptions we are able to push the absolute magnitude of the resulting catalogue to fainter luminosities than using just the SHAM technique and can interpret our results in terms of the HOD modelling. We optimize the method by populating friends-of-friends dark matter haloes, extracted from the Marenostrum Institut de Ciències de l'Espai dark matter simulations, with galaxies and comparing the result to observational constraints. Our resulting mock galaxy catalogues manage to reproduce the observed local galaxy luminosity function and the colour-magnitude distribution as observed by the Sloan Digital Sky Survey. They also reproduce the observed galaxy clustering properties as a function of luminosity and colour. In order to achieve that, the algorithm also includes scatter in the halo mass-galaxy luminosity relation derived from direct SHAM and a modified Navarro-Frenk-White mass density profile to place satellite galaxies in their host dark matter haloes. Improving on the general usage of the HOD, which fits the clustering for given magnitude-limited samples, our catalogues are constructed to fit observations at all luminosities considered and therefore for any luminosity subsample. Overall, our algorithm is an economical procedure for obtaining galaxy mock catalogues down to the faint magnitudes that are necessary to understand and interpret galaxy surveys.

  20. GBT Dynamic Scheduling System: Algorithms, Metrics, and Simulations

    NASA Astrophysics Data System (ADS)

    Balser, D. S.; Bignell, C.; Braatz, J.; Clark, M.; Condon, J.; Harnett, J.; O'Neil, K.; Maddalena, R.; Marganian, P.; McCarty, M.; Sessoms, E.; Shelton, A.

    2009-09-01

    We discuss the scoring algorithm of the Robert C. Byrd Green Bank Telescope (GBT) Dynamic Scheduling System (DSS). Since the GBT is located in a continental, mid-latitude region where weather is dominated by water vapor and small-scale effects, the weather plays an important role in optimizing the observing efficiency of the GBT. We score observing sessions as a product of many factors. Some are continuous functions while others are binary limits taking values of 0 or 1, any one of which can eliminate a candidate session by forcing the score to zero. Others reflect management decisions to expedite observations by visiting observers, ensure the timely completion of projects, etc. Simulations indicate that dynamic scheduling can increase the effective observing time at frequencies higher than 10 GHz by about 50% over one full year. Beta tests of the DSS during Summer 2008 revealed the significance of various scheduling constraints and telescope overhead time to the overall observing efficiency.
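
    The product-of-factors scoring described above fits in a few lines; the factor names and values below are invented for illustration and are not the actual DSS terms.

      def session_score(factors):
          # Multiply all factors together; any binary factor equal to 0 vetoes the session.
          score = 1.0
          for value in factors.values():
              score *= value
              if score == 0.0:
                  break
          return score

      # Illustrative factors only: a continuous atmospheric-efficiency term, a binary
      # observer-availability limit, and a management weight to expedite a project.
      print(session_score({"atmospheric_efficiency": 0.82,
                           "observer_available": 1,
                           "project_completion_weight": 1.3}))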

  1. Simulating Future GPS Clock Scenarios with Two Composite Clock Algorithms

    NASA Technical Reports Server (NTRS)

    Suess, Matthias; Matsakis, Demetrios; Greenhall, Charles A.

    2010-01-01

    Using the GPS Toolkit, the GPS constellation is simulated using 31 satellites (SV) and a ground network of 17 monitor stations (MS). At every 15-minute measurement epoch, the monitor stations measure the time signals of all satellites above a parameterized elevation angle. Once a day, the station and satellite clocks are estimated. The first composite clock (B) is based on the Brown algorithm, and is now used by GPS. The second one (G) is based on the Greenhall algorithm. The performance of the B and G composite clocks is investigated using three ground-clock models. Model C simulates the current GPS configuration, in which all stations are equipped with cesium clocks, except for masers at the USNO and Alternate Master Clock (AMC) sites. Model M is an improved situation in which every station is equipped with active hydrogen masers. Finally, Models F and O are future scenarios in which the USNO and AMC stations are equipped with fountain clocks instead of masers. Model F uses a rubidium fountain, while Model O uses a more precise but futuristic optical fountain. Each model is evaluated using three performance metrics. The timing-related user range error with all satellites available is the first performance index (PI1). The second performance index (PI2) relates to the stability of the broadcast GPS system time itself. The third performance index (PI3) evaluates the stability of the time scales computed by the two composite clocks. A distinction is made between the "Signal-in-Space" accuracy and that available through a GNSS receiver.

  2. Use of simulated annealing for optimization of alignment parameters in limited MRI acquisition volumes of the brain

    SciTech Connect

    Li Xiang; Zhang Pengpeng; Brisman, Ronald; Kutcher, Gerald

    2005-07-15

    Studies suggest that clinical outcomes are improved in repeat trigeminal neuralgia (TN) Gamma Knife radiosurgery if a different part of the nerve from the previous radiosurgery is treated. The MR images taken in the first and repeat radiosurgery need to be coregistered to map the first radiosurgery volume onto the second treatment planning image. We propose a fully automatic and robust three-dimensional (3-D) mutual information- (MI-) based registration method driven by a simulated annealing (SA) optimization technique. Powell's method and the downhill simplex (DS) method are the most commonly used optimizers for the MI objective function in medical image registration applications. However, due to the nonconvex property of the MI function, the robustness of these two methods is questionable, especially for our cases, where only 28 slices of MR T1 images were utilized. Our SA method obtained successful registration results for all 41 patients recruited in this study. On the other hand, Powell's method and the DS method failed to provide satisfactory registration for 11 patients and 9 patients, respectively. The overlapping volume ratio (OVR) is defined to quantify the degree of partial volume overlap between the first and second MR scans. Statistical results from a logistic regression procedure demonstrated that the probability of success of Powell's method tends to decrease as the OVR decreases. Rigid registration with Powell's or the DS method is thus not suitable for the TN radiosurgery application, where the OVR is likely to be low. In summary, our experimental results demonstrated that the MI-based registration method with the SA optimization technique is a robust and reliable option when the number of slices in the imaging study is limited.
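
    The mutual-information objective at the heart of the registration above can be estimated from a joint intensity histogram, as in the sketch below (bin count arbitrary); in the paper's setting this objective would be maximized over the rigid-body parameters, with simulated annealing rather than Powell's or the downhill simplex method driving the search.

      import numpy as np

      def mutual_information(img_a, img_b, bins=32):
          # Joint histogram of corresponding voxel intensities in the two images.
          joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
          pxy = joint / joint.sum()
          px = pxy.sum(axis=1, keepdims=True)      # marginal of image A
          py = pxy.sum(axis=0, keepdims=True)      # marginal of image B
          nz = pxy > 0                             # skip empty bins to avoid log(0)
          return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px * py)[nz])))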

  3. Robotic space simulation integration of vision algorithms into an orbital operations simulation

    NASA Technical Reports Server (NTRS)

    Bochsler, Daniel C.

    1987-01-01

    In order to successfully plan and analyze future space activities, computer-based simulations of activities in low earth orbit will be required to model and integrate vision and robotic operations with vehicle dynamics and proximity operations procedures. The orbital operations simulation (OOS) is configured and enhanced as a testbed for robotic space operations. Vision integration algorithms are being developed in three areas: preprocessing, recognition, and attitude/attitude rates. The vision program (Rice University) was modified for use in the OOS. Systems integration testing is now in progress.

  4. Plenoptic camera image simulation for reconstruction algorithm verification

    NASA Astrophysics Data System (ADS)

    Schwiegerling, Jim

    2014-09-01

    Plenoptic cameras have emerged in recent years as a technology for capturing light field data in a single snapshot. A conventional digital camera can be modified with the addition of a lenslet array to create a plenoptic camera. Two distinct camera forms have been proposed in the literature. The first has the camera image focused onto the lenslet array. The lenslet array is placed over the camera sensor such that each lenslet forms an image of the exit pupil onto the sensor. The second plenoptic form has the lenslet array relaying the image formed by the camera lens to the sensor. We have developed a raytracing package that can simulate images formed by a generalized version of the plenoptic camera. Several rays from each sensor pixel are traced backwards through the system to define a cone of rays emanating from the entrance pupil of the camera lens. Objects that lie within this cone are integrated to lead to a color and exposure level for that pixel. To speed processing three-dimensional objects are approximated as a series of planes at different depths. Repeating this process for each pixel in the sensor leads to a simulated plenoptic image on which different reconstruction algorithms can be tested.

  5. Implementation of low communication frequency 3D FFT algorithm for ultra-large-scale micromagnetics simulation

    NASA Astrophysics Data System (ADS)

    Tsukahara, Hiroshi; Iwano, Kaoru; Mitsumata, Chiharu; Ishikawa, Tadashi; Ono, Kanta

    2016-10-01

    We implement a low-communication-frequency three-dimensional fast Fourier transform algorithm in a micromagnetics simulator for the calculation of the magnetostatic field, which occupies a significant portion of large-scale micromagnetics simulations. This fast Fourier transform algorithm reduces the number of all-to-all communications from six to two. Simulation times with our simulator show high scalability in parallelization, even when the micromagnetics simulation is performed on 32 768 physical computing cores. This low-communication-frequency fast Fourier transform algorithm enables micromagnetics simulations of world-largest class, with over one billion calculation cells, to be carried out.

  6. Understanding disordered systems through numerical simulation and algorithm development

    NASA Astrophysics Data System (ADS)

    Sweeney, Sean Michael

    Disordered systems arise in many physical contexts. Not all matter is uniform, and impurities or heterogeneities can be modeled by fixed random disorder. Numerous complex networks also possess fixed disorder, leading to applications in transportation systems, telecommunications, social networks, and epidemic modeling, to name a few. Due to their random nature and power law critical behavior, disordered systems are difficult to study analytically. Numerical simulation can help overcome this hurdle by allowing for the rapid computation of system states. In order to get precise statistics and extrapolate to the thermodynamic limit, large systems must be studied over many realizations. Thus, innovative algorithm development is essential in order to reduce the memory or running time requirements of simulations. This thesis presents a review of disordered systems, as well as a thorough study of two particular systems through numerical simulation, algorithm development and optimization, and careful statistical analysis of scaling properties. Chapter 1 provides a thorough overview of disordered systems, the history of their study in the physics community, and the development of techniques used to study them. Topics of quenched disorder, phase transitions, the renormalization group, criticality, and scale invariance are discussed. Several prominent models of disordered systems are also explained. Lastly, analysis techniques used in studying disordered systems are covered. In Chapter 2, minimal spanning trees on critical percolation clusters are studied, motivated in part by an analytic perturbation expansion by Jackson and Read that I check against numerical calculations. This system has a direct mapping to the ground state of the strongly disordered spin glass. We compute the path length fractal dimension of these trees in dimensions d = {2, 3, 4, 5} and find our results to be compatible with the analytic results suggested by Jackson and Read. In Chapter 3, the random bond Ising
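
    The minimal spanning trees on percolation clusters mentioned above are built with standard graph algorithms; the sketch below is a generic Kruskal construction with union-find, not the code used in the thesis.

      def kruskal_mst(n_nodes, edges):
          # edges: iterable of (weight, u, v) with u, v in range(n_nodes).
          parent = list(range(n_nodes))

          def find(x):                       # union-find root with path halving
              while parent[x] != x:
                  parent[x] = parent[parent[x]]
                  x = parent[x]
              return x

          tree = []
          for w, u, v in sorted(edges):      # consider edges in order of increasing weight
              ru, rv = find(u), find(v)
              if ru != rv:                   # the edge joins two components: keep it
                  parent[ru] = rv
                  tree.append((w, u, v))
          return tree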

  7. Error correction for encoded quantum annealing

    NASA Astrophysics Data System (ADS)

    Pastawski, Fernando; Preskill, John

    2016-05-01

    Recently, W. Lechner, P. Hauke, and P. Zoller [Sci. Adv. 1, e1500838 (2015), 10.1126/sciadv.1500838] have proposed a quantum annealing architecture, in which a classical spin glass with all-to-all pairwise connectivity is simulated by a spin glass with geometrically local interactions. We interpret this architecture as a classical error-correcting code, which is highly robust against weakly correlated bit-flip noise, and we analyze the code's performance using a belief-propagation decoding algorithm. Our observations may also apply to more general encoding schemes and noise models.

  8. Monte Carlo simulation of dense polymer melts using event chain algorithms.

    PubMed

    Kampmann, Tobias A; Boltz, Horst-Holger; Kierfeld, Jan

    2015-07-28

    We propose an efficient Monte Carlo algorithm for the off-lattice simulation of dense hard sphere polymer melts using cluster moves, called event chains, which allow for a rejection-free treatment of the excluded volume. Event chains also allow for an efficient preparation of initial configurations in polymer melts. We parallelize the event chain Monte Carlo algorithm to further increase simulation speeds and suggest additional local topology-changing moves ("swap" moves) to accelerate equilibration. By comparison with other Monte Carlo and molecular dynamics simulations, we verify that the event chain algorithm reproduces the correct equilibrium behavior of polymer chains in the melt. By comparing intrapolymer diffusion time scales, we show that event chain Monte Carlo algorithms can achieve simulation speeds comparable to optimized molecular dynamics simulations. The event chain Monte Carlo algorithm exhibits Rouse dynamics on short time scales. In the absence of swap moves, we find reptation dynamics on intermediate time scales for long chains.

  9. Laboratory simulations of thermal annealing in proto-planetary discs - II. Crystallization of enstatite from amorphous thin films

    NASA Astrophysics Data System (ADS)

    Droeger, J.; Burchard, M.; Lattard, D.

    2011-12-01

    Amorphous silicates of olivine and pyroxene composition are thought to be common constituents of circumstellar, interstellar, and interplanetary dust. In proto-planetary discs amorphous dust crystallizes essentially as a result of thermal annealing. The present project aims at deciphering the kinetics of crystallization of pyroxene in proto-planetary dust on the basis of experiments on amorphous thin films. The thin films are deposited on Si-wafers (111) by pulsed laser deposition (PLD). They are completely amorphous, chemically homogeneous (on the MgSiO3 composition), and have a continuous, flat surface. They are subsequently annealed for 1 to 216 h at 1073 K and 1098 K in a vertical quench furnace and drop-quenched on a copper block. To monitor the progress of crystallization, the samples are characterized by AFM and SEM imaging and IR spectroscopy. After short annealing durations (1 to 12 h), AFM and SE imaging reveal small, shallow polygonal features (diameter 0.5-1 μm; height 2-3 nm) evenly distributed over the otherwise flat surface of the thin films. These shallow features are no longer visible after about 3 h at 1098 K and after more than 12 h at 1073 K, respectively. Meanwhile, two further types of features appear: small protruding pyramids and slightly depressed spherolites. The order of appearance of these features depends on temperature, but both persist and steadily grow with increasing annealing duration. Their sizes can reach about 12 μm. From TEM investigations on annealed thin films of the Mg2SiO4 composition we know that these features represent crystalline sites, which can be surrounded by a still-amorphous matrix (Oehm et al. 2010). A quantitative evaluation of the size of the features will give insights into the progress of crystallization. IR spectra of the unprocessed thin films show only broad bands. In contrast, bands characteristic of crystalline enstatite are clearly recognizable in annealed samples, e.g. after 12 h at 1078 K. Small bands can also be assigned to

  10. A parallel algorithm for switch-level timing simulation on a hypercube multiprocessor

    NASA Technical Reports Server (NTRS)

    Rao, Hariprasad Nannapaneni

    1989-01-01

    The parallel approach to speeding up simulation is studied, specifically the simulation of digital LSI MOS circuitry on the Intel iPSC/2 hypercube. The simulation algorithm is based on RSIM, an event driven switch-level simulator that incorporates a linear transistor model for simulating digital MOS circuits. Parallel processing techniques based on the concepts of Virtual Time and rollback are utilized so that portions of the circuit may be simulated on separate processors, in parallel for as large an increase in speed as possible. A partitioning algorithm is also developed in order to subdivide the circuit for parallel processing.

  11. Material growth in thermoelastic continua: Theory, algorithmics, and simulation

    NASA Astrophysics Data System (ADS)

    Vignes, Chet Monroe

    Within the medical community, there has been increasing interest in understanding material growth in biomaterials. Material growth is the capability of a biomaterial to gain or lose mass. This research interest is driven by the host of health implications and medical problems related to this unique biomaterial property. Health providers are keen to understand the role of growth in healing and recovery so that surgical techniques, medical procedures, and physical therapy may be designed and implemented to stimulate healing and minimize recovery time. With this motivation, research seeks to identify and model mechanisms of material growth as well as growth-inducing factors in biomaterials. To this end, a theoretical formulation of stress-induced volumetric material growth in thermoelastic continua is developed. The theory derives, without the classical continuum mechanics assumption of mass conservation, the balance laws governing the mechanics of solids capable of growth. Also, a proposed extension of classical thermodynamic theory provides a foundation for developing general constitutive relations. The theory is consistent in the sense that classical thermoelastic continuum theory is embedded as a special case. Two growth mechanisms, a kinematic and a constitutive contribution, coupled in the most general case of growth, are identified. This identification allows for the commonly employed special cases of density-preserving growth and volume-preserving growth to be easily recovered. In the theory, material growth is regulated by a three-surface activation criterion and corresponding flow rules. A simple model for rate-independent finite growth is proposed based on this formulation. The associated algorithmic implementation, including a method for solving the underlying differential/algebraic equations for growth, is examined in the context of an implicit finite element method. Selected numerical simulations are presented that showcase the predictive capacity of the

  12. Open-System Quantum Annealing in Mean-Field Models with Exponential Degeneracy*

    NASA Astrophysics Data System (ADS)

    Kechedzhi, Kostyantyn; Smelyanskiy, Vadim N.

    2016-04-01

    Real-life quantum computers are inevitably affected by intrinsic noise resulting in dissipative nonunitary dynamics realized by these devices. We consider an open-system quantum annealing algorithm optimized for such a realistic analog quantum device which takes advantage of noise-induced thermalization and relies on incoherent quantum tunneling at finite temperature. We theoretically analyze the performance of this algorithm considering a p-spin model that allows for a mean-field quasiclassical solution and, at the same time, demonstrates the first-order phase transition and exponential degeneracy of states, typical characteristics of spin glasses. We demonstrate that finite-temperature effects introduced by the noise are particularly important for the dynamics in the presence of the exponential degeneracy of metastable states. We determine the optimal regime of the open-system quantum annealing algorithm for this model and find that it can outperform simulated annealing in a range of parameters. Large-scale multiqubit quantum tunneling is instrumental for the quantum speedup in this model, which is possible because of the unusual nonmonotonic temperature dependence of the quantum-tunneling action in this model, where the most efficient transition rate corresponds to zero temperature. This model calculation is the first analytically tractable example where the open-system quantum annealing algorithm outperforms simulated annealing, which can, in principle, be realized using an analog quantum computer.

  13. Developments in Human Centered Cueing Algorithms for Control of Flight Simulator Motion Systems

    NASA Technical Reports Server (NTRS)

    Houck, Jacob A.; Telban, Robert J.; Cardullo, Frank M.

    1997-01-01

    The authors conducted further research with cueing algorithms for control of flight simulator motion systems. A variation of the so-called optimal algorithm was formulated using simulated aircraft angular velocity input as a basis. Models of the human vestibular sensation system, i.e. the semicircular canals and otoliths, are incorporated within the algorithm. Comparisons of angular velocity cueing responses showed a significant improvement over a formulation using angular acceleration input. Results also compared favorably with the coordinated adaptive washout algorithm, yielding similar results for angular velocity cues while eliminating false cues and reducing the tilt rate for longitudinal cues. These results were confirmed in piloted tests on the current motion system at NASA-Langley, the Visual Motion Simulator (VMS). Future developments in cueing algorithms proposed by the authors are outlined. The new motion system, the Cockpit Motion Facility (CMF), where the final evaluation of the cueing algorithms will be conducted, is also described.

  14. D-leaping: Accelerating stochastic simulation algorithms for reactions with delays

    SciTech Connect

    Bayati, Basil; Chatelain, Philippe; Koumoutsakos, Petros

    2009-09-01

    We propose a novel, accelerated algorithm for the approximate stochastic simulation of biochemical systems with delays. The present work extends existing accelerated algorithms by distributing, in a time adaptive fashion, the delayed reactions so as to minimize the computational effort while preserving their accuracy. The accuracy of the present algorithm is assessed by comparing its results to those of the corresponding delay differential equations for a representative biochemical system. In addition, the fluctuations produced from the present algorithm are comparable to those from an exact stochastic simulation with delays. The algorithm is used to simulate biochemical systems that model oscillatory gene expression. The results indicate that the present algorithm is competitive with existing works for several benchmark problems while it is orders of magnitude faster for certain systems of biochemical reactions.

  15. Using Conjoined Rigid Body/Torsion Angle Simulated Annealing to Determine the Relative Orientation of Covalently Linked Protein Domains from Dipolar Couplings

    NASA Astrophysics Data System (ADS)

    Clore, G. Marius; Bewley, Carole A.

    2002-02-01

    A simple and robust method for determining the relative orientations of covalently linked protein domains using conjoined rigid body/torsion angle dynamics simulated annealing on the basis of residual dipolar couplings is presented. In this approach each domain is treated as a rigid body and the relevant degrees of conformational freedom are restricted to the backbone torsion angles (φ, ψ) of the linker between the domains. By this means translational information afforded by the presence of an intact linker is preserved. We illustrate this approach using the domain-swapped dimer of the HIV-inactivating protein cyanovirin-N as an example.

  16. Enhanced quasi-static PIC simulation with pipelining algorithm for e-cloud instability

    NASA Astrophysics Data System (ADS)

    Feng, Bing; Huang, Chengkun; Decyk, Viktor; Mori, Warren; Muggli, Patric; Katsouleas, Tom

    2008-11-01

    Simulating the electron cloud effect on a beam that circulates thousands of turns in circular machines is highly computationally demanding. A novel algorithm, the pipelining algorithm, is applied to the fully parallelized quasi-static particle-in-cell code QuickPIC to overcome the limit on the maximum number of processors that can be used for each time step. The pipelining algorithm divides the processors into subgroups; each subgroup focuses on a different partition of the beam and performs its calculation in series. With this novel algorithm, the accuracy of the simulation is preserved; the speed of the simulation is improved by one order of magnitude when more than 10^2 processors are used. Long-term simulation results for the CERN LHC and the Main Injector at FNAL from QuickPIC with the pipelining algorithm are presented. This work is supported by SciDAC and the US Department of Energy.

  17. Synchronization Algorithms for Co-Simulation of Power Grid and Communication Networks

    SciTech Connect

    Ciraci, Selim; Daily, Jeffrey A.; Agarwal, Khushbu; Fuller, Jason C.; Marinovici, Laurentiu D.; Fisher, Andrew R.

    2014-09-11

    The ongoing modernization of power grids consists of integrating them with communication networks in order to achieve robust and resilient control of grid operations. To understand the operation of the new smart grid, one approach is to use simulation software. Unfortunately, current power grid simulators at best utilize inadequate approximations to simulate communication networks, if at all. Cooperative simulation of specialized power grid and communication network simulators promises to more accurately reproduce the interactions of real smart grid deployments. However, co-simulation is a challenging problem. A co-simulation must manage the exchange of information, including the synchronization of simulator clocks, between all simulators while maintaining adequate computational performance. This paper describes two new conservative algorithms for reducing the overhead of time synchronization, namely Active Set Conservative and Reactive Conservative. We provide a detailed analysis of their performance characteristics with respect to the current state of the art including both conservative and optimistic synchronization algorithms. In addition, we provide guidelines for selecting the appropriate synchronization algorithm based on the requirements of the co-simulation. The newly proposed algorithms are shown to achieve as much as 14% and 63% improvement, respectively, over the existing conservative algorithm.

  18. Simulated annealing reconstruction and characterization of the three-dimensional microstructure of a LiCoO2 Lithium-ion battery cathode

    SciTech Connect

    Wu, Wei; Jiang, Fangming

    2013-06-15

    We adapt the simulated annealing approach for reconstruction of the 3D microstructure of a LiCoO2 cathode from a commercial Li-ion battery. The real size distribution curve of the LiCoO2 particles is applied to regulate the reconstruction process. By discretizing a 40 × 40 × 40 μm cathode volume with 8,000,000 numerical cubes, the cathode, involving three individual phases: 1) LiCoO2 as active material, 2) pores or electrolyte, and 3) additives (polyvinylidene fluoride + carbon black), is reconstructed. The microstructural statistical properties required in the reconstruction process are extracted from 2D focused ion beam/scanning electron microscopy images or obtained by analyzing the powder mixture used to make the cathode. Characterization of the reconstructed cathode gives important structural and transport properties including the two-point correlation functions, the volume-specific surface area between phases, the tortuosity and the geometrical connectivity of the individual phases. - Highlights: • The simulated annealing approach is adapted for 3D reconstruction of a LiCoO2 cathode. • The real size distribution of LiCoO2 particles is applied in the reconstruction process. • The reconstructed cathode agrees with the real one in important statistical properties. • Effective electrode-characterization approaches have been established. • Extensive characterization gives important structural properties, e.g., tortuosity.
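
    A toy version of the pixel-swap annealing loop behind such reconstructions is sketched below; it matches only a one-directional two-point correlation of a single phase and uses an arbitrary cooling schedule, whereas the paper matches several statistics of a three-phase cathode.

      import numpy as np

      def two_point_x(img, phase, max_r):
          # Two-point probability S2(r) of `phase` along x, with periodic wrapping.
          ind = (img == phase).astype(float)
          return np.array([(ind * np.roll(ind, r, axis=1)).mean() for r in range(max_r)])

      def anneal_reconstruction(img, target_s2, phase=1, steps=20000, t=1e-3, alpha=0.9999):
          rng = np.random.default_rng()
          cost = np.sum((two_point_x(img, phase, len(target_s2)) - target_s2) ** 2)
          for _ in range(steps):
              a = tuple(rng.integers(img.shape))           # pick two random pixels
              b = tuple(rng.integers(img.shape))
              if img[a] == img[b]:
                  continue                                 # same phase: swapping changes nothing
              img[a], img[b] = img[b], img[a]              # propose the swap
              new = np.sum((two_point_x(img, phase, len(target_s2)) - target_s2) ** 2)
              if new < cost or rng.random() < np.exp((cost - new) / t):
                  cost = new                               # accept
              else:
                  img[a], img[b] = img[b], img[a]          # reject: swap back
              t *= alpha                                   # cooling
          return img, cost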

  19. A pixel selection rule based on the number of different-phase neighbours for the simulated annealing reconstruction of sandstone microstructure.

    PubMed

    Tang, T; Teng, Q; He, X; Luo, D

    2009-06-01

    Sandstone reservoirs are one of the main types of oil and gas reservoirs in China. A sandstone has a porous microstructure, which directly affects its transport properties. Hence, the study of porous microstructure is important to the exploration and exploitation of oil and gas. The three-dimensional microstructure of a sandstone can be reconstructed using the simulated annealing method based on statistical properties of its two-dimensional micrograph. The aim of the reconstruction is to minimize the discrepancy between the statistical properties of the reconstructed microstructure and those of the two-dimensional image. To accelerate the rate of convergence, we propose a pixel selection rule based on different-phase neighbours (DPNs) to replace the random pixel selection rule of the simulated annealing reconstruction. In this rule, pixels with the largest number of DPNs have the largest selection probability, and the selection probabilities of the other pixels are proportional to their DPNs. Microstructures reconstructed with the DPNs-based rule are compared with those obtained with the random selection rule and with two other biased pixel selection rules. The DPNs-based rule is the most effective in enhancing convergence. The permeability of the microstructure reconstructed with the DPNs-based rule is estimated with the Kozeny-Carman formula and is in good agreement with that of the microstructure reconstructed with the random pixel selection rule.
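
    The DPNs-based rule reduces to counting, for every pixel, how many of its neighbours belong to a different phase, and normalizing these counts into selection probabilities. A minimal two-dimensional sketch (the 4-neighbourhood and periodic borders are simplifying assumptions):

      import numpy as np

      def dpn_selection_probabilities(image):
          # Count different-phase neighbours of every pixel (up, down, left, right).
          dpn = np.zeros(image.shape, dtype=int)
          for axis in (0, 1):
              for shift in (1, -1):
                  dpn += (np.roll(image, shift, axis=axis) != image).astype(int)
          total = dpn.sum()
          # Probability proportional to the DPN count; uniform fallback for a single-phase image.
          return dpn / total if total else np.full(image.shape, 1.0 / image.size)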

  20. Thermoluminescence curves simulation using genetic algorithm with factorial design

    NASA Astrophysics Data System (ADS)

    Popko, E. A.; Weinstein, I. A.

    2016-05-01

    The evolutionary approach is an effective optimization tool for numerical analysis of thermoluminescence (TL) processes, used to assess the microparameters of kinetic models and to determine their effects on the shape of TL peaks. In this paper, a procedure for tuning the genetic algorithm (GA) is presented. The approach is based on a multifactorial experiment and allows choosing the intrinsic mechanisms of the evolutionary operators which provide the most efficient algorithm performance. The proposed method is tested by considering the “one trap-one recombination center” (OTOR) model as an example, and its advantages for the approximation of experimental TL curves are shown.

  1. An algorithm for fast DNS cavitating flows simulations using homogeneous mixture approach

    NASA Astrophysics Data System (ADS)

    Žnidarčič, A.; Coutier-Delgosha, O.; Marquillie, M.; Dular, M.

    2015-12-01

    A new algorithm for fast DNS simulations of cavitating flows is developed. The algorithm is based on the Kim and Moin form of the projection method. A homogeneous mixture approach with a transport equation for the vapour volume fraction is used to model cavitation, and various cavitation models can be used. An influence matrix and a matrix diagonalisation technique enable fast parallel computations.

  2. Cross-comparison of three electromyogram decomposition algorithms assessed with experimental and simulated data.

    PubMed

    Dai, Chenyun; Li, Yejin; Christie, Anita; Bonato, Paolo; McGill, Kevin C; Clancy, Edward A

    2015-01-01

    The reliability of clinical and scientific information provided by algorithms that automatically decompose the electromyogram (EMG) depends on the algorithms' accuracies. We used experimental and simulated data to assess the agreement and accuracy of three publicly available decomposition algorithms: EMGlab (McGill, 2005) (single channel data only), Fuzzy Expert (Erim and Lim, 2008) and Montreal (Florestal, 2009). Data consisted of quadrifilar needle EMGs from the tibialis anterior of 12 subjects at 10%, 20% and 50% maximum voluntary contraction (MVC); single channel needle EMGs from the biceps brachii of 10 controls and 10 patients during contractions just above threshold; and matched simulated data. Performance was assessed via agreement between pairs of algorithms for experimental data and accuracy with respect to the known decomposition for simulated data. For the quadrifilar experimental data, median agreements between the Montreal and Fuzzy Expert algorithms at 10%, 20%, and 50% MVC were 95%, 86%, and 64%, respectively. For the single channel control and patient data, median agreements between the three algorithm pairs were statistically similar at ∼97% and ∼92%, respectively. Accuracy on the simulated data exceeded this performance. Agreement/accuracy was strongly related to the Decomposability Index (Florestal, 2009). When agreement was high between algorithm pairs applied to simulated data, so was accuracy.

  4. Optical simulation of quantum algorithms using programmable liquid-crystal displays

    SciTech Connect

    Puentes, Graciana; La Mela, Cecilia; Ledesma, Silvia; Iemmi, Claudio; Paz, Juan Pablo; Saraceno, Marcos

    2004-04-01

    We present a scheme to perform an all-optical simulation of quantum algorithms and maps. The main components are lenses to efficiently implement the Fourier transform and programmable liquid-crystal displays to introduce space-dependent phase changes on a classical optical beam. We show how to simulate the Deutsch-Jozsa and Grover quantum algorithms using essentially the same optical array programmed in two different ways.

  5. Temporal Gillespie Algorithm: Fast Simulation of Contagion Processes on Time-Varying Networks

    PubMed Central

    Vestergaard, Christian L.; Génois, Mathieu

    2015-01-01

    Stochastic simulations are one of the cornerstones of the analysis of dynamical processes on complex networks, and are often the only accessible way to explore their behavior. The development of fast algorithms is paramount to allow large-scale simulations. The Gillespie algorithm can be used for fast simulation of stochastic processes, and variants of it have been applied to simulate dynamical processes on static networks. However, its adaptation to temporal networks remains non-trivial. We here present a temporal Gillespie algorithm that solves this problem. Our method is applicable to general Poisson (constant-rate) processes on temporal networks, stochastically exact, and up to multiple orders of magnitude faster than traditional simulation schemes based on rejection sampling. We also show how it can be extended to simulate non-Markovian processes. The algorithm is easily applicable in practice, and as an illustration we detail how to simulate both Poissonian and non-Markovian models of epidemic spreading. Namely, we provide pseudocode and its implementation in C++ for simulating the paradigmatic Susceptible-Infected-Susceptible and Susceptible-Infected-Recovered models and a Susceptible-Infected-Recovered model with non-constant recovery rates. For empirical networks, the temporal Gillespie algorithm is here typically from 10 to 100 times faster than rejection sampling. PMID:26517860
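
    For reference, a plain (static-network) Gillespie simulation of the Susceptible-Infected-Susceptible model looks like the sketch below; the temporal algorithm of the paper additionally handles the changing edge set, which this sketch does not attempt.

      import math
      import random

      def gillespie_sis(neighbors, beta, mu, initially_infected, t_max):
          # neighbors: dict mapping each node to its list of neighbours on a static network.
          infected = set(initially_infected)
          t = 0.0
          while infected and t < t_max:
              events = [(i, mu) for i in infected]                 # recoveries
              for i in infected:                                   # infections along S-I edges
                  for j in neighbors[i]:
                      if j not in infected:
                          events.append((j, beta))
              total = sum(rate for _, rate in events)
              t += -math.log(random.random()) / total              # exponential waiting time
              x, acc = random.random() * total, 0.0
              for node, rate in events:                            # pick one event by its rate
                  acc += rate
                  if acc >= x:
                      if node in infected:
                          infected.remove(node)                    # recovery
                      else:
                          infected.add(node)                       # infection
                      break
              yield t, len(infected)

    Iterating over gillespie_sis(...) yields (time, number infected) pairs until extinction or t_max.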

  6. DESIGNING SUSTAINABLE PROCESSES WITH SIMULATION: THE WASTE REDUCTION (WAR) ALGORITHM

    EPA Science Inventory

    The WAR Algorithm, a methodology for determining the potential environmental impact (PEI) of a chemical process, is presented with modifications that account for the PEI of the energy consumed within that process. From this theory, four PEI indexes are used to evaluate the envir...

  7. A Linac Simulation Code for Macro-Particles Tracking and Steering Algorithm Implementation

    SciTech Connect

    Sun, Yipeng

    2012-05-03

    In this paper, a linac simulation code written in Fortran90 is presented and several simulation examples are given. This code is optimized to implement linac alignment and steering algorithms, and to evaluate accelerator errors such as RF phase and acceleration gradient errors, and quadrupole and BPM misalignments. It can track a single particle or a bunch of particles through normal linear accelerator elements such as quadrupoles, RF cavities, dipole correctors and drift spaces. A one-to-one steering algorithm and a global alignment (steering) algorithm are implemented in this code.

  8. An algorithm for simulation of electrochemical systems with surface-bulk coupling strategies

    NASA Astrophysics Data System (ADS)

    Buoni, Matthew; Petzold, Linda

    2010-01-01

    In Buoni and Petzold (2007) [13] we described a new algorithm for simulation of electrochemical systems on two-dimensional irregular, time-dependent domains. Here we show how to extend the algorithm to three dimensions. We demonstrate our three-dimensional algorithm by simulating copper electrodeposition into a via structure. This problem poses challenges for the coupling of the dilute electrolyte (bulk) model to the surface dynamics model, which involves a complex network of reactions. To handle this coupling, we introduce a new and highly effective semi-implicit method.

  9. Adaptive Sampling Algorithms for Probabilistic Risk Assessment of Nuclear Simulations

    SciTech Connect

    Diego Mandelli; Dan Maljovec; Bei Wang; Valerio Pascucci; Peer-Timo Bremer

    2013-09-01

    Nuclear simulations are often computationally expensive, time-consuming, and high-dimensional with respect to the number of input parameters. Thus, exploring the space of all possible simulation outcomes is infeasible using finite computing resources. During simulation-based probabilistic risk analysis, it is important to discover the relationship between a potentially large number of input parameters and the output of a simulation using as few simulation trials as possible. This is a typical context for performing adaptive sampling, where a few observations are obtained from the simulation, a surrogate model is built to represent the simulation space, and new samples are selected based on the model constructed. The surrogate model is then updated based on the simulation results of the sampled points. In this way, we attempt to gain the most information possible with a small number of carefully selected sampled points, limiting the number of expensive trials needed to understand features of the simulation space. We analyze the specific use case of identifying the limit surface, i.e., the boundaries in the simulation space between system failure and system success. In this study, we explore several techniques for adaptively sampling the parameter space in order to reconstruct the limit surface. We focus on several adaptive sampling schemes. First, we seek to learn a global model of the entire simulation space using prediction models or neighborhood graphs and extract the limit surface as an iso-surface of the global model. Second, we estimate the limit surface by sampling in the neighborhood of the current estimate based on topological segmentations obtained locally. Our techniques draw inspiration from the topological structure known as the Morse-Smale complex. We highlight the advantages and disadvantages of using a global prediction model versus a local topological view of the simulation space, comparing several different strategies for adaptive sampling in both

  10. Performance of Thorup's Shortest Path Algorithm for Large-Scale Network Simulation

    NASA Astrophysics Data System (ADS)

    Sakumoto, Yusuke; Ohsaki, Hiroyuki; Imase, Makoto

    In this paper, we investigate the performance of Thorup's algorithm by comparing it to Dijkstra's algorithm for large-scale network simulations. One of the challenges toward the realization of large-scale network simulations is the efficient computation of shortest paths in a graph with N vertices and M edges. The time complexity for solving a single-source shortest path (SSSP) problem with Dijkstra's algorithm with a binary heap (DIJKSTRA-BH) is O((M+N)log N). A sophisticated algorithm called Thorup's algorithm has been proposed. The original version of Thorup's algorithm (THORUP-FR) has a time complexity of O(M+N). A simplified version of Thorup's algorithm (THORUP-KL) has a time complexity of O(Mα(N)+N), where α(N) is the functional inverse of the Ackermann function. In this paper, we compare the performances (i.e., execution time and memory consumption) of THORUP-KL and DIJKSTRA-BH, since it is known that THORUP-FR is at least ten times slower than Dijkstra's algorithm with a Fibonacci heap. We find that (1) THORUP-KL is almost always faster than DIJKSTRA-BH for large-scale network simulations, and (2) the performances of THORUP-KL and DIJKSTRA-BH deviate from their time complexities due to the presence of the memory cache in the microprocessor.
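
    The DIJKSTRA-BH baseline referred to above is the textbook binary-heap Dijkstra; a minimal sketch, with the graph given as a dictionary of weighted neighbour lists:

      import heapq

      def dijkstra(adjacency, source):
          # adjacency: {u: [(v, weight), ...]}; returns shortest distances from source.
          dist = {source: 0.0}
          heap = [(0.0, source)]
          while heap:
              d, u = heapq.heappop(heap)
              if d > dist.get(u, float("inf")):
                  continue                          # stale heap entry, node already settled
              for v, w in adjacency.get(u, []):
                  nd = d + w
                  if nd < dist.get(v, float("inf")):
                      dist[v] = nd
                      heapq.heappush(heap, (nd, v))
          return dist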

  11. Quantum Annealing at Google: Recent Learnings and Next Steps

    NASA Astrophysics Data System (ADS)

    Neven, Hartmut

    Recently we studied optimization problems with rugged energy landscapes that featured tall and narrow energy barriers separating energy minima. We found that for a crafted problem of this kind, called the weak-strong cluster glass, the D-Wave 2X processor achieves a significant advantage in runtime scaling relative to Simulated Annealing (SA). For instances with 945 variables this results in a time-to-99%-success-probability 10^9 times shorter than SA running on a single core. When comparing to the Quantum Monte Carlo (QMC) algorithm we only observe a pre-factor advantage but the pre-factor is large, about 10^6 for an implementation on a single core. We should note that we expect QMC to scale like physical quantum annealing only for problems for which the tunneling transitions can be described by a dominant purely imaginary instanton. We expect these findings to carry over to other problems with similar energy landscapes. A class of practical interest are k-th order binary optimization problems. We studied 4-spin problems using numerical methods and found again that simulated quantum annealing has better scaling than SA. This leaves us with a final step to achieve a wall clock speedup of practical relevance. We need to develop an annealing architecture that supports embedding of k-th order binary optimization in a manner that preserves the runtime advantage seen prior to embedding.

  12. On the rejection-based algorithm for simulation and analysis of large-scale reaction networks

    SciTech Connect

    Thanh, Vo Hong; Zunino, Roberto; Priami, Corrado

    2015-06-28

    Stochastic simulation for in silico studies of large biochemical networks requires a great amount of computational time. We recently proposed a new exact simulation algorithm, called the rejection-based stochastic simulation algorithm (RSSA) [Thanh et al., J. Chem. Phys. 141(13), 134116 (2014)], to improve simulation performance by postponing and collapsing as much as possible the propensity updates. In this paper, we analyze the performance of this algorithm in detail, and improve it for simulating large-scale biochemical reaction networks. We also present a new algorithm, called simultaneous RSSA (SRSSA), which generates many independent trajectories simultaneously for the analysis of the biochemical behavior. SRSSA improves simulation performance by utilizing a single data structure across simulations to select reaction firings and forming trajectories. The memory requirement for building and storing the data structure is thus independent of the number of trajectories. The updating of the data structure when needed is performed collectively in a single operation across the simulations. The trajectories generated by SRSSA are exact and independent of each other by exploiting the rejection-based mechanism. We test our new improvement on real biological systems with a wide range of reaction networks to demonstrate its applicability and efficiency.
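
    The rejection mechanism at the core of RSSA can be illustrated as follows; this is a simplified sketch (propensity bounds are passed in explicitly and no fluctuation-interval bookkeeping is done), not the authors' implementation.

      import math
      import random

      def rssa_step(state, propensities, upper):
          # propensities: list of functions state -> exact rate; upper: their precomputed
          # upper bounds, assumed valid for the current state.
          total_upper = sum(upper)
          tau = 0.0
          while True:
              tau += -math.log(random.random()) / total_upper   # each trial consumes an Exp(total_upper) time
              x, acc, j = random.random() * total_upper, 0.0, 0
              for j, ub in enumerate(upper):                    # draw a candidate from the upper bounds
                  acc += ub
                  if acc >= x:
                      break
              if random.random() * upper[j] <= propensities[j](state):
                  return tau, j                                 # accepted: reaction j fires after time tau

    Because candidates are accepted with probability (exact propensity)/(upper bound), exact propensities are evaluated only for candidate reactions, which is what lets the rejection-based approach postpone most propensity updates.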

  13. Laboratory simulations of thermal annealing in proto-planetary discs - I. Crystallization of Mg silicates from sol gels

    NASA Astrophysics Data System (ADS)

    Willenweber, A.; Burchard, M.; Lattard, D.

    2011-12-01

    Amorphous and crystalline dust particles are among the smallest components of accretionary processes within circumstellar disks. The characteristics of the transition from amorphous dust to crystalline particles strongly influence the thermal structure of the circumstellar disk and therefore affect most other processes. Annealing experiments on different compositions along the MgO-SiO2 join have yielded quite contrasting results on the crystallization kinetics of enstatite (MgSiO3) and forsterite (Mg2SiO4) (e.g. review of Wooden et al., 2005; Murata et al., 2009; Roskosz et al., 2009). The discrepancies may result from differences in the starting materials. To explore this factor, we have set up several experimental series, using different methods to prepare sol-gels with a variety of Mg/Si ratios. We have also tested different procedures to process the raw gel materials after precipitation. The final gels were annealed in a furnace at temperatures between 700 and 1500 °C for durations between 15 min and 96 h. MIR and FIR spectroscopy, X-ray diffraction, BSE imaging and EDX analyses were used to characterize the run products. On the enstatite composition, the 1500 °C run products consist of well-crystallized enstatite polymorphs with very little forsterite. Products of runs between 700 and 800 °C contain both poorly crystallized phases and amorphous material. Between 780 and 800 °C, at all run durations, crystalline products are dominated by enstatite and show less forsterite. Run products at 750 °C change with run time from amorphous, to forsterite-dominated and finally to enstatite-dominated mixtures, the latter containing subordinate forsterite. At 700 °C amorphous run products are observed after short run times, but change to forsterite-dominated mixtures after longer run times. Up to 24 hours no enstatite could be observed in the products at 700 °C. Preliminary results with the SEM reveal compositional heterogeneities after short run durations (up to 30 minutes), which reflect the formation

  14. A fast algorithm for the simulation of arterial pulse waves

    NASA Astrophysics Data System (ADS)

    Du, Tao; Hu, Dan; Cai, David

    2016-06-01

    One-dimensional models have been widely used in studies of the propagation of blood pulse waves in large arterial trees. Under a periodic driving of the heartbeat, traditional numerical methods, such as the Lax-Wendroff method, are employed to obtain asymptotic periodic solutions at large times. However, these methods are severely constrained by the CFL condition due to large pulse wave speed. In this work, we develop a new numerical algorithm to overcome this constraint. First, we reformulate the model system of pulse wave propagation using a set of Riemann variables and derive a new form of boundary conditions at the inlet, the outlets, and the bifurcation points of the arterial tree. The new form of the boundary conditions enables us to design a convergent iterative method to enforce the boundary conditions. Then, after exchanging the spatial and temporal coordinates of the model system, we apply the Lax-Wendroff method in the exchanged coordinate system, which turns the large pulse wave speed from a liability to a benefit, to solve the wave equation in each artery of the model arterial system. Our numerical studies show that our new algorithm is stable and can perform ∼15 times faster than the traditional implementation of the Lax-Wendroff method under the requirement that the relative numerical error of blood pressure be smaller than one percent, which is much smaller than the modeling error.
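
    For reference, the basic Lax-Wendroff update that such solvers build on, written here for the linear advection equation u_t + a u_x = 0 on a periodic grid (the generic scheme, not the paper's reformulated arterial solver):

      import numpy as np

      def lax_wendroff_step(u, a, dx, dt):
          c = a * dt / dx                      # Courant number; the scheme is stable for |c| <= 1
          up = np.roll(u, -1)                  # u_{j+1}
          um = np.roll(u, 1)                   # u_{j-1}
          return u - 0.5 * c * (up - um) + 0.5 * c**2 * (up - 2.0 * u + um)

    The stability restriction dt <= dx/|a| is exactly the CFL constraint that the paper works around by exchanging the roles of the spatial and temporal coordinates.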

  15. Efficient photoheating algorithms in time-dependent photoionization simulations

    NASA Astrophysics Data System (ADS)

    Lee, Kai-Yan; Mellema, Garrelt; Lundqvist, Peter

    2016-02-01

    We present an extension to the time-dependent photoionization code C2-RAY to calculate photoheating in an efficient and accurate way. In C2-RAY, the thermal calculation demands relatively small time-steps for accurate results. We describe two novel methods to reduce the computational cost associated with small time-steps, namely, an adaptive time-step algorithm and an asynchronous evolution approach. The adaptive time-step algorithm determines an optimal time-step for the next computational step. It uses a fast ray-tracing scheme to quickly locate the relevant cells for this determination and only uses these cells for the calculation of the time-step. Asynchronous evolution allows different cells to evolve with different time-steps. The asynchronous clocks of the cells are synchronized at the times when outputs are produced. By only evolving cells which may require short time-steps with these short time-steps, instead of imposing them on the whole grid, the computational cost of the calculation can be substantially reduced. We show that our methods work well for several cosmologically relevant test problems and validate our results by comparing to the results of another time-dependent photoionization code.
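
    To illustrate the flavor of such an adaptive time-step criterion, here is a hypothetical sketch that limits the expected fractional change of each cell's ionized fraction per step; the cell attributes (`x`, `dxdt`) and the tolerance are assumptions for illustration, not the C2-RAY implementation.

```python
def adaptive_timestep(cells, dt_max, max_frac_change=0.05):
    """Pick a global time-step so that no cell's ionized fraction is expected
    to change by more than max_frac_change in one step.

    `cells` is assumed to be an iterable of objects carrying an ionized
    fraction `x` and an estimated rate of change `dxdt`; both the data layout
    and the criterion are illustrative only.
    """
    dt = dt_max
    for cell in cells:
        rate = abs(cell.dxdt)
        if rate > 0.0:
            dt = min(dt, max_frac_change * max(cell.x, 1e-6) / rate)
    return dt
```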

  16. A process-based algorithm for simulating terraces in SWAT

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Terraces in crop fields are one of the most important soil and water conservation measures that affect runoff and erosion processes in a watershed. In large hydrological programs such as the Soil and Water Assessment Tool (SWAT), terrace effects are simulated by adjusting the slope length and the US...

  17. Simulating Multivariate Nonnormal Data Using an Iterative Algorithm

    ERIC Educational Resources Information Center

    Ruscio, John; Kaczetow, Walter

    2008-01-01

    Simulating multivariate nonnormal data with specified correlation matrices is difficult. One especially popular method is Vale and Maurelli's (1983) extension of Fleishman's (1978) polynomial transformation technique to multivariate applications. This requires the specification of distributional moments and the calculation of an intermediate…

  18. Some Algorithms For Simulating Size-resolved Aerosol Dynamics Models

    NASA Astrophysics Data System (ADS)

    Debry, E.; Sportisse, B.

    The objective of this presentation is to show some algorithms used to solve aerosol dynamics in 3D dispersion models. INTRODUCTION Gas phase pollution has been widely studied and some models are now available. The situation is quite different with respect to atmospheric aerosols. However, atmospheric particulate matter significantly influences atmospheric properties such as the radiative balance, cloud formation, and gas pollutant concentrations (gas-to-particle conversion), and has an impact on human health. As aerosol properties (optical, hygroscopic, noxiousness) depend mainly on particle size, it is important to be able to follow the aerosol (or particle) size distribution (PSD) over time. This distribution is modified by physical processes such as coagulation, condensation or evaporation, nucleation and removal. Aerosol dynamics is usually modeled by the well-known General Dynamic Equation (GDE) [1]. MODELS Several models already exist to solve this equation. Multi-modal models are widely used [2] [3] because of the few parameters needed, but the GDE is solved only for its moments and the PSD is assumed to remain log-normal. In contrast, size-resolved models discretize the aerosol size spectrum into several bins and solve the GDE within each one. This step can be performed either by resolving each process separately (splitting), for example coagulation can be resolved by the well-known "size-binning" algorithms [4] and condensation leads to an advection equation on the PSD [5], or by coupling all processes, which finite element [6] and stochastic methods [7] allow. Stochastic algorithms may not be competitive with deterministic ones with respect to computation time, but they provide reference solutions useful for validating more operational codes on realistic cases, as analytic solutions of the GDE exist only for academic cases. REFERENCES [1] Seinfeld, J.H. and Pandis, S.N. Atmospheric chemistry and

  19. An Event-Driven Hybrid Molecular Dynamics and Direct Simulation Monte Carlo Algorithm

    SciTech Connect

    Donev, A; Garcia, A L; Alder, B J

    2007-07-30

    A novel algorithm is developed for the simulation of polymer chains suspended in a solvent. The polymers are represented as chains of hard spheres tethered by square wells and interact with the solvent particles with hard-core potentials. The algorithm uses event-driven molecular dynamics (MD) for the simulation of the polymer chain and the interactions between the chain beads and the surrounding solvent particles. The interactions between the solvent particles themselves are not treated deterministically as in event-driven algorithms; rather, the momentum and energy exchange in the solvent is determined stochastically using the Direct Simulation Monte Carlo (DSMC) method. The coupling between the solvent and the solute is consistently represented at the particle level; however, unlike full MD simulations of both the solvent and the solute, the spatial structure of the solvent is ignored. The algorithm is described in detail and applied to the study of the dynamics of a polymer chain tethered to a hard wall subjected to uniform shear. The algorithm closely reproduces full MD simulations with two orders of magnitude greater efficiency. Results do not confirm the existence of periodic (cycling) motion of the polymer chain.

  20. Parallel Algorithms for Monte Carlo Particle Transport Simulation on Exascale Computing Architectures

    NASA Astrophysics Data System (ADS)

    Romano, Paul Kollath

    Monte Carlo particle transport methods are being considered as a viable option for high-fidelity simulation of nuclear reactors. While Monte Carlo methods offer several potential advantages over deterministic methods, there are a number of algorithmic shortcomings that would prevent their immediate adoption for full-core analyses. In this thesis, algorithms are proposed both to ameliorate the degradation in parallel efficiency typically observed for large numbers of processors and to offer a means of decomposing large tally data that will be needed for reactor analysis. A nearest-neighbor fission bank algorithm was proposed and subsequently implemented in the OpenMC Monte Carlo code. A theoretical analysis of the communication pattern shows that the expected cost is O(√N), whereas traditional fission bank algorithms are O(N) at best. The algorithm was tested on two supercomputers, the Intrepid Blue Gene/P and the Titan Cray XK7, and demonstrated nearly linear parallel scaling up to 163,840 processor cores on a full-core benchmark problem. An algorithm for reducing network communication arising from tally reduction was analyzed and implemented in OpenMC. The proposed algorithm groups particle histories on a single processor into batches for tally purposes; in doing so it prevents all network communication for tallies until the very end of the simulation. The algorithm was tested, again on a full-core benchmark, and shown to reduce network communication substantially. A model was developed to predict the impact of load imbalances on the performance of domain decomposed simulations. The analysis demonstrated that load imbalances in domain decomposed simulations arise from two distinct phenomena: non-uniform particle densities and non-uniform spatial leakage. The dominant performance penalty for domain decomposition was shown to come from these physical effects rather than insufficient network bandwidth or high latency. The model predictions were verified with

  1. Determination of three-dimensional structures of proteins from interproton distance data by dynamical simulated annealing from a random array of atoms. Circumventing problems associated with folding.

    PubMed

    Nilges, M; Clore, G M; Gronenborn, A M

    1988-10-24

    A new real space method, based on the principles of simulated annealing, is presented for determining protein structures on the basis of interproton distance restraints derived from NMR data. The method circumvents the folding problem associated with all real space methods described to date, by starting from a completely random array of atoms and introducing the force constants for the covalent, interproton distance and repulsive van der Waals terms in the target function appropriately. The system is simulated at high temperature by solving Newton's equations of motion. As the values of all force constants are very low during the early stages of the simulation, energy barriers between different folds of the protein can be overcome, and the global minimum of the target function is reliably located. Further, because the atoms are initially only weakly coupled, they can move essentially independently to satisfy the restraints. The method is illustrated using two examples of small proteins, namely crambin (46 residues) and potato carboxypeptidase inhibitor (39 residues).
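
    The following is a schematic sketch of the force-constant ramping idea described above, using a Metropolis acceptance step in place of the molecular dynamics integration for brevity; the energy terms, ramp schedule, move size and cooling schedule are placeholders rather than the published protocol.

```python
import math
import random

def anneal_structure(coords, energy_terms, n_steps=10000, temp0=1000.0):
    """Toy dynamical-annealing loop: force constants for the covalent,
    distance-restraint and repulsive terms start very small and are ramped
    up, so that early on the atoms move almost independently and barriers
    between folds are easy to cross.  `energy_terms` maps a term name to a
    function E(coords); everything here is schematic.
    """
    weights = {name: 0.001 for name in energy_terms}       # weak coupling at first
    for step in range(n_steps):
        ramp = min(1.0, step / (0.8 * n_steps))            # linear ramp of force constants
        for name in weights:
            weights[name] = 0.001 + ramp * (1.0 - 0.001)
        trial = [x + random.gauss(0.0, 0.05) for x in coords]
        dE = sum(w * (energy_terms[n](trial) - energy_terms[n](coords))
                 for n, w in weights.items())
        temp = max(temp0 * (1.0 - step / n_steps), 1e-3)   # simple cooling schedule
        # Metropolis acceptance at the current annealing temperature.
        if dE <= 0.0 or random.random() < math.exp(-dE / temp):
            coords = trial
    return coords
```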

  2. Advanced time integration algorithms for dislocation dynamics simulations of work hardening

    DOE PAGES

    Sills, Ryan B.; Aghaei, Amin; Cai, Wei

    2016-04-25

    Efficient time integration is a necessity for dislocation dynamics simulations of work hardening to achieve experimentally relevant strains. In this work, an efficient time integration scheme using a high order explicit method with time step subcycling and a newly-developed collision detection algorithm are evaluated. First, time integrator performance is examined for an annihilating Frank–Read source, showing the effects of dislocation line collision. The integrator with subcycling is found to significantly out-perform other integration schemes. The performance of the time integration and collision detection algorithms is then tested in a work hardening simulation. The new algorithms show a 100-fold speed-up relative to traditional schemes. As a result, subcycling is shown to improve efficiency significantly while maintaining an accurate solution, and the new collision algorithm allows an arbitrarily large time step size without missing collisions.

  3. Rapid optimization of blast wave mitigation strategies using Quiet Direct Simulation and Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Smith, Matthew R.; Kuo, Fang-An; Hsieh, Chih-Wei; Yu, Jen-Perng; Wu, Jong-Shinn; Ferguson, Alex

    2010-06-01

    Presented is a rapid calculation tool for the optimization of blast wave related mitigation strategies. The motion of gas resulting from a blast wave (specified by the user) is solved by the Quiet Direct Simulation (QDS) method - a rapid kinetic theory-based finite volume method. The optimization routine employed is a newly developed Genetic Algorithm (GA) which is demonstrated to be similar to a Differential Evolution (DE) scheme with several modifications. In any Genetic Algorithm, individuals contain genetic information which is passed on to newly created individuals in successive generations. The results from unsteady QDS simulations are used to determine the individual's "genetic fitness" which is employed by the proposed Genetic Algorithm during the reproduction process. The combined QDS/GA algorithm is applied to various test cases and finally the optimization of a non-trivial blast wave mitigation strategy. Both QDS and the proposed GA are demonstrated to perform with minimal computational expense while accurately solving the optimization problems presented.
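
    Since the abstract describes the optimizer as a Genetic Algorithm close to Differential Evolution, the sketch below shows one generation of a basic DE/rand/1/bin update; the fitness function is a placeholder standing in for an unsteady QDS blast-wave simulation, and the control parameters f and cr are illustrative.

```python
import numpy as np

def de_step(population, fitness, f=0.8, cr=0.9, rng=np.random.default_rng()):
    """One generation of a basic DE/rand/1/bin update.

    `population` is an (n_individuals, n_params) array with at least four
    individuals, and `fitness` maps a parameter vector to a scalar to be
    minimized (here a stand-in for the result of an unsteady QDS run).
    """
    n, d = population.shape
    new_pop = population.copy()
    for i in range(n):
        a, b, c = rng.choice([j for j in range(n) if j != i], size=3, replace=False)
        mutant = population[a] + f * (population[b] - population[c])   # mutation
        cross = rng.random(d) < cr                                     # binomial crossover
        cross[rng.integers(d)] = True          # ensure at least one gene crosses over
        trial = np.where(cross, mutant, population[i])
        if fitness(trial) < fitness(population[i]):                    # greedy selection
            new_pop[i] = trial
    return new_pop
```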

  4. Advanced time integration algorithms for dislocation dynamics simulations of work hardening

    NASA Astrophysics Data System (ADS)

    Sills, Ryan B.; Aghaei, Amin; Cai, Wei

    2016-05-01

    Efficient time integration is a necessity for dislocation dynamics simulations of work hardening to achieve experimentally relevant strains. In this work, an efficient time integration scheme using a high order explicit method with time step subcycling and a newly-developed collision detection algorithm are evaluated. First, time integrator performance is examined for an annihilating Frank-Read source, showing the effects of dislocation line collision. The integrator with subcycling is found to significantly out-perform other integration schemes. The performance of the time integration and collision detection algorithms is then tested in a work hardening simulation. The new algorithms show a 100-fold speed-up relative to traditional schemes. Subcycling is shown to improve efficiency significantly while maintaining an accurate solution, and the new collision algorithm allows an arbitrarily large time step size without missing collisions.
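
    A minimal sketch of the subcycling idea: nodes flagged as requiring fine resolution are advanced with several substeps inside one global step. The node interface (a `fast` flag and a `step(dt)` update) is hypothetical and stands in for whatever explicit integrator the dislocation dynamics code actually uses.

```python
def advance_with_subcycling(nodes, dt_global, n_sub=10):
    """Advance all nodes by dt_global, but integrate nodes flagged as `fast`
    (e.g. near a likely collision) with n_sub smaller substeps.

    `node.step(dt)` is a stand-in for the explicit update of the integrator.
    """
    for node in nodes:
        if node.fast:
            dt_small = dt_global / n_sub
            for _ in range(n_sub):
                node.step(dt_small)
        else:
            node.step(dt_global)
```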

  5. Parallel algorithms for simulating continuous time Markov chains

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Heidelberger, Philip

    1992-01-01

    We have previously shown that the mathematical technique of uniformization can serve as the basis of synchronization for the parallel simulation of continuous-time Markov chains. This paper reviews the basic method and compares five different methods based on uniformization, evaluating their strengths and weaknesses as a function of problem characteristics. The methods vary in their use of optimism, logical aggregation, communication management, and adaptivity. Performance evaluation is conducted on the Intel Touchstone Delta multiprocessor, using up to 256 processors.
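
    A minimal sketch of the uniformization idea the abstract relies on: candidate event times come from a Poisson process with a uniform rate, and each candidate is resolved by a discrete-time transition that may be a self-jump. The generator matrix and initial state are user supplied; nothing here reflects the parallel synchronization machinery compared in the paper.

```python
import numpy as np

def uniformized_ctmc_path(Q, x0, t_end, rng=np.random.default_rng()):
    """Simulate a continuous-time Markov chain by uniformization.

    Q is the generator matrix (rows sum to zero, with at least one nonzero
    exit rate).  Candidate event times come from a Poisson process of uniform
    rate Lam >= max exit rate; at each candidate time the chain jumps
    according to P = I + Q / Lam, which allows 'pseudo-events' that leave the
    state unchanged.
    """
    Lam = max(-Q[i, i] for i in range(Q.shape[0]))       # uniformization rate
    P = np.eye(Q.shape[0]) + Q / Lam
    t, x, path = 0.0, x0, [(0.0, x0)]
    while True:
        t += rng.exponential(1.0 / Lam)                  # next candidate event
        if t > t_end:
            break
        x = rng.choice(Q.shape[0], p=P[x])               # possibly a self-jump
        path.append((t, x))
    return path
```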

  6. Stochastic algorithms for the analysis of numerical flame simulations

    SciTech Connect

    Bell, John B.; Day, Marcus S.; Grcar, Joseph F.; Lijewski, Michael J.

    2001-12-14

    Recent progress in simulation methodologies and new, high-performance parallel architectures have made it possible to perform detailed simulations of multidimensional combustion phenomena using comprehensive kinetics mechanisms. However, as simulation complexity increases, it becomes increasingly difficult to extract detailed quantitative information about the flame from the numerical solution, particularly regarding the details of chemical processes. In this paper we present a new diagnostic tool for analysis of numerical simulations of combustion phenomena. Our approach is based on recasting an Eulerian flow solution in a Lagrangian frame. Unlike a conventional Lagrangian viewpoint in which we follow the evolution of a volume of the fluid, we instead follow specific chemical elements, e.g., carbon, nitrogen, etc., as they move through the system. From this perspective an "atom" is part of some molecule that is transported through the domain by advection and diffusion. Reactions cause the atom to shift from one species to another with the subsequent transport given by the movement of the new species. We represent these processes using a stochastic particle formulation that treats advection deterministically and models diffusion as a suitable random-walk process. Within this probabilistic framework, reactions can be viewed as a Markov process transforming molecule to molecule with given probabilities. In this paper, we discuss the numerical issues in more detail and demonstrate that an ensemble of stochastic trajectories can accurately capture key features of the continuum solution. We also illustrate how the method can be applied to studying the role of cyanochemistry on NOx production in a diffusion flame.

  7. Stochastic algorithms for the analysis of numerical flame simulations

    SciTech Connect

    Bell, John B.; Day, Marcus S.; Grcar, Joseph F.; Lijewski, Michael J.

    2004-04-26

    Recent progress in simulation methodologies and high-performance parallel computers have made it possible to perform detailed simulations of multidimensional reacting flow phenomena using comprehensive kinetics mechanisms. As simulations become larger and more complex, it becomes increasingly difficult to extract useful information from the numerical solution, particularly regarding the interactions of the chemical reaction and diffusion processes. In this paper we present a new diagnostic tool for analysis of numerical simulations of reacting flow. Our approach is based on recasting an Eulerian flow solution in a Lagrangian frame. Unlike a conventional Lagrangian viewpoint that follows the evolution of a volume of the fluid, we instead follow specific chemical elements, e.g., carbon, nitrogen, etc., as they move through the system. From this perspective an "atom" is part of some molecule of a species that is transported through the domain by advection and diffusion. Reactions cause the atom to shift from one chemical host species to another, and the subsequent transport of the atom is given by the movement of the new species. We represent these processes using a stochastic particle formulation that treats advection deterministically and models diffusion and chemistry as stochastic processes. In this paper, we discuss the numerical issues in detail and demonstrate that an ensemble of stochastic trajectories can accurately capture key features of the continuum solution. The capabilities of this diagnostic are then demonstrated by applications to study the modulation of carbon chemistry during a vortex-flame interaction, and the role of cyano chemistry in NOx production for a steady diffusion flame.
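
    A minimal sketch of the stochastic particle step described above: deterministic advection, a Gaussian random walk for diffusion, and a Markov hop between host species. The velocity and diffusivity callables and the per-species transition rates are placeholders, not the diagnostic's actual inputs.

```python
import numpy as np

def advance_atom(pos, species, velocity, diffusivity, hop_rates, dt,
                 rng=np.random.default_rng()):
    """One step for a tracked 'atom': advect deterministically, random-walk
    to model diffusion, then possibly hop to a new host species.

    `velocity(pos, species)` and `diffusivity(species)` are placeholder
    callables; `hop_rates[species]` maps candidate new species to transition
    rates, so the hop probability over the step is rate * dt (valid for
    small dt).
    """
    pos = pos + dt * velocity(pos, species)                 # deterministic advection
    pos = pos + np.sqrt(2.0 * diffusivity(species) * dt) * rng.standard_normal(np.shape(pos))
    r, acc = rng.random(), 0.0
    for new_species, rate in hop_rates[species].items():    # Markov reaction hop
        acc += rate * dt
        if r < acc:
            species = new_species
            break
    return pos, species
```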

  8. Developing Subdomain Allocation Algorithms Based on Spatial and Communicational Constraints to Accelerate Dust Storm Simulation

    PubMed Central

    Gui, Zhipeng; Yu, Manzhu; Yang, Chaowei; Jiang, Yunfeng; Chen, Songqing; Xia, Jizhe; Huang, Qunying; Liu, Kai; Li, Zhenlong; Hassan, Mohammed Anowarul; Jin, Baoxuan

    2016-01-01

    Dust storms have serious, disastrous impacts on the environment, human health, and assets. The development and application of dust storm models have contributed significantly to better understanding and predicting the distribution, intensity and structure of dust storms. However, dust storm simulation is a data- and computing-intensive process. To improve computing performance, high performance computing has been widely adopted by dividing the entire study area into multiple subdomains and allocating each subdomain to different computing nodes in a parallel fashion. Inappropriate allocation may introduce imbalanced task loads and unnecessary communications among computing nodes. Therefore, allocation is a key factor that may impact the efficiency of the parallel process. An allocation algorithm is expected to consider the computing cost and communication cost for each computing node to minimize total execution time and reduce overall communication cost for the entire simulation. This research introduces three algorithms to optimize the allocation by considering the spatial and communicational constraints: 1) an Integer Linear Programming (ILP) based algorithm from a combinatorial optimization perspective; 2) a K-Means and Kernighan-Lin combined heuristic algorithm (K&K) integrating geometric and coordinate-free methods by merging local and global partitioning; 3) an automatic seeded region growing based geometric and local partitioning algorithm (ASRG). The performance and effectiveness of the three algorithms are compared based on different factors. Further, we adopt the K&K algorithm as the demonstration algorithm for an experiment of dust model simulation with the non-hydrostatic mesoscale model (NMM-dust) and compare its performance with the MPI default sequential allocation. The results demonstrate that the K&K method significantly improves the simulation performance with better subdomain allocation. This method can also be adopted for other relevant atmospheric and numerical
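
    As an illustration of the geometric stage of such a partitioning, the sketch below assigns grid cells to k subdomains with plain Lloyd's K-Means on cell coordinates; the Kernighan-Lin refinement and the communication-cost terms of the K&K algorithm are not shown, and the inputs are placeholders.

```python
import numpy as np

def kmeans_partition(cell_xy, k, n_iter=50, rng=np.random.default_rng(0)):
    """Assign each grid cell (rows of cell_xy, shape (n, 2)) to one of k
    subdomains with plain Lloyd's K-Means.

    This is only the geometric stage; load weighting and communication-aware
    refinement are not shown.
    """
    centers = cell_xy[rng.choice(len(cell_xy), size=k, replace=False)].astype(float)
    for _ in range(n_iter):
        d = np.linalg.norm(cell_xy[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)                    # nearest center per cell
        for j in range(k):
            if np.any(labels == j):
                centers[j] = cell_xy[labels == j].mean(axis=0)
    return labels
```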

  9. Predicting patchy particle crystals: variable box shape simulations and evolutionary algorithms.

    PubMed

    Bianchi, Emanuela; Doppelbauer, Günther; Filion, Laura; Dijkstra, Marjolein; Kahl, Gerhard

    2012-06-01

    We consider several patchy particle models that have been proposed in literature and we investigate their candidate crystal structures in a systematic way. We compare two different algorithms for predicting crystal structures: (i) an approach based on Monte Carlo simulations in the isobaric-isothermal ensemble and (ii) an optimization technique based on ideas of evolutionary algorithms. We show that the two methods are equally successful and provide consistent results on crystalline phases of patchy particle systems. PMID:22697525

  10. Developing Subdomain Allocation Algorithms Based on Spatial and Communicational Constraints to Accelerate Dust Storm Simulation.

    PubMed

    Gui, Zhipeng; Yu, Manzhu; Yang, Chaowei; Jiang, Yunfeng; Chen, Songqing; Xia, Jizhe; Huang, Qunying; Liu, Kai; Li, Zhenlong; Hassan, Mohammed Anowarul; Jin, Baoxuan

    2016-01-01

    Dust storms have serious, disastrous impacts on the environment, human health, and assets. The development and application of dust storm models have contributed significantly to better understanding and predicting the distribution, intensity and structure of dust storms. However, dust storm simulation is a data- and computing-intensive process. To improve computing performance, high performance computing has been widely adopted by dividing the entire study area into multiple subdomains and allocating each subdomain to different computing nodes in a parallel fashion. Inappropriate allocation may introduce imbalanced task loads and unnecessary communications among computing nodes. Therefore, allocation is a key factor that may impact the efficiency of the parallel process. An allocation algorithm is expected to consider the computing cost and communication cost for each computing node to minimize total execution time and reduce overall communication cost for the entire simulation. This research introduces three algorithms to optimize the allocation by considering the spatial and communicational constraints: 1) an Integer Linear Programming (ILP) based algorithm from a combinatorial optimization perspective; 2) a K-Means and Kernighan-Lin combined heuristic algorithm (K&K) integrating geometric and coordinate-free methods by merging local and global partitioning; 3) an automatic seeded region growing based geometric and local partitioning algorithm (ASRG). The performance and effectiveness of the three algorithms are compared based on different factors. Further, we adopt the K&K algorithm as the demonstration algorithm for an experiment of dust model simulation with the non-hydrostatic mesoscale model (NMM-dust) and compare its performance with the MPI default sequential allocation. The results demonstrate that the K&K method significantly improves the simulation performance with better subdomain allocation. This method can also be adopted for other relevant atmospheric and numerical

  12. A Parallel, Finite-Volume Algorithm for Large-Eddy Simulation of Turbulent Flows

    NASA Technical Reports Server (NTRS)

    Bui, Trong T.

    1999-01-01

    A parallel, finite-volume algorithm has been developed for large-eddy simulation (LES) of compressible turbulent flows. This algorithm includes piecewise linear least-square reconstruction, trilinear finite-element interpolation, Roe flux-difference splitting, and second-order MacCormack time marching. Parallel implementation is done using the message-passing programming model. In this paper, the numerical algorithm is described. To validate the numerical method for turbulence simulation, LES of fully developed turbulent flow in a square duct is performed for a Reynolds number of 320 based on the average friction velocity and the hydraulic diameter of the duct. Direct numerical simulation (DNS) results are available for this test case, and the accuracy of this algorithm for turbulence simulations can be ascertained by comparing the LES solutions with the DNS results. The effects of grid resolution, upwind numerical dissipation, and subgrid-scale dissipation on the accuracy of the LES are examined. Comparison with DNS results shows that the standard Roe flux-difference splitting dissipation adversely affects the accuracy of the turbulence simulation. For accurate turbulence simulations, only 3-5 percent of the standard Roe flux-difference splitting dissipation is needed.

  13. Scheduling Earth Observing Satellites with Evolutionary Algorithms

    NASA Technical Reports Server (NTRS)

    Globus, Al; Crawford, James; Lohn, Jason; Pryor, Anna

    2003-01-01

    We hypothesize that evolutionary algorithms can effectively schedule coordinated fleets of Earth observing satellites. The constraints are complex and the bottlenecks are not well understood, a condition where evolutionary algorithms are often effective. This is, in part, because evolutionary algorithms require only that one can represent solutions, modify solutions, and evaluate solution fitness. To test the hypothesis we have developed a representative set of problems, produced optimization software (in Java) to solve them, and run experiments comparing techniques. This paper presents initial results of a comparison of several evolutionary and other optimization techniques; namely the genetic algorithm, simulated annealing, squeaky wheel optimization, and stochastic hill climbing. We also compare separate satellite vs. integrated scheduling of a two satellite constellation. While the results are not definitive, tests to date suggest that simulated annealing is the best search technique and integrated scheduling is superior.
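
    Because simulated annealing emerged as the strongest search technique in these tests, the sketch below shows a generic annealing loop for a scheduling problem; the schedule representation, fitness function and neighborhood move are placeholders supplied by the caller, not the paper's Java implementation.

```python
import math
import random

def anneal_schedule(schedule, fitness, neighbor, t0=1.0, alpha=0.995, n_steps=20000):
    """Generic simulated annealing for scheduling.

    `schedule` is any assignment of observations to satellites/time slots,
    `fitness` scores it (higher is better), and `neighbor` returns a slightly
    modified copy (e.g. one observation reassigned).
    """
    best = current = schedule
    f_cur = f_best = fitness(current)
    temp = t0
    for _ in range(n_steps):
        cand = neighbor(current)
        f_cand = fitness(cand)
        # Metropolis criterion: always accept improvements, sometimes accept worse moves.
        if f_cand >= f_cur or random.random() < math.exp((f_cand - f_cur) / temp):
            current, f_cur = cand, f_cand
            if f_cur > f_best:
                best, f_best = current, f_cur
        temp *= alpha                      # geometric cooling
    return best
```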

  14. Multi-user cognitive radio network resource allocation based on the adaptive niche immune genetic algorithm

    NASA Astrophysics Data System (ADS)

    Zu, Yun-Xiao; Zhou, Jie

    2012-01-01

    Multi-user cognitive radio network resource allocation based on the adaptive niche immune genetic algorithm is proposed, and a fitness function is provided. Simulations are conducted using the adaptive niche immune genetic algorithm, the simulated annealing algorithm, the quantum genetic algorithm and the simple genetic algorithm, respectively. The results show that the adaptive niche immune genetic algorithm performs better than the other three algorithms for multi-user cognitive radio network resource allocation; it converges quickly and has strong global searching capability, which effectively reduces the system power consumption and bit error rate.

  15. A conflict-free, path-level parallelization approach for sequential simulation algorithms

    NASA Astrophysics Data System (ADS)

    Rasera, Luiz Gustavo; Machado, Péricles Lopes; Costa, João Felipe C. L.

    2015-07-01

    Pixel-based simulation algorithms are the most widely used geostatistical technique for characterizing the spatial distribution of natural resources. However, sequential simulation does not scale well for stochastic simulation on very large grids, which are now commonly found in many petroleum, mining, and environmental studies. With the availability of multiple-processor computers, there is an opportunity to develop parallelization schemes for these algorithms to increase their performance and efficiency. Here we present a conflict-free, path-level parallelization strategy for sequential simulation. The method consists of partitioning the simulation grid into a set of groups of nodes and delegating all available processors for simulation of multiple groups of nodes concurrently. An automated classification procedure determines which groups are simulated in parallel according to their spatial arrangement in the simulation grid. The major advantage of this approach is that it does not require conflict resolution operations, and thus allows exact reproduction of results. Besides offering a large performance gain when compared to the traditional serial implementation, the method provides efficient use of computational resources and is generic enough to be adapted to several sequential algorithms.

  16. Confidence range estimate of extended source imagery acquisition algorithms via computer simulations. [in optical communication systems

    NASA Technical Reports Server (NTRS)

    Chen, CHIEN-C.; Hui, Elliot; Okamoto, Garret

    1992-01-01

    Spatial acquisition using the sun-lit Earth as a beacon source provides several advantages over active beacon-based systems for deep-space optical communication systems. However, since the angular extent of the Earth image is large compared to the laser beam divergence, the acquisition subsystem must be capable of resolving the image to derive the proper pointing orientation. The algorithms used must be capable of deducing the receiver location given the blurring introduced by the imaging optics and the large Earth albedo fluctuation. Furthermore, because of the complexity of modelling the Earth and the tracking algorithms, an accurate estimate of the algorithm accuracy can only be made via simulation using realistic Earth images. An image simulator was constructed for this purpose, and the results of the simulation runs are reported.

  17. Simulating chemical energies to high precision with fully-scalable quantum algorithms on superconducting qubits

    NASA Astrophysics Data System (ADS)

    O'Malley, Peter; Babbush, Ryan; Kivlichan, Ian; Romero, Jhonathan; McClean, Jarrod; Tranter, Andrew; Barends, Rami; Kelly, Julian; Chen, Yu; Chen, Zijun; Jeffrey, Evan; Fowler, Austin; Megrant, Anthony; Mutus, Josh; Neill, Charles; Quintana, Christopher; Roushan, Pedram; Sank, Daniel; Vainsencher, Amit; Wenner, James; White, Theodore; Love, Peter; Aspuru-Guzik, Alan; Neven, Hartmut; Martinis, John

    Quantum simulations of molecules have the potential to calculate industrially important chemical parameters beyond the reach of classical methods with relatively modest quantum resources. Recent years have seen dramatic progress in both superconducting qubits and quantum chemistry algorithms. Here, we present experimental demonstrations of two fully-scalable algorithms for finding the dissociation energy of hydrogen: the variational quantum eigensolver and iterative phase estimation. This represents the first calculation of a dissociation energy to chemical accuracy with a non-precompiled algorithm. These results show the promise of chemistry as the "killer app" for quantum computers, even before the advent of full error-correction.

  18. A syncopated leap-frog algorithm for orbit consistent plasma simulation of materials processing reactors

    SciTech Connect

    Cobb, J.W.; Leboeuf, J.N.

    1994-10-01

    The authors present a particle algorithm to extend simulation capabilities for plasma based materials processing reactors. The orbit integrator uses a syncopated leap-frog algorithm in cylindrical coordinates, which maintains second order accuracy, and minimizes computational complexity. Plasma source terms are accumulated orbit consistently directly in the frequency and azimuthal mode domains. Finally they discuss the numerical analysis of this algorithm. Orbit consistency greatly reduces the computational cost for a given level of precision. The computational cost is independent of the degree of time scale separation.
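
    For reference, a plain Cartesian kick-drift-kick leap-frog step is sketched below; the syncopated, cylindrical-coordinate variant and the orbit-consistent source accumulation described in the abstract are not reproduced here.

```python
def leapfrog_step(x, v, accel, dt):
    """One kick-drift-kick leap-frog (velocity-Verlet) step.

    `accel(x)` returns the acceleration at position x; this is the plain
    Cartesian, second-order scheme, not the syncopated cylindrical variant.
    """
    v_half = v + 0.5 * dt * accel(x)            # half kick
    x_new = x + dt * v_half                     # drift
    v_new = v_half + 0.5 * dt * accel(x_new)    # half kick
    return x_new, v_new
```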

  19. A sweep algorithm for massively parallel simulation of circuit-switched networks

    NASA Technical Reports Server (NTRS)

    Gaujal, Bruno; Greenberg, Albert G.; Nicol, David M.

    1992-01-01

    A new massively parallel algorithm is presented for simulating large asymmetric circuit-switched networks, controlled by a randomized-routing policy that includes trunk-reservation. A single instruction multiple data (SIMD) implementation is described, and corresponding experiments on a 16384 processor MasPar parallel computer are reported. A multiple instruction multiple data (MIMD) implementation is also described, and corresponding experiments on an Intel IPSC/860 parallel computer, using 16 processors, are reported. By exploiting parallelism, our algorithm increases the possible execution rate of such complex simulations by as much as an order of magnitude.

  20. Multiscale stochastic simulation algorithm with stochastic partial equilibrium assumption for chemically reacting systems

    SciTech Connect

    Cao, Yang (E-mail: ycao@cs.ucsb.edu); Gillespie, Dan (E-mail: GillespieDT@mailaps.org); Petzold, Linda (E-mail: petzold@engineering.ucsb.edu)

    2005-07-01

    In this paper, we introduce a multiscale stochastic simulation algorithm (MSSA) which makes use of Gillespie's stochastic simulation algorithm (SSA) together with a new stochastic formulation of the partial equilibrium assumption (PEA). This method is much more efficient than SSA alone. It works even with a very small population of fast species. Implementation details are discussed, and an application to the modeling of the heat shock response of E. coli is presented which demonstrates the excellent efficiency and accuracy obtained with the new method.
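
    A minimal sketch of Gillespie's direct-method SSA, the building block the multiscale algorithm above accelerates; the stoichiometry matrix and propensity function are user supplied and purely illustrative.

```python
import numpy as np

def gillespie_ssa(x0, stoich, propensities, t_end, rng=np.random.default_rng()):
    """Direct-method SSA.

    `stoich` has one column per reaction channel (shape n_species x n_channels),
    and `propensities(x)` returns the vector of propensities a_j(x) >= 0.
    """
    t, x = 0.0, np.array(x0, dtype=float)
    times, states = [t], [x.copy()]
    while t < t_end:
        a = propensities(x)
        a0 = a.sum()
        if a0 <= 0.0:
            break                                  # no reaction can fire
        t += rng.exponential(1.0 / a0)             # time to next reaction
        j = rng.choice(len(a), p=a / a0)           # which reaction fires
        x += stoich[:, j]
        times.append(t)
        states.append(x.copy())
    return np.array(times), np.array(states)
```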

  1. Modeling Signal Transduction Networks: A comparison of two Stochastic Kinetic Simulation Algorithms

    SciTech Connect

    Pettigrew, Michel F.; Resat, Haluk

    2005-09-15

    Simulations of a scalable four-compartment reaction model based on the well known epidermal growth factor receptor (EGFR) signal transduction system are used to compare two stochastic algorithms: StochSim and the Gibson-Gillespie. It is concluded that the Gibson-Gillespie is the algorithm of choice for most realistic cases, with the possible exception of signal transduction networks characterized by a moderate number (< 100) of complex types, each with a very small population, but with a high degree of connectivity amongst the complex types. Keywords: Signal transduction networks, Stochastic simulation, StochSim, Gillespie

  2. Simulation of Biochemical Pathway Adaptability Using Evolutionary Algorithms

    SciTech Connect

    Bosl, W J

    2005-01-26

    The systems approach to genomics seeks quantitative and predictive descriptions of cells and organisms. However, both the theoretical and experimental methods necessary for such studies still need to be developed. We are far from understanding even the simplest collective behavior of biomolecules, cells or organisms. A key aspect to all biological problems, including environmental microbiology, evolution of infectious diseases, and the adaptation of cancer cells is the evolvability of genomes. This is particularly important for Genomes to Life missions, which tend to focus on the prospect of engineering microorganisms to achieve desired goals in environmental remediation and climate change mitigation, and energy production. All of these will require quantitative tools for understanding the evolvability of organisms. Laboratory biodefense goals will need quantitative tools for predicting complicated host-pathogen interactions and finding counter-measures. In this project, we seek to develop methods to simulate how external and internal signals cause the genetic apparatus to adapt and organize to produce complex biochemical systems to achieve survival. This project is specifically directed toward building a computational methodology for simulating the adaptability of genomes. This project investigated the feasibility of using a novel quantitative approach to studying the adaptability of genomes and biochemical pathways. This effort was intended to be the preliminary part of a larger, long-term effort between key leaders in computational and systems biology at Harvard University and LLNL, with Dr. Bosl as the lead PI. Scientific goals for the long-term project include the development and testing of new hypotheses to explain the observed adaptability of yeast biochemical pathways when the myosin-II gene is deleted and the development of a novel data-driven evolutionary computation as a way to connect exploratory computational simulation with hypothesis

  3. Foam flooding reservoir simulation algorithm improvement and application

    NASA Astrophysics Data System (ADS)

    Wang, Yining; Wu, Xiaodong; Wang, Ruihe; Lai, Fengpeng; Zhang, Hanhan

    2014-05-01

    As one of the important enhanced oil recovery (EOR) technologies, foam flooding is being used more and more widely in oilfield development. In order to describe and predict foam flooding, researchers at home and abroad have established a number of mathematical models of foam flooding (mechanistic, empirical and semi-empirical models). Empirical models require less data and are convenient to apply, but their accuracy is limited. The aggregate equilibrium model can describe foam generation, bursting and coalescence mechanistically, but it is very difficult to parameterize accurately. This work considers the effects of critical water saturation, critical foaming-agent concentration and critical oil saturation on the sealing ability of the foam, as well as the effect of oil saturation on the resistance factor used to obtain the gas-phase relative permeability; the results were calibrated against laboratory tests, so the accuracy is higher. Conceptual reservoir-development simulations and practical field applications show that the calculation is more accurate.

  4. Constrained molecular dynamics: Simulations of liquid alkanes with a new algorithm

    NASA Astrophysics Data System (ADS)

    Edberg, Roger; Evans, Denis J.; Morriss, G. P.

    1986-06-01

    We present a new algorithm for molecular dynamics simulation involving holonomic constraints. Constrained equations of motion are derived using Gauss' principle of least constraint. The algorithm uses a fast, exact solution for constraint forces and a new procedure to correct for accumulating numerical errors. We report several simulations of liquid n-butane and n-decane performed with the new algorithm. We obtain an average trans population of 60.6±1.5% in liquid butane at T=291 K and ρ=0.583 g/ml. This result essentially agrees with that from an earlier simulation by Ryckaert and Bellemans [Discuss. Faraday Soc. 66, 95 (1978)]. However, our simulations are substantially more precise; our run lengths are typically ˜20 times longer than those of Ryckaert and Bellemans. Our result also agrees with that from a recent simulation by Wielopolski and Smith (following paper). Thermodynamic and structural data from our simulations also agree well with results from the simulations discussed in the above articles.

  5. Ground return signal simulation and retrieval algorithm of spaceborne integrated path DIAL for CO2 measurements

    NASA Astrophysics Data System (ADS)

    Liu, Bing-Yi; Wang, Jun-Yang; Liu, Zhi-Shen

    2014-11-01

    Spaceborne integrated path differential absorption (IPDA) lidar is an active detection system which is able to perform global CO2 measurements with a high accuracy of 1 ppmv, day and night, over ground and clouds. To evaluate the detection performance of the system, simulation of the ground return signal and a retrieval algorithm for CO2 concentration are presented in this paper. Ground return signals of spaceborne IPDA lidar under various ground surface reflectivities and atmospheric aerosol optical depths are simulated using given system parameters, standard atmosphere profiles and the HITRAN database, which can be used as a reference for determining system parameters. The simulated signals are further applied to research on the retrieval algorithm for CO2 concentration. The column-weighted dry air mixing ratio of CO2, denoted by XCO2, is obtained. As the deviations of XCO2 between the initial values for simulation and the results from the retrieval algorithm are within the expected error ranges, it is proved that the simulation and retrieval algorithm are reliable.

  6. An adaptive multi-level simulation algorithm for stochastic biological systems

    SciTech Connect

    Lester, C.; Giles, M. B.; Baker, R. E.; Yates, C. A.

    2015-01-14

    Discrete-state, continuous-time Markov models are widely used in the modeling of biochemical reaction networks. Their complexity often precludes analytic solution, and we rely on stochastic simulation algorithms (SSA) to estimate system statistics. The Gillespie algorithm is exact, but computationally costly as it simulates every single reaction. As such, approximate stochastic simulation algorithms such as the tau-leap algorithm are often used. Potentially computationally more efficient, the system statistics generated suffer from significant bias unless tau is relatively small, in which case the computational time can be comparable to that of the Gillespie algorithm. The multi-level method [Anderson and Higham, “Multi-level Monte Carlo for continuous time Markov chains, with applications in biochemical kinetics,” SIAM Multiscale Model. Simul. 10(1), 146–179 (2012)] tackles this problem. A base estimator is computed using many (cheap) sample paths at low accuracy. The bias inherent in this estimator is then reduced using a number of corrections. Each correction term is estimated using a collection of paired sample paths where one path of each pair is generated at a higher accuracy compared to the other (and so more expensive). By sharing random variables between these paired paths, the variance of each correction estimator can be reduced. This renders the multi-level method very efficient as only a relatively small number of paired paths are required to calculate each correction term. In the original multi-level method, each sample path is simulated using the tau-leap algorithm with a fixed value of τ. This approach can result in poor performance when the reaction activity of a system changes substantially over the timescale of interest. By introducing a novel adaptive time-stepping approach where τ is chosen according to the stochastic behaviour of each sample path, we extend the applicability of the multi-level method to such cases. We demonstrate the

  7. An adaptive multi-level simulation algorithm for stochastic biological systems

    NASA Astrophysics Data System (ADS)

    Lester, C.; Yates, C. A.; Giles, M. B.; Baker, R. E.

    2015-01-01

    Discrete-state, continuous-time Markov models are widely used in the modeling of biochemical reaction networks. Their complexity often precludes analytic solution, and we rely on stochastic simulation algorithms (SSA) to estimate system statistics. The Gillespie algorithm is exact, but computationally costly as it simulates every single reaction. As such, approximate stochastic simulation algorithms such as the tau-leap algorithm are often used. Potentially computationally more efficient, the system statistics generated suffer from significant bias unless tau is relatively small, in which case the computational time can be comparable to that of the Gillespie algorithm. The multi-level method [Anderson and Higham, "Multi-level Monte Carlo for continuous time Markov chains, with applications in biochemical kinetics," SIAM Multiscale Model. Simul. 10(1), 146-179 (2012)] tackles this problem. A base estimator is computed using many (cheap) sample paths at low accuracy. The bias inherent in this estimator is then reduced using a number of corrections. Each correction term is estimated using a collection of paired sample paths where one path of each pair is generated at a higher accuracy compared to the other (and so more expensive). By sharing random variables between these paired paths, the variance of each correction estimator can be reduced. This renders the multi-level method very efficient as only a relatively small number of paired paths are required to calculate each correction term. In the original multi-level method, each sample path is simulated using the tau-leap algorithm with a fixed value of τ. This approach can result in poor performance when the reaction activity of a system changes substantially over the timescale of interest. By introducing a novel adaptive time-stepping approach where τ is chosen according to the stochastic behaviour of each sample path, we extend the applicability of the multi-level method to such cases. We demonstrate the
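
    The tau-leap approximation referred to above can be sketched as below with a fixed step tau; the adaptive, path-dependent choice of tau that the paper introduces is not shown, and clipping at zero is only a crude guard against negative populations.

```python
import numpy as np

def tau_leap(x0, stoich, propensities, t_end, tau, rng=np.random.default_rng()):
    """Fixed-step tau-leaping: over each interval of length tau, channel j
    fires Poisson(a_j(x) * tau) times.

    `stoich` has one column per channel and `propensities(x)` returns the
    propensity vector; both are placeholders supplied by the caller.
    """
    t, x = 0.0, np.array(x0, dtype=float)
    while t < t_end:
        a = propensities(x)
        k = rng.poisson(a * tau)                 # number of firings per channel
        x = np.maximum(x + stoich @ k, 0.0)      # crude guard against negatives
        t += tau
    return x
```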

  8. Simulation of ammonium and chromium transport in porous media using coupling scheme of a numerical algorithm and a stochastic algorithm.

    PubMed

    Palanichamy, Jegathambal; Schüttrumpf, Holger; Köngeter, Jürgen; Becker, Torsten; Palani, Sundarambal

    2009-01-01

    The migration of chromium and ammonium species in groundwater and their effective remediation depend on the various hydro-geological characteristics of the system. Computational modeling of reactive transport problems is one of the preferred tools for field engineers in groundwater studies to make decisions in pollution abatement. Analytical models have low computational demand but are less modular in nature, making them difficult to modify when formulating different reactive systems. Numerical models provide more detailed information with high computational demand. Coupling linear partial differential equations (PDEs) for the transport step with a non-linear system of ordinary differential equations (ODEs) for the reactive step is the usual mode of solving a kinetically controlled reactive transport equation. This assumption is not appropriate for a system with low concentrations of species such as chromium. Such reaction systems can be simulated using a stochastic algorithm. In this paper, a finite difference scheme coupled with a stochastic algorithm for the simulation of the transport of ammonium and chromium in subsurface media is detailed.

  9. Quantum algorithms for spin models and simulable gate sets for quantum computation

    NASA Astrophysics Data System (ADS)

    van den Nest, M.; Dür, W.; Raussendorf, R.; Briegel, H. J.

    2009-11-01

    We present simple mappings between classical lattice models and quantum circuits, which provide a systematic formalism to obtain quantum algorithms to approximate partition functions of lattice models in certain complex-parameter regimes. We, e.g., present an efficient quantum algorithm for the six-vertex model as well as a two-dimensional Ising-type model. We show that classically simulating these (complex-parameter) spin models is as hard as simulating universal quantum computation, i.e., BQP complete (BQP denotes bounded-error quantum polynomial time). Furthermore, our mappings provide a framework to obtain efficiently simulable quantum gate sets from exactly solvable classical models. We, e.g., show that the simulability of Valiant’s match gates can be recovered by using the solvability of the free-fermion eight-vertex model.

  10. An optimization method of relativistic backward wave oscillator using particle simulation and genetic algorithms

    SciTech Connect

    Chen, Zaigao; Wang, Jianguo; Wang, Yue; Qiao, Hailiang; Zhang, Dianhui; Guo, Weijie

    2013-11-15

    An optimal design method for high-power microwave sources using particle simulation and parallel genetic algorithms is presented in this paper. The output power of the high-power microwave device, simulated by the fully electromagnetic particle simulation code UNIPIC, is used as the fitness function, and float-encoding genetic algorithms are used to optimize the high-power microwave devices. Using this method, we encode the heights of the non-uniform slow wave structure in relativistic backward wave oscillators (RBWO) and optimize the parameters on massively parallel processors. Simulation results demonstrate that we can obtain the optimal parameters of the non-uniform slow wave structure in the RBWO, and the output microwave power increases by 52.6% after the device is optimized.

  11. A fast sorting algorithm for a hypersonic rarefied flow particle simulation on the connection machine

    NASA Technical Reports Server (NTRS)

    Dagum, Leonardo

    1989-01-01

    The data parallel implementation of a particle simulation for hypersonic rarefied flow described by Dagum associates a single parallel data element with each particle in the simulation. The simulated space is divided into discrete regions called cells containing a variable and constantly changing number of particles. The implementation requires a global sort of the parallel data elements so as to arrange them in an order that allows immediate access to the information associated with cells in the simulation. Described here is a very fast algorithm for performing the necessary ranking of the parallel data elements. The performance of the new algorithm is compared with that of the microcoded instruction for ranking on the Connection Machine.
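
    The ranking step can be illustrated with an ordinary serial computation that groups particles by cell index, as sketched below with NumPy in place of Connection Machine data-parallel primitives; the cell indexing is a placeholder.

```python
import numpy as np

def rank_particles_by_cell(cell_index, n_cells):
    """Return, for each particle, its rank in a cell-major ordering, plus the
    index of the first particle of each cell.

    `cell_index[i]` is the cell containing particle i; the stable argsort
    keeps the relative order of particles within a cell.
    """
    counts = np.bincount(cell_index, minlength=n_cells)
    cell_start = np.concatenate(([0], np.cumsum(counts)[:-1]))   # first slot per cell
    order = np.argsort(cell_index, kind="stable")                # particles grouped by cell
    ranks = np.empty_like(order)
    ranks[order] = np.arange(len(cell_index))                    # rank of each particle
    return ranks, cell_start
```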

  12. Kinetic simulation of fiber amplifier based on parallelizable and bidirectional algorithm

    NASA Astrophysics Data System (ADS)

    Chen, Haihuan; Yang, Huanbi; Wu, Wenhan

    2015-10-01

    Simulating light waves that propagate in opposite directions in a fiber requires handling an extremely large volume of data when sequential, unidirectional methods are employed, in which the simulation is carried out in a coordinate system that moves along with the light waves. Therefore, an alternative simulation algorithm should be used when calculating counter-propagating light waves. The parallelizable and bidirectional (PB) algorithm matches the light waves in the time domain instead of the space domain, does not need iteration, and permits efficient parallelization on multiple processors. The PB method has previously been used to calculate the propagation of a dispersing Gaussian pulse and a bit stream in fibers. However, the PB method also has clear advantages when simulating pulses in fiber laser amplifiers, which has not yet been investigated in detail. In this paper, we simulate pulses in a rare-earth-ion doped fiber amplifier. The influence of pump power, signal power, repetition rate, pulse width and fiber length on the amplifier's output average power, peak power, pulse energy and pulse shape is investigated. The results indicate that the PB method is effective for simulating high-power amplification of pulses in a fiber amplifier. Furthermore, nonlinear effects can be added to the simulation conveniently. The work in this paper provides a more economical and efficient method to simulate power amplification in fiber lasers.

  13. Simulation Environment for the Evaluation of 3D Coronary Tree Reconstruction Algorithms in Rotational Angiography

    PubMed Central

    Yang, Guanyu; Bousse, Alexandre; Toumoulin, Christine; Shu, Huazhong

    2007-01-01

    We present a preliminary version of a simulation environment to evaluate the 3D reconstruction algorithms of the coronary arteries in rotational angiography. It includes the construction of a 3D dynamic model of the coronary tree from patient data, the modeling of the rotational angiography acquisition system to simulate different acquisition and gating strategies and the calculation of radiographic projections of the 3D model of coronary tree throughout several cardiac cycles. PMID:18003001

  14. Simulation of a navigator algorithm for a low-cost GPS receiver

    NASA Technical Reports Server (NTRS)

    Hodge, W. F.

    1980-01-01

    The analytical structure of an existing navigator algorithm for a low cost global positioning system receiver is described in detail to facilitate its implementation on in-house digital computers and real-time simulators. The material presented includes a simulation of GPS pseudorange measurements, based on a two-body representation of the NAVSTAR spacecraft orbits, and a four component model of the receiver bias errors. A simpler test for loss of pseudorange measurements due to spacecraft shielding is also noted.

  15. Hierarchical tree algorithm for collisional N-body simulations on GRAPE

    NASA Astrophysics Data System (ADS)

    Fukushige, Toshiyuki; Kawai, Atsushi

    2016-06-01

    We present an implementation of the hierarchical tree algorithm on the individual timestep algorithm (the Hermite scheme) for collisional N-body simulations, running on the GRAPE-9 system, a special-purpose hardware accelerator for gravitational many-body simulations. Such a combination of the tree algorithm and the individual timestep algorithm was not easy on the previous GRAPE system, mainly because its memory addressing scheme was limited to sequential access to a full set of particle data. The present GRAPE-9 system has an indirect memory addressing unit and a particle memory large enough to store all the particle data and also the tree node data. The indirect memory addressing unit stores interaction lists for the tree algorithm, which are constructed on the host computer, and, according to the interaction lists, the force pipelines calculate only the interactions necessary. In our implementation, the interaction calculations are significantly reduced compared to direct N^2 summation in the original Hermite scheme. For example, we can achieve a speedup of about a factor of 30 (equivalent to about 17 teraflops) over the Hermite scheme for a simulation of an N = 10^6 system, using hardware with a peak speed of 0.6 teraflops for the Hermite scheme.

  16. Sensitivity of CO2 Simulation in a GCM to the Convective Transport Algorithms

    NASA Technical Reports Server (NTRS)

    Zhu, Z.; Pawson, S.; Collatz, G. J.; Gregg, W. W.; Kawa, S. R.; Baker, D.; Ott, L.

    2014-01-01

    Convection plays an important role in the transport of heat, moisture, and trace gases. In this study, we simulated CO2 concentrations with an atmospheric general circulation model (GCM). Three different convective transport algorithms were used. One is a modified Arakawa-Schubert scheme that was native to the GCM; the two others, used in two off-line chemical transport models (CTMs), were added to the GCM here for comparison purposes. Advanced CO2 surface fluxes were used for the simulations. The results were compared to a large quantity of CO2 observation data. We find that the simulation results are sensitive to the convective transport algorithms. Overall, the three simulations are quite realistic and similar to each other in the remote marine regions, but are significantly different in some land regions with strong fluxes, such as the Amazon and Siberia, during the convection seasons. Large biases against CO2 measurements are found in these regions in the control run, which uses the original GCM. The simulation with the simple diffusive algorithm is better. The difference between the two simulations is related to their very different convective transport speeds.

  17. Simulation of biochemical reactions with time-dependent rates by the rejection-based algorithm

    SciTech Connect

    Thanh, Vo Hong; Priami, Corrado

    2015-08-07

    We address the problem of simulating biochemical reaction networks with time-dependent rates and propose a new algorithm based on our rejection-based stochastic simulation algorithm (RSSA) [Thanh et al., J. Chem. Phys. 141(13), 134116 (2014)]. The selection of next reaction firings by our time-dependent RSSA (tRSSA) is computationally efficient. Furthermore, the generated trajectory is exact by exploiting the rejection-based mechanism. We benchmark tRSSA on different biological systems with varying forms of reaction rates to demonstrate its applicability and efficiency. We reveal that for nontrivial cases, the selection of reaction firings in existing algorithms introduces approximations because the integration of reaction rates is computationally demanding and simplifying assumptions are introduced. The selection of the next reaction firing by our approach is easier while preserving exactness.
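
    As a rough illustration of rejection-based selection with a time-varying rate, the sketch below uses the classical thinning idea: candidate firing times are drawn from a constant bounding rate and accepted with probability a(t)/a_max. This is only meant to convey the rejection mechanism; it is not the tRSSA procedure itself, and all names are illustrative.

        import math
        import random

        def next_firing_time(a, a_max, t0):
            """Sample the next firing time after t0 for a rate function a(t) <= a_max."""
            t = t0
            while True:
                t += random.expovariate(a_max)        # candidate time from the bounding rate
                if random.random() <= a(t) / a_max:    # accept with probability a(t)/a_max
                    return t

        # Example: an oscillating reaction rate bounded above by 2.0.
        a = lambda t: 1.0 + math.sin(t)
        print(next_firing_time(a, a_max=2.0, t0=0.0))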

  18. Efficient parallel algorithm for statistical ion track simulations in crystalline materials

    NASA Astrophysics Data System (ADS)

    Jeon, Byoungseon; Grønbech-Jensen, Niels

    2009-02-01

    We present an efficient parallel algorithm for statistical Molecular Dynamics simulations of ion tracks in solids. The method is based on the Rare Event Enhanced Domain following Molecular Dynamics (REED-MD) algorithm, which has been successfully applied to studies of, e.g., ion implantation into crystalline semiconductor wafers. We discuss the strategies for parallelizing the method, and we settle on a host-client type polling scheme in which results from multiple asynchronous client processors are continuously fed to the host, which, in turn, distributes the resulting feedback information to the clients. This real-time feedback consists of, e.g., cumulative damage information or statistics updates necessary for the cloning in the rare event algorithm. We finally demonstrate the algorithm for radiation effects in a nuclear oxide fuel, and we show that the balanced parallel approach achieves high parallel efficiency in multiple processor configurations.

  19. SIMULATION OF AEROSOL DYNAMICS: A COMPARATIVE REVIEW OF ALGORITHMS USED IN AIR QUALITY MODELS

    EPA Science Inventory

    A comparative review of algorithms currently used in air quality models to simulate aerosol dynamics is presented. This review addresses coagulation, condensational growth, nucleation, and gas/particle mass transfer. Two major approaches are used in air quality models to repres...

  20. Evaluation of effective-stress-function algorithm for nuclear fuel simulation

    SciTech Connect

    Kim, H. C.; Yang, Y. S.; Koo, Y. H.

    2013-07-01

    In a pressurized water reactor (PWR), the mechanical integrity of nuclear fuel is the most critical issue, as the fuel is an important barrier against fission products being released into the environment. The integrity of the zirconium cladding that surrounds the uranium oxide can be threatened during off-normal operation owing to pellet-cladding mechanical interaction (PCMI). To analyze the fuel and cladding behavior during off-normal operation, the fuel performance code should perform an inelastic analysis using two- or three-dimensional calculations. In this paper, the effective stress function (ESF) algorithm, based on a two-dimensional FE module, has been implemented to simulate the inelastic behavior of the cladding with stability and accuracy. The ESF algorithm solves the governing equations of the inelastic constitutive behavior by calculating the zero of the appropriate effective-stress function. To verify the accuracy of the ESF algorithm for an inelastic analysis, a code-to-code benchmark was performed using the commercial FE code ANSYS 13.0. To demonstrate the stability and convergence of the implemented algorithm, the number of iterations in the ESF algorithm was compared with that in a sequential algorithm for an inelastic problem. Consequently, the evaluation results demonstrate that the implemented ESF algorithm improves the efficiency of the computation without a loss of accuracy for an inelastic analysis. (authors)

  1. An Algorithm for Interactive Modeling of Space-Transportation Engine Simulations: A Constraint Satisfaction Approach

    NASA Technical Reports Server (NTRS)

    Mitra, Debasis; Thomas, Ajai; Hemminger, Joseph; Sakowski, Barbara

    2001-01-01

    In this research we have developed an algorithm for constraint processing that utilizes relational algebraic operators. Van Beek and others have previously investigated this type of constraint processing within a relational algebraic framework, producing some unique results. Apart from providing new theoretical angles, this approach also gives the opportunity to use existing efficient implementations of relational database management systems as the underlying data structures for any relevant algorithm. Our algorithm here enhances that framework. The algorithm is quite general in its current form. Weak heuristics (like forward checking) developed within the constraint-satisfaction problem (CSP) area could also be easily plugged into this algorithm for further enhancements of efficiency. The algorithm as developed here is targeted toward a component-oriented modeling problem that we are currently working on, namely, the problem of interactive modeling for batch-simulation of engineering systems (IMBSES). However, it could be adopted for many other CSP problems as well. The research addresses the algorithm and many aspects of the problem IMBSES that we are currently handling.

  2. Molecular dynamics simulations of the adhesion of a thin annealed film of oleic acid onto crystalline cellulose.

    PubMed

    Quddus, Mir A A R; Rojas, Orlando J; Pasquinelli, Melissa A

    2014-04-14

    Molecular dynamics simulations were used to characterize the wetting behavior of crystalline cellulose planes in contact with a thin oily film of oleic acid. Cellulose crystal planes with higher molecular protrusions and increased surface area produced stronger adhesion than other crystal planes due to enhanced wetting and hydrogen bonding. The detailed characteristics of crystal plane features and the contribution of directional hydrogen bonding were investigated. Similarly, the oleophilicity of the cellulose planes increased with increasing surface roughness and number of directional hydrogen bonds. These results correlate with conclusions drawn from experimental studies, such as the adhesion of an ink vehicle on a cellulose surface.

  3. Comparison of optimization algorithms in intensity-modulated radiation therapy planning

    NASA Astrophysics Data System (ADS)

    Kendrick, Rachel

    Intensity-modulated radiation therapy is used to better conform the radiation dose to the target while avoiding healthy tissue. Planning programs employ optimization methods to search for the best fluence of each photon beam, and therefore to create the best treatment plan. The Computational Environment for Radiotherapy Research (CERR), a program written in MATLAB, was used to examine some commonly used algorithms for one 5-beam plan. The algorithms include the genetic algorithm, quadratic programming, pattern search, constrained nonlinear optimization, simulated annealing, the optimization method used in Varian Eclipse(TM), and some hybrids of these. Quadratic programming, simulated annealing, and a quadratic/simulated annealing hybrid were also separately compared using different prescription doses. The results of each dose-volume histogram as well as the visual dose color wash were used to compare the plans. CERR's built-in quadratic programming provided the best overall plan, but its avoidance of the organ-at-risk was rivaled by other programs. Hybrids of quadratic programming with some of these algorithms suggest the possibility of better planning programs, as shown by the improved quadratic/simulated annealing plan when compared to the simulated annealing algorithm alone. Further experimentation will be done to improve cost functions and computational time.
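
    For readers unfamiliar with how simulated annealing is applied to fluence optimization, the following is a minimal sketch that anneals non-negative beamlet weights against a simple quadratic dose objective. The toy dose-influence matrix, cooling schedule, and step size are assumptions made for illustration; they do not reproduce CERR or any of the planning programs compared above.

        import numpy as np

        def anneal_fluence(D, d_target, n_iter=20000, T0=1.0, cooling=0.9995, step=0.05):
            """Simulated annealing over beamlet weights w minimizing ||D w - d_target||^2."""
            rng = np.random.default_rng(0)
            w = np.ones(D.shape[1])
            cost = np.sum((D @ w - d_target) ** 2)
            T = T0
            for _ in range(n_iter):
                trial = np.clip(w + rng.normal(0.0, step, size=w.size), 0.0, None)
                trial_cost = np.sum((D @ trial - d_target) ** 2)
                # Accept improvements always, uphill moves with Boltzmann probability.
                if trial_cost < cost or rng.random() < np.exp(-(trial_cost - cost) / T):
                    w, cost = trial, trial_cost
                T *= cooling
            return w, cost

        D = np.random.default_rng(1).random((50, 10))       # toy dose-influence matrix
        w_opt, final_cost = anneal_fluence(D, d_target=np.full(50, 2.0))
        print(final_cost)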

  4. The scattering simulation of DSDs and the polarimetric radar rainfall algorithms at C-band frequency

    NASA Astrophysics Data System (ADS)

    Islam, Tanvir

    2014-11-01

    This study explores polarimetric radar rainfall algorithms at C-band frequency using a total of 162,415 1-min raindrop spectra from an extensive disdrometer dataset. Five different raindrop shape models have been tested to simulate the polarimetric radar variables: the reflectivity factor (Z), differential reflectivity (Zdr), and specific differential phase (Kdp), through the T-matrix microwave scattering approach. The polarimetric radar rainfall algorithms are developed in the form of R(Z), R(Kdp), R(Z, Zdr) and R(Zdr, Kdp) combinations. Based on the rain rate retrieval performance of the best fitted raindrop spectra models, using the disdrometer-derived rain rate as a reference, the algorithms are further explored for stratiform and convective rain regimes. Finally, an “artificial” algorithm is proposed which considers the developed algorithms for stratiform and convective regimes and uses R(Z), R(Kdp) and R(Z, Zdr) in different scenarios. The artificial algorithm is applied to and evaluated on the Thurnham C-band dual polarized radar data for 6 storm cases, assessing the rainfall retrieval accuracy relative to the operational Marshall-Palmer algorithm (Z = 200R^1.6). A dense network of 73 tipping bucket rain gauges is employed for the evaluation, and the results demonstrate that the artificial algorithm outperforms the Marshall-Palmer algorithm, showing R^2 = 0.84 and MAE = 0.82 mm as opposed to R^2 = 0.79 and MAE = 0.86 mm, respectively.

  5. Quantum algorithm for simulating the dynamics of an open quantum system

    SciTech Connect

    Wang Hefeng; Ashhab, S.; Nori, Franco

    2011-06-15

    In the study of open quantum systems, one typically obtains the decoherence dynamics by solving a master equation. The master equation is derived using knowledge of some basic properties of the system, the environment, and their interaction: One basically needs to know the operators through which the system couples to the environment and the spectral density of the environment. For a large system, it could become prohibitively difficult to even write down the appropriate master equation, let alone solve it on a classical computer. In this paper, we present a quantum algorithm for simulating the dynamics of an open quantum system. On a quantum computer, the environment can be simulated using ancilla qubits with properly chosen single-qubit frequencies and with properly designed coupling to the system qubits. The parameters used in the simulation are easily derived from the parameters of the system + environment Hamiltonian. The algorithm is designed to simulate Markovian dynamics, but it can also be used to simulate non-Markovian dynamics provided that this dynamics can be obtained by embedding the system of interest into a larger system that obeys Markovian dynamics. We estimate the resource requirements for the algorithm. In particular, we show that for sufficiently slow decoherence a single ancilla qubit could be sufficient to represent the entire environment, in principle.

  6. Determination of three-dimensional structures of proteins from interproton distance data by hybrid distance geometry-dynamical simulated annealing calculations.

    PubMed

    Nilges, M; Clore, G M; Gronenborn, A M

    1988-03-14

    A new hybrid distance space-real space method for determining three-dimensional structures of proteins on the basis of interproton distance restraints is presented. It involves the following steps: (i) the approximate polypeptide fold is obtained by generating a set of substructures comprising only a small subset of atoms by projection from multi-dimensional distance space into three-dimensional cartesian coordinate space using a procedure known as 'embedding'; (ii) all remaining atoms are then added by best fitting extended amino acids one residue at a time to the substructures; (iii) the resulting structures are used as the starting point for real space dynamical simulated annealing calculations. The latter involve heating the system to a high temperature followed by slow cooling in order to overcome potential barriers along the pathway towards the global minimum region. This is carried out by solving Newton's equations of motion. Unlike conventional restrained molecular dynamics, however, the non-bonded interactions are represented by a simple van der Waals repulsion term. The method is illustrated by calculations on crambin (46 residues) and the globular domain of histone H5 (79 residues). It is shown that the hybrid method is more efficient computationally and samples a larger region of conformational space consistent with the experimental data than full metric matrix distance geometry calculations alone, particularly for large systems.

  7. Simulated annealing with restrained molecular dynamics using CONGEN: energy refinement of the NMR solution structures of epidermal and type-alpha transforming growth factors.

    PubMed Central

    Tejero, R.; Bassolino-Klimas, D.; Bruccoleri, R. E.; Montelione, G. T.

    1996-01-01

    The new functionality of the program CONGEN (Bruccoleri RE, Karplus M, 1987, Biopolymers 26:137-168; Bassolino-Klimas D et al., 1996, Protein Sci 5:593-603) has been applied for energy refinement of two previously determined solution NMR structures, murine epidermal growth factor (mEGF) and human type-alpha transforming growth factor (hTGF alpha). A summary of considerations used in converting experimental NMR data into distance constraints for CONGEN is presented. A general protocol for simulated annealing with restrained molecular dynamics is applied to generate NMR solution structures using CONGEN together with real experimental NMR data. A total of 730 NMR-derived constraints for mEGF and 424 NMR-derived constraints for hTGF alpha were used in these energy-refinement calculations. Different weighting schemes and starting conformations were studied to check and/or improve the sampling of the low-energy conformational space that is consistent with all constraints. The results demonstrate that loosened (i.e., "relaxed") sets of the EGF and hTGF alpha internuclear distance constraints allow molecules to overcome local minima in the search for a global minimum with respect to both distance restraints and conformational energy. The resulting energy-refined structures of mEGF and hTGF alpha are compared with structures determined previously and with structures of homologous proteins determined by NMR and X-ray crystallography. PMID:8845748

  8. Improvement of bio-corrosion resistance for Ti42Zr40Si15Ta3 metallic glasses in simulated body fluid by annealing within supercooled liquid region.

    PubMed

    Huang, C H; Lai, J J; Wei, T Y; Chen, Y H; Wang, X; Kuan, S Y; Huang, J C

    2015-01-01

    The effects of the nanocrystalline phases on the bio-corrosion behavior of highly bio-friendly Ti42Zr40Si15Ta3 metallic glasses in simulated body fluid were investigated, and the findings are compared with our previous observations on the Zr53Cu30Ni9Al8 metallic glasses. The Ti42Zr40Si15Ta3 metallic glasses were annealed at temperatures above the glass transition temperature, Tg, for different time periods to produce different degrees of α-Ti nano-phases in the amorphous matrix. The nanocrystallized Ti42Zr40Si15Ta3 metallic glasses containing corrosion-resistant α-Ti phases exhibited more promising bio-corrosion resistance, due to their superior pitting resistance. This is distinctly different from the previous case of the Zr53Cu30Ni9Al8 metallic glasses, in which the reactive Zr2Cu phases induced serious galvanic corrosion and lowered the bio-corrosion resistance. Thus, whether a fully amorphous or a partially crystallized metallic glass exhibits better bio-corrosion resistance depends on the nature of the crystallized phase.

  9. Algorithm for Building a Spectrum for NREL's One-Sun Multi-Source Simulator: Preprint

    SciTech Connect

    Moriarty, T.; Emery, K.; Jablonski, J.

    2012-06-01

    Historically, the tools used at NREL to compensate for the difference between a reference spectrum and a simulator spectrum have been well-matched reference cells and the application of a calculated spectral mismatch correction factor, M. This paper describes the algorithm for adjusting the spectrum of a 9-channel fiber-optic-based solar simulator with a uniform beam size of 9 cm square at 1-sun. The combination of this algorithm and the One-Sun Multi-Source Simulator (OSMSS) hardware reduces NREL's current vs. voltage measurement time for a typical three-junction device from man-days to man-minutes. These time savings may be significantly greater for devices with more junctions.

  10. Space-based Doppler lidar sampling strategies: Algorithm development and simulated observation experiments

    NASA Technical Reports Server (NTRS)

    Emmitt, G. D.; Wood, S. A.; Morris, M.

    1990-01-01

    Lidar Atmospheric Wind Sounder (LAWS) Simulation Models (LSM) were developed to evaluate the potential impact of global wind observations on the basic understanding of the Earth's atmosphere and on the predictive skills of current forecast models (GCM and regional scale). Fully integrated top to bottom LAWS Simulation Models for global and regional scale simulations were developed. The algorithm development incorporated the effects of aerosols, water vapor, clouds, terrain, and atmospheric turbulence into the models. Other additions include a new satellite orbiter, signal processor, line of sight uncertainty model, new Multi-Paired Algorithm and wind error analysis code. An atmospheric wind field library containing control fields, meteorological fields, phenomena fields, and new European Center for Medium Range Weather Forecasting (ECMWF) data was also added. The LSM was used to address some key LAWS issues and trades such as accuracy and interpretation of LAWS information, data density, signal strength, cloud obscuration, and temporal data resolution.

  11. A parallel finite volume algorithm for large-eddy simulation of turbulent flows

    NASA Astrophysics Data System (ADS)

    Bui, Trong Tri

    1998-11-01

    A parallel unstructured finite volume algorithm is developed for large-eddy simulation of compressible turbulent flows. Major components of the algorithm include piecewise linear least-square reconstruction of the unknown variables, trilinear finite element interpolation for the spatial coordinates, Roe flux difference splitting, and second-order MacCormack explicit time marching. The computer code is designed from the start to take full advantage of the additional computational capability provided by the current parallel computer systems. Parallel implementation is done using the message passing programming model and message passing libraries such as the Parallel Virtual Machine (PVM) and Message Passing Interface (MPI). The development of the numerical algorithm is presented in detail. The parallel strategy and issues regarding the implementation of a flow simulation code on the current generation of parallel machines are discussed. The results from parallel performance studies show that the algorithm is well suited for parallel computer systems that use the message passing programming model. Nearly perfect parallel speedup is obtained on MPP systems such as the Cray T3D and IBM SP2. Performance comparison with older supercomputer systems such as the Cray YMP shows that the simulations done on the parallel systems are approximately 10 to 30 times faster. The results of the accuracy and performance studies for the current algorithm are reported. To validate the flow simulation code, a number of Euler and Navier-Stokes simulations are done for internal duct flows. Inviscid Euler simulation of a very small amplitude acoustic wave interacting with a shock wave in a quasi-1D convergent-divergent nozzle shows that the algorithm is capable of simultaneously tracking the very small disturbances of the acoustic wave and capturing the shock wave. Navier-Stokes simulations are made for fully developed laminar flow in a square duct, developing laminar flow in a

  12. Hybrid Algorithms for Fuzzy Reverse Supply Chain Network Design

    PubMed Central

    Che, Z. H.; Chiang, Tzu-An; Kuo, Y. C.

    2014-01-01

    In consideration of capacity constraints, fuzzy defect ratio, and fuzzy transport loss ratio, this paper attempted to establish an optimized decision model for production planning and distribution of a multiphase, multiproduct reverse supply chain, which addresses defects returned to original manufacturers, and in addition, develops hybrid algorithms such as Particle Swarm Optimization-Genetic Algorithm (PSO-GA), Genetic Algorithm-Simulated Annealing (GA-SA), and Particle Swarm Optimization-Simulated Annealing (PSO-SA) for solving the optimized model. During a case study of a multi-phase, multi-product reverse supply chain network, this paper explained the suitability of the optimized decision model and the applicability of the algorithms. Finally, the hybrid algorithms showed excellent solving capability when compared with original GA and PSO methods. PMID:24892057
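
    To illustrate the general shape of a GA-SA hybrid of the kind listed above, the sketch below runs a plain genetic algorithm on a toy continuous objective and refines the best individual of each generation with a short simulated-annealing pass. The operators, parameters, and cost function are illustrative assumptions; they do not represent the paper's reverse supply chain model.

        import numpy as np

        rng = np.random.default_rng(0)
        cost = lambda x: float(np.sum(x ** 2))      # stand-in for the supply-chain objective

        def sa_refine(x, T=1.0, steps=200, cooling=0.98):
            """Short simulated-annealing pass; returns the best point visited."""
            cur, cur_c = x.copy(), cost(x)
            best, best_c = cur.copy(), cur_c
            for _ in range(steps):
                cand = cur + rng.normal(0.0, 0.1, size=x.size)
                c = cost(cand)
                if c < cur_c or rng.random() < np.exp(-(c - cur_c) / T):
                    cur, cur_c = cand, c
                    if c < best_c:
                        best, best_c = cand.copy(), c
                T *= cooling
            return best

        pop = rng.uniform(-5.0, 5.0, size=(30, 4))
        for _ in range(50):
            pop = pop[np.argsort([cost(p) for p in pop])]          # rank by fitness
            parents = pop[:10]
            children = [(a + b) / 2 + rng.normal(0.0, 0.2, 4)      # crossover + mutation
                        for a, b in zip(parents, rng.permutation(parents))]
            pop = np.vstack([parents, children, rng.uniform(-5.0, 5.0, size=(10, 4))])
            pop[0] = sa_refine(pop[0])                             # SA refinement of the elite
        print(min(cost(p) for p in pop))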

  13. Hybrid algorithms for fuzzy reverse supply chain network design.

    PubMed

    Che, Z H; Chiang, Tzu-An; Kuo, Y C; Cui, Zhihua

    2014-01-01

    In consideration of capacity constraints, fuzzy defect ratio, and fuzzy transport loss ratio, this paper attempted to establish an optimized decision model for production planning and distribution of a multiphase, multiproduct reverse supply chain, which addresses defects returned to original manufacturers, and in addition, develops hybrid algorithms such as Particle Swarm Optimization-Genetic Algorithm (PSO-GA), Genetic Algorithm-Simulated Annealing (GA-SA), and Particle Swarm Optimization-Simulated Annealing (PSO-SA) for solving the optimized model. During a case study of a multi-phase, multi-product reverse supply chain network, this paper explained the suitability of the optimized decision model and the applicability of the algorithms. Finally, the hybrid algorithms showed excellent solving capability when compared with original GA and PSO methods. PMID:24892057

  15. Simulation of Water-Entry and Water-Exit Problems Using a Moving Mesh Algorithm

    NASA Astrophysics Data System (ADS)

    Panahi, Roozbeh

    2012-06-01

    Simulation of water-entry and water-exit, particularly at the interface of the two phases (water and air), is very complicated due to the effects of flow-induced loads, the gravity force, and the presence of a trapped air cushion. This paper attempts to introduce a finite volume-based moving mesh algorithm in order to simulate such problems in a viscous incompressible two-phase medium. The algorithm employs a fractional step method to deal with the coupling between pressure and velocity fields. The interface is also captured by solving a volume fraction transport equation. A boundary-fitted body-attached mesh of quadrilateral Control Volumes (CVs) is implemented to record hydrodynamic time histories of loads, motions and interfacial flow changes around the structure. Forced water-exit of a cylinder is simulated based on the introduced algorithm, together with free symmetric and asymmetric water-entry of wedges. Results show that the presented algorithm is capable of assessing such complexities, comparing favorably with experimental data.

  16. Comparison of Reconstruction and Control algorithms on the ESO end-to-end simulator OCTOPUS

    NASA Astrophysics Data System (ADS)

    Montilla, I.; Béchet, C.; Lelouarn, M.; Correia, C.; Tallon, M.; Reyes, M.; Thiébaut, É.

    Extremely Large Telescopes are very challenging concerning their Adaptive Optics requirements. Their diameters, the specifications demanded by the science for which they are being designed, and the planned use of Extreme Adaptive Optics systems imply a huge increase in the number of degrees of freedom in the deformable mirrors. It is necessary to study new reconstruction algorithms to implement the real-time control in Adaptive Optics at the required speed. We have studied the performance, applied to the case of the European ELT, of three different algorithms: the matrix-vector multiplication (MVM) algorithm, considered as a reference; the Fractal Iterative Method (FrIM); and the Fourier Transform Reconstructor (FTR). The algorithms have been tested on ESO's OCTOPUS software, which simulates the atmosphere, the deformable mirror, the sensor and the closed-loop control. The MVM is the default reconstruction and control method implemented in OCTOPUS, but it scales as O(N^2) operations per loop, so it is not considered a fast algorithm for wave-front reconstruction and control on an Extremely Large Telescope. The two other methods are the fast algorithms studied in the E-ELT Design Study. The performance, as well as their response in the presence of noise and with various atmospheric conditions, has been compared using a Single Conjugate Adaptive Optics configuration for a 42 m diameter ELT, with a total of 5402 actuators. These comparisons, made on a common simulator, highlight the pros and cons of the various methods and give us a better understanding of the type of reconstruction algorithm that an ELT demands.
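
    The matrix-vector multiplication reconstructor referred to above can be illustrated in a few lines: actuator commands are obtained by multiplying the measured slope vector by a precomputed reconstruction matrix, so every loop iteration costs one dense matrix-vector product, i.e. O(N^2) operations. The random interaction matrix below stands in for a calibrated one and is purely illustrative.

        import numpy as np

        rng = np.random.default_rng(0)
        n_slopes, n_actuators = 200, 100
        interaction = rng.normal(size=(n_slopes, n_actuators))   # toy interaction matrix
        R = np.linalg.pinv(interaction)                          # precomputed reconstructor

        slopes = rng.normal(size=n_slopes)                       # one frame of measurements
        commands = R @ slopes                                    # one O(N^2) product per loop
        print(commands.shape)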

  17. Algorithm for loading shot noise microbunching in multi-dimensional, free-electron laser simulation codes

    SciTech Connect

    Fawley, William M.

    2002-03-25

    We discuss the underlying reasoning behind and the details of the numerical algorithm used in the GINGER free-electron laser (FEL) simulation code to load the initial shot noise microbunching on the electron beam. In particular, we point out that there are some additional subtleties which must be followed for multi-dimensional codes which are not necessary for one-dimensional formulations. Moreover, requiring that the higher harmonics of the microbunching also be properly initialized with the correct statistics leads to additional complexities. We present some numerical results including the predicted incoherent, spontaneous emission as tests of the shot noise algorithm's correctness.

  18. Algorithm for loading shot noise microbunching in multidimensional, free-electron laser simulation codes

    NASA Astrophysics Data System (ADS)

    Fawley, William M.

    2002-07-01

    We discuss the underlying reasoning behind and the details of the numerical algorithm used in the GINGER free-electron laser simulation code to load the initial shot noise microbunching on the electron beam. In particular, we point out that there are some additional subtleties which must be followed for multidimensional codes which are not necessary for one-dimensional formulations. Moreover, requiring that the higher harmonics of the microbunching also be properly initialized with the correct statistics leads to additional complexities. We present some numerical results including the predicted incoherent, spontaneous emission as tests of the shot noise algorithm's correctness.

  19. A general concurrent algorithm for plasma particle-in-cell simulation codes

    NASA Technical Reports Server (NTRS)

    Liewer, Paulett C.; Decyk, Viktor K.

    1989-01-01

    The general concurrent particle-in-cell (GCPIC) algorithm has been used to implement an electrostatic particle-in-cell code on a 32-node hypercube parallel computer. The GCPIC algorithm decomposes the PIC code by dividing the particle simulation physical domain into subdomains that are equal in number to the number of processors; all subdomains will accordingly possess approximately equal numbers of particles. The portion of the code which updates particle positions and velocities is nearly 100 percent efficient when the number of particles increases linearly with that of hypercube processors.

  20. DSMC moving-boundary algorithms for simulating MEMS geometries with opening and closing gaps.

    SciTech Connect

    Gallis, Michail A.; Rader, Daniel John; Torczynski, John Robert

    2010-06-01

    Moving-boundary algorithms for the Direct Simulation Monte Carlo (DSMC) method are investigated for a microbeam that moves toward and away from a parallel substrate. The simpler but analogous one-dimensional situation of a piston moving between two parallel walls is investigated using two moving-boundary algorithms. In the first, molecules are reflected rigorously from the moving piston by performing the reflections in the piston frame of reference. In the second, molecules are reflected approximately from the moving piston by moving the piston and subsequently moving all molecules and reflecting them from the moving piston at its new or old position.
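
    A one-dimensional sketch of the first, frame-of-reference approach, assuming specular reflection: the molecule velocity is transformed into the piston frame, reflected, and transformed back, which reduces to v' = 2*u_piston - v. The numbers are illustrative, and the sketch omits everything else in a DSMC time step.

        def reflect_from_moving_piston(v_molecule, u_piston):
            """Specular reflection of a 1-D molecular velocity from a piston moving at u_piston."""
            v_rel = v_molecule - u_piston      # transform into the piston frame
            v_rel = -v_rel                     # specular reflection in that frame
            return v_rel + u_piston            # transform back to the lab frame

        # A molecule moving at -300 m/s meets a piston advancing at +50 m/s.
        print(reflect_from_moving_piston(-300.0, 50.0))   # -> 400.0 m/s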

  1. The small-voxel tracking algorithm for simulating chemical reactions among diffusing molecules

    SciTech Connect

    Gillespie, Daniel T.; Gillespie, Carol A.; Seitaridou, Effrosyni

    2014-12-21

    Simulating the evolution of a chemically reacting system using the bimolecular propensity function, as is done by the stochastic simulation algorithm and its reaction-diffusion extension, entails making statistically inspired guesses as to where the reactant molecules are at any given time. Those guesses will be physically justified if the system is dilute and well-mixed in the reactant molecules. Otherwise, an accurate simulation will require the extra effort and expense of keeping track of the positions of the reactant molecules as the system evolves. One molecule-tracking algorithm that pays careful attention to the physics of molecular diffusion is the enhanced Green's function reaction dynamics (eGFRD) of Takahashi, Tănase-Nicola, and ten Wolde [Proc. Natl. Acad. Sci. U.S.A. 107, 2473 (2010)]. We introduce here a molecule-tracking algorithm that has the same theoretical underpinnings and strategic aims as eGFRD, but a different implementation procedure. Called the small-voxel tracking algorithm (SVTA), it combines the well known voxel-hopping method for simulating molecular diffusion with a novel procedure for rectifying the unphysical predictions of the diffusion equation on the small spatiotemporal scale of molecular collisions. Indications are that the SVTA might be more computationally efficient than eGFRD for the problematic class of non-dilute systems. A widely applicable, user-friendly software implementation of the SVTA has yet to be developed, but we exhibit some simple examples which show that the algorithm is computationally feasible and gives plausible results.
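
    The voxel-hopping picture of diffusion mentioned above can be sketched as a random walk on a cubic lattice in which a molecule hops to one of its six neighbouring voxels, with the time per hop set to h^2/(6D) so that the walk reproduces the macroscopic diffusion coefficient D. The sketch shows only this diffusion part; the SVTA's small-scale correction for molecular collisions is not represented, and all names are illustrative.

        import random

        def hop_trajectory(steps, D=1.0, h=0.1, start=(0, 0, 0)):
            """Return the final voxel index and elapsed time after `steps` voxel hops."""
            moves = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
            x = list(start)
            for _ in range(steps):
                dx = random.choice(moves)
                x = [a + b for a, b in zip(x, dx)]
            return tuple(x), steps * h * h / (6.0 * D)    # one hop takes h^2 / (6 D)

        print(hop_trajectory(1000))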

  2. Simulated tempering based on global balance or detailed balance conditions: Suwa-Todo, heat bath, and Metropolis algorithms.

    PubMed

    Mori, Yoshiharu; Okumura, Hisashi

    2015-12-01

    Simulated tempering (ST) is a useful method to enhance sampling in molecular simulations. When ST is used, the Metropolis algorithm, which satisfies the detailed balance condition, is usually applied to calculate the transition probability. Recently, an alternative method that satisfies the global balance condition instead of the detailed balance condition has been proposed by Suwa and Todo. In this study, an ST method with the Suwa-Todo algorithm is proposed. Molecular dynamics simulations with ST are performed with three algorithms (the Metropolis, heat bath, and Suwa-Todo algorithms) to calculate the transition probability. Among the three algorithms, the Suwa-Todo algorithm yields the highest acceptance ratio and the shortest autocorrelation time. These results suggest that sampling by an ST simulation with the Suwa-Todo algorithm is the most efficient. In addition, because the acceptance ratio of the Suwa-Todo algorithm is higher than that of the Metropolis algorithm, the number of temperature states can be reduced by 25% for the Suwa-Todo algorithm when compared with the Metropolis algorithm.
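
    As a point of reference for the acceptance rules being compared, the sketch below shows the Metropolis criterion for the temperature-update step of simulated tempering: a move from inverse temperature beta[m] to beta[n] at potential energy E is accepted with probability min(1, exp((beta[m] - beta[n]) E + g[n] - g[m])), where g are the tempering weight factors. The ladder and weights are illustrative, and the Suwa-Todo and heat-bath updates discussed above are not reproduced here.

        import math
        import random

        def metropolis_temperature_move(m, E, beta, g):
            """Propose a neighbouring temperature index and accept or reject it."""
            n = m + random.choice((-1, 1))
            if not 0 <= n < len(beta):
                return m                              # reject moves off the ladder
            log_acc = (beta[m] - beta[n]) * E + g[n] - g[m]
            if log_acc >= 0.0 or random.random() < math.exp(log_acc):
                return n
            return m

        beta = [1.0 / T for T in (300.0, 330.0, 363.0, 400.0)]   # illustrative temperature ladder
        g = [0.0, 0.0, 0.0, 0.0]                                 # flat weights for the sketch
        print(metropolis_temperature_move(0, E=-120.0, beta=beta, g=g))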

  3. A space-time-ensemble parallel nudged elastic band algorithm for molecular kinetics simulation

    NASA Astrophysics Data System (ADS)

    Nakano, Aiichiro

    2008-02-01

    A scalable parallel algorithm has been designed to study long-time dynamics of many-atom systems based on the nudged elastic band method, which performs mutually constrained molecular dynamics simulations for a sequence of atomic configurations (or states) to obtain a minimum energy path between initial and final local minimum-energy states. A directionally heated nudged elastic band method is introduced to search for thermally activated events without the knowledge of final states, which is then applied to an ensemble of bands in a path ensemble method for long-time simulation in the framework of the transition state theory. The resulting molecular kinetics (MK) simulation method is parallelized with a space-time-ensemble parallel nudged elastic band (STEP-NEB) algorithm, which employs spatial decomposition within each state, while temporal parallelism across the states within each band and band-ensemble parallelism are implemented using a hierarchy of communicator constructs in the Message Passing Interface library. The STEP-NEB algorithm exhibits good scalability with respect to spatial, temporal and ensemble decompositions on massively parallel computers. The MK simulation method is used to study low strain-rate deformation of amorphous silica.

  4. Algorithm for simulation of quantum many-body dynamics using dynamical coarse-graining

    SciTech Connect

    Khasin, M.; Kosloff, R.

    2010-04-15

    An algorithm for simulation of quantum many-body dynamics having su(2) spectrum-generating algebra is developed. The algorithm is based on the idea of dynamical coarse-graining. The original unitary dynamics of the target observables--the elements of the spectrum-generating algebra--is simulated by a surrogate open-system dynamics, which can be interpreted as weak measurement of the target observables, performed on the evolving system. The open-system state can be represented by a mixture of pure states, localized in the phase space. The localization reduces the scaling of the computational resources with the Hilbert-space dimension n by a factor of n^{3/2}(ln n)^{-1} compared to conventional sparse-matrix methods. The guidelines for the choice of parameters for the simulation are presented and the scaling of the computational resources with the Hilbert-space dimension of the system is estimated. The algorithm is applied to the simulation of the dynamics of systems of 2x10^4 and 2x10^6 cold atoms in a double-well trap, described by the two-site Bose-Hubbard model.

  5. A comparison of various algorithms to extract Magic Formula tyre model coefficients for vehicle dynamics simulations

    NASA Astrophysics Data System (ADS)

    Vijay Alagappan, A.; Narasimha Rao, K. V.; Krishna Kumar, R.

    2015-02-01

    Tyre models are a prerequisite for any vehicle dynamics simulation. Tyre models range from the simplest mathematical models that consider only the cornering stiffness to a complex set of formulae. Among all the steady-state tyre models that are in use today, the Magic Formula tyre model is unique and most popular. Though the Magic Formula tyre model is widely used, obtaining the model coefficients from either the experimental or the simulation data is not straightforward due to its nonlinear nature and the presence of a large number of coefficients. A common procedure used for this extraction is the least-squares minimisation that requires considerable experience for initial guesses. Various researchers have tried different algorithms, namely, gradient and Newton-based methods, differential evolution, artificial neural networks, etc. The issues involved in all these algorithms are setting bounds or constraints, sensitivity of the parameters, the features of the input data such as the number of points, noisy data, experimental procedure used such as slip angle sweep or tyre measurement (TIME) procedure, etc. The extracted Magic Formula coefficients are affected by these variants. This paper highlights the issues that are commonly encountered in obtaining these coefficients with different algorithms, namely, least-squares minimisation using trust region algorithms, Nelder-Mead simplex, pattern search, differential evolution, particle swarm optimisation, cuckoo search, etc. A key observation is that not all the algorithms give the same Magic Formula coefficients for a given data. The nature of the input data and the type of the algorithm decide the set of the Magic Formula tyre model coefficients.
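
    A minimal sketch of the coefficient-extraction problem discussed above, assuming the basic four-coefficient Magic Formula y = D sin(C arctan(Bx - E(Bx - arctan Bx))) and a trust-region least-squares fit via scipy.optimize.least_squares. The synthetic data, initial guess, and bounds are illustrative assumptions; with real measured data, the sensitivity to those choices is exactly the issue raised in the abstract.

        import numpy as np
        from scipy.optimize import least_squares

        def magic_formula(x, B, C, D, E):
            return D * np.sin(C * np.arctan(B * x - E * (B * x - np.arctan(B * x))))

        slip = np.linspace(-12.0, 12.0, 200)                      # slip angle, degrees
        true_coeffs = (0.25, 1.6, 3000.0, -0.5)                   # "unknown" B, C, D, E
        force = magic_formula(slip, *true_coeffs) \
                + np.random.default_rng(0).normal(0.0, 30.0, slip.size)

        residual = lambda p: magic_formula(slip, *p) - force
        fit = least_squares(residual, x0=[0.1, 1.3, 2500.0, 0.0],
                            bounds=([0.01, 0.5, 100.0, -5.0], [2.0, 3.0, 10000.0, 1.0]))
        print(fit.x)   # fitted B, C, D, E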

  6. An order (n) algorithm for the dynamics simulation of robotic systems

    NASA Technical Reports Server (NTRS)

    Chun, H. M.; Turner, J. D.; Frisch, Harold P.

    1989-01-01

    The formulation of an Order (n) algorithm for DISCOS (Dynamics Interaction Simulation of Controls and Structures), an industry-standard software package for simulation and analysis of flexible multibody systems, is presented. For systems involving many bodies, the new Order (n) version of DISCOS is much faster than the current version. Results of the experimental validation of the dynamics software are also presented. The experiment is carried out on a seven-joint robot arm at NASA's Goddard Space Flight Center. The algorithm used in the current version of DISCOS requires the inverse of a matrix whose dimension is equal to the number of constraints in the system. Generally, the number of constraints in a system is roughly proportional to the number of bodies in the system, and matrix inversion requires O(p^3) operations, where p is the dimension of the matrix. The current version of DISCOS is therefore considered an Order (n^3) algorithm. In contrast, the Order (n) algorithm requires inversion of matrices which are small, and the number of matrices to be inverted increases only linearly with the number of bodies. The newly developed Order (n) DISCOS is currently capable of handling chain and tree topologies as well as multiple closed loops. Continuing development will extend the capability of the software to deal with typical robotics applications such as put-and-place, multi-arm hand-off and surface sliding.

  7. Identification of Clathrate Hydrates, Hexagonal Ice, Cubic Ice, and Liquid Water in Simulations: the CHILL+ Algorithm.

    PubMed

    Nguyen, Andrew H; Molinero, Valeria

    2015-07-23

    Clathrate hydrates and ice I are the most abundant crystals of water. The study of their nucleation, growth, and decomposition using molecular simulations requires an accurate and efficient algorithm that distinguishes water molecules that belong to each of these crystals and the liquid phase. Existing algorithms identify ice or clathrates, but not both. This poses a challenge for cases in which ice and hydrate coexist, such as in the synthesis of clathrates from ice and the formation of ice from clathrates during self-preservation of methane hydrates. Here we present an efficient algorithm for the identification of clathrate hydrates, hexagonal ice, cubic ice, and liquid water in molecular simulations. CHILL+ uses the number of staggered and eclipsed water-water bonds to identify water molecules in cubic ice, hexagonal ice, and clathrate hydrate. CHILL+ is an extension of CHILL (Moore et al. Phys. Chem. Chem. Phys. 2010, 12, 4124-4134), which identifies hexagonal and cubic ice but not clathrates. In addition to the identification of hydrates, CHILL+ significantly improves the detection of hexagonal ice up to its melting point. We validate the use of CHILL+ for the identification of stacking faults in ice and the nucleation and growth of clathrate hydrates. To our knowledge, this is the first algorithm that allows for the simultaneous identification of ice and clathrate hydrates, and it does so in a way that is competitive with respect to existing methods used to identify any of these crystals. PMID:25389702

  8. A pencil beam algorithm for intensity modulated proton therapy derived from Monte Carlo simulations.

    PubMed

    Soukup, Martin; Fippel, Matthias; Alber, Markus

    2005-11-01

    A pencil beam algorithm as a component of an optimization algorithm for intensity modulated proton therapy (IMPT) is presented. The pencil beam algorithm is tuned to the special accuracy requirements of IMPT, where in heterogeneous geometries both the position and distortion of the Bragg peak and the lateral scatter pose problems which are amplified by the spot weight optimization. Heterogeneity corrections are implemented by a multiple raytracing approach using fluence-weighted sub-spots. In order to derive nuclear interaction corrections, Monte Carlo simulations were performed. The contribution of long ranged products of nuclear interactions is taken into account by a fit to the Monte Carlo results. Energy-dependent stopping power ratios are also implemented. Scatter in optional beam line accessories such as range shifters or ripple filters is taken into account. The collimator can also be included, but without additional scattering. Finally, dose distributions are benchmarked against Monte Carlo simulations, showing 3%/1 mm agreement for simple heterogeneous phantoms. In the case of more complicated phantoms, principal shortcomings of pencil beam algorithms are evident. The influence of these effects on IMPT dose distributions is shown in clinical examples. PMID:16237243

  9. Optimized simulations of Olami-Feder-Christensen systems using parallel algorithms

    NASA Astrophysics Data System (ADS)

    Dominguez, Rachele; Necaise, Rance; Montag, Eric

    The sequential nature of the Olami-Feder-Christensen (OFC) model for earthquake simulations limits the benefits of parallel computing approaches because of the frequent communication required between processors. We developed a parallel version of the OFC algorithm for multi-core processors. Our data, even for relatively small system sizes and low numbers of processors, indicate that increasing the number of processors provides significantly faster simulations, producing more efficient results than previous attempts that used network-based Beowulf clusters. Our algorithm optimizes performance by exploiting the multi-core processor architecture, minimizing communication time in contrast to the networked Beowulf-cluster approaches. Our multi-core algorithm is the basis for a new algorithm using GPUs that will drastically increase the number of processors available. Previous studies incorporating realistic structural features of faults into OFC models have revealed spatial and temporal patterns observed in real earthquake systems. The computational advances presented here will allow for studying interacting networks of faults, rather than individual faults, further enhancing our understanding of the relationship between the earth's structure and the triggering process. Support for this project comes from the Chenery Research Fund, the Rashkind Family Endowment, the Walter Williams Craigie Teaching Endowment, and the Schapiro Undergraduate Research Fellowship.

  10. The Separatrix Algorithm for Synthesis and Analysis of Stochastic Simulations with Applications in Disease Modeling

    PubMed Central

    Klein, Daniel J.; Baym, Michael; Eckhoff, Philip

    2014-01-01

    Decision makers in epidemiology and other disciplines are faced with the daunting challenge of designing interventions that will be successful with high probability and robust against a multitude of uncertainties. To facilitate the decision making process in the context of a goal-oriented objective (e.g., eradicate polio by ), stochastic models can be used to map the probability of achieving the goal as a function of parameters. Each run of a stochastic model can be viewed as a Bernoulli trial in which “success” is returned if and only if the goal is achieved in simulation. However, each run can take a significant amount of time to complete, and many replicates are required to characterize each point in parameter space, so specialized algorithms are required to locate desirable interventions. To address this need, we present the Separatrix Algorithm, which strategically locates parameter combinations that are expected to achieve the goal with a user-specified probability of success (e.g. 95%). Technically, the algorithm iteratively combines density-corrected binary kernel regression with a novel information-gathering experiment design to produce results that are asymptotically correct and work well in practice. The Separatrix Algorithm is demonstrated on several test problems, and on a detailed individual-based simulation of malaria. PMID:25078087

  11. Application and evaluation of two nutrient algorithms of Hydrological Simulation Program Fortran in Wolf River watershed.

    PubMed

    Liu, Zhijun; Kingery, William L; Huddleston, David H; Hossain, Faisal; Hashim, Noor B; Kieffer, Janna M

    2008-06-01

    This study performs a comparison of two nutrient algorithms of Hydrological Simulation Program Fortran, PQUAL/IQUAL and AGCHEM. Watershed nutrient models with PQUAL/IQUAL and AGCHEM were developed and calibrated separately with observed data in the Wolf River watershed. Compared to the AGCHEM modules, the PQUAL/IQUAL algorithm was found to have several disadvantages, for example: (i) it is a simple loading estimation algorithm and cannot represent the soil nutrient processes; and (ii) the interactions of modeled nutrient species in the soil cannot be simulated. The AGCHEM modules are capable of explicitly representing comprehensive nutrient processes in the soil such as fertilization, atmospheric deposition, manure application, plant uptake, and the transformation processes. Therefore, the AGCHEM modules afford the ability to evaluate alternative management practices and to model the interactions between nutrient species. However, our modeling results indicated that the inclusion of the AGCHEM modules does not significantly improve the nutrient modeling performance but rather takes much more time in model development. The choice of nutrient algorithm for total maximum daily load development depends on data availability, the required modeling accuracy, and the time available for model development.

  12. Stamping Line Optimization Using Genetic Algorithms and Virtual 3D Line Simulation

    NASA Astrophysics Data System (ADS)

    García-Sedano, Javier A.; Bernardo, Jon Alzola; González, Asier González; de Gauna, Óscar Berasategui Ruiz; de Mendivil, Rafael Yuguero González

    This paper describes the use of a genetic algorithm (GA) to optimize the trajectory followed by industrial robots (IRs) in stamping lines. The objective is to generate valid paths or trajectories without collisions in order to minimize the cycle time required to complete all the operations in an individual stamping cell of the line. A commercial software tool is used to simulate the virtual trajectories and potential collisions, taking into account the specific geometries of the different parts involved: robot arms, columns, dies and manipulators. A genetic algorithm is then proposed to optimize the trajectories. Both systems, the GA and the simulator, communicate as client-server in order to evaluate the solutions proposed by the GA. The novelty of the idea is to consider the geometry of the specific components to adjust robot paths and so optimize the cycle time in a given stamping cell.

  13. An improved real-time endovascular guidewire position simulation using shortest path algorithm.

    PubMed

    Qiu, Jianpeng; Qu, Zhiyi; Qiu, Haiquan; Zhang, Xiaomin

    2016-09-01

    In this study, we propose a new graph-theoretical method to simulate guidewire paths inside the carotid artery. The minimum energy guidewire path can be obtained by applying a shortest path algorithm, such as Dijkstra's algorithm for graphs, based on the principle of minimal total energy. Experiments on three phantoms were validated against previous results, revealing that for the first and second phantoms the simulated and real guidewires overlap completely. In addition, 95 % of the third phantom overlaps completely, and the remaining 5 % closely coincides. The results demonstrate that our method achieves 87 and 80 % improvements for the first and third phantoms under the same conditions, respectively. Furthermore, a 91 % improvement was obtained for the second phantom under the condition of reduced graph construction complexity.
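
    A minimal sketch of the shortest-path step described above: candidate guidewire configurations are nodes of a weighted graph, edge weights are incremental energies, and Dijkstra's algorithm returns the minimum-total-energy path. The toy graph is illustrative; constructing the real graph from vessel geometry and guidewire mechanics is not shown.

        import heapq

        def dijkstra(graph, source, target):
            """Shortest (minimum total weight) path from source to target."""
            dist, prev = {source: 0.0}, {}
            heap = [(0.0, source)]
            while heap:
                d, u = heapq.heappop(heap)
                if u == target:
                    break
                if d > dist.get(u, float("inf")):
                    continue                      # stale heap entry
                for v, w in graph[u]:
                    nd = d + w
                    if nd < dist.get(v, float("inf")):
                        dist[v], prev[v] = nd, u
                        heapq.heappush(heap, (nd, v))
            path = [target]
            while path[-1] != source:
                path.append(prev[path[-1]])
            return path[::-1], dist[target]

        graph = {"a": [("b", 1.0), ("c", 2.5)], "b": [("c", 0.8), ("d", 3.0)],
                 "c": [("d", 1.2)], "d": []}
        print(dijkstra(graph, "a", "d"))   # -> (['a', 'b', 'c', 'd'], 3.0)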

  15. Determination of three-dimensional structures of proteins by simulated annealing with interproton distance restraints. Application to crambin, potato carboxypeptidase inhibitor and barley serine proteinase inhibitor 2.

    PubMed

    Nilges, M; Gronenborn, A M; Brünger, A T; Clore, G M

    1988-04-01

    An automated method, based on the principle of simulated annealing, is presented for determining the three-dimensional structures of proteins on the basis of short (less than 5 Å) interproton distance data derived from nuclear Overhauser enhancement (NOE) measurements. The method makes use of Newton's equations of motion to temporarily increase the temperature of the system in order to search for the global minimum region of a target function comprising purely geometric restraints. These consist of interproton distances supplemented by bond lengths, bond angles, planes and soft van der Waals repulsion terms. The latter replace the dihedral, van der Waals, electrostatic and hydrogen-bonding potentials of the empirical energy function used in molecular dynamics simulations. The method presented involves a number of innovations over our previous restrained molecular dynamics approach [Clore, G.M., Brünger, A.T., Karplus, M. and Gronenborn, A.M. (1986) J. Mol. Biol., 191, 523-551]. These include the development of a new effective potential for the interproton distance restraints, whose functional form depends on the magnitude of the difference between calculated and target values, and the design and implementation of a robust and fully automatic protocol. The method is tested on three systems: the model system crambin (46 residues) using X-ray structure derived interproton distance restraints, and potato carboxypeptidase inhibitor (CPI; 39 residues) and barley serine proteinase inhibitor 2 (BSPI-2; 64 residues) using experimentally derived interproton distance restraints. Calculations were carried out starting from extended strands which had atomic r.m.s. differences of 57, 38 and 33 Å with respect to the crystal structures of BSPI-2, crambin and CPI, respectively. Unbiased sampling of the conformational space consistent with the restraints was achieved by varying the random number seed used to assign the initial velocities. This ensures
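
    The purely geometric target function described above combines distance-restraint penalties with a soft repulsion. The sketch below shows one generic choice of such terms: a flat-bottomed quadratic penalty on an interproton distance and a simple quartic repulsion. The functional forms and force constants are illustrative of this class of potentials, not the exact ones used in the paper.

        import numpy as np

        def restraint_energy(d, d_low, d_high, k=50.0):
            """Flat-bottomed quadratic penalty on an interproton distance d (angstroms)."""
            if d < d_low:
                return k * (d_low - d) ** 2
            if d > d_high:
                return k * (d - d_high) ** 2
            return 0.0

        def soft_repulsion(d, r_min=3.0, k_rep=4.0):
            """Simple quartic repulsion that vanishes beyond the contact distance r_min."""
            return k_rep * max(0.0, r_min ** 2 - d ** 2) ** 2

        print(restraint_energy(5.6, 1.8, 5.0), soft_repulsion(2.5))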

  16. Advanced Models and Algorithms for Self-Similar IP Network Traffic Simulation and Performance Analysis

    NASA Astrophysics Data System (ADS)

    Radev, Dimitar; Lokshina, Izabella

    2010-11-01

    The paper examines self-similar (or fractal) properties of real communication network traffic data over a wide range of time scales. These self-similar properties are very different from the properties of traditional models based on Poisson and Markov-modulated Poisson processes. Advanced fractal models of sequential generators and fixed-length sequence generators, and efficient algorithms that are used to simulate self-similar behavior of IP network traffic data, are developed and applied. Numerical examples are provided, and simulation results are obtained and analyzed.

  17. A Comprehensive Study of Three Delay Compensation Algorithms for Flight Simulators

    NASA Technical Reports Server (NTRS)

    Guo, Liwen; Cardullo, Frank M.; Houck, Jacob A.; Kelly, Lon C.; Wolters, Thomas E.

    2005-01-01

    This paper summarizes a comprehensive study of three predictors used for compensating the transport delay in a flight simulator: the McFarland, Adaptive, and State Space predictors. The paper presents proof that the stochastic approximation algorithm can achieve the best compensation among all four adaptive predictors, and intensively investigates the relationship between the state space predictor's compensation quality and its reference model. Piloted simulation tests show that the adaptive predictor and the state space predictor can achieve better compensation of transport delay than the McFarland predictor.

  18. Effective algorithm for ray-tracing simulations of lobster eye and similar reflective optical systems

    NASA Astrophysics Data System (ADS)

    Tichý, Vladimír; Hudec, René; Němcová, Šárka

    2016-06-01

    The algorithm presented is intended mainly for lobster eye optics. This type of optics (and some similar types) allows for a simplification of the classical ray-tracing procedure, which requires a great many rays to simulate. The method presented performs the simulation with only a few rays; therefore it is extremely effective. Moreover, to simplify the equations, a specific mathematical formalism is used. Only a few simple equations are used, therefore the program code can be simple as well. The paper also outlines how to apply the method to some other reflective optical systems.

  19. New exclusive CHIPS-TPT algorithms for simulation of neutron-nuclear reactions

    NASA Astrophysics Data System (ADS)

    Kosov, M.; Savin, D.

    2015-05-01

    The CHIPS-TPT physics library for simulation of neutron-nuclear reactions at the new exclusive level is being developed in CFAR VNIIA. The exclusive modeling conserves energy, momentum and quantum numbers in each neutron-nuclear interaction. The CHIPS-TPT algorithms are based on the exclusive CHIPS library, which is compatible with Geant4. Special CHIPS-TPT physics lists in the Geant4 format are provided. The calculation time for an exclusive CHIPS-TPT simulation is comparable to that of the corresponding Geant4-HP simulation. In addition to the reduction of the deposited-energy fluctuations, which is a consequence of energy conservation, the CHIPS-TPT libraries make it possible to simulate correlations of secondary particles, e.g. secondary gammas, and the Doppler broadening of gamma lines in the spectrum, which can be measured by germanium detectors.

  20. Simulation and optimization of a pulsating heat pipe using artificial neural network and genetic algorithm

    NASA Astrophysics Data System (ADS)

    Jokar, Ali; Godarzi, Ali Abbasi; Saber, Mohammad; Shafii, Mohammad Behshad

    2016-11-01

    In this paper, a novel approach is presented to simulate and optimize pulsating heat pipes (PHPs). The pulsating heat pipe setup used was designed and constructed for this study. Due to the lack of a general mathematical model for exact analysis of PHPs, a method based on natural algorithms has been applied for simulation and optimization. The simulator consists of a multilayer perceptron neural network, which is trained on experimental results obtained from our PHP setup. The results show that the complex behavior of PHPs can be successfully described by the non-linear structure of this simulator. The input variables of the neural network are the input heat flux to the evaporator (q″), the filling ratio (FR) and the inclined angle (IA), and its output is the thermal resistance of the PHP. Finally, based upon the simulation results and considering the heat pipe's operating constraints, the optimum operating point of the system is obtained using a genetic algorithm (GA). The experimental results show that the optimum FR (38.25 %), input heat flux to the evaporator (39.93 W) and IA (55°) obtained from the GA are acceptable.

  1. Simulation and optimization of a pulsating heat pipe using artificial neural network and genetic algorithm

    NASA Astrophysics Data System (ADS)

    Jokar, Ali; Godarzi, Ali Abbasi; Saber, Mohammad; Shafii, Mohammad Behshad

    2016-01-01

    In this paper, a novel approach is presented to simulate and optimize pulsating heat pipes (PHPs). The pulsating heat pipe setup used was designed and constructed for this study. Due to the lack of a general mathematical model for exact analysis of PHPs, a method based on natural algorithms has been applied for simulation and optimization. The simulator consists of a multilayer perceptron neural network, which is trained on experimental results obtained from our PHP setup. The results show that the complex behavior of PHPs can be successfully described by the non-linear structure of this simulator. The input variables of the neural network are the input heat flux to the evaporator (q″), the filling ratio (FR) and the inclined angle (IA), and its output is the thermal resistance of the PHP. Finally, based upon the simulation results and considering the heat pipe's operating constraints, the optimum operating point of the system is obtained using a genetic algorithm (GA). The experimental results show that the optimum FR (38.25 %), input heat flux to the evaporator (39.93 W) and IA (55°) obtained from the GA are acceptable.

  2. Mesoscale Benchmark Demonstration Problem 1: Mesoscale Simulations of Intra-granular Fission Gas Bubbles in UO2 under Post-irradiation Thermal Annealing

    SciTech Connect

    Li, Yulan; Hu, Shenyang Y.; Montgomery, Robert; Gao, Fei; Sun, Xin; Tonks, Michael; Biner, Bullent; Millet, Paul; Tikare, Veena; Radhakrishnan, Balasubramaniam; Andersson, David

    2012-04-11

    A study was conducted to evaluate the capabilities of different numerical methods used to represent microstructure behavior at the mesoscale for irradiated material using an idealized benchmark problem. The purpose of the mesoscale benchmark problem was to provide a common basis to assess several mesoscale methods with the objective of identifying the strengths and areas of improvement in the predictive modeling of microstructure evolution. In this work, mesoscale models (phase-field, Potts, and kinetic Monte Carlo) developed by PNNL, INL, SNL, and ORNL were used to calculate the evolution kinetics of intra-granular fission gas bubbles in UO2 fuel under post-irradiation thermal annealing conditions. The benchmark problem was constructed to include important microstructural evolution mechanisms on the kinetics of intra-granular fission gas bubble behavior such as the atomic diffusion of Xe atoms, U vacancies, and O vacancies, the effect of vacancy capture and emission from defects, and the elastic interaction of non-equilibrium gas bubbles. An idealized set of assumptions was imposed on the benchmark problem to simplify the mechanisms considered. The capability and numerical efficiency of different models are compared against selected experimental and simulation results. These comparisons find that the phase-field methods, by the nature of the free energy formulation, are able to represent a larger subset of the mechanisms influencing the intra-granular bubble growth and coarsening mechanisms in the idealized benchmark problem as compared to the Potts and kinetic Monte Carlo methods. It is recognized that the mesoscale benchmark problem as formulated does not specifically highlight the strengths of the discrete particle modeling used in the Potts and kinetic Monte Carlo methods. Future efforts are recommended to construct increasingly more complex mesoscale benchmark problems to further verify and validate the predictive capabilities of the mesoscale modeling

  3. Efficacy of very fast simulated annealing global optimization method for interpretation of self-potential anomaly by different forward formulation over 2D inclined sheet type structure

    NASA Astrophysics Data System (ADS)

    Biswas, A.; Sharma, S. P.

    2012-12-01

    The self-potential anomaly is an important geophysical technique that measures the electrical potential due to natural sources of current in the Earth's subsurface. An inclined sheet type model is a very familiar structure associated with mineralization, fault planes, groundwater flow and many other geological features which exhibit a self-potential anomaly. A number of linearized and global inversion approaches have been developed for the interpretation of SP anomalies over different structures for various purposes. The mathematical expression to compute the forward response over a two-dimensional dipping sheet type structure can be written in three different ways, using five variables in each case. The complexity of the inversion differs among the three forward approaches. In the present study, an interpretation of the self-potential anomaly using very fast simulated annealing global optimization has been developed, which yielded new insight about the uncertainty and equivalence in model parameters. Interpretation of the measured data yields the location of the causative body, the depth to its top, its extension, dip and quality. A comparative study of the three forward approaches in the interpretation of self-potential anomalies is performed to assess the efficacy of each approach in resolving the possible ambiguity. Even though each forward formulation yields the same forward response, optimization of different sets of variables using different forward problems poses different kinds of ambiguity in the interpretation. The performance of the three approaches in optimization has been compared, and it is observed that one of the three methods is best suited for this kind of study. Our VFSA approach has been tested on synthetic, noisy and field data for the three different methods to show the efficacy and suitability of the best method. It is important to use the forward problem in the optimization that yields the
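
    For readers unfamiliar with the optimizer used above, the sketch below shows a generic very fast simulated annealing update (Ingber-style cooling and Cauchy-like moves) over a bounded parameter vector, such as the five sheet parameters. The misfit function, bounds and schedule constants are placeholders, not the authors' implementation.

      import numpy as np

      def vfsa(misfit, lower, upper, n_iter=2000, t0=1.0, c=1.0, seed=0):
          # Very fast simulated annealing sketch: misfit(x) returns the data misfit
          # for a model vector x constrained to [lower, upper].
          rng = np.random.default_rng(seed)
          lower, upper = np.asarray(lower, float), np.asarray(upper, float)
          ndim = lower.size
          x = lower + rng.random(ndim) * (upper - lower)
          e = misfit(x)
          best_x, best_e = x.copy(), e
          for k in range(1, n_iter + 1):
              temp = t0 * np.exp(-c * k ** (1.0 / ndim))        # VFSA cooling schedule
              u = rng.random(ndim)
              y = np.sign(u - 0.5) * temp * ((1.0 + 1.0 / temp) ** np.abs(2.0 * u - 1.0) - 1.0)
              x_new = np.clip(x + y * (upper - lower), lower, upper)
              e_new = misfit(x_new)
              if e_new < e or rng.random() < np.exp(-(e_new - e) / temp):   # Metropolis rule
                  x, e = x_new, e_new
                  if e < best_e:
                      best_x, best_e = x.copy(), e
          return best_x, best_e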

  4. Automated 3D-2D registration of X-ray microcomputed tomography with histological sections for dental implants in bone using chamfer matching and simulated annealing.

    PubMed

    Becker, Kathrin; Stauber, Martin; Schwarz, Frank; Beißbarth, Tim

    2015-09-01

    We propose a novel 3D-2D registration approach for micro-computed tomography (μCT) and histology (HI), constructed for dental implant biopsies, that finds the position and normal vector of the oblique slice from μCT that corresponds to HI. During image pre-processing, the implants and the bone tissue are segmented using a combination of thresholding, morphological filters and component labeling. After this, chamfer matching is employed to register the implant edges, and fine registration of the bone tissues is achieved using simulated annealing. The method was tested on n=10 biopsies, obtained at 20 weeks after non-submerged healing in the canine mandible. The specimens were scanned with μCT 100 and processed for hard tissue sectioning. After registration, we assessed the agreement of bone to implant contact (BIC) using automated and manual measurements. Statistical analysis was conducted to test the agreement of the BIC measurements in the registered samples. Registration was successful for all specimens and agreement of the respective binary images was high (median: 0.90, 1.-3. Qu.: 0.89-0.91). Direct comparison of BIC showed that automated (median 0.82, 1.-3. Qu.: 0.75-0.85) and manual (median 0.61, 1.-3. Qu.: 0.52-0.67) measures from μCT were significantly positively correlated with HI (median 0.65, 1.-3. Qu.: 0.59-0.72) (manual: R(2)=0.87, automated: R(2)=0.75, p<0.001). The method yields promising results and suggests that μCT may become a valid alternative for assessing osseointegration in three dimensions.
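
    The fine-registration step described above is, at its core, a simulated annealing search over a small set of transform parameters. The sketch below is a generic, minimal version of such a search; the cost function (for instance a mean chamfer distance between the transformed histology bone edges and the μCT slice), the step sizes and the cooling schedule are assumptions, not the authors' pipeline.

      import math, random

      def anneal(cost, x0, step, t0=1.0, cooling=0.995, n_iter=5000, seed=0):
          # Generic simulated annealing refinement, e.g. x = (rotation, shift_x, shift_y)
          # of a rigid 2D transform and cost(x) = mean chamfer distance of the bone edges.
          rng = random.Random(seed)
          x, e = list(x0), cost(x0)
          best_x, best_e, temp = list(x), e, t0
          for _ in range(n_iter):
              i = rng.randrange(len(x))              # perturb one randomly chosen parameter
              cand = list(x)
              cand[i] += rng.gauss(0.0, step[i])
              e_new = cost(cand)
              if e_new < e or rng.random() < math.exp(-(e_new - e) / temp):
                  x, e = cand, e_new
                  if e < best_e:
                      best_x, best_e = list(x), e
              temp *= cooling                        # geometric cooling schedule
          return best_x, best_e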

  5. Synthetic line-of-sight algorithms for hardware-in-the-loop simulations

    NASA Astrophysics Data System (ADS)

    Richard, Henri; Lowman, Alan; Ballard, Gary

    2005-05-01

    During the flight of guided submunitions, translation of the missile with respect to the designated aimpoint causes a rotation of the Line-of-Sight (LOS) in inertial space. Large transmit arrays or 5-axis CARCO tables are used to perform True LOS (TLOS) for in-band simulations. Both of these TLOS approaches have cost or fidelity issues for RF seekers. Typically, RF Hardware-in-the-Loop (HWIL) simulations of these guided submunitions are mounted on a Three Axes Rotational Flight Simulator (TARFS), which is not capable of translation, and utilize a 2 to 3 seeker-beamwidth transmit array. This necessitates using a Synthetic Line-of-Sight (SLOS) algorithm with the TARFS in order to maintain the proper line-of-sight orientation during all phases of flight, which typically include large variations in LOS motion. This paper presents a simple explanation depicting TLOS and SLOS (TARFS) geometry and the seamless boresight/target SLOS algorithm utilized in AMRDEC's RF4 facility for a test article flight profile. In conclusion, the paper summarizes the current state of SLOS algorithms utilized at AMRDEC and the challenges and possible solutions envisioned in the near future.

  6. Blocking Moving Window algorithm: Conditioning multiple-point simulations to hydrogeological data

    NASA Astrophysics Data System (ADS)

    Alcolea, Andres; Renard, Philippe

    2010-08-01

    Connectivity constraints and measurements of state variables contain valuable information on aquifer architecture. Multiple-point (MP) geostatistics allow one to simulate aquifer architectures, presenting a predefined degree of global connectivity. In this context, connectivity data are often disregarded. The conditioning to state variables is usually carried out by minimizing a suitable objective function (i.e., solving an inverse problem). However, the discontinuous nature of lithofacies distributions and of the corresponding objective function discourages the use of traditional sensitivity-based inversion techniques. This work presents the Blocking Moving Window algorithm (BMW), aimed at overcoming these limitations by conditioning MP simulations to hydrogeological data such as connectivity and heads. The BMW evolves iteratively until convergence: (1) MP simulation of lithofacies from geological/geophysical data and connectivity constraints, where only a random portion of the domain is simulated at every iteration (i.e., the blocking moving window, whose size is user-defined); (2) population of hydraulic properties at the intrafacies; (3) simulation of state variables; and (4) acceptance or rejection of the MP simulation depending on the quality of the fit of measured state variables. The outcome is a stack of MP simulations that (1) resemble a prior geological model depicted by a training image, (2) honor lithological data and connectivity constraints, (3) correlate with geophysical data, and (4) fit available measurements of state variables well. We analyze the performance of the algorithm on a 2-D synthetic example. Results show that (1) the size of the blocking moving window controls the behavior of the BMW, (2) conditioning to state variable data enhances dramatically the initial simulation (which accounts for geological/geophysical data only), and (3) connectivity constraints speed up the convergence but do not enhance the stack if the number of iterations

  7. Real-time dynamics simulation of the Cassini spacecraft using DARTS. Part 1: Functional capabilities and the spatial algebra algorithm

    NASA Technical Reports Server (NTRS)

    Jain, A.; Man, G. K.

    1993-01-01

    This paper describes the Dynamics Algorithms for Real-Time Simulation (DARTS) real-time hardware-in-the-loop dynamics simulator for the National Aeronautics and Space Administration's Cassini spacecraft. The spacecraft model consists of a central flexible body with a number of articulated rigid-body appendages. The demanding performance requirements from the spacecraft control system require the use of a high fidelity simulator for control system design and testing. The DARTS algorithm provides a new algorithmic and hardware approach to the solution of this hardware-in-the-loop simulation problem. It is based upon the efficient spatial algebra dynamics for flexible multibody systems. A parallel and vectorized version of this algorithm is implemented on a low-cost, multiprocessor computer to meet the simulation timing requirements.

  8. A fast algorithm for voxel-based deterministic simulation of X-ray imaging

    NASA Astrophysics Data System (ADS)

    Li, Ning; Zhao, Hua-Xia; Cho, Sang-Hyun; Choi, Jung-Gil; Kim, Myoung-Hee

    2008-04-01

    The deterministic method based on the ray tracing technique is known as a powerful alternative to the Monte Carlo approach for virtual X-ray imaging. Algorithm speed is a critical issue when simulating hundreds of images, notably for tomographic acquisition or, even more so, for X-ray radiographic video recordings. We present an algorithm for voxel-based deterministic simulation of X-ray imaging using voxel-driven forward and backward perspective projection operations and minimum bounding rectangles (MBRs). The algorithm is fast, easy to implement, and creates high-quality simulated radiographs. As a result, simulated radiographs can typically be obtained in a fraction of a second on a simple personal computer.
    Program summary:
    Program title: X-ray
    Catalogue identifier: AEAD_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAD_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 416 257
    No. of bytes in distributed program, including test data, etc.: 6 018 263
    Distribution format: tar.gz
    Programming language: C (Visual C++)
    Computer: Any PC. Tested on DELL Precision 380 based on a Pentium D 3.20 GHz processor with 3.50 GB of RAM
    Operating system: Windows XP
    Classification: 14, 21.1
    Nature of problem: Radiographic simulation of voxelized objects based on the ray tracing technique.
    Solution method: The core of the simulation is a fast routine for the calculation of ray-box intersections and minimum bounding rectangles, together with voxel-driven forward and backward perspective projection operations.
    Restrictions: Memory constraints. There are three programs in all. A. Program for test 3.1(1): Object and detector have axis-aligned orientation; B. Program for test 3.1(2): Object in arbitrary orientation; C. Program for test 3.2: Simulation of X-ray video
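
    The elementary operation behind the voxel-driven projections mentioned above is the ray/axis-aligned-box intersection. The following sketch shows the standard slab method for that test; it is illustrative only and is not taken from the distributed program.

      def ray_box_intersect(origin, direction, box_min, box_max, eps=1e-12):
          # Slab method: return (t_near, t_far) where the ray enters and leaves the
          # axis-aligned box, or None if it misses; direction need not be normalized.
          t_near, t_far = -float("inf"), float("inf")
          for o, d, lo, hi in zip(origin, direction, box_min, box_max):
              if abs(d) < eps:                      # ray parallel to this pair of slabs
                  if o < lo or o > hi:
                      return None
                  continue
              t1, t2 = (lo - o) / d, (hi - o) / d
              if t1 > t2:
                  t1, t2 = t2, t1
              t_near, t_far = max(t_near, t1), min(t_far, t2)
              if t_near > t_far:
                  return None
          return t_near, t_far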

  9. Effects of Temperature Control Algorithms on Transport Properties and Kinetics in Molecular Dynamics Simulations.

    PubMed

    Basconi, Joseph E; Shirts, Michael R

    2013-07-01

    Temperature control algorithms in molecular dynamics (MD) simulations are necessary to study isothermal systems. However, these thermostatting algorithms alter the velocities of the particles and thus modify the dynamics of the system with respect to the microcanonical ensemble, which could potentially lead to thermostat-dependent dynamical artifacts. In this study, we investigate how six well-established thermostat algorithms applied with different coupling strengths and to different degrees of freedom affect the dynamics of various molecular systems. We consider dynamic processes occurring on different time scales by measuring translational and rotational self-diffusion as well as the shear viscosity of water, diffusion of a small molecule solvated in water, and diffusion and the dynamic structure factor of a polymer chain in water. All of these properties are significantly dampened by thermostat algorithms which randomize particle velocities, such as the Andersen thermostat and Langevin dynamics, when strong coupling is used. For the solvated small molecule and polymer, these dampening effects are reduced somewhat if the thermostats are applied to the solvent alone, such that the solute's temperature is maintained only through thermal contact with solvent particles. Algorithms which operate by scaling the velocities, such as the Berendsen thermostat, the stochastic velocity rescaling approach of Bussi and co-workers, and the Nosé-Hoover thermostat, yield transport properties that are statistically indistinguishable from those of the microcanonical ensemble, provided they are applied globally, i.e. coupled to the system's kinetic energy. When coupled to local kinetic energies, a velocity scaling thermostat can have dampening effects comparable to a velocity randomizing method, as we observe when a massive Nosé-Hoover coupling scheme is used to simulate water. Correct dynamical properties, at least those studied in this paper, are obtained with the Berendsen
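
    As a reminder of what a global velocity-scaling thermostat does, the sketch below implements a single Berendsen weak-coupling rescaling step applied to the whole system; the array shapes, units and constants are assumptions made for illustration.

      import numpy as np

      def berendsen_rescale(velocities, masses, target_T, dt, tau, k_B=1.380649e-23):
          # One Berendsen step: scale all velocities toward the target temperature.
          # velocities: (N, 3) array, masses: (N,) array (SI units assumed).
          kinetic = 0.5 * np.sum(masses[:, None] * velocities**2)
          n_dof = 3 * len(masses)                   # ignoring constraints and COM removal
          current_T = 2.0 * kinetic / (n_dof * k_B)
          scale = np.sqrt(1.0 + (dt / tau) * (target_T / current_T - 1.0))
          return velocities * scale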

  10. An assessment of coupling algorithms for nuclear reactor core physics simulations

    DOE PAGES

    Hamilton, Steven; Berrill, Mark; Clarno, Kevin; Pawlowski, Roger; Toth, Alex; Kelley, C. T.; Evans, Thomas; Philip, Bobby

    2016-04-01

    Here we evaluate the performance of multiphysics coupling algorithms applied to a light water nuclear reactor core simulation. The simulation couples the k-eigenvalue form of the neutron transport equation with heat conduction and subchannel flow equations. We compare Picard iteration (block Gauss–Seidel) to Anderson acceleration and multiple variants of preconditioned Jacobian-free Newton–Krylov (JFNK). The performance of the methods is evaluated over a range of energy group structures and core power levels. A novel physics-based approximation to a Jacobian-vector product was developed to mitigate the impact of expensive on-line cross section processing steps. Numerical simulations demonstrating the efficiency of JFNK and Anderson acceleration relative to standard Picard iteration are performed on a 3D model of a nuclear fuel assembly. Finally, both criticality (k-eigenvalue) and critical boron search problems are considered.

  11. An assessment of coupling algorithms for nuclear reactor core physics simulations

    NASA Astrophysics Data System (ADS)

    Hamilton, Steven; Berrill, Mark; Clarno, Kevin; Pawlowski, Roger; Toth, Alex; Kelley, C. T.; Evans, Thomas; Philip, Bobby

    2016-04-01

    This paper evaluates the performance of multiphysics coupling algorithms applied to a light water nuclear reactor core simulation. The simulation couples the k-eigenvalue form of the neutron transport equation with heat conduction and subchannel flow equations. We compare Picard iteration (block Gauss-Seidel) to Anderson acceleration and multiple variants of preconditioned Jacobian-free Newton-Krylov (JFNK). The performance of the methods is evaluated over a range of energy group structures and core power levels. A novel physics-based approximation to a Jacobian-vector product has been developed to mitigate the impact of expensive on-line cross section processing steps. Numerical simulations demonstrating the efficiency of JFNK and Anderson acceleration relative to standard Picard iteration are performed on a 3D model of a nuclear fuel assembly. Both criticality (k-eigenvalue) and critical boron search problems are considered.
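
    The baseline against which the other methods are compared is plain Picard (block Gauss-Seidel) iteration on the coupled system written as a fixed point x = G(x), where G chains the single-physics solves (transport, heat conduction, subchannel flow). The sketch below shows that skeleton with optional under-relaxation; the operator G, tolerance and relaxation factor are placeholders, and Anderson acceleration or JFNK would replace the simple update.

      import numpy as np

      def picard(G, x0, tol=1e-8, max_iter=200, relax=1.0):
          # Fixed-point (Picard) iteration for x = G(x) with optional under-relaxation.
          x = np.asarray(x0, float)
          for it in range(max_iter):
              gx = G(x)
              residual = np.linalg.norm(gx - x) / max(np.linalg.norm(gx), 1.0)
              x = (1.0 - relax) * x + relax * gx
              if residual < tol:
                  return x, it + 1
          return x, max_iter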

  12. Parallel Simulation Algorithms for the Three Dimensional Strong-Strong Beam-Beam Interaction

    SciTech Connect

    Kabel, A. C. (SLAC)

    2008-03-17

    The strong-strong beam-beam effect is one of the most important effects limiting the luminosity of ring colliders. Little is known about it analytically, so most studies utilize numerical simulations. The two-dimensional realm is readily accessible to workstation-class computers (cf., e.g., [1, 2]), while three dimensions, which add effects such as phase averaging and the hourglass effect, require vastly higher amounts of CPU time. Thus, parallelization of three-dimensional simulation techniques is imperative; in the following we discuss parallelization strategies and describe the algorithms used in our simulation code, which reaches almost linear scaling of performance vs. the number of CPUs for typical setups.

  13. MODA: a new algorithm to compute optical depths in multidimensional hydrodynamic simulations

    NASA Astrophysics Data System (ADS)

    Perego, Albino; Gafton, Emanuel; Cabezón, Rubén; Rosswog, Stephan; Liebendörfer, Matthias

    2014-08-01

    Aims: We introduce the multidimensional optical depth algorithm (MODA) for the calculation of optical depths in approximate multidimensional radiative transport schemes, equally applicable to neutrinos and photons. Motivated by (but not limited to) neutrino transport in three-dimensional simulations of core-collapse supernovae and neutron star mergers, our method makes no assumptions about the geometry of the matter distribution, apart from expecting optically transparent boundaries. Methods: Based on local information about opacities, the algorithm figures out an escape route that tends to minimize the optical depth without assuming any predefined paths for radiation. Its adaptivity makes it suitable for a variety of astrophysical settings with complicated geometry (e.g., core-collapse supernovae, compact binary mergers, tidal disruptions, star formation, etc.). We implement the MODA algorithm both in a Eulerian hydrodynamics code with a fixed, uniform grid and in an SPH code, where we use a tree structure that is otherwise used for searching neighbors and calculating gravity. Results: In a series of numerical experiments, we compare the MODA results with analytically known solutions. We also use snapshots from actual 3D simulations and compare the results of MODA with those obtained with other methods, such as the global and local ray-by-ray method. It turns out that MODA achieves excellent accuracy at a moderate computational cost. In the appendix we also discuss implementation details and parallelization strategies.

  14. Object-Oriented/Data-Oriented Design of a Direct Simulation Monte Carlo Algorithm

    NASA Technical Reports Server (NTRS)

    Liechty, Derek S.

    2014-01-01

    Over the past decade, there has been much progress towards improved phenomenological modeling and algorithmic updates for the direct simulation Monte Carlo (DSMC) method, which provides a probabilistic physical simulation of gas flows. These improvements have largely been based on the work of the originator of the DSMC method, Graeme Bird. Of primary importance are improved chemistry, internal energy, and physics modeling and a reduction in time to solution. These allow for an expanded range of possible solutions in altitude and velocity space. NASA's current production code, the DSMC Analysis Code (DAC), is well-established and based on Bird's 1994 algorithms written in Fortran 77 and has proven difficult to upgrade. A new DSMC code is being developed in the C++ programming language using object-oriented and data-oriented design paradigms to facilitate the inclusion of the recent improvements and future development activities. The development efforts on the new code, the Multiphysics Algorithm with Particles (MAP), are described, and performance comparisons are made with DAC.

  15. A scalable parallel algorithm for large-scale reactive force-field molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Nomura, Ken-ichi; Kalia, Rajiv K.; Nakano, Aiichiro; Vashishta, Priya

    2008-01-01

    A scalable parallel algorithm has been designed to perform multimillion-atom molecular dynamics (MD) simulations, in which first principles-based reactive force fields (ReaxFF) describe chemical reactions. Environment-dependent bond orders associated with atomic pairs and their derivatives are reused extensively with the aid of linked-list cells to minimize the computation associated with atomic n-tuple interactions (n ⩽ 4 explicitly and n ⩽ 6 due to chain-rule differentiation). These n-tuple computations are made modular, so that they can be reconfigured effectively with a multiple time-step integrator to further reduce the computation time. Atomic charges are updated dynamically with an electronegativity equalization method, by iteratively minimizing the electrostatic energy with the charge-neutrality constraint. The ReaxFF-MD simulation algorithm has been implemented on parallel computers based on a spatial decomposition scheme combined with distributed n-tuple data structures. The measured parallel efficiency of the parallel ReaxFF-MD algorithm is 0.998 on 131,072 IBM BlueGene/L processors for a 1.01 billion-atom RDX system.
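
    The linked-list cells mentioned above are a standard device for O(N) neighbor finding. The following serial sketch builds such a cell list; it illustrates the data structure only and is unrelated to the parallel ReaxFF implementation.

      import numpy as np

      def build_cell_list(positions, box, cutoff):
          # Bin particles into cells of edge >= cutoff so that neighbors of a particle
          # can only lie in its own cell or in the adjacent cells (orthorhombic box,
          # positions assumed to lie in [0, box)).
          box = np.asarray(box, float)
          n_cells = np.maximum((box // cutoff).astype(int), 1)
          cell_size = box / n_cells
          head = -np.ones(n_cells.prod(), dtype=int)   # first particle in each cell (-1 = empty)
          nxt = -np.ones(len(positions), dtype=int)    # next particle in the same cell
          for i, r in enumerate(positions):
              c = np.minimum((np.asarray(r) / cell_size).astype(int), n_cells - 1)
              idx = (c[0] * n_cells[1] + c[1]) * n_cells[2] + c[2]
              nxt[i] = head[idx]
              head[idx] = i
          return head, nxt, n_cells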

  16. Different genetic algorithms and the evolution of specialization: a study with groups of simulated neural robots.

    PubMed

    Ferrauto, Tomassino; Parisi, Domenico; Di Stefano, Gabriele; Baldassarre, Gianluca

    2013-01-01

    Organisms that live in groups, from microbial symbionts to social insects and schooling fish, exhibit a number of highly efficient cooperative behaviors, often based on role taking and specialization. These behaviors are relevant not only for the biologist but also for the engineer interested in decentralized collective robotics. We address these phenomena by carrying out experiments with groups of two simulated robots controlled by neural networks whose connection weights are evolved by using genetic algorithms. These algorithms and controllers are well suited to autonomously find solutions for decentralized collective robotic tasks based on principles of self-organization. The article first presents a taxonomy of role-taking and specialization mechanisms related to evolved neural network controllers. Then it introduces two cooperation tasks, which can be accomplished by either role taking or specialization, and uses these tasks to compare four different genetic algorithms to evaluate their capacity to evolve a suitable behavioral strategy, which depends on the task demands. Interestingly, only one of the four algorithms, which appears to have more biological plausibility, is capable of evolving role taking or specialization when they are needed. The results are relevant for both collective robotics and biology, as they can provide useful hints on the different processes that can lead to the emergence of specialization in robots and organisms. PMID:23514239

  17. Different genetic algorithms and the evolution of specialization: a study with groups of simulated neural robots.

    PubMed

    Ferrauto, Tomassino; Parisi, Domenico; Di Stefano, Gabriele; Baldassarre, Gianluca

    2013-01-01

    Organisms that live in groups, from microbial symbionts to social insects and schooling fish, exhibit a number of highly efficient cooperative behaviors, often based on role taking and specialization. These behaviors are relevant not only for the biologist but also for the engineer interested in decentralized collective robotics. We address these phenomena by carrying out experiments with groups of two simulated robots controlled by neural networks whose connection weights are evolved by using genetic algorithms. These algorithms and controllers are well suited to autonomously find solutions for decentralized collective robotic tasks based on principles of self-organization. The article first presents a taxonomy of role-taking and specialization mechanisms related to evolved neural network controllers. Then it introduces two cooperation tasks, which can be accomplished by either role taking or specialization, and uses these tasks to compare four different genetic algorithms to evaluate their capacity to evolve a suitable behavioral strategy, which depends on the task demands. Interestingly, only one of the four algorithms, which appears to have more biological plausibility, is capable of evolving role taking or specialization when they are needed. The results are relevant for both collective robotics and biology, as they can provide useful hints on the different processes that can lead to the emergence of specialization in robots and organisms.

  18. Design and simulation of imaging algorithm for Fresnel telescopy imaging system

    NASA Astrophysics Data System (ADS)

    Lv, Xiao-yu; Liu, Li-ren; Yan, Ai-min; Sun, Jian-feng; Dai, En-wen; Li, Bing

    2011-06-01

    Fresnel telescopy (short for Fresnel telescopy full-aperture synthesized imaging ladar) is a new high-resolution active laser imaging technique. This technique is a variant of Fourier telescopy and optical scanning holography that uses Fresnel zone plates to scan the target. Compared with synthetic aperture imaging ladar (SAIL), Fresnel telescopy avoids the problems of time and space synchronization, which decreases the technical difficulty. In the one-dimensional (1D) scanning operational mode for a moving target, after the time-to-space transformation, the spatial distribution of the sampled data is non-uniform because of the relative motion between the target and the scanning beam. However, since the fast Fourier transform (FFT) is used in the subsequent matched-filtering imaging algorithm, the distribution of the data should be regular and uniform. We use resampling interpolation to transform the data into a two-dimensional (2D) uniform distribution, and the accuracy of the resampling interpolation process mainly affects the reconstruction results. Imaging algorithms with different resampling interpolation schemes have been analyzed, and computer simulations are also given. We obtain good reconstruction results of the target, which proves that the designed imaging algorithm for the Fresnel telescopy imaging system is effective. This work has substantial practical value and offers significant benefit for high-resolution Fresnel telescopy laser imaging ladar systems.

  19. Generalized SIMD algorithm for efficient EM-PIC simulations on modern CPUs

    NASA Astrophysics Data System (ADS)

    Fonseca, Ricardo; Decyk, Viktor; Mori, Warren; Silva, Luis

    2012-10-01

    There are several relevant plasma physics scenarios where highly nonlinear and kinetic processes dominate. Further understanding of these scenarios is generally explored through relativistic particle-in-cell codes such as OSIRIS [1], but this algorithm is computationally intensive, and efficient use of high-end parallel HPC systems, exploiting all levels of parallelism available, is required. In particular, most modern CPUs include a single-instruction-multiple-data (SIMD) vector unit that can significantly speed up the calculations. In this work we present a generalized PIC-SIMD algorithm that is shown to work efficiently with different CPU (AMD, Intel, IBM) and vector unit types (2-8 way, single/double). Details on the algorithm will be given, including the vectorization strategy and memory access. We will also present performance results for the various hardware variants analyzed, focusing on floating point efficiency. Finally, we will discuss the applicability of this type of algorithm for EM-PIC simulations on GPGPU architectures [2]. [1] R. A. Fonseca et al., LNCS 2331, 342 (2002); [2] V. K. Decyk, T. V. Singh, Comput. Phys. Commun. 182, 641-648 (2011)

  20. Exclusive CHIPS-TPT algorithms for simulation of neutron-nuclear reactions

    NASA Astrophysics Data System (ADS)

    Kosov, Mikhail; Savin, Dmitriy

    2016-09-01

    The CHIPS-TPT physics library for simulation of neutron-nuclear reactions on the new exclusive level is being developed in CFAR VNIIA. The exclusive modeling conserves energy, momentum and quantum numbers in each neutron-nuclear interaction. The CHIPS-TPT algorithms are based on the exclusive CHIPS library, which is compatible with Geant4. Special CHIPS-TPT physics lists in the Geant4 format are provided. The calculation time for an exclusive CHIPS-TPT simulation is comparable to the time of the corresponding inclusive Geant4-HP simulation and much faster for mono-isotopic simulations. In addition to the reduction of the deposited energy fluctuations, which is a consequence of the energy conservation, the CHIPS-TPT libraries provide a possibility of simulation of the secondary particles correlation, e.g. secondary gammas or n-γ correlations, and of the Doppler broadening of the γ-lines in the simulated spectra, which can be measured by germanium detectors.

  1. Assessment of Rainfall-Runoff Simulation Model Based on Satellite Algorithm

    NASA Astrophysics Data System (ADS)

    Nemati, A. R.; Zakeri Niri, M.; Moazami, S.

    2015-12-01

    Simulation of the rainfall-runoff process is one of the most important research fields in hydrology and water resources. Generally, the models used in this area are divided into two categories, conceptual and data-driven. In this study, a conceptual model and two data-driven models have been used to simulate the rainfall-runoff process in the Tamer sub-catchment, located in the Gorganroud watershed in Iran. The conceptual model used is HEC-HMS, and the data-driven models are a multi-layer perceptron (MLP) neural network and support vector regression (SVR). In addition to simulating the rainfall-runoff process using the recorded land precipitation, the performance of four satellite precipitation algorithms, namely CMORPH, PERSIANN, TRMM 3B42 and TRMM 3B42RT, was studied. In the simulation of the rainfall-runoff process, the calibration and accuracy assessment of the models were performed based on the satellite data. The results of the research, based on the three criteria of correlation coefficient (R), root mean square error (RMSE) and mean absolute error (MAE), showed that the SVR and MLP models could simulate the runoff relatively well, but in the simulation of the maximum flow values the error of the models increased.
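
    The three evaluation criteria named above are straightforward to compute; a minimal sketch, assuming observed and simulated runoff series of equal length, is shown below.

      import numpy as np

      def evaluate(observed, simulated):
          # Correlation coefficient (R), RMSE and MAE between observed and simulated runoff.
          obs, sim = np.asarray(observed, float), np.asarray(simulated, float)
          r = np.corrcoef(obs, sim)[0, 1]
          rmse = np.sqrt(np.mean((sim - obs) ** 2))
          mae = np.mean(np.abs(sim - obs))
          return r, rmse, mae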

  2. Parallel two-level domain decomposition based Jacobi-Davidson algorithms for pyramidal quantum dot simulation

    NASA Astrophysics Data System (ADS)

    Zhao, Tao; Hwang, Feng-Nan; Cai, Xiao-Chuan

    2016-07-01

    We consider a quintic polynomial eigenvalue problem arising from the finite volume discretization of a quantum dot simulation problem. The problem is solved by the Jacobi-Davidson (JD) algorithm. Our focus is on how to achieve the quadratic convergence of JD in a way that is not only efficient but also scalable when the number of processor cores is large. For this purpose, we develop a projected two-level Schwarz preconditioned JD algorithm that exploits multilevel domain decomposition techniques. The pyramidal quantum dot calculation is carefully studied to illustrate the efficiency of the proposed method. Numerical experiments confirm that the proposed method has a good scalability for problems with hundreds of millions of unknowns on a parallel computer with more than 10,000 processor cores.

  3. New Algorithms for Computing the Time-to-Collision in Freeway Traffic Simulation Models

    PubMed Central

    Hou, Jia; List, George F.; Guo, Xiucheng

    2014-01-01

    Ways to estimate the time-to-collision are explored. In the context of traffic simulation models, classical lane-based notions of vehicle location are relaxed and new, fast, and efficient algorithms are examined. With trajectory conflicts being the main focus, computational procedures are explored which use a two-dimensional coordinate system to track the vehicle trajectories and assess conflicts. Vector-based kinematic variables are used to support the calculations. Algorithms based on boxes, circles, and ellipses are considered. Their performance is evaluated in the context of computational complexity and solution time. Results from these analyses suggest promise for effective and efficient analyses. A combined computation process is found to be very effective. PMID:25628650
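
    For the circle-based variant mentioned above, the time-to-collision under a constant-velocity assumption reduces to the smallest non-negative root of a quadratic in the relative motion. The sketch below shows that calculation; it is a simplified illustration, not the authors' combined computation process.

      import math

      def time_to_collision(p1, v1, r1, p2, v2, r2):
          # Earliest t >= 0 at which two constant-velocity discs touch, or None if never.
          px, py = p2[0] - p1[0], p2[1] - p1[1]     # relative position
          vx, vy = v2[0] - v1[0], v2[1] - v1[1]     # relative velocity
          rad = r1 + r2
          a = vx * vx + vy * vy
          b = 2.0 * (px * vx + py * vy)
          c = px * px + py * py - rad * rad
          if c <= 0.0:
              return 0.0                             # already overlapping
          if a == 0.0:
              return None                            # no relative motion
          disc = b * b - 4.0 * a * c
          if disc < 0.0:
              return None                            # closest approach still farther than rad
          t = (-b - math.sqrt(disc)) / (2.0 * a)
          return t if t >= 0.0 else None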

  4. VFSARES-a very fast simulated annealing FORTRAN program for interpretation of 1-D DC resistivity sounding data from various electrode arrays

    NASA Astrophysics Data System (ADS)

    Sharma, Shashi Prakash

    2012-05-01

    Employing the very fast simulated annealing (VFSA) global optimization technique, a FORTRAN program is developed for the interpretation of one-dimensional direct current resistivity sounding data from various electrode arrays. The VFSA optimization depicts various good-fitting solutions (models) after analyzing a large number of models within a predefined model space. The various models whose responses fit the observed response reasonably well lie along a narrow, elongated region of the model space. Therefore, instead of selecting the global model on the basis of the lowest misfit error, it is better to analyze histograms and probability density functions (PDFs) of such models for depicting the global model. In a multidimensional model space, the most appropriate region from which to select suitable models to compute the mean model is the one in which the PDF is larger in comparison to the other regions of the model space. Initially, accepted models with misfit errors less than the predefined threshold value are selected and lognormal PDFs for each model parameter are computed. Subsequently, the mean model and uncertainties are computed using the models in which each model parameter has a PDF more than the defined threshold value (>68.2%). The mean model computed from such models is very close to the actual subsurface structure (global model). It is observed that the mean model computed using models with a PDF more than 95% for each model parameter yields the actual model. Moreover, the uncertainty computed using models with such a high PDF, lying in a small region of the model space, will be small and should not be considered the actual global uncertainty. Resistivity sounding (synthetic and field) data over different subsurface structures are optimized using the VFSA program developed in the present study. Optimization results reveal that the actual model is always located within the estimated uncertainty of the mean model. Since the approach requires much less computing time (a few
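
    The post-processing described above amounts to keeping the accepted models whose misfit falls below a threshold and summarizing them statistically. The sketch below is a much-simplified version that computes a log-space mean model and multiplicative uncertainties from such models; the additional PDF-level filtering (68.2% or 95%) used in the paper is omitted, and the function is not part of the FORTRAN program.

      import numpy as np

      def mean_model(models, misfits, misfit_threshold):
          # models: (n_models, n_parameters) array of positive parameters,
          # misfits: per-model misfit errors.  Returns the lognormal mean model,
          # the log-space standard deviation per parameter, and the number kept.
          models = np.asarray(models, float)
          keep = np.asarray(misfits) < misfit_threshold
          logs = np.log(models[keep])
          mean = np.exp(logs.mean(axis=0))
          sigma = logs.std(axis=0)
          return mean, sigma, int(keep.sum())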

  5. A Linked Simulation-Optimization (LSO) Model for Conjunctive Irrigation Management using Clonal Selection Algorithm

    NASA Astrophysics Data System (ADS)

    Islam, Sirajul; Talukdar, Bipul

    2016-08-01

    A Linked Simulation-Optimization (LSO) model based on a Clonal Selection Algorithm (CSA) was formulated for application in conjunctive irrigation management. A series of measures were considered for reducing the computational burden associated with the LSO approach. Certain modifications were introduced into the formulated CSA so as to decrease the number of function evaluations. In addition, a simple problem-specific code for a two-dimensional groundwater flow simulation model was developed. The flow model was further simplified by a novel approach of area reduction, in order to save computational time in simulation. The LSO model was applied in the irrigation command of the Pagladiya Dam Project in Assam, India. With a view to evaluating the performance of the CSA, a Genetic Algorithm (GA) was used as a comparison base. The results from the CSA compared well with those from the GA. In fact, the CSA was found to consume less computational time than the GA while converging to the optimal solution, due to the modifications introduced into it.

  6. A Linked Simulation-Optimization (LSO) Model for Conjunctive Irrigation Management using Clonal Selection Algorithm

    NASA Astrophysics Data System (ADS)

    Islam, Sirajul; Talukdar, Bipul

    2016-09-01

    A Linked Simulation-Optimization (LSO) model based on a Clonal Selection Algorithm (CSA) was formulated for application in conjunctive irrigation management. A series of measures were considered for reducing the computational burden associated with the LSO approach. Certain modifications were introduced into the formulated CSA so as to decrease the number of function evaluations. In addition, a simple problem-specific code for a two-dimensional groundwater flow simulation model was developed. The flow model was further simplified by a novel approach of area reduction, in order to save computational time in simulation. The LSO model was applied in the irrigation command of the Pagladiya Dam Project in Assam, India. With a view to evaluating the performance of the CSA, a Genetic Algorithm (GA) was used as a comparison base. The results from the CSA compared well with those from the GA. In fact, the CSA was found to consume less computational time than the GA while converging to the optimal solution, due to the modifications introduced into it.

  7. A parallel algorithm for transient solid dynamics simulations with contact detection

    SciTech Connect

    Attaway, S.; Hendrickson, B.; Plimpton, S.; Gardner, D.; Vaughan, C.; Heinstein, M.; Peery, J.

    1996-06-01

    Solid dynamics simulations with Lagrangian finite elements are used to model a wide variety of problems, such as the calculation of impact damage to shipping containers for nuclear waste and the analysis of vehicular crashes. Using parallel computers for these simulations has been hindered by the difficulty of searching efficiently for material surface contacts in parallel. A new parallel algorithm for calculation of arbitrary material contacts in finite element simulations has been developed and implemented in the PRONTO3D transient solid dynamics code. This paper will explore some of the issues involved in developing efficient, portable, parallel finite element models for nonlinear transient solid dynamics simulations. The contact-detection problem poses interesting challenges for efficient implementation of a solid dynamics simulation on a parallel computer. The finite element mesh is typically partitioned so that each processor owns a localized region of the finite element mesh. This mesh partitioning is optimal for the finite element portion of the calculation since each processor must communicate only with the few connected neighboring processors that share boundaries with the decomposed mesh. However, contacts can occur between surfaces that may be owned by any two arbitrary processors. Hence, a global search across all processors is required at every time step to search for these contacts. Load-imbalance can become a problem since the finite element decomposition divides the volumetric mesh evenly across processors but typically leaves the surface elements unevenly distributed. In practice, these complications have been limiting factors in the performance and scalability of transient solid dynamics on massively parallel computers. In this paper the authors present a new parallel algorithm for contact detection that overcomes many of these limitations.

  8. Transient dynamics simulations: Parallel algorithms for contact detection and smoothed particle hydrodynamics

    SciTech Connect

    Hendrickson, B.; Plimpton, S.; Attaway, S.; Swegle, J.

    1996-09-01

    Transient dynamics simulations are commonly used to model phenomena such as car crashes, underwater explosions, and the response of shipping containers to high-speed impacts. Physical objects in such a simulation are typically represented by Lagrangian meshes because the meshes can move and deform with the objects as they undergo stress. Fluids (gasoline, water) or fluid-like materials (earth) in the simulation can be modeled using the techniques of smoothed particle hydrodynamics. Implementing a hybrid mesh/particle model on a massively parallel computer poses several difficult challenges. One challenge is to simultaneously parallelize and load-balance both the mesh and particle portions of the computation. A second challenge is to efficiently detect the contacts that occur within the deforming mesh and between mesh elements and particles as the simulation proceeds. These contacts impart forces to the mesh elements and particles which must be computed at each timestep to accurately capture the physics of interest. In this paper we describe new parallel algorithms for smoothed particle hydrodynamics and contact detection which turn out to have several key features in common. Additionally, we describe how to join the new algorithms with traditional parallel finite element techniques to create an integrated particle/mesh transient dynamics simulation. Our approach to this problem differs from previous work in that we use three different parallel decompositions, a static one for the finite element analysis and dynamic ones for particles and for contact detection. We have implemented our ideas in a parallel version of the transient dynamics code PRONTO-3D and present results for the code running on a large Intel Paragon.

  9. A simulation based method to assess inversion algorithms for transverse relaxation data

    NASA Astrophysics Data System (ADS)

    Ghosh, Supriyo; Keener, Kevin M.; Pan, Yong

    2008-04-01

    NMR relaxometry is a very useful tool for understanding various chemical and physical phenomena in complex multiphase systems. A Carr-Purcell-Meiboom-Gill (CPMG) [P.T. Callaghan, Principles of Nuclear Magnetic Resonance Microscopy, Clarendon Press, Oxford, 1991] experiment is an easy and quick way to obtain the transverse relaxation constant (T2) at low field. Most samples have a distribution of T2 values. Extraction of this distribution of T2s from the noisy decay data is essentially an ill-posed inverse problem. Various inversion approaches have been used to date to solve this problem. A major issue in using an inversion algorithm is determining how accurate the computed distribution is. A systematic analysis of an inversion algorithm, UPEN [G.C. Borgia, R.J.S. Brown, P. Fantazzini, Uniform-penalty inversion of multiexponential decay data, Journal of Magnetic Resonance 132 (1998) 65-77; G.C. Borgia, R.J.S. Brown, P. Fantazzini, Uniform-penalty inversion of multiexponential decay data II. Data spacing, T2 data, systematic data errors, and diagnostics, Journal of Magnetic Resonance 147 (2000) 273-285] was performed by means of simulated CPMG data generation. Through our simulation technique and statistical analyses, the effects of various experimental parameters on the computed distribution were evaluated. We converged to the true distribution by matching up the inversion results from a series of true decay data and noisy simulated data. In addition to simulation studies, the same approach was also applied to real experimental data to support the simulation results.
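
    The first step of the assessment procedure described above is the generation of synthetic CPMG decays from a known T2 distribution. A minimal sketch, with placeholder parameters and simple additive Gaussian noise, is shown below; it is not the authors' data-generation code.

      import numpy as np

      def synthetic_cpmg(t2_values, weights, echo_times, noise_sigma=0.01, seed=0):
          # Multiexponential decay S(t) = sum_i w_i * exp(-t / T2_i) plus Gaussian noise.
          rng = np.random.default_rng(seed)
          t2 = np.asarray(t2_values, float)
          w = np.asarray(weights, float)
          t = np.asarray(echo_times, float)
          clean = (w[None, :] * np.exp(-t[:, None] / t2[None, :])).sum(axis=1)
          return clean + rng.normal(0.0, noise_sigma, size=t.shape)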

  10. Fokker-Planck-DSMC algorithm for simulations of rarefied gas flows

    NASA Astrophysics Data System (ADS)

    Gorji, M. Hossein; Jenny, Patrick

    2015-04-01

    A Fokker-Planck based particle Monte Carlo algorithm was devised recently for simulations of rarefied gas flows by the authors [1-3]. The main motivation behind the Fokker-Planck (FP) model is computational efficiency, which could be gained due to the fact that the resulting stochastic processes are continuous in velocity space. This property of the model leads to simulations where the computational cost becomes independent of the Knudsen number (Kn) [3]. However, the Fokker-Planck model, which can be seen as a diffusion approximation of the Boltzmann equation, becomes less accurate as Kn increases. In this study we propose a hybrid Fokker-Planck-Direct Simulation Monte Carlo (FP-DSMC) solution method, which is applicable for the whole range of Kn. The objective of this algorithm is to retain the efficiency of the FP scheme at low Kn (Kn ≪ 1) and to employ conventional DSMC at high Kn (Kn ≫ 1). Since the computational particles employed by the FP model represent the same data as in DSMC, the coupling between the two methods is straightforward. The new ingredient is a switching criterion which would ideally result in a hybrid scheme with the efficiency of the FP method and the accuracy of DSMC for the whole Kn-range. Here, we adopt the number of collisions in a given computational cell and for a given time step size as a decision criterion in order to switch between the FP model and DSMC. For assessment of the hybrid algorithm, different test cases including flow impingement and flow expansion through a slit were studied. Both accuracy and efficiency of the model are shown to be excellent for the presented test cases.
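
    The switching idea described above can be written as a small per-cell decision rule. The sketch below estimates the number of collisions a particle undergoes during one time step from the local number density, cross section and mean relative speed, and selects FP or DSMC accordingly; the formula, argument names and threshold are assumptions for illustration, not the authors' criterion in detail.

      def choose_solver(n_particles, particle_weight, cell_volume,
                        mean_rel_speed, sigma_total, dt, threshold=10.0):
          # Collision rate per particle nu ~ n * sigma * g; many collisions per time step
          # indicates a near-continuum cell where the cheaper Fokker-Planck model is used,
          # few collisions indicates DSMC.  The threshold value is an assumption.
          number_density = particle_weight * n_particles / cell_volume
          collisions_per_step = number_density * sigma_total * mean_rel_speed * dt
          return "FP" if collisions_per_step > threshold else "DSMC"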

  11. A simulation environment for modeling and development of algorithms for ensembles of mobile microsystems

    NASA Astrophysics Data System (ADS)

    Fink, Jonathan; Collins, Tom; Kumar, Vijay; Mostofi, Yasamin; Baras, John; Sadler, Brian

    2009-05-01

    The vision for the Micro Autonomous Systems Technologies (MAST) program is to develop autonomous, multifunctional, collaborative ensembles of agile, mobile microsystems to enhance tactical situational awareness in urban and complex terrain for small unit operations. Central to this vision is the ability of multiple, heterogeneous autonomous assets to function as a single cohesive unit that is adaptable, responsive to human commands and resilient to adversarial conditions. This paper represents an effort to develop a simulation environment for studying control, sensing, communication, perception, and planning methodologies and algorithms.

  12. Application of Genetic Algorithms in the New Air Traffic Management Simulation System

    NASA Astrophysics Data System (ADS)

    Guo, Hang

    Air traffic control systems are facing more and more serious congestion because of the increasing air traffic flow in China. To solve this problem, we have developed a New Air Traffic Management Simulation System based on the ideology of New Air Traffic Management and the concept of Free Flight. This paper first analyses the overall design idea and the module functions, and then uses genetic algorithms to give detailed methods for resolving conflicts along airways and for scheduling aircraft takeoff and landing sequences in the terminal airport area. Finally, the anticipated effect has been achieved by computing with simulated data in the system.

  13. Parallel simulations of Grover's algorithm for closest match search in neutron monitor data

    NASA Astrophysics Data System (ADS)

    Kussainov, Arman; White, Yelena

    We are studying the parallel implementations of Grover's closest match search algorithm for neutron monitor data analysis. This includes data formatting, and matching quantum parameters to a conventional structure of a chosen programming language and selected experimental data type. We have employed several workload distribution models based on acquired data and search parameters. As a result of these simulations, we have an understanding of potential problems that may arise during configuration of real quantum computational devices and the way they could run tasks in parallel. The work was supported by the Science Committee of the Ministry of Science and Education of the Republic of Kazakhstan Grant #2532/GF3.

  14. AVR microcontroller simulator for software implemented hardware fault tolerance algorithms research

    NASA Astrophysics Data System (ADS)

    Piotrowski, Adam; Tarnowski, Szymon; Napieralski, Andrzej

    2008-01-01

    The reliability of new, advanced electronic systems becomes a serious problem especially in places like accelerators and synchrotrons, where sophisticated digital devices operate close to radiation sources. One of the possible solutions to harden a microprocessor-based system is a strict programming approach known as Software Implemented Hardware Fault Tolerance. Unfortunately, in real environments it is not possible to perform precise and accurate tests of the new algorithms due to hardware limitations. This paper presents an AVR-family microcontroller simulator project equipped with appropriate monitoring and SEU injection systems.

  15. Simulation System of Car Crash Test in C-NCAP Analysis Based on an Improved Apriori Algorithm*

    NASA Astrophysics Data System (ADS)

    Xiang, LI

    In order to analyze car crash tests in C-NCAP, an improved algorithm based on the Apriori algorithm is presented in this paper. The new algorithm is implemented with a vertical data layout, breadth-first searching, and intersecting. It takes advantage of the efficiency of the vertical data layout and of intersection, and prunes candidate frequent itemsets as Apriori does. Finally, the new algorithm is applied in a simulation system for car crash test analysis. The results show that the mined relations affect the C-NCAP test results and can provide a reference for automotive design.
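
    The vertical-layout-plus-intersection idea is the same one used in Eclat-style miners: each item is stored with the set of transaction IDs (tidset) that contain it, and the support of a candidate itemset is the size of the intersection of tidsets. The sketch below illustrates that mechanism; it is a generic miner, not the paper's improved Apriori.

      from collections import defaultdict
      from itertools import combinations

      def frequent_itemsets(transactions, min_support):
          # Vertical data layout: item -> tidset; candidate support by tidset intersection.
          tidsets = defaultdict(set)
          for tid, items in enumerate(transactions):
              for item in items:
                  tidsets[item].add(tid)
          current = {frozenset([i]): t for i, t in tidsets.items() if len(t) >= min_support}
          result = dict(current)
          while current:
              next_level = {}
              keys = list(current)
              for a, b in combinations(keys, 2):
                  cand = a | b
                  if len(cand) == len(a) + 1 and cand not in next_level:
                      tids = current[a] & current[b]      # support via intersection
                      if len(tids) >= min_support:
                          next_level[cand] = tids
              result.update(next_level)
              current = next_level
          return {tuple(sorted(k)): len(v) for k, v in result.items()}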

  16. Quantifying the Influence of Search Algorithm Uncertainty on the Estimated Parameters of Environmental Models

    NASA Astrophysics Data System (ADS)

    Aziz, S.; Matott, L.

    2012-12-01

    The uncertain parameters of a given environmental model are often inferred from an optimization procedure that seeks to minimize discrepancies between simulated output and observed data. However, optimization search procedures can potentially yield different results across multiple calibration trials. For example, global search procedures like the genetic algorithm and simulated annealing are driven by inherent randomness that can result in variable inter-trial behavior. Despite this potential for variability in search algorithm performance, practitioners are reluctant to run multiple trials of an algorithm because of the added computational burden. As a result, estimated parameters are subject to an unrecognized source of uncertainty that could potentially bias or contaminate subsequent predictive analyses. In this study, a series of numerical experiments were performed to explore the influence of search algorithm uncertainty on parameter estimates. The experiments applied multiple trials of the simulated annealing algorithm to a suite of calibration problems involving watershed rainfall-runoff, groundwater flow, and subsurface contaminant transport. Results suggest that linking the simulated annealing algorithm with an adaptive range-reduction technique can significantly improve algorithm effectiveness while simultaneously reducing inter-trial variability. Therefore these range-reduction procedures appear to be a suitable mechanism for minimizing algorithm variance and improving the consistency of parameter estimates.
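
    A minimal sketch of the combination discussed above, simulated annealing with periodic range reduction, is given below: between outer rounds the search box is re-centered and shrunk around the best solution found so far. The shrink factor, schedules and proposal widths are assumptions, not the study's specific technique.

      import numpy as np

      def sa_with_range_reduction(objective, lower, upper, n_outer=10, n_inner=300,
                                  shrink=0.5, t0=1.0, cooling=0.99, seed=0):
          # Simulated annealing whose parameter ranges contract around the incumbent best
          # between outer rounds, which tends to reduce inter-trial variability.
          rng = np.random.default_rng(seed)
          lower, upper = np.asarray(lower, float), np.asarray(upper, float)
          lo, hi = lower.copy(), upper.copy()
          x = lo + rng.random(lo.size) * (hi - lo)
          f = objective(x)
          best_x, best_f = x.copy(), f
          for _ in range(n_outer):
              temp = t0
              for _ in range(n_inner):
                  cand = np.clip(x + rng.normal(0.0, 0.1 * (hi - lo)), lo, hi)
                  f_cand = objective(cand)
                  if f_cand < f or rng.random() < np.exp(-(f_cand - f) / temp):
                      x, f = cand, f_cand
                      if f < best_f:
                          best_x, best_f = x.copy(), f
                  temp *= cooling
              half = shrink * (hi - lo) / 2.0        # adaptive range reduction step
              lo, hi = np.maximum(lower, best_x - half), np.minimum(upper, best_x + half)
              x, f = best_x.copy(), best_f
          return best_x, best_f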

  17. Performance Simulation for Unit-memory Convolutional Codes with Byte-oriented Viterbi Decoding Algorithm

    NASA Technical Reports Server (NTRS)

    Vo, Q. D.

    1984-01-01

    A software package developed to simulate the performance of the byte-oriented Viterbi decoding algorithm for unit-memory (UM) codes on both 3-bit and 4-bit quantized AWGN channels is described. The simulation is shown to require negligible memory and less time than that for the RTMBEP algorithm, although they both provide similar performance in terms of symbol-error probability. This makes it possible to compute the symbol-error probability of large codes and to determine the signal-to-noise ratio required to achieve a bit error rate (BER) of 0.000001 for corresponding concatenated systems. A (7, 10/48) UM code, 10-bit Reed-Solomon code combination achieves the required BER at 1.08 dB for a 3-bit quantized channel and at 0.91 dB for a 4-bit quantized channel.

  18. Worm Algorithm simulations of the hole dynamics in the t-J model

    NASA Astrophysics Data System (ADS)

    Prokof'ev, Nikolai; Ruebenacker, Oliver

    2001-03-01

    In the limit of small J << t, relevant for HTSC materials and Mott-Hubbard systems, computer simulations have to be performed for large systems and at low temperatures. Despite convincing evidence against spin-charge separation obtained by various methods for J > 0.4t, there is an ongoing argument that at smaller J spin-charge separation is still possible. Worm algorithm Monte Carlo simulations of the hole Green function for 0.1 < J/t < 0.4 were performed on lattices with up to 32x32 sites, and at temperature J/T = 40 (for the largest size). Spectral analysis reveals a single, delta-function sharp quasiparticle peak at the lowest edge of the spectrum and two distinct peaks above it at all studied J. We rule out the possibility of spin-charge separation in this parameter range, and present, apparently, the hole spectral function in the thermodynamic limit.

  19. Quantum mechanical NMR simulation algorithm for protein-size spin systems

    NASA Astrophysics Data System (ADS)

    Edwards, Luke J.; Savostyanov, D. V.; Welderufael, Z. T.; Lee, Donghan; Kuprov, Ilya

    2014-06-01

    Nuclear magnetic resonance spectroscopy is one of the few remaining areas of physical chemistry for which polynomially scaling quantum mechanical simulation methods have not so far been available. In this communication we adapt the restricted state space approximation to protein NMR spectroscopy and illustrate its performance by simulating common 2D and 3D liquid state NMR experiments (including accurate description of relaxation processes using Bloch-Redfield-Wangsness theory) on isotopically enriched human ubiquitin - a protein containing over a thousand nuclear spins forming an irregular polycyclic three-dimensional coupling lattice. The algorithm uses careful tailoring of the density operator space to only include nuclear spin states that are populated to a significant extent. The reduced state space is generated by analysing spin connectivity and decoherence properties: rapidly relaxing states as well as correlations between topologically remote spins are dropped from the basis set.

  20. Bayesian parameter inference and model selection by population annealing in systems biology.

    PubMed

    Murakami, Yohei

    2014-01-01

    Parameter inference and model selection are very important for mathematical modeling in systems biology. Bayesian statistics can be used to conduct both parameter inference and model selection. In particular, the framework known as approximate Bayesian computation is often used for parameter inference and model selection in systems biology. However, Monte Carlo methods need to be used to compute Bayesian posterior distributions. In addition, the posterior distributions of parameters are sometimes almost uniform or very similar to their prior distributions. In such cases, it is difficult to choose one specific parameter value with high credibility as the representative value of the distribution. To overcome these problems, we introduced one of the population Monte Carlo algorithms, population annealing. Although population annealing is usually used in statistical mechanics, we showed that it can also be used to compute Bayesian posterior distributions in the approximate Bayesian computation framework. To deal with the unidentifiability of representative parameter values, we proposed running simulations with the parameter ensemble sampled from the posterior distribution, named the "posterior parameter ensemble". We showed that population annealing is an efficient and convenient algorithm for generating a posterior parameter ensemble. We also showed that simulations with the posterior parameter ensemble can not only reproduce the data used for parameter inference but also capture and predict data that were not used for parameter inference. Lastly, we introduced the marginal likelihood in the approximate Bayesian computation framework for Bayesian model selection. We showed that population annealing enables us to compute the marginal likelihood in the approximate Bayesian computation framework and to conduct model selection based on the Bayes factor.
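
    A rough sketch of the general idea on a deliberately simple toy model (inferring the mean of a Gaussian): approximate Bayesian computation with an annealed, shrinking tolerance, resampling and jitter, which yields a posterior parameter ensemble. The model, prior, schedule and kernel width are illustrative assumptions, not the paper's settings.

        import numpy as np

        rng = np.random.default_rng(0)
        data = rng.normal(2.0, 1.0, size=50)           # "observed" data, true mean = 2

        def simulate(theta, n=50):
            return rng.normal(theta, 1.0, size=n)

        def distance(x, y):
            return abs(x.mean() - y.mean())

        n_particles = 500
        particles = rng.uniform(-5, 5, n_particles)     # draws from a flat prior
        for eps in [2.0, 1.0, 0.5, 0.2, 0.1]:           # annealed tolerance schedule
            # Weight particles by whether their simulated data fall within the tolerance
            weights = np.array([
                float(distance(simulate(th), data) < eps) for th in particles
            ])
            if weights.sum() == 0:
                break
            weights /= weights.sum()
            # Resample the surviving particles and jitter them to restore diversity
            idx = rng.choice(n_particles, size=n_particles, p=weights)
            particles = particles[idx] + rng.normal(0, 0.1, n_particles)

        print("posterior mean ~", particles.mean())     # posterior parameter ensemble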

  1. Simulation of Long Lived Tracers Using an Improved Empirically Based Two-Dimensional Model Transport Algorithm

    NASA Technical Reports Server (NTRS)

    Fleming, E. L.; Jackman, C. H.; Stolarski, R. S.; Considine, D. B.

    1998-01-01

    We have developed a new empirically-based transport algorithm for use in our GSFC two-dimensional transport and chemistry model. The new algorithm contains planetary wave statistics, and parameterizations to account for the effects due to gravity waves and equatorial Kelvin waves. As such, this scheme utilizes significantly more information compared to our previous algorithm which was based only on zonal mean temperatures and heating rates. The new model transport captures much of the qualitative structure and seasonal variability observed in long lived tracers, such as: isolation of the tropics and the southern hemisphere winter polar vortex; the well mixed surf-zone region of the winter sub-tropics and mid-latitudes; the latitudinal and seasonal variations of total ozone; and the seasonal variations of mesospheric H2O. The model also indicates a double peaked structure in methane associated with the semiannual oscillation in the tropical upper stratosphere. This feature is similar in phase but is significantly weaker in amplitude compared to the observations. The model simulations of carbon-14 and strontium-90 are in good agreement with observations, both in simulating the peak in mixing ratio at 20-25 km, and the decrease with altitude in mixing ratio above 25 km. We also find mostly good agreement between modeled and observed age of air determined from SF6 outside of the northern hemisphere polar vortex. However, observations inside the vortex reveal significantly older air compared to the model. This is consistent with the model deficiencies in simulating CH4 in the northern hemisphere winter high latitudes and illustrates the limitations of the current climatological zonal mean model formulation. The propagation of seasonal signals in water vapor and CO2 in the lower stratosphere showed general agreement in phase, and the model qualitatively captured the observed amplitude decrease in CO2 from the tropics to midlatitudes. However, the simulated seasonal

  2. Evaluation of algorithms for microperfusion assessment by fast simulations of laser Doppler power spectral density.

    PubMed

    Wojtkiewicz, S; Liebert, A; Rix, H; Maniewski, R

    2011-12-21

    In classical laser Doppler (LD) perfusion measurements, zeroth- and first-order moments of the power spectral density of the LD signal are utilized for the calculation of a signal corresponding to the concentration, speed and flow of red blood cells (RBCs). We have analysed the nonlinearities of the moments in relation to RBC speed distributions, parameters of filters utilized in LD instruments and the signal-to-noise ratio. We have developed a new method for fast simulation of the spectrum of the LD signal. The method is based on a superposition of analytically calculated Doppler shift probability distributions derived for the assumed light scattering phase function. We have validated the method by a comparison of the analytically calculated spectra with results of Monte Carlo (MC) simulations. For the semi-infinite, homogeneous medium and the single Doppler scattering regime, the analytical calculation describes LD spectra with the same accuracy as the MC simulation. The method allows for simulating the LD signal in time domain and furthermore analysing the index of perfusion for the assumed wavelength of the light, optical properties of the tissue and concentration of RBCs. Fast simulations of the LD signal in time domain and its frequency spectrum can be utilized in applications where knowledge of the LD photocurrent is required, e.g. in the development of detectors for tissue microperfusion monitoring or in measurements of the LD autocorrelation function for perfusion measurements. The presented fast method for LD spectra calculation can be used as a tool for evaluation of signal processing algorithms used in the LD method and/or for the development of new algorithms of the LD flowmetry and imaging. We analysed LD spectra obtained by analytical calculations using a classical algorithm applied in classical LD perfusion measurements. We observed nonlinearity of the first moment M₁ for low and high speeds of particles (v < 2 mm s⁻¹, v > 10 mm s⁻¹). It was
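
    As a reminder of how the classical moment-based indices are formed, a small sketch using an entirely synthetic power spectral density (the bandwidth and spectral shape below are arbitrary assumptions):

        import numpy as np

        f = np.linspace(20.0, 20e3, 4000)        # Hz, an assumed LD detection bandwidth
        psd = np.exp(-f / 3e3) + 1e-4            # synthetic power spectral density

        df = f[1] - f[0]
        m0 = np.sum(psd) * df                    # zeroth moment ~ RBC concentration
        m1 = np.sum(f * psd) * df                # first moment  ~ flow (concentration x speed)
        speed_index = m1 / m0                    # ~ mean RBC speed

        print(f"M0={m0:.3g}  M1={m1:.3g}  M1/M0={speed_index:.3g} Hz")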

  3. Evaluation of algorithms for microperfusion assessment by fast simulations of laser Doppler power spectral density

    NASA Astrophysics Data System (ADS)

    Wojtkiewicz, S.; Liebert, A.; Rix, H.; Maniewski, R.

    2011-12-01

    In classical laser Doppler (LD) perfusion measurements, zeroth- and first-order moments of the power spectral density of the LD signal are utilized for the calculation of a signal corresponding to the concentration, speed and flow of red blood cells (RBCs). We have analysed the nonlinearities of the moments in relation to RBC speed distributions, parameters of filters utilized in LD instruments and the signal-to-noise ratio. We have developed a new method for fast simulation of the spectrum of the LD signal. The method is based on a superposition of analytically calculated Doppler shift probability distributions derived for the assumed light scattering phase function. We have validated the method by a comparison of the analytically calculated spectra with results of Monte Carlo (MC) simulations. For the semi-infinite, homogeneous medium and the single Doppler scattering regime, the analytical calculation describes LD spectra with the same accuracy as the MC simulation. The method allows for simulating the LD signal in time domain and furthermore analysing the index of perfusion for the assumed wavelength of the light, optical properties of the tissue and concentration of RBCs. Fast simulations of the LD signal in time domain and its frequency spectrum can be utilized in applications where knowledge of the LD photocurrent is required, e.g. in the development of detectors for tissue microperfusion monitoring or in measurements of the LD autocorrelation function for perfusion measurements. The presented fast method for LD spectra calculation can be used as a tool for evaluation of signal processing algorithms used in the LD method and/or for the development of new algorithms of the LD flowmetry and imaging. We analysed LD spectra obtained by analytical calculations using a classical algorithm applied in classical LD perfusion measurements. We observed nonlinearity of the first moment M₁ for low and high speeds of particles (v < 2 mm s⁻¹, v > 10 mm s⁻¹). It was also

  4. A Multirate Variable-timestep Algorithm for N-body Solar System Simulations with Collisions

    NASA Astrophysics Data System (ADS)

    Sharp, P. W.; Newman, W. I.

    2016-03-01

    We present and analyze the performance of a new algorithm for performing accurate simulations of the solar system when collisions between massive bodies and test particles are permitted. The orbital motion of all bodies at all times is integrated using a high-order variable-timestep explicit Runge-Kutta Nyström (ERKN) method. The variation in the timestep ensures that the orbital motion of test particles on eccentric orbits or close to the Sun is calculated accurately. The test particles are divided into groups and each group is integrated using a different sequence of timesteps, giving a multirate algorithm. The ERKN method uses a high-order continuous approximation to the position and velocity when checking for collisions across a step. We give a summary of the extensive testing of our algorithm. In our largest simulation—that of the Sun, the planets Earth to Neptune and 100,000 test particles over 100 million years—the relative error in the energy after 100 million years was of the order of 10⁻¹¹.

  5. Molecular simulation workflows as parallel algorithms: the execution engine of Copernicus, a distributed high-performance computing platform.

    PubMed

    Pronk, Sander; Pouya, Iman; Lundborg, Magnus; Rotskoff, Grant; Wesén, Björn; Kasson, Peter M; Lindahl, Erik

    2015-06-01

    Computational chemistry and other simulation fields are critically dependent on computing resources, but few problems scale efficiently to the hundreds of thousands of processors available in current supercomputers, particularly for molecular dynamics. This has turned into a bottleneck as new hardware generations primarily provide more processing units rather than making individual units much faster, which simulation applications are addressing by increasingly focusing on sampling with algorithms such as free-energy perturbation, Markov state modeling, metadynamics, or milestoning. All these rely on combining results from multiple simulations into a single observation. They are potentially powerful approaches that aim to predict experimental observables directly, but this comes at the expense of added complexity in selecting sampling strategies and keeping track of dozens to thousands of simulations and their dependencies. Here, we describe how the distributed execution framework Copernicus allows the expression of such algorithms in generic workflows: dataflow programs. Because dataflow algorithms explicitly state dependencies of each constituent part, algorithms only need to be described on a conceptual level, after which the execution is maximally parallel. The fully automated execution facilitates the optimization of these algorithms with adaptive sampling, where undersampled regions are automatically detected and targeted without user intervention. We show how several such algorithms can be formulated for computational chemistry problems, and how they are executed efficiently with many loosely coupled simulations using either distributed or parallel resources with Copernicus.

  6. Molecular simulation workflows as parallel algorithms: the execution engine of Copernicus, a distributed high-performance computing platform.

    PubMed

    Pronk, Sander; Pouya, Iman; Lundborg, Magnus; Rotskoff, Grant; Wesén, Björn; Kasson, Peter M; Lindahl, Erik

    2015-06-01

    Computational chemistry and other simulation fields are critically dependent on computing resources, but few problems scale efficiently to the hundreds of thousands of processors available in current supercomputers, particularly for molecular dynamics. This has turned into a bottleneck as new hardware generations primarily provide more processing units rather than making individual units much faster, which simulation applications are addressing by increasingly focusing on sampling with algorithms such as free-energy perturbation, Markov state modeling, metadynamics, or milestoning. All these rely on combining results from multiple simulations into a single observation. They are potentially powerful approaches that aim to predict experimental observables directly, but this comes at the expense of added complexity in selecting sampling strategies and keeping track of dozens to thousands of simulations and their dependencies. Here, we describe how the distributed execution framework Copernicus allows the expression of such algorithms in generic workflows: dataflow programs. Because dataflow algorithms explicitly state dependencies of each constituent part, algorithms only need to be described on a conceptual level, after which the execution is maximally parallel. The fully automated execution facilitates the optimization of these algorithms with adaptive sampling, where undersampled regions are automatically detected and targeted without user intervention. We show how several such algorithms can be formulated for computational chemistry problems, and how they are executed efficiently with many loosely coupled simulations using either distributed or parallel resources with Copernicus.

  7. Developing a Moving-Solid Algorithm for Simulating Tsunamis Induced by Rock Sliding

    NASA Astrophysics Data System (ADS)

    Chuang, M.; Wu, T.; Huang, C.; Wang, C.; Chu, C.; Chen, M.

    2012-12-01

    The landslide-generated tsunami is one of the most devastating natural hazards. However, the involvement of a moving obstacle and dynamic free-surface movement makes the numerical simulation a difficult task. To describe the fluid motion, we use a modified two-step projection method to decouple the velocity and pressure fields with a 3D LES turbulence model. The free-surface movement is tracked by the volume of fluid (VOF) method (Wu, 2004). To describe the effect of the moving obstacle on the fluid, a new moving-solid algorithm (MSA) is developed. We combine ideas from the immersed boundary method (IBM) and the partial-cell treatment (PCT) for specifying the contacting speed on the solid face and for representing the obstacle blocking effect, respectively. By using the concept of IBM, the cell-center and cell-face velocities can be specified arbitrarily. Because we move the solid obstacle on a fixed grid, the boundary of the solid seldom coincides with the cell faces, which makes it inappropriate to assign the solid boundary velocity to the cell faces. To overcome this problem, the PCT is adopted. Using this algorithm, the solid surface conceptually coincides with the cell faces, and the cell-face velocity can be specified as the obstacle velocity. The advantage of this algorithm is a stable pressure field, which is extremely important for coupling with a force-balancing model that describes the solid motion. This model is therefore able to simulate incompressible high-speed fluid motion. To describe the solid motion, the DEM (Discrete Element Method) is adopted. The new-time solid movement can be predicted and divided into translation and rotation based on Newton's equations and Euler's equations, respectively. The detail of the moving-solid algorithm is presented in this paper. This model is then applied to studying the rock-slide-generated tsunami. The results are validated with the laboratory data (Liu and Wu, 2005

  8. Linear-scaling source-sink algorithm for simulating time-resolved quantum transport and superconductivity

    NASA Astrophysics Data System (ADS)

    Weston, Joseph; Waintal, Xavier

    2016-04-01

    We report on a "source-sink" algorithm which allows one to calculate time-resolved physical quantities from a general nanoelectronic quantum system (described by an arbitrary time-dependent quadratic Hamiltonian) connected to infinite electrodes. Although mathematically equivalent to the nonequilibrium Green's function formalism, the approach is based on the scattering wave functions of the system. It amounts to solving a set of generalized Schrödinger equations that include an additional "source" term (coming from the time-dependent perturbation) and an absorbing "sink" term (the electrodes). The algorithm execution time scales linearly with both system size and simulation time, allowing one to simulate large systems (currently around 10⁶ degrees of freedom) and/or large times (currently around 10⁵ times the smallest time scale of the system). As an application we calculate the current-voltage characteristics of a Josephson junction for both short and long junctions, and recover the multiple Andreev reflection physics. We also discuss two intrinsically time-dependent situations: the relaxation time of a Josephson junction after a quench of the voltage bias, and the propagation of voltage pulses through a Josephson junction. In the case of a ballistic, long Josephson junction, we predict that a fast voltage pulse creates an oscillatory current whose frequency is controlled by the Thouless energy of the normal part. A similar effect is found for short junctions; a voltage pulse produces an oscillating current which, in the absence of electromagnetic environment, does not relax.

  9. Simulation of the Predictive Control Algorithm for Container Crane Operation using Matlab Fuzzy Logic Tool Box

    NASA Technical Reports Server (NTRS)

    Richardson, Albert O.

    1997-01-01

    This research has investigated the use of fuzzy logic, via the Matlab Fuzzy Logic Tool Box, to design optimized controller systems. The engineering system for which the controller was designed and simulated was the container crane. The fuzzy logic algorithm that was investigated was the 'predictive control' algorithm. The plant dynamics of the container crane is representative of many important systems including robotic arm movements. The container crane that was investigated had a trolley motor and hoist motor. Total distance to be traveled by the trolley was 15 meters. The obstruction height was 5 meters. Crane height was 17.8 meters. Trolley mass was 7500 kilograms. Load mass was 6450 kilograms. Maximum trolley and rope velocities were 1.25 meters per sec. and 0.3 meters per sec., respectively. The fuzzy logic approach allowed the inclusion, in the controller model, of performance indices that are more effectively defined in linguistic terms. These include 'safety' and 'cargo swaying'. Two fuzzy inference systems were implemented using the Matlab simulation package, namely the Mamdani system (which relates fuzzy input variables to fuzzy output variables), and the Sugeno system (which relates fuzzy input variables to a crisp output variable). It is found that the Sugeno FIS is better suited to including aspects of those plant dynamics whose mathematical relationships can be determined.
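
    A minimal zero-order Sugeno sketch, with invented membership functions and rule constants (not the controller designed in the study), showing how the crisp output is formed as a weighted average of rule consequents:

        def tri(x, a, b, c):
            """Triangular membership function with peak at b."""
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x < b else (c - x) / (c - b)

        def sugeno_accel(sway_deg):
            # Rule firing strengths from three fuzzy sets on the load sway angle
            neg = tri(sway_deg, -10.0, -5.0, 0.0)
            zero = tri(sway_deg, -5.0, 0.0, 5.0)
            pos = tri(sway_deg, 0.0, 5.0, 10.0)
            # Zero-order Sugeno consequents: a crisp trolley acceleration per rule (m/s^2)
            w, z = [neg, zero, pos], [+0.3, 0.0, -0.3]
            total = sum(w)
            return sum(wi * zi for wi, zi in zip(w, z)) / total if total else 0.0

        for angle in (-6.0, -1.0, 0.0, 2.5, 7.0):
            print(angle, "deg ->", round(sugeno_accel(angle), 3), "m/s^2")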

  10. Simulation of Propellant Loading System Senior Design Implement in Computer Algorithm

    NASA Technical Reports Server (NTRS)

    Bandyopadhyay, Alak

    2010-01-01

    Propellant loading from the Storage Tank to the External Tank is one of the very important and time consuming pre-launch ground operations for the launch vehicle. The propellant loading system is a complex integrated system involving many physical components such as the storage tank filled with cryogenic fluid at a very low temperature, the long pipe line connecting the storage tank with the external tank, the external tank along with the flare stack, and vent systems for releasing the excess fuel. Some of the very important parameters useful for design purposes are the prediction of pre-chill time, loading time, amount of fuel lost, the maximum pressure rise, etc. The physics involved in the mathematical modeling is quite complex because the process is unsteady, there is phase change as some of the fuel changes from liquid to gas, and there is conjugate heat transfer in the pipe walls as well as between the solid and fluid regions. The simulation is also very tedious and time consuming. Overall, this is a complex system, and the objective of the work is student involvement in the parametric study and optimization of the numerical modeling toward the design of such a system. The students first have to become familiar with and understand the physical process, the related mathematics and the numerical algorithm. The work involves exploring (i) improved algorithms to make the transient simulation computationally efficient (reduced CPU time) and (ii) a parametric study to evaluate design parameters by changing the operational conditions.

  11. Physics and Algorithm Enhancements for a Validated MCNP/X Monte Carlo Simulation Tool, Phase VII

    SciTech Connect

    McKinney, Gregg W

    2012-07-17

    Currently the US lacks an end-to-end (i.e., source-to-detector) radiation transport simulation code with predictive capability for the broad range of DHS nuclear material detection applications. For example, gaps in the physics, along with inadequate analysis algorithms, make it difficult for Monte Carlo simulations to provide a comprehensive evaluation, design, and optimization of proposed interrogation systems. With the development and implementation of several key physics and algorithm enhancements, along with needed improvements in evaluated data and benchmark measurements, the MCNP/X Monte Carlo codes will provide designers, operators, and systems analysts with a validated tool for developing state-of-the-art active and passive detection systems. This project is currently in its seventh year (Phase VII). This presentation will review thirty enhancements that have been implemented in MCNPX over the last 3 years and were included in the 2011 release of version 2.7.0. These improvements include 12 physics enhancements, 4 source enhancements, 8 tally enhancements, and 6 other enhancements. Examples and results will be provided for each of these features. The presentation will also discuss the eight enhancements that will be migrated into MCNP6 over the upcoming year.

  12. An Adaptive Multigrid Algorithm for Simulating Solid Tumor Growth Using Mixture Models

    PubMed Central

    Wise, S.M.; Lowengrub, J.S.; Cristini, V.

    2010-01-01

    In this paper we give the details of the numerical solution of a three-dimensional multispecies diffuse interface model of tumor growth, which was derived in (Wise et al., J. Theor. Biol. 253 (2008)) and used to study the development of glioma in (Frieboes et al., NeuroImage 37 (2007)) and tumor invasion in (Bearer et al., Cancer Research, 69 (2009)) and (Frieboes et al., J. Theor. Biol. 264 (2010)). The model has a thermodynamic basis, is related to recently developed mixture models, and is capable of providing a detailed description of tumor progression. It utilizes a diffuse interface approach, whereby sharp tumor boundaries are replaced by narrow transition layers that arise due to differential adhesive forces among the cell-species. The model consists of fourth-order nonlinear advection-reaction-diffusion equations (of Cahn-Hilliard-type) for the cell-species coupled with reaction-diffusion equations for the substrate components. Numerical solution of the model is challenging because the equations are coupled, highly nonlinear, and numerically stiff. In this paper we describe a fully adaptive, nonlinear multigrid/finite difference method for efficiently solving the equations. We demonstrate the convergence of the algorithm and we present simulations of tumor growth in 2D and 3D that demonstrate the capabilities of the algorithm in accurately and efficiently simulating the progression of tumors with complex morphologies.

  13. An Adaptive Multigrid Algorithm for Simulating Solid Tumor Growth Using Mixture Models.

    PubMed

    Wise, S M; Lowengrub, J S; Cristini, V

    2011-01-01

    In this paper we give the details of the numerical solution of a three-dimensional multispecies diffuse interface model of tumor growth, which was derived in (Wise et al., J. Theor. Biol. 253 (2008)) and used to study the development of glioma in (Frieboes et al., NeuroImage 37 (2007)) and tumor invasion in (Bearer et al., Cancer Research, 69 (2009)) and (Frieboes et al., J. Theor. Biol. 264 (2010)). The model has a thermodynamic basis, is related to recently developed mixture models, and is capable of providing a detailed description of tumor progression. It utilizes a diffuse interface approach, whereby sharp tumor boundaries are replaced by narrow transition layers that arise due to differential adhesive forces among the cell-species. The model consists of fourth-order nonlinear advection-reaction-diffusion equations (of Cahn-Hilliard-type) for the cell-species coupled with reaction-diffusion equations for the substrate components. Numerical solution of the model is challenging because the equations are coupled, highly nonlinear, and numerically stiff. In this paper we describe a fully adaptive, nonlinear multigrid/finite difference method for efficiently solving the equations. We demonstrate the convergence of the algorithm and we present simulations of tumor growth in 2D and 3D that demonstrate the capabilities of the algorithm in accurately and efficiently simulating the progression of tumors with complex morphologies.

  14. Numerical stability of relativistic beam multidimensional PIC simulations employing the Esirkepov algorithm

    SciTech Connect

    Godfrey, Brendan B.; Vay, Jean-Luc

    2013-09-01

    Rapidly growing numerical instabilities routinely occur in multidimensional particle-in-cell computer simulations of plasma-based particle accelerators, astrophysical phenomena, and relativistic charged particle beams. Reducing instability growth to acceptable levels has necessitated higher resolution grids, high-order field solvers, current filtering, and so on, except for certain ratios of the time step to the axial cell size, for which numerical growth rates and saturation levels are reduced substantially. This paper derives and solves the cold beam dispersion relation for numerical instabilities in multidimensional, relativistic, electromagnetic particle-in-cell programs employing either the standard or the Cole-Karkkainnen finite difference field solver on a staggered mesh and the common Esirkepov current-gathering algorithm. Good overall agreement is achieved with previously reported results of the WARP code. In particular, the existence of select time steps for which instabilities are minimized is explained. Additionally, an alternative field interpolation algorithm is proposed for which instabilities are almost completely eliminated for a particular time step in ultra-relativistic simulations.

  15. A novel parallel-rotation algorithm for atomistic Monte Carlo simulation of dense polymer systems

    NASA Astrophysics Data System (ADS)

    Santos, S.; Suter, U. W.; Müller, M.; Nievergelt, J.

    2001-06-01

    We develop and test a new elementary Monte Carlo move for use in the off-lattice simulation of polymer systems. This novel Parallel-Rotation algorithm (ParRot) permits very efficient moves of torsion angles deep inside long chains in melts. The parallel-rotation move is extremely simple and is also demonstrated to be computationally efficient and appropriate for Monte Carlo simulation. The ParRot move does not affect the orientation of those parts of the chain outside the moving unit. The move consists of a concerted rotation around four adjacent skeletal bonds. No assumption is made concerning the backbone geometry other than that bond lengths and bond angles are held constant during the elementary move. Properly weighted sampling techniques are needed for ensuring detailed balance because the new move involves a correlated change in four degrees of freedom along the chain backbone. The ParRot move is supplemented with the classical Metropolis Monte Carlo, the Continuum-Configurational-Bias, and Reptation techniques in an isothermal-isobaric Monte Carlo simulation of melts of short and long chains. Comparisons are made with the capabilities of other Monte Carlo techniques to move the torsion angles in the middle of the chains. We demonstrate that ParRot constitutes a highly promising Monte Carlo move for the treatment of long polymer chains in the off-lattice simulation of realistic models of dense polymer systems.

  16. An Implicit Algorithm for the Numerical Simulation of Shape-Memory Alloys

    SciTech Connect

    Becker, R; Stolken, J; Jannetti, C; Bassani, J

    2003-10-16

    Shape-memory alloys (SMA) have the potential to be used in a variety of interesting applications due to their unique properties of pseudoelasticity and the shape-memory effect. However, in order to design SMA devices efficiently, a physics-based constitutive model is required to accurately simulate the behavior of shape-memory alloys. The scope of this work is to extend the numerical capabilities of the SMA constitutive model developed by Jannetti et al. (2003) to handle large-scale polycrystalline simulations. The constitutive model is implemented within the finite-element software ABAQUS/Standard using a user defined material subroutine, or UMAT. To improve the efficiency of the numerical simulations, so that polycrystalline specimens of shape-memory alloys can be modeled, a fully implicit algorithm has been implemented to integrate the constitutive equations. Using an implicit integration scheme increases the efficiency of the UMAT over the previously implemented explicit integration method by a factor of more than 100 for single crystal simulations.

  17. A divide-conquer-recombine algorithmic paradigm for large spatiotemporal quantum molecular dynamics simulations.

    PubMed

    Shimojo, Fuyuki; Hattori, Shinnosuke; Kalia, Rajiv K; Kunaseth, Manaschai; Mou, Weiwei; Nakano, Aiichiro; Nomura, Ken-ichi; Ohmura, Satoshi; Rajak, Pankaj; Shimamura, Kohei; Vashishta, Priya

    2014-05-14

    We introduce an extension of the divide-and-conquer (DC) algorithmic paradigm called divide-conquer-recombine (DCR) to perform large quantum molecular dynamics (QMD) simulations on massively parallel supercomputers, in which interatomic forces are computed quantum mechanically in the framework of density functional theory (DFT). In DCR, the DC phase constructs globally informed, overlapping local-domain solutions, which in the recombine phase are synthesized into a global solution encompassing large spatiotemporal scales. For the DC phase, we design a lean divide-and-conquer (LDC) DFT algorithm, which significantly reduces the prefactor of the O(N) computational cost for N electrons by applying a density-adaptive boundary condition at the peripheries of the DC domains. Our globally scalable and locally efficient solver is based on a hybrid real-reciprocal space approach that combines: (1) a highly scalable real-space multigrid to represent the global charge density; and (2) a numerically efficient plane-wave basis for local electronic wave functions and charge density within each domain. Hybrid space-band decomposition is used to implement the LDC-DFT algorithm on parallel computers. A benchmark test on an IBM Blue Gene/Q computer exhibits an isogranular parallel efficiency of 0.984 on 786 432 cores for a 50.3 × 10⁶-atom SiC system. As a test of production runs, LDC-DFT-based QMD simulation involving 16 661 atoms is performed on the Blue Gene/Q to study on-demand production of hydrogen gas from water using LiAl alloy particles. As an example of the recombine phase, LDC-DFT electronic structures are used as a basis set to describe global photoexcitation dynamics with nonadiabatic QMD (NAQMD) and kinetic Monte Carlo (KMC) methods. The NAQMD simulations are based on the linear response time-dependent density functional theory to describe electronic excited states and a surface-hopping approach to describe transitions between the excited states. A series of

  18. A novel frequent probability pattern mining algorithm based on a circuit simulation method in uncertain biological networks

    PubMed Central

    2014-01-01

    Background Motif mining has long been a central research topic in bioinformatics. Most current research on biological networks focuses on exact motif mining. However, because of inevitable experimental error and noisy data, biological network data represented with a probability model can better reflect their authenticity and biological significance; it is therefore more biologically meaningful to discover probability motifs in uncertain biological networks. One of the key steps in probability motif mining is frequent pattern discovery, which is usually based on the possible world model and has a relatively high computational complexity. Methods In this paper, we present a novel method for detecting frequent probability patterns based on circuit simulation in uncertain biological networks. First, a partition-based efficient search is applied to mining non-tree-like subgraphs whose probability of occurrence in random networks is small. Then, an algorithm for probability isomorphism based on circuit simulation is proposed. The probability isomorphism test combines analysis of the circuit topology with the related physical properties of the voltage in order to evaluate the probability isomorphism between probability subgraphs. The circuit-simulation-based probability isomorphism avoids the traditional possible world model. Finally, based on the probability subgraph isomorphism algorithm, a two-step hierarchical clustering method is used to cluster subgraphs and discover frequent probability patterns from the clusters. Results Experimental results on data sets of Protein-Protein Interaction (PPI) networks and the transcriptional regulatory networks of E. coli and S. cerevisiae show that the proposed method can efficiently discover the frequent probability subgraphs. The discovered subgraphs in our study contain all probability motifs reported in experiments published in other related papers. Conclusions The algorithm of probability graph isomorphism

  19. A divide-conquer-recombine algorithmic paradigm for large spatiotemporal quantum molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Shimojo, Fuyuki; Hattori, Shinnosuke; Kalia, Rajiv K.; Kunaseth, Manaschai; Mou, Weiwei; Nakano, Aiichiro; Nomura, Ken-ichi; Ohmura, Satoshi; Rajak, Pankaj; Shimamura, Kohei; Vashishta, Priya

    2014-05-01

    We introduce an extension of the divide-and-conquer (DC) algorithmic paradigm called divide-conquer-recombine (DCR) to perform large quantum molecular dynamics (QMD) simulations on massively parallel supercomputers, in which interatomic forces are computed quantum mechanically in the framework of density functional theory (DFT). In DCR, the DC phase constructs globally informed, overlapping local-domain solutions, which in the recombine phase are synthesized into a global solution encompassing large spatiotemporal scales. For the DC phase, we design a lean divide-and-conquer (LDC) DFT algorithm, which significantly reduces the prefactor of the O(N) computational cost for N electrons by applying a density-adaptive boundary condition at the peripheries of the DC domains. Our globally scalable and locally efficient solver is based on a hybrid real-reciprocal space approach that combines: (1) a highly scalable real-space multigrid to represent the global charge density; and (2) a numerically efficient plane-wave basis for local electronic wave functions and charge density within each domain. Hybrid space-band decomposition is used to implement the LDC-DFT algorithm on parallel computers. A benchmark test on an IBM Blue Gene/Q computer exhibits an isogranular parallel efficiency of 0.984 on 786 432 cores for a 50.3 × 10⁶-atom SiC system. As a test of production runs, LDC-DFT-based QMD simulation involving 16 661 atoms is performed on the Blue Gene/Q to study on-demand production of hydrogen gas from water using LiAl alloy particles. As an example of the recombine phase, LDC-DFT electronic structures are used as a basis set to describe global photoexcitation dynamics with nonadiabatic QMD (NAQMD) and kinetic Monte Carlo (KMC) methods. The NAQMD simulations are based on the linear response time-dependent density functional theory to describe electronic excited states and a surface-hopping approach to describe transitions between the excited states. A series of techniques

  20. Building a LiDAR point cloud simulator: Testing algorithms for high resolution topographic change

    NASA Astrophysics Data System (ADS)

    Carrea, Dario; Abellán, Antonio; Derron, Marc-Henri; Jaboyedoff, Michel

    2014-05-01

    The terrestrial laser scanning (TLS) technique is becoming a common tool in the Geosciences, with clear applications ranging from the generation of high resolution 3D models to the monitoring of unstable slopes and the quantification of morphological changes. Nevertheless, like every measurement technique, TLS still has some limitations that are not clearly understood and that affect the accuracy of the dataset (point cloud). A challenge in LiDAR research is to understand the influence of instrumental parameters on measurement errors during LiDAR acquisition. Indeed, several critical parameters interact with scan quality at different ranges: the existence of shadow areas, the spatial resolution (point density), the diameter of the laser beam, the incidence angle and the single-point accuracy. The objective of this study is to test the main limitations of different algorithms usually applied in point cloud data treatment, from alignment to monitoring. To this end, we built in the MATLAB environment a LiDAR point cloud simulator able to recreate the multiple sources of error related to instrumental settings that we normally observe in real datasets. In a first step we characterized the error from a single laser pulse by modelling the influence of range and incidence angle on single-point data accuracy. In a second step, we simulated the scanning part of the system in order to analyze the shifting and angular error effects. Other parameters have been added to the point cloud simulator, such as point spacing, acquisition window, etc., in order to create point clouds of simple and/or complex geometries. We tested the influence of point density and varying point of view on the Iterative Closest Point (ICP) alignment and also on a deformation tracking algorithm with the same point cloud geometry, in order to determine alignment and deformation detection thresholds. We also generated a series of high resolution point clouds in order to model small changes on different environments
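
    A toy illustration of the simulator's core ingredient, assuming a flat wall, a single scanner position and invented noise magnitudes: a range error that grows with distance and with grazing incidence.

        import numpy as np

        rng = np.random.default_rng(0)
        scanner = np.zeros(3)
        # Regular grid of "true" points on a vertical wall at y = 30 m
        xs, zs = np.meshgrid(np.linspace(-10, 10, 60), np.linspace(0, 8, 30))
        pts = np.column_stack([xs.ravel(), np.full(xs.size, 30.0), zs.ravel()])

        rays = pts - scanner
        ranges = np.linalg.norm(rays, axis=1)
        dirs = rays / ranges[:, None]
        wall_normal = np.array([0.0, -1.0, 0.0])
        cos_inc = np.abs(dirs @ wall_normal)              # cosine of the incidence angle

        # Range noise: 5 mm base, growing with range and with grazing incidence (assumed values)
        sigma = 0.005 + 0.0002 * ranges + 0.01 * (1.0 - cos_inc)
        noisy = scanner + dirs * (ranges + rng.normal(0, sigma))[:, None]

        print("mean range error [m]:", np.abs(
            np.linalg.norm(noisy - scanner, axis=1) - ranges).mean())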

  1. A divide-conquer-recombine algorithmic paradigm for large spatiotemporal quantum molecular dynamics simulations.

    PubMed

    Shimojo, Fuyuki; Hattori, Shinnosuke; Kalia, Rajiv K; Kunaseth, Manaschai; Mou, Weiwei; Nakano, Aiichiro; Nomura, Ken-ichi; Ohmura, Satoshi; Rajak, Pankaj; Shimamura, Kohei; Vashishta, Priya

    2014-05-14

    We introduce an extension of the divide-and-conquer (DC) algorithmic paradigm called divide-conquer-recombine (DCR) to perform large quantum molecular dynamics (QMD) simulations on massively parallel supercomputers, in which interatomic forces are computed quantum mechanically in the framework of density functional theory (DFT). In DCR, the DC phase constructs globally informed, overlapping local-domain solutions, which in the recombine phase are synthesized into a global solution encompassing large spatiotemporal scales. For the DC phase, we design a lean divide-and-conquer (LDC) DFT algorithm, which significantly reduces the prefactor of the O(N) computational cost for N electrons by applying a density-adaptive boundary condition at the peripheries of the DC domains. Our globally scalable and locally efficient solver is based on a hybrid real-reciprocal space approach that combines: (1) a highly scalable real-space multigrid to represent the global charge density; and (2) a numerically efficient plane-wave basis for local electronic wave functions and charge density within each domain. Hybrid space-band decomposition is used to implement the LDC-DFT algorithm on parallel computers. A benchmark test on an IBM Blue Gene/Q computer exhibits an isogranular parallel efficiency of 0.984 on 786 432 cores for a 50.3 × 10⁶-atom SiC system. As a test of production runs, LDC-DFT-based QMD simulation involving 16 661 atoms is performed on the Blue Gene/Q to study on-demand production of hydrogen gas from water using LiAl alloy particles. As an example of the recombine phase, LDC-DFT electronic structures are used as a basis set to describe global photoexcitation dynamics with nonadiabatic QMD (NAQMD) and kinetic Monte Carlo (KMC) methods. The NAQMD simulations are based on the linear response time-dependent density functional theory to describe electronic excited states and a surface-hopping approach to describe transitions between the excited states. A series of

  2. A divide-conquer-recombine algorithmic paradigm for large spatiotemporal quantum molecular dynamics simulations

    SciTech Connect

    Shimojo, Fuyuki; Hattori, Shinnosuke; Kalia, Rajiv K.; Mou, Weiwei; Nakano, Aiichiro; Nomura, Ken-ichi; Rajak, Pankaj; Vashishta, Priya; Kunaseth, Manaschai; Ohmura, Satoshi; Shimamura, Kohei

    2014-05-14

    We introduce an extension of the divide-and-conquer (DC) algorithmic paradigm called divide-conquer-recombine (DCR) to perform large quantum molecular dynamics (QMD) simulations on massively parallel supercomputers, in which interatomic forces are computed quantum mechanically in the framework of density functional theory (DFT). In DCR, the DC phase constructs globally informed, overlapping local-domain solutions, which in the recombine phase are synthesized into a global solution encompassing large spatiotemporal scales. For the DC phase, we design a lean divide-and-conquer (LDC) DFT algorithm, which significantly reduces the prefactor of the O(N) computational cost for N electrons by applying a density-adaptive boundary condition at the peripheries of the DC domains. Our globally scalable and locally efficient solver is based on a hybrid real-reciprocal space approach that combines: (1) a highly scalable real-space multigrid to represent the global charge density; and (2) a numerically efficient plane-wave basis for local electronic wave functions and charge density within each domain. Hybrid space-band decomposition is used to implement the LDC-DFT algorithm on parallel computers. A benchmark test on an IBM Blue Gene/Q computer exhibits an isogranular parallel efficiency of 0.984 on 786 432 cores for a 50.3 × 10⁶-atom SiC system. As a test of production runs, LDC-DFT-based QMD simulation involving 16 661 atoms is performed on the Blue Gene/Q to study on-demand production of hydrogen gas from water using LiAl alloy particles. As an example of the recombine phase, LDC-DFT electronic structures are used as a basis set to describe global photoexcitation dynamics with nonadiabatic QMD (NAQMD) and kinetic Monte Carlo (KMC) methods. The NAQMD simulations are based on the linear response time-dependent density functional theory to describe electronic excited states and a surface-hopping approach to describe transitions between the excited states. A series of

  3. Distributed genetic algorithms for the floorplan design problem

    NASA Technical Reports Server (NTRS)

    Cohoon, James P.; Hegde, Shailesh U.; Martin, Worthy N.; Richards, Dana S.

    1991-01-01

    Designing a VLSI floorplan calls for arranging a given set of modules in the plane to minimize the weighted sum of area and wire-length measures. A method of solving the floorplan design problem using distributed genetic algorithms is presented. Distributed genetic algorithms, based on the paleontological theory of punctuated equilibria, offer a conceptual modification to the traditional genetic algorithms. Experimental results on several problem instances demonstrate the efficacy of this method and indicate its advantages over other methods, such as simulated annealing. The method performed better than the simulated annealing approach, both in terms of the average cost of the solutions found and the best-found solution, in almost all the problem instances tried.
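
    A compact island-model sketch on an invented toy placement cost (not the paper's floorplan representation, operators or migration policy), showing sub-populations that evolve independently and occasionally exchange their best individuals:

        import random

        n, islands, pop_size, gens = 12, 4, 30, 200
        random.seed(1)
        target = list(range(n))
        cost = lambda perm: sum((p - t) ** 2 for p, t in zip(perm, target))

        def crossover(a, b):
            # One-point order crossover that preserves the permutation property
            cut = random.randrange(1, n)
            return a[:cut] + [g for g in b if g not in a[:cut]]

        def mutate(p):
            i, j = random.sample(range(n), 2)
            p[i], p[j] = p[j], p[i]

        pops = [[random.sample(range(n), n) for _ in range(pop_size)]
                for _ in range(islands)]
        for g in range(gens):
            for pop in pops:
                pop.sort(key=cost)
                parents = pop[:pop_size // 2]           # keep the fitter half
                children = []
                while len(children) < pop_size - len(parents):
                    child = crossover(*random.sample(parents, 2))
                    if random.random() < 0.3:
                        mutate(child)
                    children.append(child)
                pop[:] = parents + children
            if g % 25 == 0:                             # occasional migration between islands
                for k in range(islands):
                    pops[(k + 1) % islands][-1] = min(pops[k], key=cost)[:]

        print("best cost:", min(cost(p) for pop in pops for p in pop))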

  4. Monte Carlo simulation using the PENELOPE code with an ant colony algorithm to study MOSFET detectors

    NASA Astrophysics Data System (ADS)

    Carvajal, M. A.; García-Pareja, S.; Guirado, D.; Vilches, M.; Anguiano, M.; Palma, A. J.; Lallena, A. M.

    2009-10-01

    In this work we have developed a simulation tool, based on the PENELOPE code, to study the response of MOSFET devices to irradiation with high-energy photons. The energy deposited in the extremely thin silicon dioxide layer has been calculated. To reduce the statistical uncertainties, an ant colony algorithm has been implemented to drive the application of splitting and Russian roulette as variance reduction techniques. In this way, the uncertainty has been reduced by a factor of ~5, while the efficiency is increased by a factor of more than 20. As an application, we have studied the dependence of the response of the pMOS transistor 3N163, used as a dosimeter, on the incidence angle of the radiation for three common photon sources used in radiotherapy: a ⁶⁰Co Theratron-780 and the 6 and 18 MV beams produced by a Mevatron KDS LINAC. Experimental and simulated results have been obtained for gantry angles of 0°, 15°, 30°, 45°, 60° and 75°. The agreement obtained has permitted validation of the simulation tool. We have studied how to reduce the angular dependence of the MOSFET response by using an additional encapsulation made of brass in the case of the two LINAC qualities considered.

  5. Monte Carlo simulation using the PENELOPE code with an ant colony algorithm to study MOSFET detectors.

    PubMed

    Carvajal, M A; García-Pareja, S; Guirado, D; Vilches, M; Anguiano, M; Palma, A J; Lallena, A M

    2009-10-21

    In this work we have developed a simulation tool, based on the PENELOPE code, to study the response of MOSFET devices to irradiation with high-energy photons. The energy deposited in the extremely thin silicon dioxide layer has been calculated. To reduce the statistical uncertainties, an ant colony algorithm has been implemented to drive the application of splitting and Russian roulette as variance reduction techniques. In this way, the uncertainty has been reduced by a factor of approximately 5, while the efficiency is increased by a factor of more than 20. As an application, we have studied the dependence of the response of the pMOS transistor 3N163, used as a dosimeter, on the incidence angle of the radiation for three common photon sources used in radiotherapy: a ⁶⁰Co Theratron-780 and the 6 and 18 MV beams produced by a Mevatron KDS LINAC. Experimental and simulated results have been obtained for gantry angles of 0 degrees, 15 degrees, 30 degrees, 45 degrees, 60 degrees and 75 degrees. The agreement obtained has permitted validation of the simulation tool. We have studied how to reduce the angular dependence of the MOSFET response by using an additional encapsulation made of brass in the case of the two LINAC qualities considered.

  6. An efficient algorithm for fully resolved simulation of freely swimming bodies

    NASA Astrophysics Data System (ADS)

    Shirgaonkar, Anup; Patankar, Neelesh; Maciver, Malcolm

    2007-11-01

    There is a need to better understand the physical principles underlying the extraordinary mobility of swimming and flying animals. To that end, we present a fully resolved simulation scheme for aquatic locomotion that is sufficiently general to potentially function for small flying animals as well. The method combines the rigid particulate scheme of Patankar et al. (IJMF, 2001) with a momentum redistribution scheme to consistently solve for fluid-body forces as well as the swimming velocity. The input to the algorithm is the deforming motion of the fish body or its fins in the frame of reference of the fish. The method is designed to be efficient, parallelizable, and can be easily implemented into existing fluid dynamics codes. We demonstrate that the new method is capable of simulating a variety of fish forms including flexible bodies such as an eel, or bodies with flexible fins attached to them such as the black ghost knifefish (Apteronotus albifrons). Insights into the hydrodynamics of aquatic locomotion based on our simulations will be summarized. The proposed technique is also applicable to a variety of problems such as designing underwater vehicles, neuromechanical modeling, understanding the role of hydrodynamics in the evolution of fish forms, and animation.

  7. Simulated Performance of Algorithms for the Localization of Radioactive Sources from a Position Sensitive Radiation Detecting System (COCAE)

    SciTech Connect

    Karafasoulis, K.; Zachariadou, K.; Seferlis, S.; Kaissas, I.; Potiriadis, C.; Lambropoulos, C.; Loukas, D.

    2011-12-13

    Simulation studies are presented regarding the performance of algorithms that localize point-like radioactive sources detected by a position sensitive portable radiation instrument (COCAE). The source direction is estimated by using the List Mode Maximum Likelihood Expectation Maximization (LM-ML-EM) imaging algorithm. Furthermore, the source-to-detector distance is evaluated by three different algorithms based on the photo-peak count information of each detecting layer, the quality of the reconstructed source image, and the triangulation method. These algorithms have been tested on a large number of simulated photons over a wide energy range (from 200 keV to 2 MeV) emitted by point-like radioactive sources located at different orientations and source-to-detector distances.

  8. An adaptive algorithm for the detection of microcalcifications in simulated low-dose mammography

    NASA Astrophysics Data System (ADS)

    Treiber, O.; Wanninger, F.; Führ, H.; Panzer, W.; Regulla, D.; Winkler, G.

    2003-02-01

    This paper uses the task of microcalcification detection as a benchmark problem to assess the potential for dose reduction in x-ray mammography. We present the results of a newly developed algorithm for detection of microcalcifications as a case study for a typical commercial film-screen system (Kodak Min-R 2000/2190). The first part of the paper deals with the simulation of dose reduction for film-screen mammography based on a physical model of the imaging process. Use of a more sensitive film-screen system is expected to result in additional smoothing of the image. We introduce two different models of that behaviour, called moderate and strong smoothing. We then present an adaptive, model-based microcalcification detection algorithm. Comparing detection results with ground-truth images obtained under the supervision of an expert radiologist allows us to establish the soundness of the detection algorithm. We measure the performance on the dose-reduced images in order to assess the loss of information due to dose reduction. It turns out that the smoothing behaviour has a strong influence on detection rates. For moderate smoothing, a dose reduction by 25% has no serious influence on the detection results, whereas a dose reduction by 50% already entails a marked deterioration of the performance. Strong smoothing generally leads to an unacceptable loss of image quality. The test results emphasize the impact of the more sensitive film-screen system and its characteristics on the problem of assessing the potential for dose reduction in film-screen mammography. The general approach presented in the paper can be adapted to fully digital mammography.

  9. Planning image-guided endovascular interventions: guidewire simulation using shortest path algorithms

    NASA Astrophysics Data System (ADS)

    Schafer, Sebastian; Singh, Vikas; Hoffmann, Kenneth R.; Noël, Peter B.; Xu, Jinhui

    2007-03-01

    Endovascular interventional procedures are being used more frequently in cardiovascular surgery. Unfortunately, procedural failure, e.g., vessel dissection, may occur and is often related to improper guidewire and/or device selection. To support the surgeon's decision process and because of the importance of the guidewire in positioning devices, we propose a method to determine the guidewire path prior to insertion using a model of its elastic potential energy coupled with a representative graph construction. The 3D vessel centerline and sizes are determined for a specified vessel. Points in planes perpendicular to the vessel centerline are generated. For each pair of consecutive planes, a vector set is generated which joins all points in these planes. We construct a graph representing these vector sets as nodes. The nodes representing adjacent vector sets are joined by edges with weights calculated as a function of the angle between the corresponding vectors (nodes). The optimal path through this weighted directed graph is then determined using shortest path algorithms, such as a topological-sort-based shortest path algorithm or Dijkstra's algorithm. Volumetric data of an internal carotid artery phantom (Ø 3.5 mm) were acquired. Several independent guidewire (Ø 0.4 mm) placements were performed, and the 3D paths were determined using rotational angiography. The average RMS distance between the actual and the average simulated guidewire path was 0.7 mm; the computation time to determine the path was 3 seconds. The ability to predict the guidewire path inside vessels may facilitate calculation of vessel-branch access and force estimation on devices and the vessel wall.
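
    A simplified sketch of the layered shortest-path idea with invented geometry and a purely angle-based weight: dynamic programming over pairs of candidate points in consecutive planes, i.e. a topologically ordered shortest-path computation on the kind of DAG described above.

        import numpy as np

        rng = np.random.default_rng(0)
        n_planes, per_plane = 20, 7
        # Candidate points scattered around a gently curving centerline (toy geometry)
        centerline = np.column_stack([np.linspace(0, 10, n_planes),
                                      np.sin(np.linspace(0, 2, n_planes)),
                                      np.zeros(n_planes)])
        planes = [c + np.column_stack([np.zeros(per_plane),
                                       rng.uniform(-1, 1, per_plane),
                                       rng.uniform(-1, 1, per_plane)])
                  for c in centerline]

        def bend_angle(u, v):
            cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
            return np.arccos(np.clip(cosang, -1.0, 1.0))

        # best[(i, j)]: minimal accumulated bending of a path whose last segment
        # runs from point i of the previous plane to point j of the current plane
        best = {(i, j): 0.0 for i in range(per_plane) for j in range(per_plane)}
        for k in range(1, n_planes - 1):
            new_best = {}
            for j in range(per_plane):
                for q in range(per_plane):
                    seg_out = planes[k + 1][q] - planes[k][j]
                    new_best[(j, q)] = min(
                        best[(i, j)] + bend_angle(planes[k][j] - planes[k - 1][i], seg_out)
                        for i in range(per_plane))
            best = new_best

        print("minimum total bending [rad]:", round(min(best.values()), 3))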

  10. Clustering of tethered satellite system simulation data by an adaptive neuro-fuzzy algorithm

    NASA Technical Reports Server (NTRS)

    Mitra, Sunanda; Pemmaraju, Surya

    1992-01-01

    Recent developments in neuro-fuzzy systems indicate that the concepts of adaptive pattern recognition, when used to identify appropriate control actions corresponding to clusters of patterns representing system states in dynamic nonlinear control systems, may result in innovative designs. A modular, unsupervised neural network architecture, in which fuzzy learning rules have been embedded, is used for on-line identification of similar states. The architecture and control rules involved in Adaptive Fuzzy Leader Clustering (AFLC) allow this system to be incorporated in control systems for identification of system states corresponding to specific control actions. We have used this algorithm to cluster the simulation data of the Tethered Satellite System (TSS) to estimate the range of delta voltages necessary to maintain the desired length rate of the tether. The AFLC algorithm is capable of on-line estimation of the appropriate control voltages from the corresponding length error and length rate error without a priori knowledge of their membership functions and familiarity with the behavior of the Tethered Satellite System.

  11. A JFNK-based implicit moment algorithm for self-consistent, multi-scale, plasma simulation

    NASA Astrophysics Data System (ADS)

    Knoll, Dana; Taitano, William; Chacon, Luis

    2010-11-01

    The Jacobian-Free Newton-Krylov (JFNK) method is an advanced non-linear algorithm that allows the solution of coupled systems of non-linear equations [1]. In [2] we have put forward a JFNK-based implicit, consistent, time integration algorithm and demonstrated its ability to efficiently step over electron time scales, while retaining electron kinetic effects on the ion time scale. Here we extend this work by investigating a JFNK-based implicit-moments approach for the purpose of consistent scale-bridging between the fluid description and the kinetic description in order to resolve the transition region. Our preliminary results, based on a reformulated Poisson's equation (RPE) [3], allow the solution of the Vlasov-Poisson system for varying grid resolutions. In the limit of local coarse grid size (grid spacing large compared to the Debye length), the RPE represents an electric field based on the moment system, while in the limit of local grid spacing resolving the Debye length, the RPE represents an electric field based on the standard Poisson equation. The technique allows smooth transition between the two regimes, consistently, in one simulation. [1] D.A. Knoll and D.E. Keyes, J. Comput. Phys., vol. 193 (2004) [2] W.T. Taitano, Masters Thesis, Nuclear Engineering, University of Idaho (2010) [3] R. Belaouar, N. Crouseilles and P. Degond, J. Sci. Comput., vol. 41 (2009)
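
    The defining trait of JFNK is that the Krylov solver never needs the Jacobian explicitly, only Jacobian-vector products, which can be approximated by a finite difference of the residual. The following minimal sketch (using SciPy's GMRES) shows that structure for a generic residual F(u) = 0; the fixed perturbation size, inner solver settings, and absence of preconditioning are simplifying assumptions, and this is not the implicit-moment plasma solver described above.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def jfnk(residual, u0, tol=1e-8, max_newton=50, eps=1e-7):
    """Minimal Jacobian-free Newton-Krylov loop for F(u) = 0 (a sketch).

    The GMRES inner solver only needs Jacobian-vector products, approximated
    here with a one-sided finite difference of the residual, so the Jacobian
    matrix is never formed.
    """
    u = np.array(u0, dtype=float)
    for _ in range(max_newton):
        F = residual(u)
        if np.linalg.norm(F) < tol:
            break

        def jacvec(v, u=u, F=F):
            # J(u) @ v  ~  [F(u + eps * v) - F(u)] / eps
            return (residual(u + eps * v) - F) / eps

        J = LinearOperator((u.size, u.size), matvec=jacvec)
        du, _info = gmres(J, -F)      # inexact Newton step: solve J du = -F
        u = u + du
    return u

# Usage sketch: solve u**3 + u - 1 = 0 component-wise.
# print(jfnk(lambda u: u**3 + u - 1.0, np.zeros(4)))
```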

  12. Optical Cluster-Finding with an Adaptive Matched-Filter Technique: Algorithm and Comparison with Simulations

    SciTech Connect

    Dong, Feng; Pierpaoli, Elena; Gunn, James E.; Wechsler, Risa H.

    2007-10-29

    We present a modified adaptive matched filter algorithm designed to identify clusters of galaxies in wide-field imaging surveys such as the Sloan Digital Sky Survey. The cluster-finding technique is fully adaptive to imaging surveys with spectroscopic coverage, multicolor photometric redshifts, no redshift information at all, and any combination of these within one survey. It works with high efficiency in multi-band imaging surveys where photometric redshifts can be estimated with well-understood error distributions. Tests of the algorithm on realistic mock SDSS catalogs suggest that the detected sample is ~85% complete and over 90% pure for clusters with masses above 1.0 x 10^14 h^-1 M_⊙ and redshifts up to z = 0.45. The errors of estimated cluster redshifts from the maximum likelihood method are shown to be small (typically less than 0.01) over the whole redshift range with photometric redshift errors typical of those found in the Sloan survey. Inside the spherical radius corresponding to a galaxy overdensity of Δ = 200, we find the derived cluster richness Λ_200 to be a roughly linear indicator of its virial mass M_200, which recovers well the relation between total luminosity and cluster mass of the input simulation.

  13. Improving Efficiency in SMD Simulations Through a Hybrid Differential Relaxation Algorithm.

    PubMed

    Ramírez, Claudia L; Zeida, Ari; Jara, Gabriel E; Roitberg, Adrián E; Martí, Marcelo A

    2014-10-14

    The fundamental object for studying a (bio)chemical reaction obtained from simulations is the free energy profile, which can be directly related to experimentally determined properties. Although quite accurate hybrid quantum (DFT based)-classical methods are available, achieving statistically accurate and well converged results at a moderate computational cost is still an open challenge. Here, we present and thoroughly test a hybrid differential relaxation algorithm (HyDRA), which allows faster equilibration of the classical environment during the nonequilibrium steering of a (bio)chemical reaction. We show and discuss why (in the context of Jarzynski's Relationship) this method allows obtaining accurate free energy profiles with a smaller number of independent trajectories and/or faster pulling speeds, thus reducing the overall computational cost. Moreover, due to the availability and straightforward implementation of the method, we expect that it will foster theoretical studies of key enzymatic processes. PMID:26588154
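
    For context, Jarzynski's relationship estimates the free energy profile from the nonequilibrium work accumulated along independent steered trajectories, F(xi) = -kT ln <exp(-W(xi)/kT)>. The sketch below evaluates that exponential work average in a numerically safe (log-sum-exp) form; it is a generic estimator, not the HyDRA algorithm itself, and the array layout is an assumption.

```python
import numpy as np

def jarzynski_free_energy(work_profiles, kT):
    """Free energy profile from nonequilibrium work via Jarzynski's equality.

    work_profiles: array (n_trajectories, n_points) of accumulated work W_i(xi)
    along the steering coordinate, one row per independent trajectory.
    Returns F(xi) = -kT * ln < exp(-W(xi)/kT) >, with the trajectory average
    evaluated in log-sum-exp form to avoid numerical overflow.
    """
    w = np.asarray(work_profiles, dtype=float) / kT
    m = w.min(axis=0)                                   # shift for stability
    log_avg = -m + np.log(np.mean(np.exp(-(w - m)), axis=0))
    return -kT * log_avg
```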

  14. Non-equilibrium molecular dynamics simulation of nanojet injection with adaptive-spatial decomposition parallel algorithm.

    PubMed

    Shin, Hyun-Ho; Yoon, Woong-Sup

    2008-07-01

    An Adaptive-Spatial Decomposition parallel algorithm was developed to increase computation efficiency for molecular dynamics simulations of nano-fluids. Injection of a liquid argon jet with a scale of 17.6 molecular diameters was investigated. A solid annular platinum injector was also solved simultaneously with the liquid injectant by adopting a solid modeling technique which incorporates phantom atoms. The viscous heat was naturally discharged through the solids, so the liquid boiling problem was avoided without the separate use of temperature-control methods. Parametric investigations of injection speed, wall temperature, and injector length were made. A sudden pressure drop at the orifice exit causes flash boiling of the liquid departing the nozzle exit, with strong evaporation on the surface of the liquid, while rendering a slender jet. Elevating the injection speed and the wall temperature activates the surface evaporation, concurrent with a reduction in the jet breakup length and the drop size.

  15. Algorithmic Extensions of Low-Dispersion Scheme and Modeling Effects for Acoustic Wave Simulation. Revised

    NASA Technical Reports Server (NTRS)

    Kaushik, Dinesh K.; Baysal, Oktay

    1997-01-01

    Accurate computation of acoustic wave propagation may be more efficiently performed when their dispersion relations are considered. Consequently, computational algorithms which attempt to preserve these relations have been gaining popularity in recent years. In the present paper, the extensions to one such scheme are discussed. By solving the linearized, 2-D Euler and Navier-Stokes equations with such a method for the acoustic wave propagation, several issues were investigated. Among them were higher-order accuracy, choice of boundary conditions and differencing stencils, effects of viscosity, low-storage time integration, generalized curvilinear coordinates, periodic series, their reflections and interference patterns from a flat wall and scattering from a circular cylinder. The results were found to be promising en route to the aeroacoustic simulations of realistic engineering problems.

  16. Computer simulation and evaluation of edge detection algorithms and their application to automatic path selection

    NASA Technical Reports Server (NTRS)

    Longendorfer, B. A.

    1976-01-01

    The construction of an autonomous roving vehicle requires the development of complex data-acquisition and processing systems, which determine the path along which the vehicle travels. Thus, a vehicle must possess algorithms which can (1) reliably detect obstacles by processing sensor data, (2) maintain a constantly updated model of its surroundings, and (3) direct its immediate actions to further a long range plan. The first function consisted of obstacle recognition. Obstacles may be identified by the use of edge detection techniques. Therefore, the Kalman Filter was implemented as part of a large scale computer simulation of the Mars Rover. The second function consisted of modeling the environment. The obstacle must be reconstructed from its edges, and the vast amount of data must be organized in a readily retrievable form. Therefore, a Terrain Modeller was developed which assembled and maintained a rectangular grid map of the planet. The third function consisted of directing the vehicle's actions.

  17. A study of the parallel algorithm for large-scale DC simulation of nonlinear systems

    NASA Astrophysics Data System (ADS)

    Cortés Udave, Diego Ernesto; Ogrodzki, Jan; Gutiérrez de Anda, Miguel Angel

    Newton-Raphson DC analysis of large-scale nonlinear circuits may be an extremely time consuming process even if sparse matrix techniques and bypassing of nonlinear models calculation are used. A slight decrease in the time required for this task may be enabled on multi-core, multithread computers if the calculation of the mathematical models for the nonlinear elements as well as the stamp management of the sparse matrix entries are managed through concurrent processes. This numerical complexity can be further reduced via circuit decomposition and parallel solution of blocks, taking as a departure point the BBD matrix structure. This block-parallel approach may yield a considerable benefit, though it is strongly dependent on the system topology and, of course, on the processor type. This contribution presents an easily parallelizable decomposition-based algorithm for DC simulation and provides a detailed study of its effectiveness.

  18. The Ground Flash Fraction Retrieval Algorithm Employing Differential Evolution: Simulations and Applications

    NASA Technical Reports Server (NTRS)

    Koshak, William; Solakiewicz, Richard

    2012-01-01

    The ability to estimate the fraction of ground flashes in a set of flashes observed by a satellite lightning imager, such as the future GOES-R Geostationary Lightning Mapper (GLM), would likely improve operational and scientific applications (e.g., severe weather warnings, lightning nitrogen oxides studies, and global electric circuit analyses). A Bayesian inversion method, called the Ground Flash Fraction Retrieval Algorithm (GoFFRA), was recently developed for estimating the ground flash fraction. The method uses a constrained mixed exponential distribution model to describe a particular lightning optical measurement called the Maximum Group Area (MGA). To obtain the optimum model parameters (one of which is the desired ground flash fraction), a scalar function must be minimized. This minimization is difficult because of two problems: (1) Label Switching (LS), and (2) Parameter Identity Theft (PIT). The LS problem is well known in the literature on mixed exponential distributions, and the PIT problem was discovered in this study. Each problem occurs when one allows the numerical minimizer to freely roam through the parameter search space; this allows certain solution parameters to interchange roles which leads to fundamental ambiguities, and solution error. A major accomplishment of this study is that we have employed a state-of-the-art genetic-based global optimization algorithm called Differential Evolution (DE) that constrains the parameter search in such a way as to remove both the LS and PIT problems. To test the performance of the GoFFRA when DE is employed, we applied it to analyze simulated MGA datasets that we generated from known mixed exponential distributions. Moreover, we evaluated the GoFFRA/DE method by applying it to analyze actual MGAs derived from low-Earth orbiting lightning imaging sensor data; the actual MGA data were classified as either ground or cloud flash MGAs using National Lightning Detection Network[TM] (NLDN) data. Solution error
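
    For readers unfamiliar with Differential Evolution, the sketch below shows the classic DE/rand/1/bin loop: each population member is perturbed by a scaled difference of two other members, crossed over with its parent, and replaced only if the trial improves the objective. Bound clipping stands in for the constrained search mentioned above; the population size, F, CR and the clipping strategy are illustrative assumptions, and this is not the GoFFRA implementation.

```python
import numpy as np

def differential_evolution(objective, bounds, pop_size=40, F=0.8, CR=0.9,
                           generations=200, seed=None):
    """Minimal DE/rand/1/bin minimizer (a sketch, not the GoFFRA code).

    bounds: sequence of (low, high) per parameter; trial vectors are clipped
    to these bounds, which is one simple way of constraining the search space.
    """
    rng = np.random.default_rng(seed)
    bounds = np.asarray(bounds, dtype=float)
    dim = len(bounds)
    pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(pop_size, dim))
    cost = np.array([objective(x) for x in pop])
    for _ in range(generations):
        for i in range(pop_size):
            # mutation: scaled difference of two members added to a third
            others = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(others, size=3, replace=False)]
            mutant = np.clip(a + F * (b - c), bounds[:, 0], bounds[:, 1])
            # binomial crossover with at least one mutant component
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True
            trial = np.where(cross, mutant, pop[i])
            f_trial = objective(trial)
            if f_trial <= cost[i]:                      # greedy selection
                pop[i], cost[i] = trial, f_trial
    best = int(np.argmin(cost))
    return pop[best], cost[best]
```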

  19. Physical formulation and numerical algorithm for simulating N immiscible incompressible fluids involving general order parameters

    NASA Astrophysics Data System (ADS)

    Dong, S.

    2015-02-01

    We present a family of physical formulations, and a numerical algorithm, based on a class of general order parameters for simulating the motion of a mixture of N (N ⩾ 2) immiscible incompressible fluids with given densities, dynamic viscosities, and pairwise surface tensions. The N-phase formulations stem from a phase field model we developed in a recent work based on the conservations of mass/momentum, and the second law of thermodynamics. The introduction of general order parameters leads to an extremely strongly-coupled system of (N - 1) phase field equations. On the other hand, the general form enables one to compute the N-phase mixing energy density coefficients in an explicit fashion in terms of the pairwise surface tensions. We show that the increased complexity in the form of the phase field equations associated with general order parameters in actuality does not cause essential computational difficulties. Our numerical algorithm reformulates the (N - 1) strongly-coupled phase field equations for general order parameters into 2 (N - 1) Helmholtz-type equations that are completely de-coupled from one another. This leads to a computational complexity comparable to that for the simplified phase field equations associated with certain special choice of the order parameters. We demonstrate the capabilities of the method developed herein using several test problems involving multiple fluid phases and large contrasts in densities and viscosities among the multitude of fluids. In particular, by comparing simulation results with the Langmuir-de Gennes theory of floating liquid lenses we show that the method using general order parameters produces physically accurate results for multiple fluid phases.

  20. Physical formulation and numerical algorithm for simulating N immiscible incompressible fluids involving general order parameters

    SciTech Connect

    Dong, S.

    2015-02-15

    We present a family of physical formulations, and a numerical algorithm, based on a class of general order parameters for simulating the motion of a mixture of N (N⩾2) immiscible incompressible fluids with given densities, dynamic viscosities, and pairwise surface tensions. The N-phase formulations stem from a phase field model we developed in a recent work based on the conservations of mass/momentum, and the second law of thermodynamics. The introduction of general order parameters leads to an extremely strongly-coupled system of (N−1) phase field equations. On the other hand, the general form enables one to compute the N-phase mixing energy density coefficients in an explicit fashion in terms of the pairwise surface tensions. We show that the increased complexity in the form of the phase field equations associated with general order parameters in actuality does not cause essential computational difficulties. Our numerical algorithm reformulates the (N−1) strongly-coupled phase field equations for general order parameters into 2(N−1) Helmholtz-type equations that are completely de-coupled from one another. This leads to a computational complexity comparable to that for the simplified phase field equations associated with certain special choice of the order parameters. We demonstrate the capabilities of the method developed herein using several test problems involving multiple fluid phases and large contrasts in densities and viscosities among the multitude of fluids. In particular, by comparing simulation results with the Langmuir–de Gennes theory of floating liquid lenses we show that the method using general order parameters produces physically accurate results for multiple fluid phases.

  1. Efficient spectral and pseudospectral algorithms for 3D simulations of whistler-mode waves in a plasma

    SciTech Connect

    Gumerov, Nail A.; Karavaev, Alexey V.; Surjalal Sharma, A.; Shao Xi; Papadopoulos, Konstantinos D.

    2011-04-01

    Efficient spectral and pseudospectral algorithms for simulation of linear and nonlinear 3D whistler waves in a cold electron plasma are developed. These algorithms are applied to the simulation of whistler waves generated by loop antennas and spheromak-like stationary waves of considerable amplitude. The algorithms are linearly stable and show good stability properties for computations of nonlinear waves over tens of thousands of time steps. Additional speedups by factors of 10-20 (comparing a single CPU core and one GPU) are achieved by using graphics processors (GPUs), which enable efficient numerical simulation of the wave propagation on relatively high resolution meshes (tens of millions of nodes) in a personal computing environment. Comparisons of the numerical results with analytical solutions and experiments show good agreement. The limitations of the codes and the performance of the GPU computing are discussed.

  2. Recent progress of quantum annealing

    SciTech Connect

    Suzuki, Sei

    2015-03-10

    We review the recent progress of quantum annealing. Quantum annealing was proposed as a method to solve generic optimization problems. Recently a Canadian company has drawn a great deal of attention, as it has commercialized a quantum computer based on quantum annealing. Although the performance of quantum annealing is not sufficiently understood, it is likely that quantum annealing will be a practical method both on a conventional computer and on a quantum computer.

  3. A novel coupling of noise reduction algorithms for particle flow simulations

    NASA Astrophysics Data System (ADS)

    Zimoń, M. J.; Reese, J. M.; Emerson, D. R.

    2016-09-01

    Proper orthogonal decomposition (POD) and its extension based on time-windows have been shown to greatly improve the effectiveness of recovering smooth ensemble solutions from noisy particle data. However, to successfully de-noise any molecular system, a large number of measurements still need to be provided. In order to achieve a better efficiency in processing time-dependent fields, we have combined POD with a well-established signal processing technique, wavelet-based thresholding. In this novel hybrid procedure, the wavelet filtering is applied within the POD domain and referred to as WAVinPOD. The algorithm exhibits promising results when applied to both synthetically generated signals and particle data. In this work, the simulations compare the performance of our new approach with standard POD or wavelet analysis in extracting smooth profiles from noisy velocity and density fields. Numerical examples include molecular dynamics and dissipative particle dynamics simulations of unsteady force- and shear-driven liquid flows, as well as phase-separation phenomena. Simulation results confirm that WAVinPOD preserves the dimensionality reduction obtained using POD, while improving its filtering properties through the sparse representation of data in a wavelet basis. This paper shows that WAVinPOD outperforms the other estimators for both synthetically generated signals and particle-based measurements, achieving a higher signal-to-noise ratio from a smaller number of samples. The new filtering methodology offers significant computational savings, particularly for multi-scale applications seeking to couple continuum information with atomistic models. It is the first time that a rigorous analysis has compared de-noising techniques for particle-based fluid simulations.
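
    One plausible reading of the hybrid procedure is: compute the POD of the snapshot matrix, then denoise each retained temporal coefficient series by wavelet thresholding before reconstructing the field. The sketch below (using NumPy and PyWavelets) follows that reading with a universal soft threshold; the choice of wavelet, threshold rule, and the exact placement of the filtering step are assumptions, not the published WAVinPOD specification.

```python
import numpy as np
import pywt

def pod_wavelet_denoise(snapshots, rank, wavelet="db4"):
    """POD followed by wavelet soft-thresholding of the modal coefficients.

    snapshots: (n_time, n_points) array of noisy particle-averaged fields.
    The POD is obtained from an SVD of the mean-subtracted snapshot matrix;
    each retained temporal coefficient series is then denoised with a
    universal soft threshold in a wavelet basis before reconstruction.
    """
    X = np.asarray(snapshots, dtype=float)
    mean = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    U, s, Vt = U[:, :rank], s[:rank], Vt[:rank]
    A = U * s                                   # temporal coefficients (n_time, rank)
    A_filtered = np.empty_like(A)
    for k in range(rank):
        coeffs = pywt.wavedec(A[:, k], wavelet)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # MAD noise estimate
        thr = sigma * np.sqrt(2.0 * np.log(A.shape[0]))         # universal threshold
        coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
        A_filtered[:, k] = pywt.waverec(coeffs, wavelet)[: A.shape[0]]
    return mean + A_filtered @ Vt
```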

  4. Dynamic simulation of concentrated macromolecular solutions with screened long-range hydrodynamic interactions: Algorithm and limitations

    PubMed Central

    Ando, Tadashi; Chow, Edmond; Skolnick, Jeffrey

    2013-01-01

    Hydrodynamic interactions exert a critical effect on the dynamics of macromolecules. As the concentration of macromolecules increases, by analogy to the behavior of semidilute polymer solutions or the flow in porous media, one might expect hydrodynamic screening to occur. Hydrodynamic screening would have implications both for the understanding of macromolecular dynamics as well as practical implications for the simulation of concentrated macromolecular solutions, e.g., in cells. Stokesian dynamics (SD) is one of the most accurate methods for simulating the motions of N particles suspended in a viscous fluid at low Reynolds number, in that it considers both far-field and near-field hydrodynamic interactions. This algorithm traditionally involves an O(N3) operation to compute Brownian forces at each time step, although asymptotically faster but more complex SD methods are now available. Motivated by the idea of hydrodynamic screening, the far-field part of the hydrodynamic matrix in SD may be approximated by a diagonal matrix, which is equivalent to assuming that long range hydrodynamic interactions are completely screened. This approximation allows sparse matrix methods to be used, which can reduce the apparent computational scaling to O(N). Previously there were several simulation studies using this approximation for monodisperse suspensions. Here, we employ newly designed preconditioned iterative methods for both the computation of Brownian forces and the solution of linear systems, and consider the validity of this approximation in polydisperse suspensions. We evaluate the accuracy of the diagonal approximation method using an intracellular-like suspension. The diffusivities of particles obtained with this approximation are close to those with the original method. However, this approximation underestimates intermolecular correlated motions, which is a trade-off between accuracy and computing efficiency. The new method makes it possible to perform large-scale and

  5. Golfing with protons: using research grade simulation algorithms for online games

    NASA Astrophysics Data System (ADS)

    Harold, J.

    2004-12-01

    Scientists have long known the power of simulations. By modeling a system in a computer, researchers can experiment at will, developing an intuitive sense of how a system behaves. The rapid increase in the power of personal computers, combined with technologies such as Flash, Shockwave and Java, allows us to bring research simulations into the education world by creating exploratory environments for the public. This approach is illustrated by a project funded by a small grant from NSF's Informal Science Education program, through an opportunity that provides education supplements to existing research awards. Using techniques adapted from a magnetospheric research program, several Flash based interactives have been developed that allow web site visitors to explore the motion of particles in the Earth's magnetosphere. These pieces were folded into a larger Space Weather Center web project at the Space Science Institute (www.spaceweathercenter.org). Rather than presenting these interactives as plasma simulations per se, the research algorithms were used to create games such as "Magneto Mini Golf", where the balls are protons moving in combined electric and magnetic fields. The "holes" increase in complexity, beginning with no fields and progressing towards a simple model of Earth's magnetosphere. The emphasis of the activity is gameplay, but because it is at its core a plasma simulation, the user develops an intuitive sense of charged particle motion as they progress. Meanwhile, the pieces contain embedded assessments that are measurable through a database-driven tracking system. Mining that database not only provides helpful usability information, but allows us to examine whether users are meeting the learning goals of the activities. We will discuss the development and evaluation results of the project, as well as the potential for these types of activities to shift the expectations of what a web site can and should provide educationally.

  6. Unraveling Quantum Annealers using Classical Hardness.

    PubMed

    Martin-Mayor, Victor; Hen, Itay

    2015-01-01

    Recent advances in quantum technology have led to the development and manufacturing of experimental programmable quantum annealing optimizers that contain hundreds of quantum bits. These optimizers, commonly referred to as 'D-Wave' chips, promise to solve practical optimization problems potentially faster than conventional 'classical' computers. Attempts to quantify the quantum nature of these chips have been met with both excitement and skepticism but have also brought up numerous fundamental questions pertaining to the distinguishability of experimental quantum annealers from their classical thermal counterparts. Inspired by recent results in spin-glass theory that recognize 'temperature chaos' as the underlying mechanism responsible for the computational intractability of hard optimization problems, we devise a general method to quantify the performance of quantum annealers on optimization problems suffering from varying degrees of temperature chaos: A superior performance of quantum annealers over classical algorithms on these may allude to the role that quantum effects play in providing speedup. We utilize our method to experimentally study the D-Wave Two chip on different temperature-chaotic problems and find, surprisingly, that its performance scales unfavorably as compared to several analogous classical algorithms. We detect, quantify and discuss several purely classical effects that possibly mask the quantum behavior of the chip. PMID:26483257

  7. Unraveling Quantum Annealers using Classical Hardness.

    PubMed

    Martin-Mayor, Victor; Hen, Itay

    2015-10-20

    Recent advances in quantum technology have led to the development and manufacturing of experimental programmable quantum annealing optimizers that contain hundreds of quantum bits. These optimizers, commonly referred to as 'D-Wave' chips, promise to solve practical optimization problems potentially faster than conventional 'classical' computers. Attempts to quantify the quantum nature of these chips have been met with both excitement and skepticism but have also brought up numerous fundamental questions pertaining to the distinguishability of experimental quantum annealers from their classical thermal counterparts. Inspired by recent results in spin-glass theory that recognize 'temperature chaos' as the underlying mechanism responsible for the computational intractability of hard optimization problems, we devise a general method to quantify the performance of quantum annealers on optimization problems suffering from varying degrees of temperature chaos: A superior performance of quantum annealers over classical algorithms on these may allude to the role that quantum effects play in providing speedup. We utilize our method to experimentally study the D-Wave Two chip on different temperature-chaotic problems and find, surprisingly, that its performance scales unfavorably as compared to several analogous classical algorithms. We detect, quantify and discuss several purely classical effects that possibly mask the quantum behavior of the chip.

  8. An algorithm for emulsion stability simulations: account of flocculation, coalescence, surfactant adsorption and the process of Ostwald ripening.

    PubMed

    Urbina-Villalba, German

    2009-03-01

    The first algorithm for Emulsion Stability Simulations (ESS) was presented at the V Conferencia Iberoamericana sobre Equilibrio de Fases y Diseño de Procesos [Luis, J.; García-Sucre, M.; Urbina-Villalba, G. Brownian Dynamics Simulation of Emulsion Stability In: Equifase 99. Libro de Actas, 1st Ed., Tojo J., Arce, A., Eds.; Solucion's: Vigo, Spain, 1999; Volume 2, pp. 364-369]. The former version of the program consisted of a minor modification of the Brownian Dynamics algorithm to account for the coalescence of drops. The present version of the program contains elaborate routines for time-dependent surfactant adsorption, average diffusion constants, and Ostwald ripening.

  9. Real-Time Simulation for Verification and Validation of Diagnostic and Prognostic Algorithms

    NASA Technical Reports Server (NTRS)

    Aguilar, Robet; Luu, Chuong; Santi, Louis M.; Sowers, T. Shane

    2005-01-01

    To verify that a health management system (HMS) performs as expected, a virtual system simulation capability, including interaction with the associated platform or vehicle, very likely will need to be developed. The rationale for developing this capability is discussed and includes the limited capability to seed faults into the actual target system due to the risk of potential damage to high value hardware. The capability envisioned would accurately reproduce the propagation of a fault or failure as observed by sensors located at strategic locations on and around the target system and would also accurately reproduce the control system and vehicle response. In this way, HMS operation can be exercised over a broad range of conditions to verify that it meets requirements for accurate, timely response to actual faults with adequate margin against false and missed detections. An overview is also presented of a real-time rocket propulsion health management system laboratory which is available for future rocket engine programs. The health management elements and approaches of this lab are directly applicable for future space systems. In this paper the various components are discussed and the general fault detection, diagnosis, isolation and the response (FDIR) concept is presented. Additionally, the complexities of V&V (Verification and Validation) for advanced algorithms and the simulation capabilities required to meet the changing state-of-the-art in HMS are discussed.

  10. Documenting the NASA Armstrong Flight Research Center Oblate Earth Simulation Equations of Motion and Integration Algorithm

    NASA Technical Reports Server (NTRS)

    Clarke, R.; Lintereur, L.; Bahm, C.

    2016-01-01

    A desire for more complete documentation of the National Aeronautics and Space Administration (NASA) Armstrong Flight Research Center (AFRC), Edwards, California legacy code used in the core simulation has led to this effort to fully document the oblate Earth six-degree-of-freedom equations of motion and integration algorithm. The authors of this report have taken much of the earlier work of the simulation engineering group and used it as a jumping-off point for this report. The largest addition this report makes is that each element of the equations of motion is traced back to first principles and at no point is the reader forced to take an equation on faith alone. There are no discoveries of previously unknown principles contained in this report; this report is a collection and presentation of textbook principles. The value of this report is that those textbook principles are herein documented in standard nomenclature that matches the form of the computer code DERIVC. Previous handwritten notes are much of the backbone of this work, however, in almost every area, derivations are explicitly shown to assure the reader that the equations which make up the oblate Earth version of the computer routine, DERIVC, are correct.

  11. Circuit model of the ITER-like antenna for JET and simulation of its control algorithms

    NASA Astrophysics Data System (ADS)

    Durodié, Frédéric; Dumortier, Pierre; Helou, Walid; Křivská, Alena; Lerche, Ernesto

    2015-12-01

    The ITER-like Antenna (ILA) for JET [1] is a 2 toroidal by 2 poloidal array of Resonant Double Loops (RDL) featuring in-vessel matching capacitors feeding RF current straps in conjugate-T manner, a low impedance quarter-wave impedance transformer, a service stub allowing hydraulic actuator and water cooling services to reach the aforementioned capacitors and a 2nd stage phase-shifter-stub matching circuit allowing to correct/choose the conjugate-T working impedance. Toroidally adjacent RDLs are fed from a 3dB hybrid splitter. It has been operated at 33, 42 and 47MHz on plasma (2008-2009), while its presently estimated frequency range is from 29 to 49MHz. At the time of the design (2001-2004) as well as the experiments the circuit models of the ILA were quite basic. The ILA front face and strap array Topica model was relatively crude and failed to correctly represent the poloidal central septum, Faraday Screen attachment as well as the segmented antenna central septum limiter. The ILA matching capacitors, T-junction, Vacuum Transmission Line (VTL) and Service Stubs were represented by lumped circuit elements and simple transmission line models. The assessment of the ILA results carried out to decide on the repair of the ILA identified that achieving routine full array operation requires a better understanding of the RF circuit, a feedback control algorithm for the 2nd stage matching as well as tighter calibrations of RF measurements. The paper presents the progress in modelling of the ILA comprising a more detailed Topica model of the front face for various plasma Scrape Off Layer profiles, a comprehensive HFSS model of the matching capacitors including internal bellows and electrode cylinders, 3D-EM models of the VTL including vacuum ceramic window, Service stub, a transmission line model of the 2nd stage matching circuit and main transmission lines including the 3dB hybrid splitters. A time evolving simulation using the improved circuit model allowed to design and

  12. Circuit model of the ITER-like antenna for JET and simulation of its control algorithms

    SciTech Connect

    Durodié, Frédéric Křivská, Alena; Helou, Walid; Collaboration: EUROfusion Consortium

    2015-12-10

    The ITER-like Antenna (ILA) for JET [1] is a 2 toroidal by 2 poloidal array of Resonant Double Loops (RDL) featuring in-vessel matching capacitors feeding RF current straps in conjugate-T manner, a low impedance quarter-wave impedance transformer, a service stub allowing hydraulic actuator and water cooling services to reach the aforementioned capacitors and a 2nd stage phase-shifter-stub matching circuit allowing to correct/choose the conjugate-T working impedance. Toroidally adjacent RDLs are fed from a 3dB hybrid splitter. It has been operated at 33, 42 and 47MHz on plasma (2008-2009), while its presently estimated frequency range is from 29 to 49MHz. At the time of the design (2001-2004) as well as the experiments the circuit models of the ILA were quite basic. The ILA front face and strap array Topica model was relatively crude and failed to correctly represent the poloidal central septum, Faraday Screen attachment as well as the segmented antenna central septum limiter. The ILA matching capacitors, T-junction, Vacuum Transmission Line (VTL) and Service Stubs were represented by lumped circuit elements and simple transmission line models. The assessment of the ILA results carried out to decide on the repair of the ILA identified that achieving routine full array operation requires a better understanding of the RF circuit, a feedback control algorithm for the 2nd stage matching as well as tighter calibrations of RF measurements. The paper presents the progress in modelling of the ILA comprising a more detailed Topica model of the front face for various plasma Scrape Off Layer profiles, a comprehensive HFSS model of the matching capacitors including internal bellows and electrode cylinders, 3D-EM models of the VTL including vacuum ceramic window, Service stub, a transmission line model of the 2nd stage matching circuit and main transmission lines including the 3dB hybrid splitters. A time evolving simulation using the improved circuit model allowed to design and

  13. An evaluation of the performance of the soil temperature simulation algorithms used in the PRZM model.

    PubMed

    Tsiros, I X; Dimopoulos, I F

    2007-04-01

    Soil temperature simulation is an important component in environmental modeling since it is involved in several aspects of pollutant transport and fate. This paper deals with the performance of the soil temperature simulation algorithms of the well-known environmental model PRZM. Model results are compared and evaluated on the basis of the model's ability to predict in situ measured soil temperature profiles in an experimental plot during a 3-year monitoring study. The evaluation of the performance is based on linear regression statistics and typical model statistical errors such as the root mean square error (RMSE) and the normalized objective function (NOF). Results show that the model required minimal calibration to match the observed response of the system. Values of the determination coefficient R^2 were found to be in all cases around the value of 0.98, indicating a very good agreement between measured and simulated data. Values of the RMSE were found to be in the range of 1.2 to 1.4 degrees C, 1.1 to 1.4 degrees C, 0.9 to 1.1 degrees C, and 0.8 to 1.1 degrees C, for the examined 2, 5, 10 and 20 cm soil depths, respectively. Sensitivity analyses were also performed to investigate the influence of various factors involved in the energy balance equation at the ground surface on the soil temperature profiles. The results showed that the model was able to represent important processes affecting the soil temperature regime such as the combined effect of the heat transfer by convection between the ground surface and the atmosphere and the latent heat flux due to soil water evaporation. PMID:17454373
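
    The two error measures are straightforward to compute from paired simulated and observed series; in the sketch below the normalized objective function is taken as the RMSE divided by the mean of the observations, which is a common convention but an assumption here, since the paper may normalize differently.

```python
import numpy as np

def rmse_nof(simulated, observed):
    """Root mean square error and normalized objective function.

    NOF is taken here as RMSE divided by the mean of the observations
    (an assumed, common normalization).
    """
    sim = np.asarray(simulated, dtype=float)
    obs = np.asarray(observed, dtype=float)
    rmse = np.sqrt(np.mean((sim - obs) ** 2))
    return rmse, rmse / obs.mean()

# Example: rmse_nof([10.1, 12.3, 14.0], [10.0, 12.0, 14.5])
```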

  14. Inrush Current Simulation of Power Transformer using Machine Parameters Estimated by Design Procedure of Winding Structure and Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Tokunaga, Yoshitaka

    This paper presents techniques for estimating the machine parameters of a power transformer using the transformer winding design procedure and a genetic algorithm with real coding. In particular, it is very difficult to obtain machine parameters for transformers in customers' facilities; using the estimation techniques, the machine parameters can be calculated from only the nameplate data of these transformers. Subsequently, an EMTP-ATP simulation of the inrush current was carried out using the machine parameters estimated by the techniques developed in this study, and the simulation results reproduced the measured waveforms.

  15. Developpement d'algorithmes paralleles pour la simulation d'ecoulements de fluides dans les milieux poreux

    NASA Astrophysics Data System (ADS)

    Vidal, David Jean-Emmanuel

    Two different parallel lattice Boltzmann (LBM) algorithms have been devised for the simulation of flow through complex porous media. They are based on memory-efficient LBM algorithms, namely the one-lattice and shift algorithms, combined with a vector data structure, even fluid-node vector partitioning domain decomposition and efficient data transfer layouts. The shift implementation also includes a single unit relaxation scheme that allows additional memory savings, but limits its validity to Newtonian fluids. Both algorithms provide high parallel performance by balancing the workload among the processors and reducing the amount of data that needs to be transferred, and they significantly reduce memory usage compared to previous parallel LBM codes presented in the literature. Theoretical parallel performance and memory usage models developed in this work show that they also offer good scalability; efficiencies as high as 79% are reported for simulations comprising several billion fluid nodes on 128 processors. The application of one of these algorithms to the simulation of flow through compressed packings made of highly polydisperse spheres has demonstrated the remarkable precision and efficiency of the proposed algorithm. As a result, a modified Carman-Kozeny correlation taking into account the compression level and the particle polydispersity has been formulated.

  16. SU-F-BRD-13: Quantum Annealing Applied to IMRT Beamlet Intensity Optimization

    SciTech Connect

    Nazareth, D; Spaans, J

    2014-06-15

    Purpose: We report on the first application of quantum annealing (QA) to the process of beamlet intensity optimization for IMRT. QA is a new technology, which employs novel hardware and software techniques to address various discrete optimization problems in many fields. Methods: We apply the D-Wave Inc. proprietary hardware, which natively exploits quantum mechanical effects for improved optimization. The new QA algorithm, running on this hardware, is most similar to simulated annealing, but relies on natural processes to directly minimize the free energy of a system. A simple quantum system is slowly evolved into a classical system, representing the objective function. To apply QA to IMRT-type optimization, two prostate cases were considered. A reduced number of beamlets were employed, due to the current QA hardware limitation of ∼500 binary variables. The beamlet dose matrices were computed using CERR, and an objective function was defined based on typical clinical constraints, including dose-volume objectives. The objective function was discretized, and the QA method was compared to two standard optimization methods, simulated annealing and Tabu search, run on a conventional computing cluster. Results: Based on several runs, the average final objective function value achieved by the QA was 16.9 for the first patient, compared with 10.0 for Tabu and 6.7 for the SA. For the second patient, the values were 70.7 for the QA, 120.0 for Tabu, and 22.9 for the SA. The QA algorithm required 27–38% of the time required by the other two methods. Conclusion: In terms of objective function value, the QA performance was similar to Tabu but less effective than the SA. However, its speed was 3–4 times faster than the other two methods. This initial experiment suggests that QA-based heuristics may offer significant speedup over conventional clinical optimization methods, as quantum annealing hardware scales to larger sizes.
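
    For reference, the classical-baseline side of such a comparison can be as simple as the generic annealing loop sketched below, applied to a discretized objective (for example, a vector of integer beamlet intensity levels with a user-supplied neighbour move). The cooling schedule, move set, and acceptance rule shown are generic textbook choices, not the clinical optimizer used in the study.

```python
import math
import random

def anneal(objective, x0, neighbour, t0=1.0, alpha=0.995, steps=20000):
    """Generic simulated-annealing minimizer over discrete states (a sketch).

    objective: cost of a state, e.g. a vector of discretized beamlet levels.
    neighbour: returns a randomly perturbed copy of a state.
    A geometric cooling schedule and Metropolis acceptance are used.
    """
    x, fx = x0, objective(x0)
    best, f_best = x, fx
    t = t0
    for _ in range(steps):
        y = neighbour(x)
        fy = objective(y)
        # always accept improvements; accept uphill moves with Boltzmann probability
        if fy <= fx or random.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
            if fx < f_best:
                best, f_best = x, fx
        t *= alpha
    return best, f_best
```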

  17. Evaluation and optimization of lidar temperature analysis algorithms using simulated data

    NASA Technical Reports Server (NTRS)

    Leblanc, Thierry; McDermid, I. Stuart; Hauchecorne, Alain; Keckhut, Philippe

    1998-01-01

    The middle atmosphere (20 to 90 km altitude) has received increasing interest from the scientific community during the last decades, especially since such problems as polar ozone depletion and climatic change have become so important. Temperature profiles have been obtained in this region using a variety of satellite-, rocket-, and balloon-borne instruments as well as some ground-based systems. One of the more promising of these instruments, especially for long-term high resolution measurements, is the lidar. Measurements of laser radiation Rayleigh backscattered, or Raman scattered, by atmospheric air molecules can be used to determine the relative air density profile and subsequently the temperature profile if it is assumed that the atmosphere is in hydrostatic equilibrium and follows the ideal gas law. The high vertical and spatial resolution makes the lidar a well-adapted instrument for the study of many middle atmospheric processes and phenomena as well as for the evaluation and validation of temperature measurements from satellites, such as the Upper Atmosphere Research Satellite (UARS). In the Network for Detection of Stratospheric Change (NDSC), lidar is the core instrument for measuring middle atmosphere temperature profiles. Using the best lidar analysis algorithm possible is therefore of crucial importance. In this work, the JPL and CNRS/SA lidar analysis software were evaluated. The results of this evaluation allowed the programs to be corrected and optimized, and new production software versions were produced. First, a brief description of the lidar technique and the method used to simulate lidar raw-data profiles from a given temperature profile is presented. Evaluation and optimization of the JPL and CNRS/SA algorithms are then discussed.
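
    The retrieval step mentioned above (relative density to temperature under hydrostatic equilibrium and the ideal gas law) amounts to a downward integration from a seed temperature at the top of the profile. The sketch below shows that integration with a constant gravitational acceleration and trapezoidal layers; the seed value, constant g, and uniform-grid handling are simplifying assumptions rather than the JPL or CNRS/SA production code.

```python
import numpy as np

def temperature_from_relative_density(z, rho_rel, T_top,
                                      M=0.028964, R=8.314, g=9.80665):
    """Temperature profile from a relative density profile (a sketch).

    Assuming hydrostatic equilibrium (dP = -rho g dz) and the ideal gas law
    (P = rho R T / M), integrating downward from a seed temperature T_top at
    the highest altitude gives
        T(z_i) = [rho(z_top) T_top + (M/R) * integral_{z_i}^{z_top} rho g dz] / rho(z_i).
    z in metres (increasing), rho_rel in arbitrary units (only ratios enter);
    a constant g and trapezoidal layers are simplifying choices.
    """
    z = np.asarray(z, dtype=float)
    rho = np.asarray(rho_rel, dtype=float)
    T = np.empty_like(rho)
    T[-1] = T_top
    for i in range(len(z) - 2, -1, -1):
        # pressure increment across the layer [z_i, z_i+1], trapezoidal rule
        layer = 0.5 * (rho[i] + rho[i + 1]) * g * (z[i + 1] - z[i])
        T[i] = (rho[i + 1] * T[i + 1] + (M / R) * layer) / rho[i]
    return T
```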

  18. GENESIS: a hybrid-parallel and multi-scale molecular dynamics simulator with enhanced sampling algorithms for biomolecular and cellular simulations

    PubMed Central

    Jung, Jaewoon; Mori, Takaharu; Kobayashi, Chigusa; Matsunaga, Yasuhiro; Yoda, Takao; Feig, Michael; Sugita, Yuji

    2015-01-01

    GENESIS (Generalized-Ensemble Simulation System) is a new software package for molecular dynamics (MD) simulations of macromolecules. It has two MD simulators, called ATDYN and SPDYN. ATDYN is parallelized based on an atomic decomposition algorithm for the simulations of all-atom force-field models as well as coarse-grained Go-like models. SPDYN is highly parallelized based on a domain decomposition scheme, allowing large-scale MD simulations on supercomputers. Hybrid schemes combining OpenMP and MPI are used in both simulators to target modern multicore computer architectures. Key advantages of GENESIS are (1) the highly parallel performance of SPDYN for very large biological systems consisting of more than one million atoms and (2) the availability of various REMD algorithms (T-REMD, REUS, multi-dimensional REMD for both all-atom and Go-like models under the NVT, NPT, NPAT, and NPγT ensembles). The former is achieved by a combination of the midpoint cell method and the efficient three-dimensional Fast Fourier Transform algorithm, where the domain decomposition space is shared in real-space and reciprocal-space calculations. Other features in SPDYN, such as avoiding concurrent memory access, reducing communication times, and usage of parallel input/output files, also contribute to the performance. We show the REMD simulation results of a mixed (POPC/DMPC) lipid bilayer as a real application using GENESIS. GENESIS is released as free software under the GPLv2 licence and can be easily modified for the development of new algorithms and molecular models. WIREs Comput Mol Sci 2015, 5:310–323. doi: 10.1002/wcms.1220 PMID:26753008

  19. Blind decorrelation and deconvolution algorithm for multiple-input multiple-output system: II. Analysis and simulation

    NASA Astrophysics Data System (ADS)

    Chen, Da-Ching; Yu, Tommy; Yao, Kung; Pottie, Gregory J.

    1999-11-01

    For single-input multiple-output (SIMO) systems blind deconvolution based on second-order statistics has been shown promising given that the sources and channels meet certain assumptions. In our previous paper we extend the work to multiple-input multiple-output (MIMO) systems by introducing a blind deconvolution algorithm to remove all channel dispersion followed by a blind decorrelation algorithm to separate different sources from their instantaneous mixture. In this paper we first explore more details embedded in our algorithm. Then we present simulation results to show that our algorithm is applicable to MIMO systems excited by a broad class of signals such as speech, music and digitally modulated symbols.

  20. Monte Carlo simulation methods in moment-based scale-bridging algorithms for thermal radiative-transfer problems

    SciTech Connect

    Densmore, J.D.; Park, H.; Wollaber, A.B.; Rauenzahn, R.M.; Knoll, D.A.

    2015-03-01

    We present a moment-based acceleration algorithm applied to Monte Carlo simulation of thermal radiative-transfer problems. Our acceleration algorithm employs a continuum system of moments to accelerate convergence of stiff absorption–emission physics. The combination of energy-conserving tallies and the use of an asymptotic approximation in optically thick regions remedy the difficulties of local energy conservation and mitigation of statistical noise in such regions. We demonstrate the efficiency and accuracy of the developed method. We also compare directly to the standard linearization-based method of Fleck and Cummings [1]. A factor of 40 reduction in total computational time is achieved with the new algorithm for an equivalent (or more accurate) solution as compared with the Fleck–Cummings algorithm.

  1. Automated Algorithms for Quantum-Level Accuracy in Atomistic Simulations: LDRD Final Report.

    SciTech Connect

    Thompson, Aidan Patrick; Schultz, Peter Andrew; Crozier, Paul; Moore, Stan Gerald; Swiler, Laura Painton; Stephens, John Adam; Trott, Christian Robert; Foiles, Stephen Martin; Tucker, Garritt J.

    2014-09-01

    This report summarizes the result of LDRD project 12-0395, titled "Automated Algorithms for Quantum-level Accuracy in Atomistic Simulations." During the course of this LDRD, we have developed an interatomic potential for solids and liquids called Spectral Neighbor Analysis Potential (SNAP). The SNAP potential has a very general form and uses machine-learning techniques to reproduce the energies, forces, and stress tensors of a large set of small configurations of atoms, which are obtained using high-accuracy quantum electronic structure (QM) calculations. The local environment of each atom is characterized by a set of bispectrum components of the local neighbor density projected on to a basis of hyperspherical harmonics in four dimensions. The SNAP coefficients are determined using weighted least-squares linear regression against the full QM training set. This allows the SNAP potential to be fit in a robust, automated manner to large QM data sets using many bispectrum components. The calculation of the bispectrum components and the SNAP potential are implemented in the LAMMPS parallel molecular dynamics code. Global optimization methods in the DAKOTA software package are used to seek out good choices of hyperparameters that define the overall structure of the SNAP potential. FitSnap.py, a Python-based software package interfacing to both LAMMPS and DAKOTA, is used to formulate the linear regression problem, solve it, and analyze the accuracy of the resultant SNAP potential. We describe a SNAP potential for tantalum that accurately reproduces a variety of solid and liquid properties. Most significantly, in contrast to existing tantalum potentials, SNAP correctly predicts the Peierls barrier for screw dislocation motion. We also present results from SNAP potentials generated for indium phosphide (InP) and silica (SiO2). We describe efficient algorithms for calculating SNAP forces and energies in molecular dynamics simulations using massively parallel computers.
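
    The regression step at the heart of the fit is an ordinary weighted linear least-squares problem: rows of the design matrix hold descriptor values (e.g. bispectrum components) for each training observation, and per-observation weights set the relative importance of energies, forces, and stresses. The sketch below shows that generic step only; it is not the FitSnap.py workflow, which adds substantial bookkeeping around it.

```python
import numpy as np

def weighted_linear_fit(A, b, weights):
    """Weighted least squares: beta = argmin || W^(1/2) (A beta - b) ||^2.

    A: design matrix, one row per training observation (energy, force or
    stress component) and one column per descriptor term (e.g. bispectrum
    component).  b: target QM values.  weights: per-observation weights.
    """
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    w = np.sqrt(np.asarray(weights, dtype=float))
    # scale each row by sqrt(weight), then solve the ordinary least-squares problem
    beta, *_ = np.linalg.lstsq(A * w[:, None], b * w, rcond=None)
    return beta
```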

  2. Performance simulation of a combustion engine charged by a variable geometry turbocharger. I - Prerequirements, boundary conditions and model development. II - Simulation algorithm, computed results

    NASA Astrophysics Data System (ADS)

    Malobabic, M.; Buttschardt, W.; Rautenberg, M.

    The paper presents a theoretical derivation of the relationship between a variable geometry turbocharger and the combustion engine, using simplified boundary conditions and model restraints and taking into account the combustion process itself as well as the nonadiabatic operating conditions for the turbine and the compressor. The simulation algorithm is described, and the results computed using this algorithm are compared with measurements performed on a test engine in combination with a controllable turbocharger with adjustable turbine inlet guide vanes. In addition, the results of theoretical parameter studies are presented, which include the simulation of a given turbocharger with variable geometry in combination with different sized combustion engines and the simulation of different sized variable-geometry turbochargers in combination with a given combustion engine.

  3. Genetic algorithm for design and manufacture optimization based on numerical simulations applied to aeronautic composite parts

    NASA Astrophysics Data System (ADS)

    Mouton, S.; Ledoux, Y.; Teissandier, D.; Sébastian, P.

    2010-06-01

    A key challenge for the future is to reduce drastically the human impact on the environment. In the aeronautic field, this challenge aims at optimizing the design of the aircraft to decrease the global mass. This reduction leads to the optimization of every constitutive part of the plane. This operation is even more delicate when the material used is a composite material. In this case, it is necessary to find a compromise between the strength, the mass and the manufacturing cost of the component. Due to these different kinds of design constraints, it is necessary to assist the engineer with a decision support system to determine feasible solutions. In this paper, an approach is proposed based on the coupling of the different key characteristics of the design process and on the consideration of the failure risk of the component. The originality of this work is that the manufacturing deviations due to the RTM process are integrated in the simulation of the assembly process. Two kinds of deviations are identified: volume impregnation (injection phase of the RTM process) and geometrical deviations (curing and cooling phases). The quantification of these deviations and the related failure risk calculation is based on finite element simulations (Pam RTM® and Samcef® software). The use of a genetic algorithm allows the impact of the design choices, and their consequences on the failure risk of the component, to be estimated. The main focus of the paper is the optimization of tool design. In the framework of decision support systems, the failure risk calculation is used for making the comparison of possible industrialization alternatives. It is proposed to apply this method to a particular part of the airplane structure: a spar unit made of carbon fiber/epoxy composite.
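
    To make the optimization loop concrete, the sketch below shows a minimal real-coded genetic algorithm of the kind used for such tool-design searches: tournament selection, blend crossover, Gaussian mutation, and elitism over bounded design variables. The operators, rates, and minimization convention are illustrative assumptions and do not reproduce the specific algorithm coupled to the finite element simulations in the paper.

```python
import numpy as np

def real_coded_ga(cost, bounds, pop_size=30, generations=100,
                  crossover_rate=0.9, mutation_scale=0.1, seed=None):
    """Minimal real-coded genetic algorithm minimizing `cost` (a sketch).

    bounds: sequence of (low, high) per design variable (e.g. tool geometry
    parameters).  Binary tournament selection, blend crossover, Gaussian
    mutation and single-individual elitism are the assumed operators.
    """
    rng = np.random.default_rng(seed)
    bounds = np.asarray(bounds, dtype=float)
    lo, hi = bounds[:, 0], bounds[:, 1]
    pop = rng.uniform(lo, hi, size=(pop_size, len(bounds)))
    fit = np.array([cost(x) for x in pop])
    for _ in range(generations):
        children = []
        while len(children) < pop_size:
            # binary tournament selection of two parents
            i1, i2 = rng.integers(pop_size, size=2), rng.integers(pop_size, size=2)
            p1 = pop[i1[np.argmin(fit[i1])]]
            p2 = pop[i2[np.argmin(fit[i2])]]
            if rng.random() < crossover_rate:        # blend (arithmetic) crossover
                a = rng.random(len(bounds))
                child = a * p1 + (1.0 - a) * p2
            else:
                child = p1.copy()
            child = child + rng.normal(0.0, mutation_scale * (hi - lo))
            children.append(np.clip(child, lo, hi))
        new_pop = np.array(children)
        new_fit = np.array([cost(x) for x in new_pop])
        # elitism: carry over the best individual of the previous generation
        worst, best_prev = int(np.argmax(new_fit)), int(np.argmin(fit))
        new_pop[worst], new_fit[worst] = pop[best_prev], fit[best_prev]
        pop, fit = new_pop, new_fit
    best = int(np.argmin(fit))
    return pop[best], fit[best]
```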

  4. Genetic algorithm for design and manufacture optimization based on numerical simulations applied to aeronautic composite parts

    SciTech Connect

    Mouton, S.; Ledoux, Y.; Teissandier, D.; Sebastian, P.

    2010-06-15

    A key challenge for the future is to reduce drastically the human impact on the environment. In the aeronautic field, this challenge aims at optimizing the design of the aircraft to decrease the global mass. This reduction leads to the optimization of every constitutive part of the plane. This operation is even more delicate when the material used is a composite material. In this case, it is necessary to find a compromise between the strength, the mass and the manufacturing cost of the component. Due to these different kinds of design constraints, it is necessary to assist the engineer with a decision support system to determine feasible solutions. In this paper, an approach is proposed based on the coupling of the different key characteristics of the design process and on the consideration of the failure risk of the component. The originality of this work is that the manufacturing deviations due to the RTM process are integrated in the simulation of the assembly process. Two kinds of deviations are identified: volume impregnation (injection phase of the RTM process) and geometrical deviations (curing and cooling phases). The quantification of these deviations and the related failure risk calculation is based on finite element simulations (Pam RTM registered and Samcef registered software). The use of a genetic algorithm allows the impact of the design choices, and their consequences on the failure risk of the component, to be estimated. The main focus of the paper is the optimization of tool design. In the framework of decision support systems, the failure risk calculation is used for making the comparison of possible industrialization alternatives. It is proposed to apply this method to a particular part of the airplane structure: a spar unit made of carbon fiber/epoxy composite.

  5. Initial Evaluations of LoC Prediction Algorithms Using the NASA Vertical Motion Simulator

    NASA Technical Reports Server (NTRS)

    Krishnakumar, Kalmanje; Stepanyan, Vahram; Barlow, Jonathan; Hardy, Gordon; Dorais, Greg; Poolla, Chaitanya; Reardon, Scott; Soloway, Donald

    2014-01-01

    Flying near the edge of the safe operating envelope is an inherently unsafe proposition. Edge of the envelope here implies that small changes or disturbances in system state or system dynamics can take the system out of the safe envelope in a short time and could result in loss-of-control events. This study evaluated approaches to predicting loss-of-control safety margins as the aircraft gets closer to the edge of the safe operating envelope. The goal of the approach is to provide the pilot aural, visual, and tactile cues focused on maintaining the pilot's control action within predicted loss-of-control boundaries. Our predictive architecture combines quantitative loss-of-control boundaries, an adaptive prediction method to estimate, in real time, Markov model parameters and associated stability margins, and a real-time data-based algorithm for estimating predictive control margins. The combined architecture is applied to a nonlinear transport class aircraft. Evaluations of various feedback cues using both test and commercial pilots in the NASA Ames Vertical Motion Simulator (VMS) were conducted in the summer of 2013. The paper presents results of this evaluation focused on the effectiveness of these approaches and the cues in preventing the pilots from entering a loss-of-control event.

  6. Speed-up hyperspheres homotopic path tracking algorithm for PWL circuits simulations.

    PubMed

    Ramirez-Pinero, A; Vazquez-Leal, H; Jimenez-Fernandez, V M; Sedighi, H M; Rashidi, M M; Filobello-Nino, U; Castaneda-Sheissa, R; Huerta-Chua, J; Sarmiento-Reyes, L A; Laguna-Camacho, J R; Castro-Gonzalez, F

    2016-01-01

    In the present work, we introduce an improved version of the hyperspheres path tracking method adapted for piecewise linear (PWL) circuits. This enhanced version takes advantage of the PWL characteristics of the homotopic curve, achieving faster path tracking and improving the performance of the homotopy continuation method (HCM). Faster computing time allows the study of circuits of higher complexity; the proposed method also significantly decreases the probability of a divergence problem when using the Newton-Raphson method, because it is applied just twice per linear region on the homotopic path. Equilibrium equations of the studied circuits are obtained by applying modified nodal analysis; this allows an algorithm for nonlinear circuit analysis to be proposed. In addition, a starting-point criterion is proposed to obtain better performance of the HCM, and a technique for avoiding the reversion phenomenon is also proposed. To prove the efficiency of the path tracking method, several case studies with bipolar (BJT) and CMOS transistors are provided. Simulation results show that the proposed approach can be up to twelve times faster than the original path tracking method and also helps to avoid several reversion cases that appear when the original hyperspheres path tracking scheme is employed.

  7. Simulation of an algorithm for determining the reliability of unmanned ground vehicle networks

    NASA Astrophysics Data System (ADS)

    Singh, Harpreet; Dixit, Arati M.; Saab, Kassem; Gerhart, Grant R.

    2009-09-01

    There is increasing interest in armies of small unmanned robots taking part in defense operations, and it is considered important to predict the reliability of the group of robots taking part in different operations. A group of robots involves both coordination and collaboration. The robot operations are modeled as a network graph whose system reliability can be determined with the help of different techniques. Once a specified reliability is achieved, the commander controlling the operation can take appropriate action. This paper presents a simulation that can determine the system reliability of robotic systems involving collaboration and coordination. The procedure is based on binary decision diagrams and obtains a disjoint Boolean expression; it is applicable to any number of nodes and branches. For illustration purposes, the reliability of simple configurations such as series, parallel, series-parallel and non-series-parallel networks is computed. It is hoped that more work in this area will lead to the development of algorithms that will ultimately be used in a real-time environment.
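
    To make the reliability quantities concrete, the sketch below computes two-terminal network reliability: closed-form series and parallel combinations, plus a brute-force enumeration of component states with a connectivity check that also handles non-series-parallel topologies such as the bridge network. The paper instead derives a disjoint Boolean expression from a binary decision diagram; the enumeration here is only an illustrative baseline, and the example network and component reliabilities are assumptions.

```python
# Illustrative two-terminal network reliability. Enumeration of component
# states is a baseline, not the paper's BDD-based method; the example
# network and the 0.9 link reliabilities are assumptions.
from itertools import product

def series(ps):                      # all components must work
    out = 1.0
    for p in ps:
        out *= p
    return out

def parallel(ps):                    # at least one component must work
    out = 1.0
    for p in ps:
        out *= (1 - p)
    return 1 - out

def connected(edges, up, s, t):
    # Depth-first search over edges whose component is up.
    adj = {}
    for (u, v), ok in zip(edges, up):
        if ok:
            adj.setdefault(u, []).append(v)
            adj.setdefault(v, []).append(u)
    stack, seen = [s], {s}
    while stack:
        n = stack.pop()
        if n == t:
            return True
        for m in adj.get(n, []):
            if m not in seen:
                seen.add(m)
                stack.append(m)
    return False

def network_reliability(edges, probs, s, t):
    total = 0.0
    for up in product([True, False], repeat=len(edges)):
        pr = 1.0
        for ok, p in zip(up, probs):
            pr *= p if ok else (1 - p)
        if connected(edges, up, s, t):
            total += pr
    return total

# Bridge (non-series-parallel) network between nodes 1 and 4, all links 0.9 reliable.
edges = [(1, 2), (1, 3), (2, 3), (2, 4), (3, 4)]
print("series:", series([0.9, 0.9]), "parallel:", parallel([0.9, 0.9]))
print("bridge:", network_reliability(edges, [0.9] * 5, 1, 4))   # ~0.9785
```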

  8. Speed-up hyperspheres homotopic path tracking algorithm for PWL circuits simulations.

    PubMed

    Ramirez-Pinero, A; Vazquez-Leal, H; Jimenez-Fernandez, V M; Sedighi, H M; Rashidi, M M; Filobello-Nino, U; Castaneda-Sheissa, R; Huerta-Chua, J; Sarmiento-Reyes, L A; Laguna-Camacho, J R; Castro-Gonzalez, F

    2016-01-01

    In the present work, we introduce an improved version of the hyperspheres path tracking method adapted for piecewise linear (PWL) circuits. This enhanced version takes advantage of the PWL characteristics of the homotopic curve, achieving faster path tracking and improving the performance of the homotopy continuation method (HCM). The shorter computing time allows the study of circuits of higher complexity, and the proposed method also significantly decreases the probability of divergence of the Newton-Raphson method because it is applied just twice per linear region on the homotopic path. The equilibrium equations of the studied circuits are obtained by modified nodal analysis, which leads to an algorithm for nonlinear circuit analysis. Besides, a starting-point criterion is proposed to obtain better performance of the HCM, and a technique for avoiding the reversion phenomenon is also proposed. To prove the efficiency of the path tracking method, several case studies with bipolar (BJT) and CMOS transistors are provided. Simulation results show that the proposed approach can be up to twelve times faster than the original path tracking method and also helps to avoid several reversion cases that appear when the original hyperspheres path tracking scheme is employed. PMID:27386338

  9. Development and validation of a linear recursive "Order-N" algorithm for the simulation of flexible space manipulator dynamics

    NASA Astrophysics Data System (ADS)

    Van Woerkom, P. Th. L. M.; de Boer, A.

    1995-01-01

    Robotic manipulators designed to operate on board spacecraft and Space Stations are characterized by large spatial dimensions. The structural flexibility inherent in such manipulators introduces a noticeable and undesirable modification of the traditional rigid-body manipulator dynamics. As a result, the dynamics of the complete system, comprising a flexible spacecraft or Space Station as a manipulator base and an attached flexible manipulator, are also modified. Operational requirements related to high manoeuvre accuracy and modest manoeuvre duration create the need for careful modelling and simulation of the dynamics of such systems. The objective of this paper is to outline the development and validation of an advanced algorithm for the simulation of the dynamics of such flexible spacecraft/space manipulator systems. The requirements imposed during the development of the present prototype dynamics simulator led to the modification and implementation of an existing linear recursive algorithm ("Order-N" algorithm), which requires a computational effort proportional to the number of component bodies in the system. Starting with the Lagrange form of the d'Alembert principle, we first deduce a parametric form which is found to yield, amongst others, the basic forms of the Newton-Euler, the d'Alembert and the Gauss dynamics principles. It is then shown how the application of each of the latter three principles can be made to lead gracefully to the desired Order-N algorithm for the flexible multi-body system. The Order-N algorithm thus obtained and validated analytically forms the basis for the prototype simulator REALDYN, designed to permit numerical simulation of the algorithm on UNIX workstations. Verification, numerical integration and further validation tests have been carried out. Some of the results obtained during the validation exercises could not be explained readily, even in the case of simple multi-body systems. The use of test tools and physical

  10. Simulation of rat behavior by a reinforcement learning algorithm in consideration of appearance probabilities of reinforcement signals.

    PubMed

    Murakoshi, Kazushi; Noguchi, Takuya

    2005-04-01

    Brown and Wagner [Brown, R.T., Wagner, A.R., 1964. Resistance to punishment and extinction following training with shock or nonreinforcement. J. Exp. Psychol. 68, 503-507] investigated rat behaviors with the following features: (1) rats were exposed to reward and punishment at the same time, (2) the environment changed and the rats relearned, and (3) rats were stochastically exposed to reward and punishment. The results show that exposure to nonreinforcement produces resistance to the decremental effects on behavior after a stochastic reward schedule, and that exposure to both punishment and reinforcement produces resistance to the decremental effects on behavior after a stochastic punishment schedule. This paper aims to simulate these rat behaviors with a reinforcement learning algorithm that takes the appearance probabilities of reinforcement signals into account. Previous reinforcement learning algorithms were unable to simulate the behavior of feature (3). We improve these algorithms by controlling the learning parameters in consideration of the acquisition probabilities of the reinforcement signals. The proposed algorithm qualitatively reproduces the result of the animal experiment of Brown and Wagner. PMID:15740837
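
    A minimal sketch of the general idea, under stated assumptions: the decremental update that follows omission of a reinforcement signal is scaled by the estimated probability that the signal appears at all, which slows extinction after a partial (stochastic) schedule. The bandit-style task, the update rule and all constants below are illustrative and are not the algorithm of Murakoshi and Noguchi.

```python
# Illustrative sketch: scale the decrement after reward omission by the
# estimated appearance probability of the reward signal. Task and constants
# are assumptions.
import random

value = 0.0          # learned value of the response
alpha = 0.3          # base learning rate
reward_seen = 1      # counts used to estimate P(reward appears | response)
responses = 1

def trial(p_reward):
    """One trial of a partially reinforced response; returns the updated value."""
    global value, reward_seen, responses
    responses += 1
    if random.random() < p_reward:          # reinforcement signal appears
        reward_seen += 1
        value += alpha * (1.0 - value)
    else:                                    # omission: decrement scaled by appearance probability
        p_hat = reward_seen / responses
        value += alpha * p_hat * (0.0 - value)
    return value

random.seed(0)
for _ in range(200):                         # acquisition under a 50% reward schedule
    trial(0.5)
acquired = value
for _ in range(100):                         # extinction: reward never appears
    trial(0.0)
print("value after partial reinforcement: %.2f, after extinction: %.2f" % (acquired, value))
```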

  11. Simulation of rat behavior by a reinforcement learning algorithm in consideration of appearance probabilities of reinforcement signals.

    PubMed

    Murakoshi, Kazushi; Noguchi, Takuya

    2005-04-01

    Brown and Wagner [Brown, R.T., Wagner, A.R., 1964. Resistance to punishment and extinction following training with shock or nonreinforcement. J. Exp. Psychol. 68, 503-507] investigated rat behaviors with the following features: (1) rats were exposed to reward and punishment at the same time, (2) the environment changed and the rats relearned, and (3) rats were stochastically exposed to reward and punishment. The results show that exposure to nonreinforcement produces resistance to the decremental effects on behavior after a stochastic reward schedule, and that exposure to both punishment and reinforcement produces resistance to the decremental effects on behavior after a stochastic punishment schedule. This paper aims to simulate these rat behaviors with a reinforcement learning algorithm that takes the appearance probabilities of reinforcement signals into account. Previous reinforcement learning algorithms were unable to simulate the behavior of feature (3). We improve these algorithms by controlling the learning parameters in consideration of the acquisition probabilities of the reinforcement signals. The proposed algorithm qualitatively reproduces the result of the animal experiment of Brown and Wagner.

  12. Simulation of heat waves in climate models using large deviation algorithms

    NASA Astrophysics Data System (ADS)

    Ragone, Francesco; Bouchet, Freddy; Wouters, Jeroen

    2016-04-01

    One of the goals of climate science is to characterize the statistics of extreme, potentially dangerous events (e.g. exceptionally intense precipitation, wind gusts, heat waves) in the present and future climate. The study of extremes is however hindered both by a lack of past observational data for events with a return time larger than decades or centuries, and by the large computational cost required to perform a proper sampling of extreme statistics with state-of-the-art climate models. The study of the dynamics leading to extreme events is especially difficult, as it requires hundreds or thousands of realizations of the dynamical paths leading to similar extremes. We discuss here a new numerical algorithm, based on large deviation theory, that allows very rare events to be sampled efficiently in complex climate models. A large ensemble of realizations is run in parallel, and selection and cloning procedures are applied in order to oversample the trajectories leading to the extremes of interest. The statistics and characteristic dynamics of the extremes can then be computed on a much larger sample of events. This kind of importance sampling method belongs to a class of genetic algorithms that have been successfully applied in other scientific fields (statistical mechanics, complex biomolecular dynamics), decreasing by orders of magnitude the numerical cost required to sample extremes with respect to standard direct numerical sampling. We study the applicability of this method to the computation of the statistics of European surface temperatures with the Planet Simulator (Plasim), an intermediate complexity general circulation model of the atmosphere. We demonstrate the efficiency of the method by comparing its performance against standard approaches. Dynamical paths leading to heat waves are studied, clarifying the relation of Plasim heat waves to blocking events and the dynamics leading to these events. We then discuss the feasibility of this
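
    The selection-and-cloning step described above can be sketched on a toy system: an ensemble of AR(1) trajectories (a stand-in for a temperature anomaly) is periodically resampled with weights exp(k × integrated anomaly), which oversamples paths heading toward persistent warm extremes, and the running normalization gives an estimate of the scaled cumulant generating function. The toy dynamics, the bias parameter k and the block structure are illustrative assumptions, not the Plasim configuration.

```python
# Illustrative selection-and-cloning (genetic) rare-event sketch on a toy
# AR(1) process. Dynamics, k, and ensemble sizes are assumptions.
import numpy as np

rng = np.random.default_rng(1)
n_traj, n_blocks, block_len = 200, 20, 10
k = 0.5                              # biasing strength towards large anomalies
a, noise = 0.9, 0.3                  # AR(1) persistence and noise level

x = np.zeros(n_traj)                 # current anomaly of each trajectory
time_avg = np.zeros(n_traj)          # running time-average per surviving lineage
log_norm = 0.0                       # log-normalization, used for reweighting

for _ in range(n_blocks):
    block_sum = np.zeros(n_traj)
    for _ in range(block_len):
        x = a * x + noise * rng.standard_normal(n_traj)
        block_sum += x
    weights = np.exp(k * block_sum)
    log_norm += np.log(weights.mean())
    # Clone/kill: resample trajectory indices proportionally to the weights.
    idx = rng.choice(n_traj, size=n_traj, p=weights / weights.sum())
    x, time_avg, block_sum = x[idx], time_avg[idx], block_sum[idx]
    time_avg += block_sum / (n_blocks * block_len)

print("mean time-averaged anomaly in the biased ensemble: %.2f" % time_avg.mean())
print("scaled cumulant generating function estimate: %.3f" % (log_norm / (n_blocks * block_len)))
```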

  13. Finite-Difference Algorithm for Simulating 3D Electromagnetic Wavefields in Conductive Media

    NASA Astrophysics Data System (ADS)

    Aldridge, D. F.; Bartel, L. C.; Knox, H. A.

    2013-12-01

    Electromagnetic (EM) wavefields are routinely used in geophysical exploration for detection and characterization of subsurface geological formations of economic interest. Recorded EM signals depend strongly on the current conductivity of geologic media. Hence, they are particularly useful for inferring fluid content of saturated porous bodies. In order to enhance understanding of field-recorded data, we are developing a numerical algorithm for simulating three-dimensional (3D) EM wave propagation and diffusion in heterogeneous conductive materials. Maxwell's equations are combined with isotropic constitutive relations to obtain a set of six, coupled, first-order partial differential equations governing the electric and magnetic vectors. An advantage of this system is that it does not contain spatial derivatives of the three medium parameters electric permittivity, magnetic permeability, and current conductivity. Numerical solution methodology consists of explicit, time-domain finite-differencing on a 3D staggered rectangular grid. Temporal and spatial FD operators have order 2 and N, where N is user-selectable. We use an artificially-large electric permittivity to maximize the FD timestep, and thus reduce execution time. For the low frequencies typically used in geophysical exploration, accuracy is not unduly compromised. Grid boundary reflections are mitigated via convolutional perfectly matched layers (C-PMLs) imposed at the six grid flanks. A shared-memory-parallel code implementation via OpenMP directives enables rapid algorithm execution on a multi-thread computational platform. Good agreement is obtained in comparisons of numerically-generated data with reference solutions. EM wavefields are sourced via point current density and magnetic dipole vectors. Spatially-extended inductive sources (current carrying wire loops) are under development. We are particularly interested in accurate representation of high-conductivity sub-grid-scale features that are common
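
    As a one-dimensional analogue of the scheme described above, the sketch below performs explicit time-domain finite differencing on a staggered grid, with conductivity entering the electric-field update as a loss term. The grid, source, material values and boundary treatment are illustrative assumptions; the actual algorithm is three-dimensional, uses a user-selectable spatial order and C-PML absorbing boundaries, none of which is reproduced here.

```python
# Illustrative 1-D staggered-grid, explicit time-domain FD sketch for a
# conductive medium. All material and grid values are assumptions.
import numpy as np

nz, nt = 400, 800
eps0, mu0 = 8.854e-12, 4e-7 * np.pi
eps = np.full(nz, 4 * eps0)            # relative permittivity 4 everywhere
sigma = np.full(nz, 1e-4)              # S/m; region with index >= 200 is weakly conductive
sigma[:200] = 0.0
dz = 1.0
dt = 0.5 * dz * np.sqrt(mu0 * eps.min())   # half the 1-D Courant limit

Ex = np.zeros(nz)
Hy = np.zeros(nz - 1)                  # H lives between E nodes (staggered grid)

for n in range(nt):
    Hy += dt / (mu0 * dz) * (Ex[1:] - Ex[:-1])
    # Conductivity gives a semi-implicit damping factor in the E update.
    loss = sigma[1:-1] * dt / (2 * eps[1:-1])
    ca = (1 - loss) / (1 + loss)
    cb = (dt / (eps[1:-1] * dz)) / (1 + loss)
    Ex[1:-1] = ca * Ex[1:-1] + cb * (Hy[1:] - Hy[:-1])
    Ex[50] += np.exp(-((n - 60) / 20.0) ** 2)   # soft Gaussian-pulse source

print("peak |Ex| in the conductive region:", np.abs(Ex[200:]).max())
```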

  14. Accelerating dissipative particle dynamics simulations on GPUs: Algorithms, numerics and applications

    NASA Astrophysics Data System (ADS)

    Tang, Yu-Hang; Karniadakis, George Em

    2014-11-01

    We present a scalable dissipative particle dynamics simulation code, fully implemented on Graphics Processing Units (GPUs) using a hybrid CUDA/MPI programming model, which achieves 10-30 times speedup on a single GPU over 16 CPU cores and almost linear weak scaling across a thousand nodes. A unified framework is developed within which the efficient generation of the neighbor list and the maintenance of particle data locality are addressed. Our algorithm generates strictly ordered neighbor lists in parallel, while the construction is deterministic and makes no use of atomic operations or sorting. Such a neighbor list leads to optimal data loading efficiency when combined with a two-level particle reordering scheme. A faster in situ generation scheme for Gaussian random numbers is proposed using precomputed binary signatures. We designed custom transcendental functions that are fast and accurate for evaluating the pairwise interaction. The correctness and accuracy of the code are verified through a set of test cases simulating Poiseuille flow and spontaneous vesicle formation. Computer benchmarks demonstrate the speedup of our implementation over the CPU implementation as well as strong and weak scalability. A large-scale simulation of spontaneous vesicle formation consisting of 128 million particles was conducted to further illustrate the practicality of our code in real-world applications.
    Catalogue identifier: AETN_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AETN_v1_0.html
    Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland
    Licensing provisions: GNU General Public License, version 3
    No. of lines in distributed program, including test data, etc.: 1 602 716
    No. of bytes in distributed program, including test data, etc.: 26 489 166
    Distribution format: tar.gz
    Programming language: C/C++, CUDA C/C++, MPI
    Computer: Any computers having nVidia GPGPUs with compute capability 3.0
    Operating system: Linux
    Has the code been
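
    The central data structure mentioned above, the neighbor list, can be illustrated with a serial CPU sketch based on cell binning: particles are assigned to cells no smaller than the cutoff and each particle only examines the 27 surrounding cells. This is a plain NumPy illustration under assumed parameters; it reproduces neither the deterministic parallel construction nor the two-level reordering of the GPU code.

```python
# Illustrative serial cell-list neighbor-list construction in a periodic box.
# Box size, particle count and cutoff are assumptions.
import numpy as np

def neighbor_list(pos, box, rc):
    """Return a dict i -> sorted list of j (j != i) with |r_ij| < rc in a periodic box."""
    ncell = max(1, int(box // rc))
    cell_size = box / ncell
    cells = {}
    for i, p in enumerate(pos):
        idx = tuple((p // cell_size).astype(int) % ncell)
        cells.setdefault(idx, []).append(i)

    neighbors = {i: [] for i in range(len(pos))}
    offsets = [(dx, dy, dz) for dx in (-1, 0, 1) for dy in (-1, 0, 1) for dz in (-1, 0, 1)]
    for (cx, cy, cz), members in cells.items():
        for off in offsets:
            other = ((cx + off[0]) % ncell, (cy + off[1]) % ncell, (cz + off[2]) % ncell)
            for i in members:
                for j in cells.get(other, []):
                    if i == j:
                        continue
                    d = pos[i] - pos[j]
                    d -= box * np.round(d / box)        # minimum-image convention
                    if np.dot(d, d) < rc * rc:
                        neighbors[i].append(j)
    return {i: sorted(set(js)) for i, js in neighbors.items()}

rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 10.0, size=(500, 3))
nl = neighbor_list(pos, box=10.0, rc=1.0)
print("average neighbors per particle:", sum(len(v) for v in nl.values()) / len(nl))
```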

  15. A Physics-based Algorithm for Real-time Simulation of Electrosurgery Procedures in Minimally Invasive Surgery

    PubMed Central

    Lu, Zhonghua; Arikatla, Venkata S; Han, Zhongqing; Allen, Brian F.; De, Suvranu

    2014-01-01

    Background: High-frequency electricity is used in a majority of surgical interventions. However, modern computer-based training and simulation systems rely on physically unrealistic models that fail to capture the interplay of the electrical, mechanical and thermal properties of biological tissue. Methods: We present a real-time and physically realistic simulation of electrosurgery, by modeling the electrical, thermal and mechanical properties as three iteratively solved finite element models. To provide sub-finite-element graphical rendering of vaporized tissue, a dual-mesh dynamic triangulation algorithm based on isotherms is proposed. The block compressed row storage (BCRS) structure is shown to be critical in allowing computationally efficient changes in the tissue topology due to vaporization. Results: We have demonstrated our physics-based electrosurgery cutting algorithm through various examples. Our matrix manipulation algorithms designed for topology changes have shown low computational cost. Conclusions: Our simulator offers substantially greater physical fidelity compared to previous simulators that use simple geometry-based heat characterization. PMID:24357156
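
    A much simplified, one-dimensional sketch of the electro-thermal coupling loop is given below: a potential difference drives uniform Joule heating, an explicit conduction step updates temperature, and the vaporization front is located by interpolating the 100 °C isotherm. The material constants, geometry and single-physics simplifications are illustrative assumptions and are not the coupled finite element models of the paper.

```python
# Illustrative 1-D electro-thermal coupling with isotherm extraction.
# All constants are assumed, not taken from the paper.
import numpy as np

n, L = 101, 0.01                   # nodes, 1 cm of tissue
dx = L / (n - 1)
sigma, kappa, rho_c = 0.5, 0.5, 3.6e6   # S/m, W/(m K), J/(m^3 K)
V0 = 50.0                          # assumed electrode voltage
T = np.full(n, 37.0)               # start at body temperature
dt = 0.2 * rho_c * dx**2 / kappa   # well inside the explicit stability limit

# With uniform conductivity the 1-D potential between the electrode (V0 at x=0)
# and ground (0 at x=L) is linear, so Joule heating is uniform.
E = V0 / L
q = sigma * E**2                   # W/m^3

for _ in range(2000):              # explicit time stepping of heat conduction
    lap = np.zeros_like(T)
    lap[1:-1] = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
    T[1:-1] += dt * (kappa * lap[1:-1] + q) / rho_c
    T[0] = T[-1] = 37.0            # boundaries held at body temperature (a simplification)

# Locate the 100 C isotherm (vaporization front) by linear interpolation.
hot = np.where(T >= 100.0)[0]
if hot.size:
    i = hot[-1]
    frac = (100.0 - T[i]) / (T[i + 1] - T[i])
    print("vaporization front at x = %.4f m" % ((i + frac) * dx))
else:
    print("no tissue reached 100 C; max T = %.1f C" % T.max())
```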

  16. Forward-Masked Frequency Selectivity Improvements in Simulated and Actual Cochlear Implant Users Using a Preprocessing Algorithm

    PubMed Central

    Jürgens, Tim

    2016-01-01

    Frequency selectivity can be quantified using masking paradigms, such as psychophysical tuning curves (PTCs). Normal-hearing (NH) listeners show sharp PTCs that are level- and frequency-dependent, whereas frequency selectivity is strongly reduced in cochlear implant (CI) users. This study aims at (a) assessing individual shapes of PTCs in CI users, (b) comparing these shapes to those of simulated CI listeners (NH listeners hearing through a CI simulation), and (c) increasing the sharpness of PTCs using a biologically inspired dynamic compression algorithm, BioAid, which has been shown to sharpen the PTC shape in hearing-impaired listeners. A three-alternative-forced-choice forward-masking technique was used to assess PTCs in 8 CI users (with their own speech processor) and 11 NH listeners (with and without listening through a vocoder to simulate electric hearing). CI users showed flat PTCs with large interindividual variability in shape, whereas simulated CI listeners had PTCs of the same average flatness, but more homogeneous shapes across listeners. The algorithm BioAid was used to process the stimuli before entering the CI users’ speech processor or the vocoder simulation. This algorithm was able to partially restore frequency selectivity in both groups, particularly in seven out of eight CI users, meaning significantly sharper PTCs than in the unprocessed condition. The results indicate that algorithms can improve the large-scale sharpness of frequency selectivity in some CI users. This finding may be useful for the design of sound coding strategies particularly for situations in which high frequency selectivity is desired, such as for music perception. PMID:27604785
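
    The forward-masking measurement behind a PTC point can be sketched with an adaptive staircase: the masker level is raised after two correct probe detections and lowered after a miss, converging near the 70.7%-correct point. The simulated listener, step sizes and levels below are illustrative assumptions; the study itself used a three-alternative forced-choice procedure with real CI and NH listeners.

```python
# Illustrative 2-down/1-up staircase for one PTC point. The toy psychometric
# function and all levels are assumptions.
import random

def listener_detects_probe(masker_level_db, threshold_db=55.0, slope=0.5):
    """Toy psychometric function: probe detection becomes unlikely above threshold."""
    p_detect = 1.0 / (1.0 + pow(10, slope * (masker_level_db - threshold_db) / 10.0))
    return random.random() < p_detect

def staircase(start_db=30.0, step_db=4.0, reversals_needed=8):
    level, correct_in_row, direction, reversals = start_db, 0, +1, []
    while len(reversals) < reversals_needed:
        if listener_detects_probe(level):
            correct_in_row += 1
            if correct_in_row == 2:            # 2-down: raise the masker (harder)
                correct_in_row = 0
                if direction == -1:
                    reversals.append(level)
                direction = +1
                level += step_db
        else:                                   # 1-up: lower the masker (easier)
            correct_in_row = 0
            if direction == +1:
                reversals.append(level)
            direction = -1
            level -= step_db
        if len(reversals) == 4:
            step_db = 2.0                       # finer steps after initial reversals
    return sum(reversals[4:]) / len(reversals[4:])

random.seed(3)
print("masker level at threshold: %.1f dB" % staircase())
```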

  17. Forward-Masked Frequency Selectivity Improvements in Simulated and Actual Cochlear Implant Users Using a Preprocessing Algorithm.

    PubMed

    Langner, Florian; Jürgens, Tim

    2016-01-01

    Frequency selectivity can be quantified using masking paradigms, such as psychophysical tuning curves (PTCs). Normal-hearing (NH) listeners show sharp PTCs that are level- and frequency-dependent, whereas frequency selectivity is strongly reduced in cochlear implant (CI) users. This study aims at (a) assessing individual shapes of PTCs in CI users, (b) comparing these shapes to those of simulated CI listeners (NH listeners hearing through a CI simulation), and (c) increasing the sharpness of PTCs using a biologically inspired dynamic compression algorithm, BioAid, which has been shown to sharpen the PTC shape in hearing-impaired listeners. A three-alternative-forced-choice forward-masking technique was used to assess PTCs in 8 CI users (with their own speech processor) and 11 NH listeners (with and without listening through a vocoder to simulate electric hearing). CI users showed flat PTCs with large interindividual variability in shape, whereas simulated CI listeners had PTCs of the same average flatness, but more homogeneous shapes across listeners. The algorithm BioAid was used to process the stimuli before entering the CI users' speech processor or the vocoder simulation. This algorithm was able to partially restore frequency selectivity in both groups, particularly in seven out of eight CI users, meaning significantly sharper PTCs than in the unprocessed condition. The results indicate that algorithms can improve the large-scale sharpness of frequency selectivity in some CI users. This finding may be useful for the design of sound coding strategies particularly for situations in which high frequency selectivity is desired, such as for music perception. PMID:27604785

  18. A combined Event-Driven/Time-Driven molecular dynamics algorithm for the simulation of shock waves in rarefied gases

    NASA Astrophysics Data System (ADS)

    Valentini, Paolo; Schwartzentruber, Thomas E.

    2009-12-01

    A novel combined Event-Driven/Time-Driven (ED/TD) algorithm to speed up the Molecular Dynamics simulation of rarefied gases using realistic spherically symmetric soft potentials is presented. Due to the low density regime, the proposed method correctly identifies the time that must elapse before the next interaction occurs, similarly to Event-Driven Molecular Dynamics. However, each interaction is treated using Time-Driven Molecular Dynamics, thereby integrating Newton's Second Law using the sufficiently small time step needed to correctly resolve the atomic motion. Although infrequent, many-body interactions are also accounted for with a small approximation. The combined ED/TD method is shown to correctly reproduce translational relaxation in argon, described using the Lennard-Jones potential. For densities between ρ = 10⁻⁴ kg/m³ and ρ = 10⁻¹ kg/m³, comparisons with kinetic theory, Direct Simulation Monte Carlo, and pure Time-Driven Molecular Dynamics demonstrate that the ED/TD algorithm correctly reproduces the proper collision rates and the evolution toward thermal equilibrium. Finally, the combined ED/TD algorithm is applied to the simulation of a Mach 9 shock wave in rarefied argon. Density and temperature profiles as well as molecular velocity distributions accurately match DSMC results, and the shock thickness is within the experimental uncertainty. For the problems considered, the ED/TD algorithm ranged from several hundred to several thousand times faster than conventional Time-Driven MD. Moreover, the force calculation to integrate the molecular trajectories is found to contribute a negligible amount to the overall ED/TD simulation time. Therefore, this method could pave the way for the application of much more refined and expensive interatomic potentials, either classical or first-principles, to Molecular Dynamics simulations of shock waves in rarefied gases, involving vibrational nonequilibrium and chemical reactivity.
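
    The two ingredients named in the abstract can be sketched for a single pair of particles: the time at which their separation first reaches the interaction cutoff is computed analytically (event-driven free flight), after which the encounter is integrated with small velocity-Verlet steps (time-driven). Reduced Lennard-Jones units, the initial conditions and the time step are illustrative assumptions; the real algorithm handles many bodies and infrequent many-body encounters.

```python
# Illustrative two-particle Event-Driven/Time-Driven sketch with a Lennard-Jones
# pair (reduced units, unit masses). All numerical values are assumptions.
import numpy as np

rc = 2.5                                   # interaction cutoff

def lj_force(r_vec):
    """Lennard-Jones force on the particle at +r_vec relative to its partner."""
    r2 = np.dot(r_vec, r_vec)
    inv6 = (1.0 / r2) ** 3
    return 24.0 * (2.0 * inv6 * inv6 - inv6) / r2 * r_vec

def time_to_cutoff(dr, dv, rc):
    """Smallest positive t with |dr + dv t| = rc, or None if the pair never meets."""
    a, b, c = np.dot(dv, dv), 2.0 * np.dot(dr, dv), np.dot(dr, dr) - rc * rc
    disc = b * b - 4.0 * a * c
    if disc < 0.0 or a == 0.0:
        return None
    t = (-b - np.sqrt(disc)) / (2.0 * a)
    return t if t > 0.0 else None

# Two particles on a near-collision course, initially well outside the cutoff.
x1, x2 = np.array([0.0, 0.0, 0.0]), np.array([10.0, 0.8, 0.0])
v1, v2 = np.array([1.0, 0.0, 0.0]), np.array([-1.0, 0.0, 0.0])

t_event = time_to_cutoff(x2 - x1, v2 - v1, rc)     # event-driven part
assert t_event is not None, "pair never reaches the cutoff"
x1, x2 = x1 + v1 * t_event, x2 + v2 * t_event      # free flight up to the event

dt = 1e-3
while True:                                        # time-driven part inside the cutoff
    f = lj_force(x2 - x1)
    v1 -= 0.5 * dt * f
    v2 += 0.5 * dt * f
    x1 += dt * v1
    x2 += dt * v2
    f = lj_force(x2 - x1)
    v1 -= 0.5 * dt * f
    v2 += 0.5 * dt * f
    if np.linalg.norm(x2 - x1) > rc:
        break

print("post-collision relative speed:", np.linalg.norm(v2 - v1))   # close to 2 (nearly elastic)
```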

  19. Forward-Masked Frequency Selectivity Improvements in Simulated and Actual Cochlear Implant Users Using a Preprocessing Algorithm.

    PubMed

    Langner, Florian; Jürgens, Tim

    2016-09-07

    Frequency selectivity can be quantified using masking paradigms, such as psychophysical tuning curves (PTCs). Normal-hearing (NH) listeners show sharp PTCs that are level- and frequency-dependent, whereas frequency selectivity is strongly reduced in cochlear implant (CI) users. This study aims at (a) assessing individual shapes of PTCs in CI users, (b) comparing these shapes to those of simulated CI listeners (NH listeners hearing through a CI simulation), and (c) increasing the sharpness of PTCs using a biologically inspired dynamic compression algorithm, BioAid, which has been shown to sharpen the PTC shape in hearing-impaired listeners. A three-alternative-forced-choice forward-masking technique was used to assess PTCs in 8 CI users (with their own speech processor) and 11 NH listeners (with and without listening through a vocoder to simulate electric hearing). CI users showed flat PTCs with large interindividual variability in shape, whereas simulated CI listeners had PTCs of the same average flatness, but more homogeneous shapes across listeners. The algorithm BioAid was used to process the stimuli before entering the CI users' speech processor or the vocoder simulation. This algorithm was able to partially restore frequency selectivity in both groups, particularly in seven out of eight CI users, meaning significantly sharper PTCs than in the unprocessed condition. The results indicate that algorithms can improve the large-scale sharpness of frequency selectivity in some CI users. This finding may be useful for the design of sound coding strategies particularly for situations in which high frequency selectivity is desired, such as for music perception.

  20. A combined Event-Driven/Time-Driven molecular dynamics algorithm for the simulation of shock waves in rarefied gases

    SciTech Connect

    Valentini, Paolo Schwartzentruber, Thomas E.

    2009-12-10

    A novel combined Event-Driven/Time-Driven (ED/TD) algorithm to speed up the Molecular Dynamics simulation of rarefied gases using realistic spherically symmetric soft potentials is presented. Due to the low density regime, the proposed method correctly identifies the time that must elapse before the next interaction occurs, similarly to Event-Driven Molecular Dynamics. However, each interaction is treated using Time-Driven Molecular Dynamics, thereby integrating Newton's Second Law using the sufficiently small time step needed to correctly resolve the atomic motion. Although infrequent, many-body interactions are also accounted for with a small approximation. The combined ED/TD method is shown to correctly reproduce translational relaxation in argon, described using the Lennard-Jones potential. For densities between ρ = 10⁻⁴ kg/m³ and ρ = 10⁻¹ kg/m³, comparisons with kinetic theory, Direct Simulation Monte Carlo, and pure Time-Driven Molecular Dynamics demonstrate that the ED/TD algorithm correctly reproduces the proper collision rates and the evolution toward thermal equilibrium. Finally, the combined ED/TD algorithm is applied to the simulation of a Mach 9 shock wave in rarefied argon. Density and temperature profiles as well as molecular velocity distributions accurately match DSMC results, and the shock thickness is within the experimental uncertainty. For the problems considered, the ED/TD algorithm ranged from several hundred to several thousand times faster than conventional Time-Driven MD. Moreover, the force calculation to integrate the molecular trajectories is found to contribute a negligible amount to the overall ED/TD simulation time. Therefore, this method could pave the way for the application of much more refined and expensive interatomic potentials, either classical or first-principles, to Molecular Dynamics simulations of shock waves in rarefied gases, involving vibrational nonequilibrium and chemical reactivity.