Recursive Branching Simulated Annealing Algorithm
NASA Technical Reports Server (NTRS)
Bolcar, Matthew; Smith, J. Scott; Aronstein, David
2012-01-01
This innovation is a variation of a simulated-annealing optimization algorithm that uses a recursive-branching structure to parallelize the search of a parameter space for the globally optimal solution to an objective. The algorithm has been demonstrated to be more effective at searching a parameter space than traditional simulated-annealing methods for a particular problem of interest, and it can readily be applied to a wide variety of optimization problems, including those with a parameter space having both discrete-value parameters (combinatorial) and continuous-variable parameters. It can take the place of a conventional simulated-annealing, Monte Carlo, or random-walk algorithm. In a conventional simulated-annealing (SA) algorithm, a starting configuration is randomly selected within the parameter space. The algorithm randomly selects another configuration from the parameter space and evaluates the objective function for that configuration. If the objective function value is better than the previous value, the new configuration is adopted as the new point of interest in the parameter space. If the objective function value is worse than the previous value, the new configuration may be adopted, with a probability determined by a temperature parameter, used in analogy to annealing in metals. As the optimization continues, the region of the parameter space from which new configurations can be selected shrinks, and in conjunction with lowering the annealing temperature (and thus lowering the probability for adopting configurations in parameter space with worse objective functions), the algorithm can converge on the globally optimal configuration. The Recursive Branching Simulated Annealing (RBSA) algorithm shares some features with the SA algorithm, notably including the basic principles that a starting configuration is randomly selected from within the parameter space, the algorithm tests other configurations with the goal of finding the globally optimal
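The conventional SA loop described above can be summarized in a short sketch. This is a minimal illustration of the classical algorithm only, not the RBSA variant; the objective function, Gaussian neighborhood, and geometric cooling rate are assumptions chosen for the example.

```python
import math
import random

def simulated_annealing(objective, x0, t0=1.0, cooling=0.995, steps=10000, scale=0.1):
    """Minimal conventional SA sketch with Metropolis acceptance."""
    x, fx = x0, objective(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(steps):
        # Propose a new configuration near the current one.
        cand = [xi + random.gauss(0.0, scale) for xi in x]
        fcand = objective(cand)
        # Always accept improvements; accept worse moves with
        # probability exp(-dE / t), in analogy to annealing in metals.
        d = fcand - fx
        if d < 0 or random.random() < math.exp(-d / t):
            x, fx = cand, fcand
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling  # lower the temperature as the optimization continues
    return best, fbest

# Example: minimize a simple multimodal function of two variables.
f = lambda v: (v[0] ** 2 + v[1] ** 2) + 10 * math.sin(v[0]) * math.sin(v[1])
print(simulated_annealing(f, [5.0, -3.0]))
```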
Stochastic search in structural optimization - Genetic algorithms and simulated annealing
NASA Technical Reports Server (NTRS)
Hajela, Prabhat
1993-01-01
An account is given of illustrative applications of genetic algorithms and simulated annealing methods in structural optimization. The advantages of such stochastic search methods over traditional mathematical programming strategies are emphasized; it is noted that these methods offer a significantly higher probability of locating the global optimum in a multimodal design space. Both genetic search and simulated annealing can be used effectively in problems with a mix of continuous, discrete, and integer design variables.
List-Based Simulated Annealing Algorithm for Traveling Salesman Problem.
Zhan, Shi-hua; Lin, Juan; Zhang, Ze-jun; Zhong, Yi-wen
2016-01-01
The simulated annealing (SA) algorithm is a popular intelligent optimization algorithm that has been successfully applied in many fields. Parameter setting is a key factor in its performance, but it is also tedious work. To simplify parameter setting, we present a list-based simulated annealing (LBSA) algorithm to solve the traveling salesman problem (TSP). The LBSA algorithm uses a novel list-based cooling schedule to control the decrease of temperature. Specifically, a list of temperatures is created first, and then the maximum temperature in the list is used by the Metropolis acceptance criterion to decide whether to accept a candidate solution. The temperature list is adapted iteratively according to the topology of the solution space of the problem. The effectiveness and the parameter sensitivity of the list-based cooling schedule are illustrated through benchmark TSP instances. The LBSA algorithm, whose performance is robust over a wide range of parameter values, shows competitive performance compared with some other state-of-the-art algorithms. PMID:27034650
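The list-based cooling schedule lends itself to a compact sketch. The update rule below (replacing the list's maximum temperature with one derived from the accepted deterioration and the random draw) follows the description in the abstract, but the authors' exact bookkeeping may differ; treat it as an assumption-laden illustration.

```python
import heapq
import math
import random

def lbsa_accept(temps, delta):
    """One LBSA acceptance decision. `temps` holds negated temperatures so
    Python's min-heap acts as a max-heap; temps[0] is minus the list maximum."""
    if delta <= 0:
        return True  # improving candidates are always accepted
    t_max = -temps[0]
    r = max(random.random(), 1e-300)  # guard against log(0)
    if r < math.exp(-delta / t_max):
        # Adapt the list: replace the maximum with the (smaller) temperature
        # that would have accepted this exact deterioration at probability r.
        heapq.heapreplace(temps, -(-delta / math.log(r)))
        return True
    return False

# The list is seeded with high initial temperatures, e.g.:
temps = [-t for t in [120.0, 100.0, 95.0, 80.0]]
heapq.heapify(temps)
print(lbsa_accept(temps, 3.5), -temps[0])
```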
Evolutionary algorithms, simulated annealing, and Tabu search: a comparative study
NASA Astrophysics Data System (ADS)
Youssef, Habib; Sait, Sadiq M.; Adiche, Hakim
1998-10-01
Evolutionary algorithms, simulated annealing (SA), and Tabu Search (TS) are general iterative algorithms for combinatorial optimization. The term evolutionary algorithm is used to refer to any probabilistic algorithm whose design is inspired by evolutionary mechanisms found in biological species. The most widely known algorithms of this category are Genetic Algorithms (GA). GA, SA, and TS have been found to be very effective and robust in solving numerous problems from a wide range of application domains. Furthermore, they are even suitable for ill-posed problems where some of the parameters are not known beforehand. These properties are lacking in all traditional optimization techniques. In this paper we perform a comparative study among GA, SA, and TS. These algorithms have many similarities, but they also possess distinctive features, mainly in their strategies for searching the solution state space. The three heuristics are applied to the same optimization problem and compared with respect to (1) the quality of the best solution identified by each heuristic, (2) the progress of the search from initial solution(s) until stopping criteria are met, (3) the progress of the cost of the best solution as a function of time, and (4) the number of solutions found at successive intervals of the cost function. The benchmark problem is the floorplanning of very large scale integrated circuits, a hard multi-criteria optimization problem. Fuzzy logic is used to combine all objective criteria into a single fuzzy evaluation function, which is then used to rate competing solutions.
Application of Simulated Annealing and Related Algorithms to TWTA Design
NASA Technical Reports Server (NTRS)
Radke, Eric M.
2004-01-01
Simulated Annealing (SA) is a stochastic optimization algorithm used to search for global minima in complex design surfaces where exhaustive searches are not computationally feasible. The algorithm is derived by simulating the annealing process, whereby a solid is heated to a liquid state and then cooled slowly to reach thermodynamic equilibrium at each temperature. The idea is that atoms in the solid continually bond and re-bond at various quantum energy levels, and with sufficient cooling time they will rearrange at the minimum energy state to form a perfect crystal. The distribution of energy levels is given by the Boltzmann distribution: as temperature drops, the probability of the presence of high-energy bonds decreases. In searching for an optimal design, local minima and discontinuities are often present in a design surface. SA presents a distinct advantage over other optimization algorithms in its ability to escape from these local minima. Just as high-energy atomic configurations are visited in the actual annealing process in order to eventually reach the minimum energy state, in SA highly non-optimal configurations are visited in order to find otherwise inaccessible global minima. The SA algorithm produces a Markov chain of points in the design space at each temperature, with a monotonically decreasing temperature. The search starts from a random point, and the objective function is evaluated at that point. A stochastic perturbation is then made to the parameters of the point to arrive at a proposed new point in the design space, at which the objective function is evaluated as well. If the change in objective function values ΔE is negative, the proposed new point is accepted. If ΔE is positive, the proposed new point is accepted according to the Metropolis criterion: p(ΔE) = exp(-ΔE/T), where T is the temperature for the current Markov chain. The process then repeats for the remainder of the Markov chain, after which the temperature is
Simulated annealing versus quantum annealing
NASA Astrophysics Data System (ADS)
Troyer, Matthias
Based on simulated classical annealing and simulated quantum annealing using quantum Monte Carlo (QMC) simulations, I will explore the question of where physical or simulated quantum annealers may outperform classical optimization algorithms. Although the stochastic dynamics of QMC simulations is not the same as the unitary dynamics of a quantum system, I will first show that for the problem of quantum tunneling between two local minima, both QMC simulations and a physical system exhibit the same scaling of tunneling times with barrier height. The scaling in both cases is O(1/Δ²), where Δ is the tunneling splitting. An important consequence is that QMC simulations can be used to predict the performance of a quantum annealer for tunneling through a barrier. Furthermore, by using open instead of periodic boundary conditions in imaginary time, equivalent to a projector QMC algorithm, one obtains a quadratic speedup for QMC, achieving scaling linear in 1/Δ. I will then address the apparent contradiction between experiments on a D-Wave 2 system that failed to see evidence of quantum speedup and previous QMC results that indicated an advantage of quantum annealing over classical annealing for spin glasses. We find that this contradiction is resolved by taking the continuous-time limit in the QMC simulations, which then agree with the experimentally observed behavior and show no speedup for 2D spin glasses. However, QMC simulations with large time steps gain a further advantage: they "cheat" by ignoring what happens during a (large) time step, and can thus outperform both simulated quantum annealers and classical annealers. I will then address the question of how to optimally run a simulated or physical quantum annealer. Investigating the behavior of the tails of the distribution of runtimes for very hard instances, we find that adiabatically slow annealing is far from optimal. On the contrary, many repeated, relatively fast annealing runs can be orders of magnitude faster for
Sheng, Zheng; Wang, Jun; Zhou, Shudao; Zhou, Bihua
2014-03-01
This paper introduces a novel hybrid optimization algorithm to estimate the parameters of chaotic systems. To address the weaknesses of the traditional cuckoo search algorithm, an adaptive cuckoo search with simulated annealing algorithm is presented, which incorporates an adaptive parameter-adjusting operation and a simulated annealing operation into the cuckoo search algorithm. Normally, the parameters of the cuckoo search algorithm are kept constant, which may decrease the efficiency of the algorithm. To balance and enhance the accuracy and convergence rate of the cuckoo search algorithm, the adaptive operation is introduced to tune the parameters properly. In addition, the local search capability of the cuckoo search algorithm is relatively weak, which may degrade the quality of the optimization, so the simulated annealing operation is merged into the cuckoo search algorithm to enhance local search ability and improve the accuracy and reliability of the results. The functionality of the proposed hybrid algorithm is investigated through the Lorenz chaotic system under noiseless and noisy conditions. The numerical results demonstrate that the method can estimate parameters efficiently and accurately in both conditions. Finally, the results are compared with the traditional cuckoo search algorithm, a genetic algorithm, and a particle swarm optimization algorithm. Simulation results demonstrate the effectiveness and superior performance of the proposed algorithm. PMID:24697395
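A rough sketch of how a simulated annealing operation can be merged into one cuckoo-search iteration is shown below. For brevity, a Gaussian step stands in for the usual Lévy flight, and the parameter values (step size, abandonment fraction pa, temperature t) are illustrative rather than the paper's.

```python
import math
import random

def hybrid_cuckoo_sa_step(nests, fitness, t, step=0.1, pa=0.25):
    """One iteration: cuckoo moves with SA-style acceptance, then abandonment."""
    n, dim = len(nests), len(nests[0])
    for i in range(n):
        # Generate a cuckoo near nest i (Gaussian step stands in for a Levy flight).
        cand = [x + step * random.gauss(0, 1) for x in nests[i]]
        d = fitness(cand) - fitness(nests[i])
        # SA operation: accept worse cuckoos with temperature-dependent
        # probability, strengthening the local search around each nest.
        if d < 0 or random.random() < math.exp(-d / t):
            nests[i] = cand
    # Abandon a fraction pa of the worst nests, rebuilding them at random.
    nests.sort(key=fitness)
    for i in range(int((1 - pa) * n), n):
        nests[i] = [random.uniform(-5, 5) for _ in range(dim)]
    return nests
```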
Aubry, Jean-Francois; Beaulieu, Frederic; Sevigny, Caroline; Beaulieu, Luc; Tremblay, Daniel
2006-12-15
Inverse planning in external beam radiotherapy often requires a scalar objective function that incorporates importance factors to mimic the planner's preferences between conflicting objectives. Defining those importance factors is not straightforward and frequently leads to an iterative process in which the importance factors become variables of the optimization problem. In order to avoid this drawback of inverse planning, optimization using algorithms more suited to multiobjective optimization, such as evolutionary algorithms, has been suggested. However, much inverse planning software, including one package based on simulated annealing developed at our institution, does not include multiobjective-oriented algorithms. This work investigates the performance of a modified simulated annealing algorithm used to drive aperture-based intensity-modulated radiotherapy inverse planning software in a multiobjective optimization framework. For a few test cases involving gastric cancer patients, the new algorithm increases optimization speed by a little more than a factor of 2 over a conventional simulated annealing algorithm, while giving a close approximation of the solutions produced by standard simulated annealing. A simple graphical user interface designed to facilitate the decision-making process that follows an optimization is also presented.
Registration of range data using a hybrid simulated annealing and iterative closest point algorithm
LUCK,JASON; LITTLE,CHARLES Q.; HOFF,WILLIAM
2000-04-17
The need to register data is abundant in applications such as world modeling, part inspection and manufacturing, object recognition, pose estimation, robotic navigation, and reverse engineering. Registration occurs by aligning the regions that are common to multiple images. The largest difficulty in performing this registration is dealing with outliers and local minima while remaining efficient. A commonly used technique, iterative closest point, is efficient but is unable to deal with outliers or avoid local minima. Another commonly used optimization algorithm, simulated annealing, is effective at dealing with local minima but is very slow. Therefore, the algorithm developed in this paper is a hybrid that combines the speed of iterative closest point with the robustness of simulated annealing. Additionally, a robust error function is incorporated to deal with outliers. This algorithm is incorporated into a complete modeling system that inputs two sets of range data, registers the sets, and outputs a composite model.
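A compact sketch of the hybrid idea, under stated assumptions: simulated annealing proposes 2D rigid-transform perturbations, a Huber-robust nearest-neighbor cost guards against outliers, and the point where an ICP refinement would slot in is marked. The Huber threshold k and the perturbation scales are illustrative, not the paper's values.

```python
import numpy as np
from scipy.spatial import cKDTree

def robust_cost(params, src, tree, k=1.0):
    """Huber-robust mean nearest-neighbor distance for a 2D rigid transform."""
    th, tx, ty = params
    R = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
    moved = src @ R.T + np.array([tx, ty])
    d, _ = tree.query(moved)  # distance to the closest point in the target set
    return np.mean(np.where(d <= k, 0.5 * d ** 2, k * (d - 0.5 * k)))

def sa_register(src, dst, t0=1.0, cooling=0.97, iters=500,
                rng=np.random.default_rng(0)):
    """SA over the pose (theta, tx, ty); robust cost handles outliers."""
    tree = cKDTree(dst)
    x = np.zeros(3)
    fx = robust_cost(x, src, tree)
    t = t0
    for _ in range(iters):
        cand = x + rng.normal(0, [0.05, 0.1, 0.1])
        fc = robust_cost(cand, src, tree)
        if fc < fx or rng.random() < np.exp((fx - fc) / t):
            x, fx = cand, fc  # an ICP refinement step could be run here
        t *= cooling
    return x, fx
```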
NASA Astrophysics Data System (ADS)
Zhu, Jiulong; Wang, Shijun
Presently, water resources in most watersheds in China are allocated by administrative directive. This allocation method has many disadvantages and hampers the instructive effect of market mechanisms on water allocation. The paper studies the South-to-North Water Transfer Project and discusses water allocation for the node lakes along the Project. First, it advances four assumptions. Second, it analyzes the constraint conditions on water allocation given the present state of water allocation in China. Third, it establishes a goal model of water allocation and sets up a systematic model from the perspective of the comprehensive benefits of water utilization and the benefits of the node lakes. Fourth, it discusses the calculation method of the model by means of a Simulated Annealing Hybrid Genetic Algorithm (SHGA). Finally, it validates the rationality and validity of the model through simulation testing.
Experiences with serial and parallel algorithms for channel routing using simulated annealing
NASA Technical Reports Server (NTRS)
Brouwer, Randall Jay
1988-01-01
Two algorithms for channel routing using simulated annealing are presented. Simulated annealing is an optimization methodology which allows the solution process to back out of local minima that may be entered through inappropriate selections. By properly controlling the annealing process, it is very likely that the optimal solution to an NP-complete problem such as channel routing may be found. The algorithm presented here places very relaxed restrictions on the types of allowable transformations, including overlapping nets. By relaxing that restriction and controlling overlap situations with an appropriate cost function, the algorithm becomes very flexible and can be applied to many extensions of channel routing. The selection of transformations uses a number of heuristics while still retaining the pseudorandom nature of simulated annealing. The algorithm was implemented as a serial program for a workstation and as a parallel program designed for a hypercube computer. The details of the serial implementation are presented, including many of the heuristics used and some of the resulting solutions.
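As an illustration of such a cost function, a toy version that tolerates but penalizes overlapping nets might look as follows; the net representation and the penalty weight are assumptions for this sketch, not the paper's formulation.

```python
def routing_cost(nets, overlap_penalty=10.0):
    """Toy channel-routing cost. Each net is (track, left, right); horizontal
    overlaps on a shared track incur a penalty instead of being forbidden,
    which keeps the allowable transformations flexible."""
    tracks = max(n[0] for n in nets) + 1  # channel-width term
    overlap = 0
    for i, (ti, li, ri) in enumerate(nets):
        for tj, lj, rj in nets[i + 1:]:
            if ti == tj and li <= rj and lj <= ri:
                overlap += min(ri, rj) - max(li, lj) + 1
    return tracks + overlap_penalty * overlap

# Two nets overlapping on track 0 cost more than a conflict-free layout.
print(routing_cost([(0, 0, 4), (0, 3, 7)]), routing_cost([(0, 0, 4), (1, 3, 7)]))
```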
A parallel simulated annealing algorithm for standard cell placement on a hypercube computer
NASA Technical Reports Server (NTRS)
Jones, Mark Howard
1987-01-01
A parallel version of a simulated annealing algorithm is presented which is targeted to run on a hypercube computer. A strategy for mapping the cells in a two dimensional area of a chip onto processors in an n-dimensional hypercube is proposed such that both small and large distance moves can be applied. Two types of moves are allowed: cell exchanges and cell displacements. The computation of the cost function in parallel among all the processors in the hypercube is described along with a distributed data structure that needs to be stored in the hypercube to support parallel cost evaluation. A novel tree broadcasting strategy is used extensively in the algorithm for updating cell locations in the parallel environment. Studies on the performance of the algorithm on example industrial circuits show that it is faster and gives better final placement results than the uniprocessor simulated annealing algorithms. An improved uniprocessor algorithm is proposed which is based on the improved results obtained from parallelization of the simulated annealing algorithm.
A Simulated Annealing Algorithm for the Optimization of Multistage Depressed Collector Efficiency
NASA Technical Reports Server (NTRS)
Vaden, Karl R.; Wilson, Jeffrey D.; Bulson, Brian A.
2002-01-01
The microwave traveling wave tube amplifier (TWTA) is widely used as a high-power transmitting source for space and airborne communications. One critical factor in designing a TWTA is the overall efficiency. However, overall efficiency is highly dependent upon collector efficiency; so collector design is critical to the performance of a TWTA. Therefore, NASA Glenn Research Center has developed an optimization algorithm based on Simulated Annealing to quickly design highly efficient multi-stage depressed collectors (MDC).
NASA Astrophysics Data System (ADS)
Pereira, Ana I.; Lima, José; Costa, Paulo
2013-10-01
There are several approaches to humanoid robot gait planning. This problem presents a large number of unknown parameters that must be determined to make the humanoid robot walk. Optimization in simulation models can be used to find the gait based on several criteria such as energy minimization, acceleration, and step length, among others. This paper presents a comparison between two optimization methods, Stretched Simulated Annealing and the Genetic Algorithm, running on an accurate and stable simulation model. Final results show the comparative study and demonstrate that optimization is a valid gait planning technique.
The Research on Web-Based Testing Environment Using Simulated Annealing Algorithm
2014-01-01
The computerized evaluation is now one of the most important methods to diagnose learning; with the application of artificial intelligence techniques in the field of evaluation, computerized adaptive testing has gradually become one of the most important evaluation methods. In this test, the computer dynamically updates the learner's ability level and selects tailored items from the item pool. To meet the needs of the test, the system must be implemented with relatively high efficiency. To solve this problem, we propose a novel method for a web-based testing environment based on the simulated annealing algorithm. In the development of the system, through a series of experiments, we compared the efficiency and efficacy of the simulated annealing method with those of other methods. The experimental results show that this method ensures choosing nearly optimal items from the item bank for learners, meets a variety of assessment needs, is reliable, and judges learners' ability validly. In addition, using the simulated annealing algorithm to manage the computational complexity of the system greatly improves the efficiency of item selection and yields near-optimal solutions. PMID:24959600
Zhao, Yi; Cao, Xiangyu; Gao, Jun; Sun, Yu; Yang, Huanhuan; Liu, Xiao; Zhou, Yulong; Han, Tong; Chen, Wei
2016-01-01
We propose a new strategy to design broadband and wide-angle diffusion metasurfaces. An anisotropic structure which has opposite phases under x- and y-polarized incidence is employed as the "0" and "1" elements, based on the concept of coding metamaterials. To obtain uniform backward scattering under normal incidence, the simulated annealing algorithm is utilized to calculate the optimal layout. The proposed method provides an efficient way to design a diffusion metasurface with a simple structure, which has been proved by both simulations and measurements. PMID:27034110
A permutation based simulated annealing algorithm to predict pseudoknotted RNA secondary structures.
Tsang, Herbert H; Wiese, Kay C
2015-01-01
Pseudoknots are RNA tertiary structures that perform essential biological functions. This paper discusses SARNA-Predict-pk, an RNA pseudoknotted secondary structure prediction algorithm based on Simulated Annealing (SA). The research presented here extends previous work on SARNA-Predict and further examines the effect of the new algorithm when prediction of RNA secondary structures with pseudoknots is included. The performance of SARNA-Predict-pk in terms of prediction accuracy is evaluated via comparison with several state-of-the-art prediction algorithms using 20 individual known structures from seven RNA classes. We measured the sensitivity and specificity of nine prediction algorithms. Three of these are dynamic programming algorithms: Pseudoknot (pknotsRE), NUPACK, and pknotsRG-mfe. One uses a statistical clustering approach (Sfold), and the other five are heuristic algorithms: SARNA-Predict-pk, ILM, STAR, IPknot, and HotKnots. The results presented in this paper demonstrate that SARNA-Predict-pk can outperform other state-of-the-art algorithms in terms of prediction accuracy. This supports the use of the proposed method for pseudoknotted RNA secondary structure prediction of other known structures. PMID:26558299
Nascimento, Moysés; Cruz, Cosme Damião; Peternelli, Luiz Alexandre; Campana, Ana Carolina Mota
2010-04-01
The efficiency of simulated annealing algorithms and rapid chain delineation in establishing the best linkage order when constructing genetic maps was evaluated. Linkage refers to the phenomenon by which two or more genes, or molecular markers, can be present in the same chromosome or linkage group. In order to evaluate the capacity of the algorithms, four F2 co-dominant populations of sizes 50, 100, 200, and 1000 were simulated. For each population, a genome with four linkage groups (100 cM) was generated. The linkage groups possessed 51, 21, 11, and 6 marks, respectively, with corresponding distances of 2, 5, 10, and 20 cM between adjacent marks, thereby producing various degrees of saturation. For highly saturated groups, with an adjacent distance between marks of 2 cM and a greater number of marks (51), the method based upon stochastic simulation by simulated annealing presented orders with distances equivalent to or lower than rapid chain delineation. Otherwise, the two methods were comparable, presenting the same SARF distance. PMID:21637501
Simulated annealing algorithm for solving chambering student-case assignment problem
NASA Astrophysics Data System (ADS)
Ghazali, Saadiah; Abdul-Rahman, Syariza
2015-12-01
The project assignment problem is a popular practical problem that appears in many settings nowadays. The challenge of solving it grows with the complexity of preferences, the existence of real-world constraints, and problem size. This study focuses on solving a chambering student-case assignment problem, which is classified as a project assignment problem, by using a simulated annealing algorithm. The project assignment problem is a hard combinatorial optimization problem, and solving it with a metaheuristic approach is advantageous because a good solution can be returned in a reasonable time. The problem of assigning chambering students to cases has never been addressed in the literature before. In this setting, law graduates must read in chambers before they are qualified to become legal counsel, so assigning chambering students to cases is critically needed, especially when many preferences are involved. Hence, this study presents a preliminary study of the proposed project assignment problem. The objective is to minimize the total completion time for all students in solving the given cases. The study employs a minimum-cost greedy heuristic to construct a feasible initial solution; the search then proceeds with a simulated annealing algorithm for further improvement of solution quality. The analysis of the obtained results shows that the proposed simulated annealing algorithm greatly improves the solution constructed by the minimum-cost greedy heuristic. Hence, this research demonstrates the advantages of solving the project assignment problem using metaheuristic techniques.
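A minimal sketch of the greedy-plus-SA pipeline follows, assuming a makespan-style objective (finish time of the busiest student) and unconstrained reassignment moves; the real problem adds preference constraints not modeled here.

```python
import math
import random

def greedy_then_sa(times, t0=10.0, cooling=0.995, steps=20000):
    """times[s][c]: time for student s to finish case c. Minimizes the makespan
    (completion time of the busiest student). A sketch, not the paper's code."""
    ns, nc = len(times), len(times[0])
    # Minimum-cost greedy construction: each case goes to the student whose
    # load grows the least by taking it.
    assign, load = [], [0.0] * ns
    for c in range(nc):
        s = min(range(ns), key=lambda s: load[s] + times[s][c])
        assign.append(s)
        load[s] += times[s][c]
    cur, t = max(load), t0
    # SA improvement: reassign random cases, with Metropolis acceptance.
    for _ in range(steps):
        c = random.randrange(nc)
        s_old, s_new = assign[c], random.randrange(ns)
        if s_new == s_old:
            continue
        load[s_old] -= times[s_old][c]
        load[s_new] += times[s_new][c]
        new = max(load)
        if new < cur or random.random() < math.exp((cur - new) / t):
            assign[c], cur = s_new, new      # keep the move
        else:
            load[s_old] += times[s_old][c]   # undo the move
            load[s_new] -= times[s_new][c]
        t *= cooling
    return assign, cur
```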
Marsh, Rebeccah E; Riauka, Terence A; McQuarrie, Steve A
2007-01-01
Increasingly, fractals are being incorporated into pharmacokinetic models to describe transport and chemical kinetic processes occurring in confined and heterogeneous spaces. However, fractal compartmental models lead to differential equations with power-law time-dependent kinetic rate coefficients that currently are not accommodated by common commercial software programs. This paper describes a parameter optimization method for fitting individual pharmacokinetic curves based on a simulated annealing (SA) algorithm, which always converged towards the global minimum and was independent of the initial parameter values and parameter bounds. In a comparison using a classical compartmental model, similar fits by the Gauss-Newton and Nelder-Mead simplex algorithms required stringent initial estimates and ranges for the model parameters. The SA algorithm is ideal for fitting a wide variety of pharmacokinetic models to clinical data, especially those for which there is weak prior knowledge of the parameter values, such as the fractal models. PMID:17706176
NASA Astrophysics Data System (ADS)
Jiang, Chunhua; Yang, Guobin; Zhu, Peng; Nishioka, Michi; Yokoyama, Tatsuhiro; Zhou, Chen; Song, Huan; Lan, Ting; Zhao, Zhengyu; Zhang, Yuannong
2016-05-01
This paper presents a new method to reconstruct the vertical electron density profile from vertical Total Electron Content (TEC) using the simulated annealing algorithm. The technique uses quasi-parabolic segments (QPS) to model the bottomside ionosphere. The initial parameters of the ionosphere model were determined from both the International Reference Ionosphere (IRI) (Bilitza et al., 2014) and vertical TEC (vTEC). The simulated annealing algorithm was then used to search for the best-fit parameters of the ionosphere model by comparison with the GPS-TEC. The performance and robustness of this technique were verified against ionosonde data. The critical frequency (foF2) and peak height (hmF2) of the F2 layer obtained from ionograms recorded at different locations and on different days were compared with those calculated by the proposed method. The analysis shows that the present method is promising for obtaining foF2 from vTEC; however, the accuracy of hmF2 needs to be improved in future work.
Prediction of Flood Warning in Taiwan Using Nonlinear SVM with Simulated Annealing Algorithm
NASA Astrophysics Data System (ADS)
Lee, C.
2013-12-01
Flooding is an important issue in Taiwan: the island's narrow, high topography makes many rivers steep, and tropical cyclones such as typhoons frequently cause them to flood. Prediction of river flow under extreme rainfall circumstances is therefore important for the government when announcing flood warnings. Every time a typhoon passes through Taiwan, floods occur along some rivers. In Taiwan, warnings are classified into three levels according to warning water levels. The purpose of this study is to predict the flood warning level from information on precipitation, rainfall duration, and riverbed slope. To classify the warning level from this information, a machine-learning model, a nonlinear support vector machine (SVM), is formulated. In addition, simulated annealing (SA), a probabilistic heuristic algorithm, is used to determine the optimal parameters of the SVM model. A case study of flood-prone rivers of different gradients in Taiwan is conducted. The contribution of this SVM model with simulated annealing is its ability to support efficient flood-warning announcements and to protect residents along the rivers from flood danger.
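A sketch of how SA can tune the SVM's parameters is given below, using scikit-learn's SVC with cross-validated accuracy as the objective. The synthetic data, the (C, gamma) search space, and the schedule values are all assumptions standing in for the study's hydrological data.

```python
import math
import random
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Stand-in data; the study uses precipitation, rainfall duration and
# riverbed slope as features, with three warning levels as classes.
X, y = make_classification(n_samples=300, n_features=3, n_informative=3,
                           n_redundant=0, n_classes=3, random_state=0)

def cv_accuracy(log_c, log_gamma):
    model = SVC(C=10 ** log_c, gamma=10 ** log_gamma, kernel="rbf")
    return cross_val_score(model, X, y, cv=5).mean()

# SA over (log10 C, log10 gamma), maximizing cross-validated accuracy.
state, score, t = (0.0, 0.0), cv_accuracy(0.0, 0.0), 1.0
for _ in range(100):
    cand = (state[0] + random.gauss(0, 0.5), state[1] + random.gauss(0, 0.5))
    s = cv_accuracy(*cand)
    if s > score or random.random() < math.exp((s - score) / t):
        state, score = cand, s
    t *= 0.95
print(state, score)
```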
Douglas, Julie A.; Sandefur, Conner I.
2010-01-01
In family-based genetic studies, it is often useful to identify a subset of unrelated individuals. When such studies are conducted in population isolates, however, most if not all individuals are often detectably related to each other. To identify a set of maximally unrelated (or equivalently, minimally related) individuals, we have implemented simulated annealing, a general-purpose algorithm for solving difficult combinatorial optimization problems. We illustrate our method on data from a genetic study in the Old Order Amish of Lancaster County, Pennsylvania, a population isolate derived from a modest number of founders. Given one or more pedigrees, our program automatically and rapidly extracts a fixed number of maximally unrelated individuals. PMID:18321883
[The utility boiler low NOx combustion optimization based on ANN and simulated annealing algorithm].
Zhou, Hao; Qian, Xinping; Zheng, Ligang; Weng, Anxin; Cen, Kefa
2003-11-01
With increasingly strict environmental protection requirements, more attention has been paid to low-NOx combustion optimization technology because it is cheap and easy to deploy. In this work, field experiments on the NOx emission characteristics of a 600 MW coal-fired boiler were carried out. On the basis of artificial neural network (ANN) modeling, the simulated annealing (SA) algorithm was employed to optimize the boiler combustion to achieve a low NOx emission concentration, and the corresponding combustion scheme was obtained. Two sets of SA parameters were tried in search of a better SA scheme; the results show that the parameters T0 = 50 K and alpha = 0.6 lead to a better optimization process. This work lays the foundation for online low-NOx combustion control technology for boilers. PMID:14768567
A memory structure adapted simulated annealing algorithm for a green vehicle routing problem.
Küçükoğlu, İlker; Ene, Seval; Aksoy, Aslı; Öztürk, Nursel
2015-03-01
Currently, reduction of carbon dioxide (CO2) emissions and fuel consumption has become a critical environmental problem and has attracted the attention of both academia and the industrial sector. Government regulations and customer demands are making environmental responsibility an increasingly important factor in overall supply chain operations. Within these operations, transportation has the most hazardous effects on the environment, i.e., CO2 emissions, fuel consumption, noise and toxic effects on the ecosystem. This study aims to construct vehicle routes with time windows that minimize the total fuel consumption and CO2 emissions. The green vehicle routing problem with time windows (G-VRPTW) is formulated using a mixed integer linear programming model. A memory structure adapted simulated annealing (MSA-SA) meta-heuristic algorithm is constructed due to the high complexity of the proposed problem and long solution times for practical applications. The proposed models are integrated with a fuel consumption and CO2 emissions calculation algorithm that considers the vehicle technical specifications, vehicle load, and transportation distance in a green supply chain environment. The proposed models are validated using well-known instances with different numbers of customers. The computational results indicate that the MSA-SA heuristic is capable of obtaining good G-VRPTW solutions within a reasonable amount of time by providing reductions in fuel consumption and CO2 emissions. PMID:25056743
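One way a memory structure can be adapted into SA is sketched below for a routing-style permutation: the best routes seen are remembered, and the search periodically jumps back to the best of them. This illustrates the general idea only; the paper's memory rules for the G-VRPTW may differ.

```python
import math
import random

def msa_sa(route, cost, t0=100.0, cooling=0.999, steps=50000, mem_size=10):
    """SA with a simple memory structure: the best solutions seen are kept,
    and the search periodically restarts from the best remembered route."""
    memory = [(cost(route), route)]
    cur, ccur, t = route, cost(route), t0
    for step in range(1, steps + 1):
        i, j = sorted(random.sample(range(len(cur)), 2))
        cand = cur[:i] + cur[i:j + 1][::-1] + cur[j + 1:]  # 2-opt style reversal
        d = cost(cand) - ccur
        if d < 0 or random.random() < math.exp(-d / t):
            cur, ccur = cand, ccur + d
            memory.append((ccur, cur))
            memory = sorted(memory)[:mem_size]  # keep only the best routes
        if step % 5000 == 0:
            ccur, cur = memory[0]               # jump back to the best route
        t *= cooling
    return memory[0]
```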
NASA Astrophysics Data System (ADS)
Xin-Ze, Lu; Gui-Fang, Shao; Liang-You, Xu; Tun-Dong, Liu; Yu-Hua, Wen
2016-05-01
Alloy nanoparticles exhibit higher catalytic activity than monometallic nanoparticles, and their stable structures are of importance to their applications. We employ the simulated annealing algorithm to systematically explore the stable structure and segregation behavior of tetrahexahedral Pt–Pd–Cu–Au quaternary alloy nanoparticles. Three alloy nanoparticles consisting of 443 atoms, 1417 atoms, and 3285 atoms are considered and compared. The preferred positions of atoms in the nanoparticles are analyzed. The simulation results reveal that Cu and Au atoms tend to occupy the surface, Pt atoms preferentially occupy the middle layers, and Pd atoms tend to segregate to the inner layers. Furthermore, Au atoms present stronger surface segregation than Cu atoms. This study provides a fundamental understanding of the structural features and segregation phenomena of multi-metallic nanoparticles. Project supported by the National Natural Science Foundation of China (Grant Nos. 51271156, 11474234, and 61403318) and the Natural Science Foundation of Fujian Province of China (Grant Nos. 2013J01255 and 2013J06002).
Combined Simulated Annealing and Genetic Algorithm Approach to Bus Network Design
NASA Astrophysics Data System (ADS)
Liu, Li; Olszewski, Piotr; Goh, Pong-Chai
A new method, a combined simulated annealing (SA) and genetic algorithm (GA) approach, is proposed to solve the problem of bus route design and frequency setting for a given road network with fixed bus stop locations and fixed travel demand. The method involves two steps: a set of candidate routes is generated first, and then the best subset of these routes is selected by the combined SA and GA procedure. SA is the main process searching for a better solution to minimize the total system cost, comprising user and operator costs. GA is used as a sub-process to generate new solutions. Bus demand assignment on two alternative paths is performed at the solution evaluation stage. The method was implemented on four theoretical grid networks of different sizes and a benchmark network. Several GA operators (crossover and mutation) were utilized and tested for their effectiveness. The results show that the proposed method can efficiently converge to the optimal solution on a small network, but computation time increases significantly with network size. The method can also be used for other transport operation management problems.
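A sketch of this division of labor, under stated assumptions: SA runs the outer Metropolis loop over a small pool of route-selection bitstrings, while GA crossover and mutation act as the sub-process that generates candidate solutions. The representation and parameters are illustrative, not the authors'.

```python
import math
import random

def ga_neighbor(a, b, p_mut=0.02):
    """GA sub-process: uniform crossover of two route-selection bitstrings,
    followed by bit-flip mutation."""
    child = [x if random.random() < 0.5 else y for x, y in zip(a, b)]
    return [1 - g if random.random() < p_mut else g for g in child]

def sa_with_ga_moves(pop, cost, t0=1.0, cooling=0.995, steps=5000):
    """SA is the main process; GA generates candidate route subsets."""
    best = min(pop, key=cost)
    t = t0
    for _ in range(steps):
        a, b = random.sample(pop, 2)
        cand = ga_neighbor(a, b)
        worst = max(pop, key=cost)
        d = cost(cand) - cost(worst)
        if d < 0 or random.random() < math.exp(-d / t):
            pop[pop.index(worst)] = cand  # Metropolis acceptance into the pool
            if cost(cand) < cost(best):
                best = cand
        t *= cooling
    return best
```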
NASA Astrophysics Data System (ADS)
Sulaiman, Salina; Bade, Abdullah; Lee, Rechard; Tanalol, Siti Hasnah
2014-07-01
The Mass Spring Model (MSM) is a highly efficient model in terms of calculation and ease of implementation. Mass, spring stiffness coefficient, and damping constant are its three major components. This paper focuses on identifying the spring stiffness and damping coefficients using an automated tuning method based on optimization, in order to generate a human liver model capable of responding quickly. To achieve this objective, two heuristic approaches, Simulated Annealing (SA) and the Genetic Algorithm (GA), are applied to the human liver model data set. The mechanical properties taken into consideration are anisotropy and viscoelasticity. Optimization results from SA and GA are then implemented in the MSM to model two human livers, each with its SA or GA construction parameters. These techniques are implemented with finite element method (FEM) construction parameters as the benchmark. Step-size responses of both models are obtained after the MSMs are solved using the fourth-order Runge-Kutta method (RK4), allowing comparison of their elasticity responses. Remodelling time using manual calculation was compared against the SA and GA heuristic optimization methods, showing that the automatically constructed model is more realistic in terms of real-time interaction response. Liver models generated using the SA and GA optimization techniques are compared with the liver model from manual calculation, showing that the reconstruction time required for 1000 repetitions of SA and GA is shorter than with the manual method. Meanwhile, comparison between the construction times of the SA and GA models indicates that the SA model is faster than GA by 0.110635 seconds per 1000 repetitions. Real-time interaction of mechanical properties depends on the rate of time and the speed of the remodelling process. Thus, SA and GA have proven suitable for enhancing the realism of simulated real-time interaction in liver remodelling.
The fast simulated annealing algorithm applied to the search problem in LEED
NASA Astrophysics Data System (ADS)
Nascimento, V. B.; de Carvalho, V. E.; de Castilho, C. M. C.; Costa, B. V.; Soares, E. A.
2001-07-01
In this work we present new results obtained from the application of the fast simulated annealing (FSA) algorithm to the surface structure determination of the Ag(1 1 0) and CdTe(1 1 0) systems. The influence of a control parameter, the "initial temperature", on the FSA search process was investigated. A scaling behaviour that measures the efficiency of a search method as a function of the number of parameters to be varied was obtained for the FSA algorithm and indicated a favourable linear scaling (∝N).
Simulation of Storm Occurrences Using Simulated Annealing.
NASA Astrophysics Data System (ADS)
Lokupitiya, Ravindra S.; Borgman, Leon E.; Anderson-Sprecher, Richard
2005-11-01
Modeling storm occurrences has become a vital part of hurricane prediction. In this paper, a method for simulating event occurrences using a simulated annealing algorithm is described. The method is illustrated using annual counts of hurricanes and of tropical storms in the Atlantic Ocean and Gulf of Mexico. Simulations closely match distributional properties, including possible correlations, in the historical data. For hurricanes, traditionally used Poisson and negative binomial processes also predict univariate properties well, but for tropical storms parametric methods are less successful. The authors determined that simulated annealing replicates properties of both series. Simulated annealing can be designed so that simulations mimic historical distributional properties to whatever degree is desired, including occurrence of extreme events and temporal patterning.
GenAnneal: Genetically modified Simulated Annealing
NASA Astrophysics Data System (ADS)
Tsoulos, Ioannis G.; Lagaris, Isaac E.
2006-05-01
A modification of the standard Simulated Annealing (SA) algorithm is presented for finding the global minimum of a continuous multidimensional, multimodal function. We report results of computational experiments with a set of test functions and we compare to methods of similar structure. The accompanying software accepts objective functions coded both in Fortran 77 and C++.
Program summary
Title of program: GenAnneal
Catalogue identifier: ADXI_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADXI_v1_0
Program available from: CPC Program Library, Queen's University of Belfast, N. Ireland
Computer for which the program is designed and others on which it has been tested: the tool is designed to be portable to all systems running the GNU C++ compiler
Installation: University of Ioannina, Greece, on Linux-based machines
Programming language used: GNU C++, GNU C, GNU Fortran 77
Memory required to execute with typical data: 200 KB
No. of bits in a word: 32
No. of processors used: 1
Has the code been vectorized or parallelized?: no
No. of bytes in distributed program, including test data, etc.: 84 885
No. of lines in distributed program, including test data, etc.: 14 896
Distribution format: tar.gz
Nature of physical problem: a multitude of problems in science and engineering are often reduced to minimizing a function of many variables. There are instances where a local optimum does not correspond to the desired physical solution and hence the search for a better solution is required. Local optimization techniques are frequently trapped in local minima. Global optimization is hence the appropriate tool. For example, when solving a non-linear system of equations via optimization, employing a "least squares" type of objective, one may encounter many local minima that do not correspond to solutions (i.e. they are far from zero).
Typical running time: depending on the objective function.
Method of solution: We modified the process of step selection that the traditional Simulated
Martin, Andre-Guy; Roy, Jean; Beaulieu, Luc; Pouliot, Jean; Harel, Francois; Vigneault, Eric. E-mail: Eric.Vigneault@chuq.qc.ca
2007-02-01
Purpose: To report outcomes and toxicity of the first Canadian permanent prostate implant program. Methods and Materials: 396 consecutive patients (Gleason score ≤6, initial prostate-specific antigen (PSA) ≤10, and stage T1-T2a disease) were implanted between June 1994 and December 2001. The median follow-up is 60 months (maximum, 136 months). All patients were planned with a fast-simulated-annealing inverse planning algorithm with high-activity seeds (>0.76 U). Acute and late toxicity is reported for the first 213 patients using a modified RTOG toxicity scale. The Kaplan-Meier biochemical failure-free survival (bFFS) is reported according to the ASTRO and Houston definitions. Results: The bFFS at 60 months was 88.5% (90.5%) according to the ASTRO (Houston) definition and 91.4% (94.6%) in the low-risk group (initial PSA ≤10, Gleason ≤6, and stage ≤T2a). Risk factors statistically associated with bFFS were: initial PSA >10, a Gleason score of 7-8, and stage T2b-T3. The mean D90 was 151 ± 36.1 Gy. The mean V100 was 85.4 ± 8.5%, with a mean V150 of 60.1 ± 12.3%. Overall, the implants were well tolerated. In the first 6 months, 31.5% of the patients were free of genitourinary symptoms (GUs) and 12.7% had Grade 3 GUs; 91.6% were free of gastrointestinal symptoms (GIs). After 6 months, 54.0% were GUs free and 1.4% had Grade 3 GUs; 95.8% were GIs free. Conclusion: Inverse planning with fast simulated annealing and high-activity seeds gives a 5-year bFFS comparable with the best published series, with a low toxicity profile.
Annealed Importance Sampling Reversible Jump MCMC algorithms
Karagiannis, Georgios; Andrieu, Christophe
2013-03-20
It will soon be 20 years since reversible jump Markov chain Monte Carlo (RJ-MCMC) algorithms were proposed. They have significantly extended the scope of Markov chain Monte Carlo simulation methods, offering the promise of being able to routinely tackle transdimensional sampling problems, as encountered in Bayesian model selection problems for example, in a principled and flexible fashion. Their practical efficient implementation, however, still remains a challenge. A particular difficulty encountered in practice is in the choice of the dimension matching variables (both their nature and their distribution) and the reversible transformations which allow one to define the one-to-one mappings underpinning the design of these algorithms. Indeed, even seemingly sensible choices can lead to algorithms with very poor performance. The focus of this paper is the development and performance evaluation of a method, annealed importance sampling RJ-MCMC (aisRJ), which addresses this problem by mitigating the sensitivity of RJ-MCMC algorithms to the aforementioned poor design. As we shall see, the algorithm can be understood as an "exact approximation" of an idealized MCMC algorithm that would sample from the model probabilities directly in a model selection set-up. Such an idealized algorithm may have good theoretical convergence properties, but typically cannot be implemented, and our algorithms can approximate the performance of such idealized algorithms to an arbitrary degree while not introducing any bias for any degree of approximation. Our approach combines the dimension matching ideas of RJ-MCMC with annealed importance sampling and its Markov chain Monte Carlo implementation. We illustrate the performance of the algorithm with numerical simulations which indicate that, although the approach may at first appear computationally involved, it is in fact competitive.
NASA Astrophysics Data System (ADS)
Rajan, C. Christober Asir
2010-10-01
The objective of this paper is to find a generation schedule that minimizes the total operating cost, subject to a variety of constraints. This amounts to finding the optimal generating-unit commitment in the power system for the next H hours. Genetic Algorithms (GAs) are general-purpose optimization techniques based on principles inspired by biological evolution, using metaphors of mechanisms such as natural selection, genetic recombination, and survival of the fittest. Here, the unit commitment schedule is coded as a string of symbols, and an initial population of parent solutions is generated at random. Each schedule is formed by committing all the units according to their initial status ("flat start"). The parents are obtained from a pre-defined set of solutions, i.e., each solution is adjusted to meet the requirements. Then, a random recommitment is carried out with respect to the units' minimum down times, and SA improves the status. A 66-bus utility power system with twelve generating units in India demonstrates the effectiveness of the proposed approach. Numerical results compare the cost solutions and computation time obtained using the Genetic Algorithm method with those of other conventional methods.
NASA Astrophysics Data System (ADS)
Deng, Shuang; Xiang, Wenting; Tian, Yangge
2009-10-01
Map coloring is a hard task even for experienced map experts. In GIS projects, maps usually need to be colored according to customer requirements, which makes the work more complex. With the development of GIS, more and more programmers who lack cartographic training join project teams, and their colored maps are less likely to meet customer requirements. Experience shows that customers with similar backgrounds usually have similar tastes in map coloring. We therefore developed a GIS color scheme decision-making system which can select the color schemes of similar customers from a case base for customers to choose from and adjust. The system mixes browser/server and client/server architectures; the client side uses JSP, enabling the system developers to remotely query the color-scheme cases on the database server and to communicate with customers. Unlike general case-based reasoning, even very similar customers may make different selections, so it is hard to provide a single "best" option. We therefore use the Simulated Annealing Algorithm (SAA) to arrange the order in which different color schemes are presented. Customers can also dynamically adjust certain feature colors based on an existing case. The result shows that the system can facilitate communication between designers and customers and improve the quality and efficiency of map coloring.
Chang, Yin-Jung; Chen, Yu-Ting
2011-07-01
Broadband omnidirectional antireflection (AR) coatings for solar cells are optimized using a simulated annealing (SA) algorithm incorporating the solar irradiance spectrum at Earth's surface (AM1.5 radiation). Material dispersion and reflections from the planar backside metal are considered in the rigorous electromagnetic calculations. Optimized AR coatings for bulk crystalline Si and thin-film CuIn1-xGaxSe2 (CIGS) solar cells are presented as two representative cases, and the effect of the solar spectrum on the AR coating designs is investigated. In general, the angle-averaged reflectance of a solar-spectrum-incorporated AR design is shown to be smaller and more uniform in the spectral range with relatively stronger solar irradiance. By incorporating the transparent conductive and buffer layers as part of the AR coating in CIGS solar cells (2-μm-thick CIGS layer), a single MgF2 layer could provide an average reflectance of 8.46% for wavelengths ranging from 350 nm to 1200 nm and incident angles from 0° to 80°. PMID:21747557
Simulated annealing based algorithm for identifying mutated driver pathways in cancer.
Li, Hai-Tao; Zhang, Yu-Lang; Zheng, Chun-Hou; Wang, Hong-Qiang
2014-01-01
With the development of next-generation DNA sequencing technologies, large-scale cancer genomics projects can be implemented to help researchers identify driver genes, driver mutations, and driver pathways that promote cancer proliferation in large numbers of cancer patients. Hence, one of the remaining challenges is to distinguish the functional mutations vital for cancer development and filter out the non-functional, random "passenger mutations." In this study, we introduce a modified method to solve the so-called maximum-weight submatrix problem, which is used to identify mutated driver pathways in cancer. The problem is based on two combinatorial properties, namely coverage and exclusivity. In particular, we enhance an integrative model which combines gene mutation and expression data. The experimental results on simulated data show that, compared with the other methods, our method is more efficient. Finally, we apply the proposed method to two real biological datasets. The results show that our proposed method is also applicable in real practice. PMID:24982873
ERIC Educational Resources Information Center
Ceulemans, Eva; Van Mechelen, Iven; Leenen, Iwin
2007-01-01
Hierarchical classes models are quasi-order retaining Boolean decomposition models for N-way N-mode binary data. To fit these models to data, rationally started alternating least squares (or, equivalently, alternating least absolute deviations) algorithms have been proposed. Extensive simulation studies showed that these algorithms succeed quite…
The annealing robust backpropagation (ARBP) learning algorithm.
Chuang, C C; Su, S F; Hsiao, C C
2000-01-01
Multilayer feedforward neural networks are often referred to as universal approximators. Nevertheless, if the training data are corrupted by large noise, such as outliers, traditional backpropagation learning schemes may not always come up with acceptable performance. Even though various robust learning algorithms have been proposed in the literature, those approaches still suffer from the initialization problem. In those robust learning algorithms, the so-called M-estimator is employed. For the M-estimation type of learning algorithms, the loss function is used to discriminate outliers from the majority by degrading the effects of those outliers in learning. However, the loss function used in those algorithms may not correctly discriminate against those outliers. In this paper, the annealing robust backpropagation learning algorithm (ARBP), which adopts the annealing concept into robust learning algorithms, is proposed to deal with the problem of modeling under the existence of outliers. The proposed algorithm has been employed in various examples, and the results all demonstrated its superiority over other robust learning algorithms in the presence of outliers. In the paper, not only is the annealing concept adopted into the robust learning algorithms, but the annealing schedule k/t was also found experimentally to achieve the best performance among other annealing schedules, where k is a constant and t is the epoch number. PMID:18249835
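As a rough illustration of the k/t idea, here is a minimal sketch that anneals the scale of a Cauchy-type M-estimator on a toy regression with injected outliers; the loss form, the constant k, and the learning rate are assumptions for illustration, not the paper's exact ARBP formulation.

```python
import numpy as np

def robust_influence(residuals, beta):
    """Influence function of a Cauchy-type M-estimator with scale beta.
    Large residuals (outliers) are down-weighted; shrinking beta over
    epochs makes the down-weighting progressively more aggressive."""
    return residuals / (1.0 + (residuals / beta) ** 2)

# Toy problem: fit y = w*x by gradient steps on the annealed robust objective.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200)
y = 2.0 * x + 0.05 * rng.normal(size=200)
y[:10] += 5.0                        # inject gross outliers

w, lr, k = 0.0, 0.5, 10.0            # k is the annealing constant (assumption)
for t in range(1, 201):              # t is the epoch number
    beta = k / t                     # the k/t annealing schedule
    r = y - w * x
    w += lr * np.mean(robust_influence(r, beta) * x)
print(f"estimated slope: {w:.3f} (true value 2.0)")
```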
Thin-film designs by simulated annealing
NASA Astrophysics Data System (ADS)
Boudet, T.; Chaton, P.; Herault, L.; Gonon, G.; Jouanet, L.; Keller, P.
1996-11-01
With the increasing power of computers, new methods in synthesis of optical multilayer systems have appeared. Among these, the simulated-annealing algorithm has proved its efficiency in several fields of physics. We propose to show its performances in the field of optical multilayer systems through different filter designs.
An Introduction to Simulated Annealing
ERIC Educational Resources Information Center
Albright, Brian
2007-01-01
An attempt to model the physical process of annealing led to the development of a type of combinatorial optimization algorithm that addresses the problem of getting trapped in a local minimum. The author presents a Microsoft Excel spreadsheet that illustrates how this works.
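For readers without the spreadsheet, the same accept/reject mechanism can be sketched in a few lines of Python; the test function, cooling factor, and step size below are illustrative choices, not taken from the article.

```python
import math, random

def simulated_annealing(f, x0, t0=1.0, alpha=0.95, steps=2000, step=0.5):
    """Minimal SA: Metropolis acceptance with geometric cooling."""
    x, fx, t = x0, f(x0), t0
    best_x, best_f = x, fx
    for _ in range(steps):
        cand = x + random.uniform(-step, step)
        fc = f(cand)
        # Always accept improvements; accept uphill moves with probability
        # exp(-dE/T), which is what lets the search escape local minima.
        if fc < fx or random.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = x, fx
        t *= alpha  # cooling schedule
    return best_x, best_f

random.seed(0)
f = lambda x: x * x + 10 * math.sin(3 * x) + 10  # many local minima
print(simulated_annealing(f, x0=4.0))
```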
Li, Yanhui; Guo, Hao; Wang, Lin; Fu, Jing
2013-01-01
Facility location, inventory control, and vehicle routes scheduling are critical and highly related problems in the design of logistics system for e-business. Meanwhile, the return ratio in Internet sales was significantly higher than in the traditional business. Many of returned merchandise have no quality defects, which can reenter sales channels just after a simple repackaging process. Focusing on the existing problem in e-commerce logistics system, we formulate a location-inventory-routing problem model with no quality defects returns. To solve this NP-hard problem, an effective hybrid genetic simulated annealing algorithm (HGSAA) is proposed. Results of numerical examples show that HGSAA outperforms GA on computing time, optimal solution, and computing stability. The proposed model is very useful to help managers make the right decisions under e-supply chain environment. PMID:24489489
NASA Astrophysics Data System (ADS)
Kaplan, Sezgin; Rabadi, Ghaith
2013-01-01
This article addresses the aerial refuelling scheduling problem (ARSP), where a set of fighter jets (jobs) with certain ready times must be refuelled from tankers (machines) by their due dates; otherwise, they reach a low fuel level (deadline), incurring a high cost. ARSP is an identical parallel machine scheduling problem with release times and due-date-to-deadline windows to minimize the total weighted tardiness. Simulated annealing (SA) and the metaheuristic for randomized priority search (Meta-RaPS), with the newly introduced composite dispatching rule, apparent piecewise tardiness cost with ready times (APTCR), are applied to the problem. Computational experiments compared the algorithms' solutions to optimal solutions for small problems and to each other for larger problems. To obtain optimal solutions, a mixed integer program with a piecewise weighted tardiness objective function was solved for up to 12 jobs. The results show that Meta-RaPS performs better in terms of average relative error but SA is more efficient.
A guided simulated annealing method for crystallography.
Chou, C I; Lee, T K
2002-01-01
A new optimization algorithm, the guided simulated annealing method, for use in X-ray crystallographic studies is presented. In the traditional simulated annealing method, the search for the global minimum of a cost function is only determined by the ratio of energy change to the temperature. This method designs a new quality function to guide the search for a minimum. Using a multiresolution process, the method is much more efficient in finding the global minimum than the traditional method. Results for two large molecules, isoleucinomycin (C(60)H(102)N(6)O(18)) and an alkyl calix (C(72)H(112)O(8). 4C(2)H(6)O), with different space groups are reported. PMID:11752762
Simulated annealing model of acupuncture
NASA Astrophysics Data System (ADS)
Shang, Charles; Szu, Harold
2015-05-01
The growth control singularity model suggests that acupuncture points (acupoints) originate from organizers in embryogenesis. Organizers are singular points in growth control. Acupuncture can cause perturbation of a system with effects similar to simulated annealing. In clinical trials, the goal of a treatment is to relieve a certain disorder, which corresponds to reaching a certain local optimum in simulated annealing. The self-organizing effect of the system is limited and related to the person's general health and age. Perturbation at acupoints can lead to a stronger local excitation (analogous to a higher annealing temperature) compared to perturbation at non-singular points (placebo control points). This difference diminishes as the number of perturbed points increases, due to the wider distribution of the limited self-organizing activity. This model explains the following facts from systematic reviews of acupuncture trials: 1. Properly chosen single-acupoint treatment for certain disorders can lead to highly repeatable efficacy above placebo. 2. When multiple acupoints are used, the result can be highly repeatable if the patients are relatively healthy and young, but is usually mixed if the patients are old, frail, and have multiple disorders at the same time, as the number of local optima or comorbidities increases. 3. As the number of acupoints used increases, the efficacy difference between sham and real acupuncture often diminishes. The model predicts that the efficacy of acupuncture is negatively correlated with disease chronicity, severity, and patient age. This is the first biological-physical model of acupuncture that can predict and guide clinical acupuncture research.
NASA Astrophysics Data System (ADS)
Ghaderi, F.; Pahlavani, P.
2015-12-01
A multimodal multi-criteria route planning (MMRP) system provides an optimal multimodal route from an origin point to a destination point considering two or more criteria, in such a way that this route can be a combination of public and private transportation modes. In this paper, simulated annealing (SA) and the fuzzy analytical hierarchy process (fuzzy AHP) were combined in order to find this route. In this regard, firstly, the criteria that are significant for users on their trip were determined. Then the weight of each criterion was calculated using the fuzzy AHP weighting method. The most important characteristic of this weighting method is the use of fuzzy numbers, which helps users account for their uncertainty in pairwise comparisons of criteria. After determining the criteria weights, the proposed SA algorithm was used to determine an optimal route from an origin to a destination, addressing one of the most important problems for a meta-heuristic algorithm, namely trapping in local minima. In this study, five transportation modes, including subway, bus rapid transit (BRT), taxi, walking, and bus, were considered for moving between nodes. Also, the fare, the time, the user's inconvenience, and the length of the path were considered as criteria for solving the problem. The proposed model was implemented for an area in the centre of Tehran in a GUI written in the MATLAB programming language. The results showed the high efficiency and speed of the proposed algorithm, supporting our analyses.
Tycko, Robert; Hu, Kan-Nian
2010-01-01
We describe a computational approach to sequential resonance assignment in solid state NMR studies of uniformly 15N,13C-labeled proteins with magic-angle spinning. As input, the algorithm uses only the protein sequence and lists of 15N/13Cα crosspeaks from 2D NCACX and NCOCX spectra that include possible residue-type assignments of each crosspeak. Assignment of crosspeaks to specific residues is carried out by a Monte Carlo/simulated annealing algorithm, implemented in the program MC_ASSIGN1. The algorithm tolerates substantial ambiguity in residue-type assignments and coexistence of visible and invisible segments in the protein sequence. We use MC_ASSIGN1 and our own 2D spectra to replicate and extend the sequential assignments for uniformly labeled HET-s(218-289) fibrils previously determined manually by Siemer et al. (J. Biomolec. NMR, vol. 34, pp. 75-87, 2006) from a more extensive set of 2D and 3D spectra. Accurate assignments by MC_ASSIGN1 do not require data that are of exceptionally high quality. Use of MC_ASSIGN1 (and its extensions to other types of 2D and 3D data) is likely to alleviate many of the difficulties and uncertainties associated with manual resonance assignments in solid state NMR studies of uniformly labeled proteins, where spectral resolution and signal-to-noise are often sub-optimal. PMID:20547467
NASA Astrophysics Data System (ADS)
Luangpaiboon, P.
2009-10-01
Many entrepreneurs face extreme conditions in, for instance, costs, quality, sales, and services. Moreover, technology has always been intertwined with our demands, so most manufacturing or assembly lines adopt it and inevitably end up with more complicated processes. At this stage, product and service improvement needs to be shifted away from competitors sustainably, and simulated process optimisation is an alternative way to solve huge and complex problems. Metaheuristics are sequential processes that perform exploration and exploitation of the solution space, aiming to efficiently find near-optimal solutions with natural intelligence as a source of inspiration. One of the most well-known metaheuristics is Ant Colony Optimisation (ACO). This paper aims to ease the complications of using ACO in terms of its parameters: the numbers of iterations, ants, and moves. Proper levels of these parameters are analysed on eight noisy non-linear continuous response surfaces. Considering the solution space in a specified region, some surfaces contain a global optimum and multiple local optima, and some have a curved ridge. ACO parameters are determined through hybridisations of the Modified Simplex and Simulated Annealing methods on the path of Steepest Ascent (SAM). SAM was introduced to recommend preferable levels of the ACO parameters via statistically significant regression analysis and Taguchi's signal-to-noise ratio. Other performance measures include minimax and mean squared error. A series of computational experiments using each algorithm was conducted, and the results were analysed in terms of mean, design points, and best-so-far solutions. It was found that results obtained from a hybridisation with the stochastic procedures of the Simulated Annealing method were better than those using the Modified Simplex algorithm. However, the average execution time of experimental runs and the number of design points using hybridisations were
Simulated quantum annealing of double-well and multiwell potentials.
Inack, E M; Pilati, S
2015-11-01
We analyze the performance of quantum annealing as a heuristic optimization method to find the absolute minimum of various continuous models, including landscapes with only two wells and also models with many competing minima and with disorder. The simulations performed using a projective quantum Monte Carlo (QMC) algorithm are compared with those based on the finite-temperature path-integral QMC technique and with classical annealing. We show that the projective QMC algorithm is more efficient than the finite-temperature QMC technique, and that both are inferior to classical annealing if this is performed with appropriate long-range moves. However, as the difficulty of the optimization problem increases, classical annealing loses efficiency, while the projective QMC algorithm keeps stable performance and is finally the most effective optimization tool. We discuss the implications of our results for the outstanding problem of testing the efficiency of adiabatic quantum computers using stochastic simulations performed on classical computers. PMID:26651813
Genetic-Annealing Algorithm in Grid Environment for Scheduling Problems
NASA Astrophysics Data System (ADS)
Cruz-Chávez, Marco Antonio; Rodríguez-León, Abelardo; Ávila-Melgar, Erika Yesenia; Juárez-Pérez, Fredy; Cruz-Rosales, Martín H.; Rivera-López, Rafael
This paper presents a parallel hybrid evolutionary algorithm executed in a grid environment. The algorithm performs local searches using simulated annealing within a genetic algorithm to solve the job shop scheduling problem. Experimental results of the algorithm obtained on the "Tarantula MiniGrid" are shown. Tarantula was implemented by linking two clusters from different geographic locations in Mexico (Morelos-Veracruz). The technique used to link the two clusters and configure the Tarantula MiniGrid is described, and the effects of latency in communication between the two clusters are discussed. It is shown that the presented evolutionary algorithm is more efficient working in grid environments because it can carry out greater exploration and exploitation of the solution space.
Applications of an MPI Enhanced Simulated Annealing Algorithm on nuSTORM and 6D Muon Cooling
Liu, A.
2015-06-01
The nuSTORM decay ring is a compact racetrack storage ring with a circumference of ~480 m using large-aperture ($\phi$ = 60 cm) magnets. The design goal of the ring is to achieve a momentum acceptance of 3.8 GeV/c $\pm$10% and a phase space acceptance of 2000 $\mu$m·rad. The design has many challenges because the acceptance will be affected by many nonlinear terms with large particle emittance and/or large momentum offset. In this paper, we present the application of a meta-heuristic optimization algorithm to the sextupole correction in the ring. The algorithm is capable of finding a balanced compromise among corrections of the nonlinear terms and finding the largest acceptance. This technique can be applied to the design of similar storage rings that store beams with wide transverse phase space and momentum spectra. We also present a recent study on the application of this algorithm to a part of the 6D muon cooling channel. The technique and the cooling concept will be applied to design a cooling channel for the extracted muon beam at nuSTORM in a future study.
Multiresolution simulated annealing for brain image analysis
NASA Astrophysics Data System (ADS)
Loncaric, Sven; Majcenic, Zoran
1999-05-01
Analysis of biomedical images is an important step in quantification of various diseases such as human spontaneous intracerebral brain hemorrhage (ICH). In particular, the study of outcome in patients having ICH requires measurements of various ICH parameters, such as hemorrhage volume, and their change over time. A multiresolution probabilistic approach for segmentation of CT head images is presented in this work. This method views the segmentation problem as a pixel labeling problem; in this application the labels are background, skull, brain tissue, and ICH. The proposed method is based on the maximum a posteriori (MAP) estimation of the unknown pixel labels. The MAP method maximizes the a posteriori probability of the segmented image given the observed (input) image. A Markov random field (MRF) model has been used for the posterior distribution. The MAP estimate of the segmented image has been determined using the simulated annealing (SA) algorithm, which minimizes the energy function associated with the MRF posterior distribution. A multiresolution SA (MSA) has been developed to speed up the annealing process and is presented in detail in this work. A knowledge-based classification based on brightness, size, shape, and relative position toward other regions is performed at the end of the procedure. The regions are identified as background, skull, brain, ICH, and calcifications.
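A bare-bones sketch of the single-resolution core of such a method follows, assuming a two-class Gaussian likelihood with known class means and a 4-neighbour Potts prior; the multiresolution speed-up and knowledge-based classification described above are omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "CT slice": two regions with different means plus noise.
true = np.zeros((32, 32), dtype=int)
true[8:24, 8:24] = 1
img = rng.normal(loc=true * 2.0, scale=0.7)

means = np.array([0.0, 2.0])   # assumed known class means (simplification)
beta = 1.5                     # Potts smoothness weight (assumption)

def local_energy(labels, i, j, lab):
    """Negative log-posterior contribution of pixel (i,j) taking label lab."""
    e = 0.5 * (img[i, j] - means[lab]) ** 2            # Gaussian likelihood
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):  # 4-neighbour Potts prior
        ni, nj = i + di, j + dj
        if 0 <= ni < 32 and 0 <= nj < 32:
            e += beta * (labels[ni, nj] != lab)
    return e

labels = rng.integers(0, 2, size=(32, 32))
T = 2.0
for sweep in range(60):
    for i in range(32):
        for j in range(32):
            new = 1 - labels[i, j]
            dE = (local_energy(labels, i, j, new)
                  - local_energy(labels, i, j, labels[i, j]))
            if dE < 0 or rng.random() < np.exp(-dE / T):
                labels[i, j] = new
    T *= 0.9   # annealing: lowering T concentrates mass on the MAP labeling

print("pixel agreement with ground truth:", (labels == true).mean())
```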
Quantum annealing speedup over simulated annealing on random Ising chains
NASA Astrophysics Data System (ADS)
Zanca, Tommaso; Santoro, Giuseppe E.
2016-06-01
We show clear evidence of a quadratic speedup of a quantum annealing (QA) Schrödinger dynamics over a Glauber master equation simulated annealing (SA) for a random Ising model in one dimension, via an equal-footing exact deterministic dynamics of the Jordan-Wigner fermionized problems. This is remarkable, in view of the arguments of H. G. Katzgraber et al. [Phys. Rev. X 4, 021008 (2014), 10.1103/PhysRevX.4.021008], since SA does not encounter any phase transition, while QA does. We also find a second remarkable result: that a "quantum-inspired" imaginary-time Schrödinger QA provides a further exponential speedup, i.e., an asymptotic residual error decreasing as a power law τ^(-μ) of the annealing time τ.
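For orientation, the classical SA baseline in such comparisons can be sketched as Metropolis dynamics on a random-coupling open chain, whose ground-state energy is exactly -Σ|J_i| because every bond can be satisfied; the linear cooling schedule and sweep count below are illustrative assumptions, not the paper's protocol.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 64
J = rng.normal(size=N - 1)             # random couplings of an open chain
spins = rng.choice([-1, 1], size=N)

def energy(s):
    return -np.sum(J * s[:-1] * s[1:])

tau = 5000                             # annealing time, in sweeps
for step in range(tau):
    T = 2.0 * (1 - step / tau) + 1e-3  # linear cooling (illustrative choice)
    for i in range(N):
        # Energy change from flipping spin i: only its two bonds contribute.
        dE = 0.0
        if i > 0:
            dE += 2 * J[i - 1] * spins[i - 1] * spins[i]
        if i < N - 1:
            dE += 2 * J[i] * spins[i] * spins[i + 1]
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i] *= -1

# With open boundaries every bond can be satisfied: E_ground = -sum|J|.
print("residual energy per spin:", (energy(spins) + np.abs(J).sum()) / N)
```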
Remediation tradeoffs addressed with simulated annealing optimization
Rogers, L. L., LLNL
1998-02-01
Escalation of groundwater remediation costs has encouraged both advances in optimization techniques to balance remediation objectives and economics and development of innovative technologies to expedite source region clean-ups. We present an optimization application building on a pump-and-treat model, yet assuming a prior removal of different portions of the source area to address the evolving management issue of more aggressive source remediation. Separate economic estimates of in-situ thermal remediation are combined with the economic estimates of the subsequent optimal pump-and-treat remediation to observe tradeoff relationships of cost vs. highest remaining contamination levels (hot spot). The simulated annealing algorithm calls the flow and transport model to evaluate the success of a proposed remediation scenario at a U.S.A. Superfund site contaminated with volatile organic compounds (VOCs).
NASA Astrophysics Data System (ADS)
Bahrami, Saeed; Doulati Ardejani, Faramarz; Baafi, Ernest
2016-05-01
In this study, hybrid models are designed to predict groundwater inflow to an advancing open pit mine and the hydraulic head (HH) in observation wells at different distances from the centre of the pit during its advance. Hybrid methods coupling an artificial neural network (ANN) with genetic algorithm (GA) methods (ANN-GA) and with simulated annealing (SA) methods (ANN-SA) were utilised. Ratios of the depth of pit penetration in the aquifer to aquifer thickness, pit bottom radius to its top radius, the inverse of pit advance time, and the HH in the observation wells to the distance of the observation wells from the centre of the pit were used as inputs to the networks. To achieve the objective, two hybrid models consisting of ANN-GA and ANN-SA with a 4-5-3-1 arrangement were designed. In addition, by switching the last argument of the input layer with the argument of the output layer of the two earlier models, two new models were developed to predict the HH in the observation wells for the period of the mining process. The accuracy and reliability of the models are verified by field data, results of a numerical finite element model using SEEP/W, outputs of simple ANNs, and some well-known analytical solutions. Predicted results obtained by the hybrid methods are closer to the field data compared to the outputs of analytical and simple ANN models. Results show that despite the use of fewer and simpler parameters by the hybrid models, the ANN-GA and to some extent the ANN-SA have the ability to compete with the numerical models.
Classical Simulated Annealing Using Quantum Analogues
NASA Astrophysics Data System (ADS)
La Cour, Brian R.; Troupe, James E.; Mark, Hans M.
2016-08-01
In this paper we consider the use of certain classical analogues to quantum tunneling behavior to improve the performance of simulated annealing on a discrete spin system of the general Ising form. Specifically, we consider the use of multiple simultaneous spin flips at each annealing step as an analogue to quantum spin coherence as well as modifications of the Boltzmann acceptance probability to mimic quantum tunneling. We find that the use of multiple spin flips can indeed be advantageous under certain annealing schedules, but only for long anneal times.
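A hedged sketch of the multiple-simultaneous-spin-flip idea on a small ferromagnetic Ising ring follows, where each proposal flips k spins and is accepted or rejected as a joint move; k, the cooling factor, and the ring geometry are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(3)
N, k = 64, 3                    # k simultaneous flips per proposal (assumption)
J = 1.0                         # ferromagnetic ring for simplicity
spins = rng.choice([-1, 1], size=N)

def energy(s):
    return -J * np.sum(s * np.roll(s, 1))

E = energy(spins)
T = 3.0
for step in range(20000):
    idx = rng.choice(N, size=k, replace=False)   # flip k spins at once
    spins[idx] *= -1
    E_new = energy(spins)
    dE = E_new - E
    if dE <= 0 or rng.random() < np.exp(-dE / T):
        E = E_new                                # accept the joint move
    else:
        spins[idx] *= -1                         # reject: undo all k flips
    T = max(0.01, T * 0.9997)                    # geometric cooling

print("final energy:", E, " ground-state energy:", -J * N)
```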
Application of Simulated Annealing to Clustering Tuples in Databases.
ERIC Educational Resources Information Center
Bell, D. A.; And Others
1990-01-01
Investigates the value of applying principles derived from simulated annealing to clustering tuples in database design, and compares this technique with a graph-collapsing clustering method. It is concluded that, while the new method does give superior results, the expense involved in algorithm run time is prohibitive. (24 references) (CLB)
On simulated annealing phase transitions in phylogeny reconstruction.
Strobl, Maximilian A R; Barker, Daniel
2016-08-01
Phylogeny reconstruction with global criteria is NP-complete or NP-hard, hence in general requires a heuristic search. We investigate the powerful, physically inspired, general-purpose heuristic simulated annealing, applied to phylogeny reconstruction. Simulated annealing mimics the physical process of annealing, where a liquid is gently cooled to form a crystal. During the search, periods of elevated specific heat occur, analogous to physical phase transitions. These simulated annealing phase transitions play a crucial role in the outcome of the search. Nevertheless, they have received comparably little attention, for phylogeny or other optimisation problems. We analyse simulated annealing phase transitions during searches for the optimal phylogenetic tree for 34 real-world multiple alignments. In the same way in which melting temperatures differ between materials, we observe distinct specific heat profiles for each input file. We propose this reflects differences in the search landscape and can serve as a measure for problem difficulty and for suitability of the algorithm's parameters. We discuss application in algorithmic optimisation and as a diagnostic to assess parameterisation before computationally costly, large phylogeny reconstructions are launched. Whilst the focus here lies on phylogeny reconstruction under maximum parsimony, it is plausible that our results are more widely applicable to optimisation procedures in science and industry. PMID:27150349
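The diagnostic itself is easy to sketch: estimate the specific heat C(T) = Var(E)/T^2 from energy samples collected at each temperature plateau and look for peaks. In the toy below, an Ising-ring landscape stands in for tree space; it is not the authors' phylogenetic setup.

```python
import numpy as np

rng = np.random.default_rng(4)
N = 40
spins = rng.choice([-1, 1], size=N)

def energy(s):                    # toy landscape standing in for tree space
    return -np.sum(s * np.roll(s, 1))

E = energy(spins)
for T in np.geomspace(3.0, 0.05, 25):      # temperature plateaus
    samples = []
    for _ in range(4000):
        i = rng.integers(N)
        # Flipping spin i on the ring touches its two bonds only.
        dE = 2 * spins[i] * (spins[i - 1] + spins[(i + 1) % N])
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i] *= -1
            E += dE
        samples.append(E)
    C = np.var(samples) / T**2             # specific heat estimate
    print(f"T={T:6.3f}  <E>={np.mean(samples):8.2f}  C={C:8.2f}")
```

A peak in the printed C(T) profile marks the "melting" region where the search landscape reorganizes, which is the quantity the authors propose to monitor before launching large reconstructions.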
Modeling of protein loops by simulated annealing.
Collura, V.; Higo, J.; Garnier, J.
1993-01-01
A method is presented to model loops of protein to be used in homology modeling of proteins. This method employs the ESAP program of Higo et al. (Higo, J., Collura, V., & Garnier, J., 1992, Biopolymers 32, 33-43) and is based on a fast Monte Carlo simulation and a simulated annealing algorithm. The method is tested on different loops or peptide segments from immunoglobulin, bovine pancreatic trypsin inhibitor, and bovine trypsin. The predicted structure is obtained from the ensemble average of the coordinates of the Monte Carlo simulation at 300 K, which exhibits the lowest internal energy. The starting conformation of the loop prior to modeling is chosen to be completely extended, and a closing harmonic potential is applied to N, CA, C, and O atoms of the terminal residues. A rigid geometry potential of Robson and Platt (1986, J. Mol. Biol. 188, 259-281) with a united atom representation is used. This we demonstrate to yield a loop structure with good hydrogen bonding and torsion angles in the allowed regions of the Ramachandran map. The average accuracy of the modeling evaluated on the eight modeled loops is 1 Å root mean square deviation (rmsd) for the backbone atoms and 2.3 Å rmsd for all heavy atoms. PMID:8401234
Stochastic annealing simulation of cascades in metals
Heinisch, H.L.
1996-04-01
The stochastic annealing simulation code ALSOME is used to investigate quantitatively the differential production of mobile vacancy and SIA defects as a function of temperature for isolated 25 keV cascades in copper generated by MD simulations. The ALSOME code and cascade annealing simulations are described. The annealing simulations indicate that above Stage V, where the cascade vacancy clusters are unstable, nearly 80% of the post-quench vacancies escape the cascade volume, while about half of the post-quench SIAs remain in clusters. The results are sensitive to the relative fractions of SIAs that occur in small, highly mobile clusters and large stable clusters, respectively, which may be dependent on the cascade energy.
Estimation of the parameters of ETAS models by Simulated Annealing
Lombardi, Anna Maria
2015-01-01
This paper proposes a new algorithm to estimate the maximum likelihood parameters of an Epidemic Type Aftershock Sequences (ETAS) model. It is based on Simulated Annealing, a versatile method that solves problems of global optimization and ensures convergence to a global optimum. The procedure is tested on both simulated and real catalogs. The main conclusion is that the method performs poorly as the size of the catalog decreases because the effect of the correlation of the ETAS parameters is more significant. These results give new insights into the ETAS model and the efficiency of the maximum-likelihood method within this context. PMID:25673036
Sparse approximation problem: how rapid simulated annealing succeeds and fails
NASA Astrophysics Data System (ADS)
Obuchi, Tomoyuki; Kabashima, Yoshiyuki
2016-03-01
Information processing techniques based on sparseness have been actively studied in several disciplines. Among them, a mathematical framework to approximately express a given dataset by a combination of a small number of basis vectors of an overcomplete basis is termed the sparse approximation. In this paper, we apply simulated annealing, a metaheuristic algorithm for general optimization problems, to sparse approximation in the situation where the given data have a planted sparse representation and noise is present. The result in the noiseless case shows that our simulated annealing works well in a reasonable parameter region: the planted solution is found fairly rapidly. This is true even in the case where a common relaxation of the sparse approximation problem, the ℓ1-relaxation, is ineffective. On the other hand, when the dimensionality of the data is close to the number of non-zero components, another metastable state emerges, and our algorithm fails to find the planted solution. This phenomenon is associated with a first-order phase transition. In the case of very strong noise, it is no longer meaningful to search for the planted solution. In this situation, our algorithm determines a solution with close-to-minimum distortion fairly quickly.
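A minimal sketch of simulated annealing over support sets for a planted noiseless instance follows, assuming swap moves and least-squares distortion on the current support; the dimensions, cooling rate, and move design are illustrative, not the authors' protocol.

```python
import numpy as np

rng = np.random.default_rng(5)
M, N, K = 40, 80, 5                 # measurements, dictionary size, sparsity
A = rng.normal(size=(M, N)) / np.sqrt(M)
support_true = rng.choice(N, size=K, replace=False)
x_true = np.zeros(N)
x_true[support_true] = rng.normal(size=K)
y = A @ x_true                      # noiseless planted instance

def distortion(support):
    """Least-squares residual with coefficients restricted to the support."""
    As = A[:, sorted(support)]
    coef, *_ = np.linalg.lstsq(As, y, rcond=None)
    return np.sum((y - As @ coef) ** 2)

support = set(rng.choice(N, size=K, replace=False))
E = distortion(support)
T = 1.0
for step in range(3000):
    # Move: swap one column in the support for one outside it.
    out_col, in_col = rng.choice(sorted(support)), rng.integers(N)
    if in_col in support:
        continue
    cand = (support - {out_col}) | {int(in_col)}
    E_new = distortion(cand)
    if E_new < E or rng.random() < np.exp(-(E_new - E) / T):
        support, E = cand, E_new
    T *= 0.998                      # rapid (non-logarithmic) cooling

print("recovered planted support:", support == set(support_true),
      " residual:", E)
```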
Liang, Faming; Cheng, Yichen; Lin, Guang
2014-06-13
Simulated annealing has been widely used in the solution of optimization problems. As many researchers know, simulated annealing cannot be guaranteed to locate the global optima unless a logarithmic cooling schedule is used; however, the logarithmic cooling schedule is so slow that no one can afford such a long CPU time. This paper proposes a new stochastic optimization algorithm, the so-called simulated stochastic approximation annealing algorithm, which is a combination of simulated annealing and the stochastic approximation Monte Carlo algorithm. Under the framework of stochastic approximation Markov chain Monte Carlo, it is shown that the new algorithm can work with a cooling schedule in which the temperature can decrease much faster than in the logarithmic cooling schedule, e.g., a square-root cooling schedule, while guaranteeing that the global optima are reached when the temperature tends to zero. The new algorithm has been tested on a few benchmark optimization problems, including feed-forward neural network training and protein folding. The numerical results indicate that the new algorithm can significantly outperform simulated annealing and other competitors.
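The contrast between the two schedules is easy to sketch. The generic SA loop below compares a logarithmic schedule with a square-root one on a toy multimodal objective; the stochastic-approximation weight adaptation that makes the full algorithm work is omitted, and the temperature constants are illustrative assumptions.

```python
import math, random

def sa_minimize(f, x0, schedule, steps=20000, step=0.3):
    """Generic SA loop; schedule(t) returns the temperature at iteration t."""
    x, fx = x0, f(x0)
    best = fx
    for t in range(1, steps + 1):
        cand = x + random.uniform(-step, step)
        fc = f(cand)
        T = schedule(t)
        if fc < fx or random.random() < math.exp(-(fc - fx) / T):
            x, fx = cand, fc
            best = min(best, fx)
    return best

random.seed(0)
f = lambda x: x * x + 10 * math.sin(5 * x) + 10   # multimodal toy objective
log_cool = lambda t: 2.0 / math.log(t + 1)        # classical guarantee, slow
sqrt_cool = lambda t: 2.0 / math.sqrt(t)          # square-root schedule
print("log cooling :", sa_minimize(f, 4.0, log_cool))
print("sqrt cooling:", sa_minimize(f, 4.0, sqrt_cool))
```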
Model based matching using simulated annealing and a minimum representation size criterion
NASA Technical Reports Server (NTRS)
Ravichandran, B.; Sanderson, A. C.
1992-01-01
We define the model based matching problem in terms of the correspondence and transformation that relate the model and scene, and the search and evaluation measures needed to find the best correspondence and transformation. Simulated annealing is proposed as a method for search and optimization, and the minimum representation size criterion is used as the evaluation measure in an algorithm that finds the best correspondence. An algorithm based on simulated annealing is presented and evaluated. This algorithm is viewed as a part of an adaptive, hierarchical approach which provides robust results for a variety of model based matching problems.
Bernal, Javier; Torres-Jimenez, Jose
2015-01-01
SAGRAD (Simulated Annealing GRADient), a Fortran 77 program for computing neural networks for classification using batch learning, is discussed. Neural network training in SAGRAD is based on a combination of simulated annealing and Møller's scaled conjugate gradient algorithm, the latter a variation of the traditional conjugate gradient method, better suited for the nonquadratic nature of neural networks. Different aspects of the implementation of the training process in SAGRAD are discussed, such as the efficient computation of gradients and multiplication of vectors by Hessian matrices that are required by Møller's algorithm; the (re)initialization of weights with simulated annealing required to (re)start Møller's algorithm the first time and each time thereafter that it shows insufficient progress in reaching a possibly local minimum; and the use of simulated annealing when Møller's algorithm, after possibly making considerable progress, becomes stuck at a local minimum or flat area of weight space. Outlines of the scaled conjugate gradient algorithm, the simulated annealing procedure and the training process used in SAGRAD are presented together with results from running SAGRAD on two examples of training data. PMID:26958442
NASA Astrophysics Data System (ADS)
Borovský, Michal; Weigel, Martin; Barash, Lev Yu.; Žukovič, Milan
2016-02-01
The population annealing algorithm is a novel approach to study systems with rough free-energy landscapes, such as spin glasses. It combines the power of simulated annealing, Boltzmann-weighted differential reproduction, and a sequential Monte Carlo process to bring the population of replicas to equilibrium even in the low-temperature region. Moreover, it provides a very good estimate of the free energy. The fact that the population annealing algorithm is performed over a large number of replicas with many spin updates makes it a good candidate for massive parallelism. We chose GPU programming using a CUDA implementation to create a highly optimized simulation. It has been previously shown for the frustrated Ising antiferromagnet on the stacked triangular lattice with a ferromagnetic interlayer coupling that standard Markov chain Monte Carlo simulations fail to equilibrate at low temperatures due to the effect of kinetic freezing of the ferromagnetically ordered chains. We applied population annealing to study the case with isotropic intra- and interlayer antiferromagnetic coupling (J2/|J1| = -1). The ground states reached correspond to non-magnetic degenerate states, where chains are antiferromagnetically ordered but there is no long-range ordering between them, which is analogous to the Wannier phase of the 2D triangular Ising antiferromagnet.
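A serial sketch of the population annealing loop on an Ising ring follows, assuming an inverse-temperature ladder with multinomial resampling and one Metropolis sweep per step; the GPU/CUDA parallelisation and the stacked-triangular-lattice model of the paper are beyond this toy.

```python
import numpy as np

rng = np.random.default_rng(6)
N, R = 32, 200                     # spins per replica, population size
J = 1.0
pop = rng.choice([-1, 1], size=(R, N))

def energies(p):
    return -J * np.sum(p * np.roll(p, 1, axis=1), axis=1)

betas = np.linspace(0.0, 3.0, 31)  # inverse-temperature ladder
logZ = 0.0                         # running estimate of log(Z(beta)/Z(0))
for b_prev, b in zip(betas[:-1], betas[1:]):
    E = energies(pop)
    # Boltzmann-weighted differential reproduction between temperatures.
    w = np.exp(-(b - b_prev) * (E - E.min()))
    logZ += np.log(w.mean()) - (b - b_prev) * E.min()
    counts = rng.multinomial(R, w / w.sum())
    pop = np.repeat(pop, counts, axis=0)
    # One Metropolis sweep per replica at the new inverse temperature.
    for _ in range(N):
        i = rng.integers(N)
        dE = 2 * pop[:, i] * (pop[:, i - 1] + pop[:, (i + 1) % N])
        accept = (dE <= 0) | (rng.random(R) < np.exp(-b * np.clip(dE, 0, None)))
        pop[accept, i] *= -1

print("mean energy per spin:", energies(pop).mean() / N,
      " free-energy estimate logZ/N:", logZ / N)
```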
Optimization of a Stochastically Simulated Gene Network Model via Simulated Annealing
Tomshine, Jonathan; Kaznessis, Yiannis N.
2006-01-01
By rearranging naturally occurring genetic components, gene networks can be created that display novel functions. When designing these networks, the kinetic parameters describing DNA/protein binding are of great importance, as these parameters strongly influence the behavior of the resulting gene network. This article presents an optimization method based on simulated annealing to locate combinations of kinetic parameters that produce a desired behavior in a genetic network. Since gene expression is an inherently stochastic process, the simulation component of simulated annealing optimization is conducted using an accurate multiscale simulation algorithm to calculate an ensemble of network trajectories at each iteration of the simulated annealing algorithm. Using the three-gene repressilator of Elowitz and Leibler as an example, we show that gene network optimizations can be conducted using a mechanistically realistic model integrated stochastically. The repressilator is optimized to give oscillations of an arbitrary specified period. These optimized designs may then provide a starting-point for the selection of genetic components needed to realize an in vivo system. PMID:16920827
Simulated annealing with probabilistic analysis for solving traveling salesman problems
NASA Astrophysics Data System (ADS)
Hong, Pei-Yee; Lim, Yai-Fung; Ramli, Razamin; Khalid, Ruzelan
2013-09-01
Simulated annealing (SA) is a widely used meta-heuristic that was inspired by the annealing process of recrystallization of metals; its efficiency is therefore highly affected by the annealing schedule. In this paper, we present an empirical study to provide a comparable annealing schedule for solving symmetric traveling salesman problems (TSP). A randomized complete block design is also used in this study. The results show that different parameters do affect the efficiency of SA; thus, we propose the best-found annealing schedule based on the post hoc test. SA was tested on seven selected benchmark problems of symmetric TSP with the proposed annealing schedule. The performance of SA was evaluated empirically against benchmark solutions, with a simple analysis to validate the quality of solutions. Computational results show that the proposed annealing schedule provides good-quality solutions.
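For concreteness, a common SA setup for symmetric TSP uses 2-opt reversals under geometric cooling, as in the sketch below; the random instance, initial temperature, and cooling factor are illustrative and not the schedule proposed in the paper.

```python
import math, random

random.seed(1)
n = 30
cities = [(random.random(), random.random()) for _ in range(n)]

def tour_length(tour):
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % n]])
               for i in range(n))

tour = list(range(n))
length = tour_length(tour)
T, alpha = 1.0, 0.999             # initial temperature and cooling factor
for step in range(30000):
    i, j = sorted(random.sample(range(n), 2))
    cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]   # 2-opt reversal
    cand_len = tour_length(cand)
    if cand_len < length or random.random() < math.exp(-(cand_len - length) / T):
        tour, length = cand, cand_len
    T = max(1e-4, T * alpha)      # geometric cooling with a floor

print(f"final tour length: {length:.3f}")
```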
Improving Simulated Annealing by Replacing Its Variables with Game-Theoretic Utility Maximizers
NASA Technical Reports Server (NTRS)
Wolpert, David H.; Bandari, Esfandiar; Tumer, Kagan
2001-01-01
The game-theoretic field of COllective INtelligence (COIN) concerns the design of computer-based players engaged in a non-cooperative game so that as those players pursue their self-interests, a pre-specified global goal for the collective computational system is achieved as a side-effect. Previous implementations of COIN algorithms have outperformed conventional techniques by up to several orders of magnitude, on domains ranging from telecommunications control to optimization in congestion problems. Recent mathematical developments have revealed that these previously developed algorithms were based on only two of the three factors determining performance. Consideration of only the third factor would instead lead to conventional optimization techniques like simulated annealing that have little to do with non-cooperative games. In this paper we present an algorithm based on all three terms at once. This algorithm can be viewed as a way to modify simulated annealing by recasting it as a non-cooperative game, with each variable replaced by a player. This recasting allows us to leverage the intelligent behavior of the individual players to substantially improve the exploration step of the simulated annealing. Experiments are presented demonstrating that this recasting significantly improves simulated annealing for a model of an economic process run over an underlying small-worlds topology. Furthermore, these experiments reveal novel small-worlds phenomena, and highlight the shortcomings of conventional mechanism design in bounded rationality domains.
Improving Simulated Annealing by Recasting it as a Non-Cooperative Game
NASA Technical Reports Server (NTRS)
Wolpert, David; Bandari, Esfandiar; Tumer, Kagan
2001-01-01
The game-theoretic field of COllective INtelligence (COIN) concerns the design of computer-based players engaged in a non-cooperative game so that as those players pursue their self-interests, a pre-specified global goal for the collective computational system is achieved "as a side-effect". Previous implementations of COIN algorithms have outperformed conventional techniques by up to several orders of magnitude, on domains ranging from telecommunications control to optimization in congestion problems. Recent mathematical developments have revealed that these previously developed game-theory-motivated algorithms were based on only two of the three factors determining performance. Consideration of only the third factor would instead lead to conventional optimization techniques like simulated annealing that have little to do with non-cooperative games. In this paper we present an algorithm based on all three terms at once. This algorithm can be viewed as a way to modify simulated annealing by recasting it as a non-cooperative game, with each variable replaced by a player. This recasting allows us to leverage the intelligent behavior of the individual players to substantially improve the exploration step of the simulated annealing. Experiments are presented demonstrating that this recasting improves simulated annealing by several orders of magnitude for spin glass relaxation and bin-packing.
Automatic Clustering Using Multi-objective Particle Swarm and Simulated Annealing
Abubaker, Ahmad; Baharum, Adam; Alrefaei, Mahmoud
2015-01-01
This paper puts forward a new automatic clustering algorithm based on Multi-Objective Particle Swarm Optimization and Simulated Annealing, “MOPSOSA”. The proposed algorithm is capable of automatic clustering which is appropriate for partitioning datasets to a suitable number of clusters. MOPSOSA combines the features of the multi-objective based particle swarm optimization (PSO) and the Multi-Objective Simulated Annealing (MOSA). Three cluster validity indices were optimized simultaneously to establish the suitable number of clusters and the appropriate clustering for a dataset. The first cluster validity index is centred on Euclidean distance, the second on the point symmetry distance, and the last cluster validity index is based on short distance. A number of algorithms have been compared with the MOPSOSA algorithm in resolving clustering problems by determining the actual number of clusters and optimal clustering. Computational experiments were carried out to study fourteen artificial and five real life datasets. PMID:26132309
spsann - optimization of sample patterns using spatial simulated annealing
NASA Astrophysics Data System (ADS)
Samuel-Rosa, Alessandro; Heuvelink, Gerard; Vasques, Gustavo; Anjos, Lúcia
2015-04-01
There are many algorithms and computer programs to optimize sample patterns, some private and others publicly available, and a few have only been presented in scientific articles and textbooks. This dispersion and somewhat poor availability holds back their wider adoption and further development. We introduce spsann, a new R package for the optimization of sample patterns using spatial simulated annealing. R is the most popular environment for data processing and analysis. Spatial simulated annealing is a well-known method with widespread use to solve optimization problems in the soil and geo-sciences, mainly due to its robustness against local optima and its ease of implementation. spsann offers many optimizing criteria for sampling for variogram estimation (number of points or point-pairs per lag distance class - PPL), trend estimation (association/correlation and marginal distribution of the covariates - ACDC), and spatial interpolation (mean squared shortest distance - MSSD). spsann also includes the mean or maximum universal kriging variance (MUKV) as an optimizing criterion, which is used when the model of spatial variation is known. PPL, ACDC and MSSD were combined (PAN) for sampling when we are ignorant about the model of spatial variation. spsann solves this multi-objective optimization problem by scaling the objective function values using their maximum absolute value or the mean value computed over 1000 random samples; scaled values are aggregated using the weighted-sum method. A graphical display allows the user to follow how the sample pattern is perturbed during the optimization, as well as the evolution of its energy state. It is possible to start by perturbing many points and exponentially reduce the number of perturbed points. The maximum perturbation distance reduces linearly with the number of iterations. The acceptance probability also reduces exponentially with the number of iterations. R is memory hungry and spatial simulated annealing is a
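The core loop of spatial simulated annealing with the MSSD criterion can be sketched as follows; this is a standalone toy, not spsann's actual R implementation, and the grid, acceptance schedule, and linearly shrinking perturbation distance are simplified assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
grid = np.stack(np.meshgrid(np.linspace(0, 1, 40),
                            np.linspace(0, 1, 40)), -1).reshape(-1, 2)
n_samples = 12
pts = rng.random((n_samples, 2))

def mssd(p):
    """Mean squared shortest distance from every prediction node to a sample."""
    d2 = ((grid[:, None, :] - p[None, :, :]) ** 2).sum(-1)
    return d2.min(axis=1).mean()

E = mssd(pts)
T, max_shift = 0.01, 0.2
n_iter = 5000
for it in range(n_iter):
    cand = pts.copy()
    k = rng.integers(n_samples)                     # perturb one sample point
    cand[k] = np.clip(cand[k] + rng.uniform(-max_shift, max_shift, 2), 0, 1)
    E_new = mssd(cand)
    if E_new < E or rng.random() < np.exp(-(E_new - E) / T):
        pts, E = cand, E_new
    T *= 0.999                                      # cooling
    max_shift = 0.2 * (1 - it / n_iter) + 0.01      # linearly shrinking moves

print("optimized MSSD:", E)
```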
An Annealing Algorithm for Designing Ligands from Receptor Structures.
NASA Astrophysics Data System (ADS)
Zielinski, Peter J.
DE NOVO, a simulated annealing method for designing ligands, is described. At a given temperature, ligand fragments are randomly selected and randomly placed within the given receptor cavity, often replacing existing ligand fragments. For each new ligand fragment combination, bonded, nonbonded, polarization, and solvation energies of the new ligand-receptor system are compared to the previous system. Acceptance or rejection of the new system is decided using the Boltzmann distribution. Thus, energetically unfavorable fragment switches are sometimes accepted, sacrificing immediate energy gains in the interest of finding the system with the globally minimum energy. By lowering the temperature, the rate of unfavorable switches decreases and energetically favorable combinations become difficult to change. The process is halted when the frequency of switches becomes too small. As a test of the method, DE NOVO predicted the positions of important ligand fragments for neuraminidase that are in accord with the natural ligand, sialic acid.
Instantons in Quantum Annealing: Thermally Assisted Tunneling Vs Quantum Monte Carlo Simulations
NASA Technical Reports Server (NTRS)
Jiang, Zhang; Smelyanskiy, Vadim N.; Boixo, Sergio; Isakov, Sergei V.; Neven, Hartmut; Mazzola, Guglielmo; Troyer, Matthias
2015-01-01
A recent numerical result (arXiv:1512.02206) from Google suggested that the D-Wave quantum annealer may have an asymptotic speed-up over simulated annealing; however, the asymptotic advantage disappears when it is compared to quantum Monte Carlo (a classical algorithm despite its name). We show analytically that the asymptotic scaling of quantum tunneling is exactly the same as the escape rate in quantum Monte Carlo for a class of problems. Thus, the Google result might be explained within our framework. We also found that the transition state in quantum Monte Carlo corresponds to the instanton solution in quantum tunneling problems, which is observed in numerical simulations.
Menin, O H; Martinez, A S; Costa, A M
2016-05-01
A generalized simulated annealing algorithm, combined with a suitable smoothing regularization function, is used to solve the inverse problem of X-ray spectrum reconstruction from attenuation data. The approach is to set the initial acceptance and visitation temperatures and to standardize the terms of the objective function so as to automate the algorithm to accommodate different spectral ranges. Experiments with both numerical and measured attenuation data are presented. Results show that the algorithm reconstructs spectrum shapes accurately. It should be noted that in this algorithm the regularization function was formulated to guarantee a smooth spectrum; thus, the presented technique does not apply to X-ray spectra where characteristic radiation is present. PMID:26943902
Sampling design for classifying contaminant level using annealing search algorithms
NASA Astrophysics Data System (ADS)
Christakos, George; Killam, Bart R.
1993-12-01
A stochastic method for sampling spatially distributed contaminant level is presented. The purpose of sampling is to partition the contaminated region into zones of high and low pollutant concentration levels. In particular, given an initial set of observations of a contaminant within a site, it is desired to find a set of additional sampling locations in a way that takes into consideration the spatial variability characteristics of the site and optimizes certain objective functions emerging from the physical, regulatory and monetary considerations of the specific site cleanup process. Since the interest is in classifying the domain into zones above and below a pollutant threshold level, a natural criterion is the cost of misclassification. The resulting objective function is the expected value of a spatial loss function associated with sampling. Stochastic expectation involves the joint probability distribution of the pollutant level and its estimate, where the latter is calculated by means of spatial estimation techniques. Actual computation requires the discretization of the contaminated domain. As a consequence, any reasonably sized problem results in combinatorics precluding an exhaustive search. The use of an annealing algorithm, although suboptimal, can find a good set of future sampling locations quickly and efficiently. In order to obtain insight about the parameters and the computational requirements of the method, an example is discussed in detail. The implementation of spatial sampling design in practice will provide the model inputs necessary for waste site remediation, groundwater management, and environmental decision making.
Birefringence simulation of annealed ingot of calcium fluoride single crystal
NASA Astrophysics Data System (ADS)
Ogino, H.; Miyazaki, N.; Mabuchi, T.; Nawata, T.
2008-01-01
We developed a method for simulating birefringence of an annealed ingot of calcium fluoride single crystal caused by the residual stress after annealing process. The method comprises the heat conduction analysis that provides the temperature distribution during the ingot annealing, the elastic thermal stress analysis using the assumption of the stress-free temperature that provides the residual stress after annealing, and the birefringence analysis of an annealed ingot induced by the residual stress. The finite element method was applied to the heat conduction analysis and the elastic thermal stress analysis. In these analyses, the temperature dependence of material properties and the crystal anisotropy were taken into account. In the birefringence analysis, the photoelastic effect gives the change of refractive indices, from which the optical path difference in the annealed ingot is calculated by the Jones calculus. The relation between the Jones calculus and the approximate method using the stress components averaged along the optical path is discussed theoretically. It is found that the result of the approximate method agrees very well with that of the Jones calculus in birefringence analysis. The distribution pattern of the optical path difference in the annealed ingot obtained from the present birefringence calculation methods agrees reasonably well with that of the experiment. The calculated values also agree reasonably well with those of the experiment, when a stress-free temperature is adequately selected.
Neutronic optimization in high conversion Th-233U fuel assembly with simulated annealing
Kotlyar, D.; Shwageraus, E.
2012-07-01
This paper reports on fuel design optimization of a PWR operating in a self-sustainable Th-233U fuel cycle. A Monte Carlo simulated annealing method was used in order to identify the fuel assembly configuration with the most attractive breeding performance. In previous studies, it was shown that breeding may be achieved by employing heterogeneous seed-blanket fuel geometry. The arrangement of seed and blanket pins within the assemblies may be determined by varying the design parameters based on the basic reactor physics phenomena which affect breeding. However, the number of free parameters may still prove prohibitively large for systematically exploring the design space for the optimal solution. Therefore, the Monte Carlo annealing algorithm for neutronic optimization is applied in order to identify the most favorable design. The objective of simulated annealing optimization is to find a set of design parameters which maximizes some given performance function (such as the relative period of net breeding) under specified constraints (such as fuel cycle length). The first objective of the study was to demonstrate that the simulated annealing optimization algorithm leads to the same fuel pin arrangement as was obtained in the previous studies, which used only basic physics phenomena as guidance for optimization. In the second part of this work, the simulated annealing method was used to optimize the fuel pin arrangement in a much larger fuel assembly, where basic physics intuition does not yield a clearly optimal configuration. The simulated annealing method was found to be very efficient in selecting the optimal design in both cases. In the future, this method will be used for optimization of fuel assembly designs with a larger number of free parameters in order to determine the most favorable trade-off between breeding performance and core average power density. (authors)
Quantum versus simulated annealing in wireless interference network optimization
Wang, Chi; Chen, Huo; Jonckheere, Edmond
2016-01-01
Quantum annealing (QA) serves as a specialized optimizer that is able to solve many NP-hard problems and that is believed to have a theoretical advantage over simulated annealing (SA) via quantum tunneling. With the introduction of the D-Wave programmable quantum annealer, a considerable amount of effort has been devoted to detect and quantify quantum speedup. While the debate over speedup remains inconclusive as of now, instead of attempting to show general quantum advantage, here, we focus on a novel real-world application of D-Wave in wireless networking—more specifically, the scheduling of the activation of the air-links for maximum throughput subject to interference avoidance near network nodes. In addition, D-Wave implementation is made error insensitive by a novel Hamiltonian extra penalty weight adjustment that enlarges the gap and substantially reduces the occurrence of interference violations resulting from inevitable spin bias and coupling errors. The major result of this paper is that quantum annealing benefits more than simulated annealing from this gap expansion process, both in terms of ST99 speedup and network queue occupancy. It is the hope that this could become a real-world application niche where potential benefits of quantum annealing could be objectively assessed. PMID:27181056
Molecular dynamic simulation of non-melt laser annealing process
NASA Astrophysics Data System (ADS)
Liren, Yan; Dai, Li; Wei, Zhang; Zhihong, Liu; Wei, Zhou; Quan, Wang
2016-03-01
A molecular dynamics simulation is performed to study the process of material annealing caused by a 266 nm pulsed laser. A micro-mechanism describing the behavior of silicon and impurity atoms during laser annealing in a non-melt regime is proposed. After ion implantation, the surface of the Si wafer is acted on by a high-energy laser pulse, which loosens the material and partially frees both Si and impurity atoms. While the residual laser energy is absorbed by valence electrons, these atoms are recoiled and relocated to finally form a crystal. Energy-related movement behavior is observed by using the molecular dynamics method. The non-melt laser anneal appears to be quite sensitive to the energy density of the laser, as a small excess of energy may cause significant impurity diffusion. Such a result is also supported by our laser annealing experiment.
Sensitivity study on hydraulic well testing inversion using simulated annealing
Nakao, Shinsuke; Najita, J.; Karasaki, Kenzi
1999-10-01
Cluster variable aperture (CVA) simulated annealing has been used as an inversion technique to construct fluid flow models of fractured formations based on transient pressure data from hydraulic tests. A two-dimensional fracture network system is represented as a filled regular lattice of fracture elements. The algorithm iteratively changes element apertures for a cluster of fracture elements in order to improve the match to observed pressure transients. Aperture size is chosen randomly from a list of discrete apertures. The cluster size is held constant throughout the iterations. Since hydraulic inversion is inherently nonunique, it is important to use additional information. The authors investigated the relationship between the scale of heterogeneity and the optimal cluster size and shape to enhance convergence of the inversion and improve the results. In a spatially correlated transmissivity field, a cluster size corresponding to about 20% to 40% of the practical range of the spatial correlation is optimal. Inversion results of the Raymond test site data are also presented; based on an optimal cluster size, the practical range of the spatial correlation is estimated to be 5 to 10 m.
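As a rough illustration of the cluster-update loop described above, here is a minimal sketch. The forward model is a placeholder (the real method simulates pressure transients on the fracture lattice and compares them with well observations), and the lattice size, aperture list, cluster size, and cooling parameters are all invented.

import random, math
random.seed(0)

N = 20                                    # lattice of N x N fracture elements
apertures = [1e-4, 5e-4, 1e-3]            # discrete aperture list (m)
model = [[random.choice(apertures) for _ in range(N)] for _ in range(N)]

def misfit(m):
    # Placeholder objective: real code would compare simulated and observed
    # pressure transients at the wells.
    return sum(abs(a - 5e-4) for row in m for a in row)

def perturb(m, cluster=4):
    # Assign one randomly chosen aperture to a square cluster of fixed size.
    i = random.randrange(N - cluster)
    j = random.randrange(N - cluster)
    a = random.choice(apertures)
    new = [row[:] for row in m]
    for di in range(cluster):
        for dj in range(cluster):
            new[i + di][j + dj] = a
    return new

T, cool = 0.01, 0.995                     # invented annealing parameters
f = misfit(model)
for _ in range(2000):
    cand = perturb(model)
    fc = misfit(cand)
    if fc < f or random.random() < math.exp((f - fc) / T):  # Metropolis rule
        model, f = cand, fc
    T *= cool
print("final misfit:", f)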
Metric optimisation for analogue forecasting by simulated annealing
NASA Astrophysics Data System (ADS)
Bliefernicht, J.; Bárdossy, A.
2009-04-01
It is well known that weather patterns tend to recur from time to time. This property of the atmosphere is used by analogue forecasting techniques. They have a long history in weather forecasting, and there are many applications predicting hydrological variables at the local scale for different lead times. The basic idea of the technique is to identify past weather situations which are similar (analogue) to the predicted one and to take the local conditions of the analogues as the forecast. But the forecast performance of the analogue method depends on user-defined criteria like the choice of the distance function and the size of the predictor domain. In this study we propose a new methodology of optimising both criteria by minimising the forecast error with simulated annealing. The performance of the methodology is demonstrated for the probability forecast of daily areal precipitation. It is compared with a traditional analogue forecasting algorithm, which is used operationally as an element of a hydrological forecasting system. The study is performed for several meso-scale catchments located in the Rhine basin in Germany. The methodology is validated by a jack-knife method in a perfect prognosis framework for a period of 48 years (1958-2005). The predictor variables are derived from the NCEP/NCAR reanalysis data set. The Brier skill score and the economic value are determined to evaluate the forecast skill and value of the technique. We will present the concept of the optimisation algorithm and the outcome of the comparison, and also demonstrate how a decision maker should apply a probability forecast to maximise the economic benefit from it.
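To make the idea concrete, here is a minimal sketch of annealing the weights of the analogue distance function against a toy archive. Everything here (the synthetic predictor vectors, the single-nearest-analogue forecast, the step sizes) is an assumption for illustration; the paper optimises the distance function and predictor domain for daily areal precipitation.

import math, random
random.seed(0)

def make_case():
    x = [random.gauss(0, 1) for _ in range(3)]
    return x, x[0]              # toy truth: the target depends on predictor 0 only

archive = [make_case() for _ in range(200)]

def forecast_error(w):
    err = 0.0
    for xq, yq in archive[:50]:                      # verification cases
        # nearest analogue under the weighted squared distance
        best = min((sum(wi * (a - b) ** 2 for wi, a, b in zip(w, xq, xa)), ya)
                   for xa, ya in archive[50:])
        err += (best[1] - yq) ** 2
    return err / 50

w = [1.0, 1.0, 1.0]
e, T = forecast_error(w), 1.0
for _ in range(500):
    w2 = [max(0.0, wi + random.gauss(0, 0.1)) for wi in w]  # perturb the metric
    e2 = forecast_error(w2)
    if e2 < e or random.random() < math.exp((e - e2) / T):
        w, e = w2, e2
    T *= 0.99
print("optimized weights:", [round(wi, 2) for wi in w], "error:", round(e, 4))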
An Improved Simulated Annealing Technique for Enhanced Mobility in Smart Cities.
Amer, Hayder; Salman, Naveed; Hawes, Matthew; Chaqfeh, Moumena; Mihaylova, Lyudmila; Mayfield, Martin
2016-01-01
Vehicular traffic congestion is a significant problem that arises in many cities. This is due to the increasing number of vehicles that are driving on city roads of limited capacity. The vehicular congestion significantly impacts travel distance, travel time, fuel consumption and air pollution. Avoidance of traffic congestion and providing drivers with optimal paths are not trivial tasks. The key contribution of this work consists of the developed approach for dynamic calculation of optimal traffic routes. Two attributes (the average travel speed of the traffic and the roads' length) are utilized by the proposed method to find the optimal paths. The average travel speed values can be obtained from the sensors deployed in smart cities and communicated to vehicles via the Internet of Vehicles and roadside communication units. The performance of the proposed algorithm is compared to three other algorithms: the simulated annealing weighted sum, the simulated annealing technique for order preference by similarity to the ideal solution and the Dijkstra algorithm. The weighted sum and technique for order preference by similarity to the ideal solution methods are used to formulate different attributes in the simulated annealing cost function. According to the Sheffield scenario, simulation results show that the improved simulated annealing technique for order preference by similarity to the ideal solution method improves the traffic performance in the presence of congestion by an overall average of 19.22% in terms of travel time, fuel consumption and CO₂ emissions as compared to other algorithms; also, similar performance patterns were achieved for the Birmingham test scenario. PMID:27376289
Coordination Hydrothermal Interconnection Java-Bali Using Simulated Annealing
NASA Astrophysics Data System (ADS)
Wicaksono, B.; Abdullah, A. G.; Saputra, W. S.
2016-04-01
Hydrothermal power plant coordination aims to minimize the total cost of operating the system, represented by the fuel cost, subject to constraints during optimization. To perform the optimization, there are several methods that can be used. Simulated Annealing (SA) is a method that can be used to solve such optimization problems. This method was inspired by the annealing, or cooling, process in the manufacture of materials composed of crystals. The basic principle of hydrothermal power plant coordination is to use hydro power plants to carry the base load while thermal power plants supply the remaining load. This study used two hydro power plant units and six thermal power plant units on a 25-bus system, calculating transmission losses and considering power limits in each power plant unit, aided by MATLAB software. Hydrothermal coordination using simulated annealing showed that the total cost of generation for 24 hours is 13,288,508.01.
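A stripped-down sketch of the idea follows, assuming two thermal units with quadratic fuel costs; all coefficients, limits, and loads are invented, and the study's 25-bus network, transmission losses, and 24-hour horizon are omitted. Hydro carries a fixed base load, and the annealing move shifts power between the thermal units so the balance constraint is preserved by construction.

import math, random
random.seed(1)

load, hydro = 500.0, 150.0          # MW: demand and hydro base-load share
a = [0.004, 0.006]                  # invented quadratic fuel-cost coefficients
b = [8.0, 7.5]
c = [100.0, 120.0]
lo, hi = 50.0, 300.0                # thermal unit limits (MW)

def fuel_cost(p):
    return sum(ai * pi**2 + bi * pi + ci for ai, bi, ci, pi in zip(a, b, c, p))

p = [175.0, 175.0]                  # feasible start: sums to load - hydro
f, T = fuel_cost(p), 10.0
for _ in range(5000):
    T *= 0.999
    d = random.gauss(0, 5)          # shift power between the two units,
    q = [p[0] + d, p[1] - d]        # keeping the power balance by construction
    if not all(lo <= x <= hi for x in q):
        continue
    fq = fuel_cost(q)
    if fq < f or random.random() < math.exp((f - fq) / T):
        p, f = q, fq
print("thermal dispatch (MW):", [round(x, 1) for x in p], "fuel cost:", round(f, 2))

The accepted dispatch should drift toward the equal-marginal-cost split between the two units, the textbook optimality condition for this toy problem.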
Liñán-García, Ernesto; Sánchez-Hernández, Juan Paulo; González-Barbosa, J. Javier; González-Flores, Carlos
2016-01-01
A new hybrid Multiphase Simulated Annealing Algorithm using Boltzmann and Bose-Einstein distributions (MPSABBE) is proposed. MPSABBE was designed for solving instances of the Protein Folding Problem (PFP). This new approach has four phases: (i) Multiquenching Phase (MQP), (ii) Boltzmann Annealing Phase (BAP), (iii) Bose-Einstein Annealing Phase (BEAP), and (iv) Dynamical Equilibrium Phase (DEP). BAP and BEAP are simulated annealing search procedures based on the Boltzmann and Bose-Einstein distributions, respectively. DEP is also a simulated annealing search procedure, which is applied at the final temperature of the fourth phase and can be seen as a second Bose-Einstein phase. MQP is a search process that ranges from extremely high to high temperatures, applying a very fast cooling process, and is not very restrictive in accepting new solutions. However, BAP and BEAP range from high to low and from low to very low temperatures, respectively. They are more restrictive in accepting new solutions. DEP uses a particular heuristic to detect the stochastic equilibrium by applying a least squares method during its execution. MPSABBE parameters are tuned with an analytical method, which considers the maximal and minimal deterioration of problem instances. MPSABBE was tested with several instances of PFP, showing that the use of both distributions is better than using only the Boltzmann distribution, as in classical SA. PMID:27413369
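The abstract does not give the acceptance rules explicitly; one plausible reading, sketched below, is that an uphill move of size dE > 0 is accepted with a probability taken from the Boltzmann factor in one phase and from a Bose-Einstein occupation form (clipped to a valid probability, which is our assumption) in the other. At equal temperature the Bose-Einstein form is the more permissive of the two.

import math

def p_boltzmann(dE, T):
    # classical SA acceptance probability for an uphill move
    return math.exp(-dE / T)

def p_bose_einstein(dE, T):
    # occupation-number form, clipped to a valid probability (our assumption)
    return min(1.0, 1.0 / (math.exp(dE / T) - 1.0))

for T in (2.0, 1.0, 0.5):
    dE = 1.0
    print(f"T={T}: Boltzmann {p_boltzmann(dE, T):.3f}, "
          f"Bose-Einstein {p_bose_einstein(dE, T):.3f}")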
NASA Astrophysics Data System (ADS)
Tsoukalas, Ioannis; Kossieris, Panagiotis; Efstratiadis, Andreas; Makropoulos, Christos
2015-04-01
In water resources optimization problems, the calculation of the objective function usually requires first running a simulation model and then evaluating its outputs. In several cases, however, long simulation times may pose significant barriers to the optimization procedure. Often, to obtain a solution within a reasonable time, the user has to substantially restrict the allowable number of function evaluations, thus terminating the search much earlier than required by the problem's complexity. A promising novel strategy to address these shortcomings is the use of surrogate modelling techniques within global optimization algorithms. Here we introduce the Surrogate-Enhanced Evolutionary Annealing-Simplex (SE-EAS) algorithm that couples the strengths of surrogate modelling with the effectiveness and efficiency of the EAS method. The algorithm combines three different optimization approaches (evolutionary search, simulated annealing and the downhill simplex search scheme), in which key decisions are partially guided by numerical approximations of the objective function. The performance of the proposed algorithm is benchmarked against other surrogate-assisted algorithms, in both theoretical and practical applications (i.e. test functions and hydrological calibration problems, respectively), within a limited budget of trials (from 100 to 1000). Results reveal the significant potential of using SE-EAS in challenging optimization problems, involving time-consuming simulations.
A retrodictive stochastic simulation algorithm
Vaughan, T.G.; Drummond, P.D.; Drummond, A.J.
2010-05-20
In this paper we describe a simple method for inferring the initial states of systems evolving stochastically according to master equations, given knowledge of the final states. This is achieved through the use of a retrodictive stochastic simulation algorithm which complements the usual predictive stochastic simulation approach. We demonstrate the utility of this new algorithm by applying it to example problems, including the derivation of likely ancestral states of a gene sequence given a Markovian model of genetic mutation.
Molecular dynamics simulation of annealed ZnO surfaces
Min, Tjun Kit; Yoon, Tiem Leong; Lim, Thong Leng
2015-04-24
The effect of thermally annealing a slab of wurtzite ZnO, terminated by two surfaces, (0001) (which is oxygen-terminated) and (0001̄) (which is Zn-terminated), is investigated via molecular dynamics simulation using the reactive force field (ReaxFF). We found that upon heating beyond a threshold temperature of ∼700 K, surface oxygen atoms begin to sublimate from the (0001) surface. The ratio of oxygen leaving the surface at a given temperature increases as the heating temperature increases. A range of phenomena occurring at the atomic level on the (0001) surface has also been explored, such as the formation of oxygen dimers on the surface and the evolution of the partial charge distribution in the slab during the annealing process. It was found that the partial charge distribution as a function of the depth from the surface undergoes a qualitative change when the annealing temperature is above the threshold temperature.
NASA Astrophysics Data System (ADS)
Szu, Harold H.
1993-09-01
Classical artificial neural networks (ANN) and neurocomputing are reviewed for implementing real-time medical image diagnosis. An algorithm known as the self-reference matched filter, which emulates the spatio-temporal integration ability of the human visual system, might be utilized for multi-frame processing of medical imaging data. A Cauchy machine, implementing a fast simulated annealing schedule, can determine the degree of abnormality by the degree of orthogonality between the patient imagery and the class of features of healthy persons. An automatic inspection process based on multiple-modality image sequences is simulated by incorporating the following new developments: (1) 1-D space-filling Peano curves to preserve the 2-D neighborhood pixels' relationship; (2) fast simulated Cauchy annealing for the global optimization of self-feature extraction; and (3) a mini-max energy function for intra- and inter-cluster segregation, useful for top-down ANN designs.
Dynamically optimizing experiment schedules of a laboratory robot system with simulated annealing.
Cabrera, Cristina; Fine-Morris, Morgan; Pokross, Matthew; Kish, Kevin; Michalczyk, Stephen; Cahn, Matthew; Klei, Herbert; Russo, Mark F
2014-12-01
A scheduler has been developed for an integrated laboratory robot system that operates in an always-on mode. The integrated system is designed for imaging plates containing protein crystallization experiments, and it allows crystallographers to enter plates at any time and request that they be imaged at multiple time points in the future. The scheduler must rearrange tasks within the time it takes to image one plate, trading off the quality of the schedule for the speed of the computation. For this reason, the scheduler was based on a simulated annealing algorithm with an objective function that makes use of a linear programming solver. To optimize the scheduler, extensive computational simulations were performed involving a difficult but representative scheduling problem. The simulations explore multiple configurations of the simulated annealing algorithm, including both geometric and adaptive annealing schedules, 3 neighborhood functions, and 20 neighborhood diameters. An optimal configuration was found that produced the best results in less than 60 seconds, well within the window necessary to dynamically reschedule imaging tasks as new plates are entered into the system. PMID:25117530
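For orientation, here is a hedged sketch of the two schedule families the simulations above compare; the decay factor, target acceptance rate, and gain are invented values, not the study's tuned configuration.

# Illustrative-only cooling schedules; parameter values are assumptions.
def geometric(T, alpha=0.95):
    # multiply the temperature by a fixed factor each step
    return alpha * T

def adaptive(T, acceptance_rate, target=0.4, gain=0.05):
    # cool faster when too many candidate moves are accepted,
    # and relax (slightly reheat) when too few are accepted
    if acceptance_rate > target:
        return T * (1.0 - gain)
    return T * (1.0 + 0.5 * gain)

T = 10.0
for _ in range(100):
    T = geometric(T)
print("geometric temperature after 100 steps:", round(T, 4))  # about 0.0592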
Sensitivity study on hydraulic well testing inversion using simulated annealing
Nakao, Shinsuke; Najita, J.; Karasaki, Kenzi
1997-11-01
For environmental remediation, management of nuclear waste disposal, or geothermal reservoir engineering, it is very important to evaluate the permeabilities, spacing, and sizes of the subsurface fractures which control ground water flow. Cluster variable aperture (CVA) simulated annealing has been used as an inversion technique to construct fluid flow models of fractured formations based on transient pressure data from hydraulic tests. A two-dimensional fracture network system is represented as a filled regular lattice of fracture elements. The algorithm iteratively changes the apertures of a cluster of fracture elements, chosen randomly from a list of discrete apertures, to improve the match to observed pressure transients. The size of the clusters is held constant throughout the iterations. Sensitivity studies using simple fracture models with eight wells show that, in general, it is necessary to conduct interference tests using at least three different wells as the pumping well in order to reconstruct the fracture network with a transmissivity contrast of one order of magnitude, particularly when the cluster size is not known a priori. Because hydraulic inversion is inherently non-unique, it is important to utilize additional information. The authors investigated the relationship between the scale of heterogeneity and the optimum cluster size (and its shape) to enhance the reliability and convergence of the inversion. It appears that a cluster size corresponding to about 20% to 40% of the practical range of the spatial correlation is optimal. Inversion results of the Raymond test site data are also presented, and the practical range of spatial correlation is evaluated to be about 5 to 10 m from the optimal cluster size in the inversion.
Resorting the NIST undulator using simulated annealing for field error reduction
NASA Astrophysics Data System (ADS)
Denbeaux, Greg; Johnson, Lewis E.; Madey, John M. J.
2000-06-01
We have used a simulated annealing algorithm to sort the samarium cobalt blocks and vanadium permendur poles in the hybrid NIST undulator to optimize the spectrum of the emitted light. While simulated annealing has proven highly effective in sorting the SmCo blocks in pure REC undulators [1-5], the reliance on magnetically "soft" poles operating near saturation to concentrate the flux in hybrid undulators introduces a pair of additional variables (the permeability and saturation induction of the poles) which limit the utility of the assumption of superposition on which most simulated annealing codes rely. Detailed magnetic measurements clearly demonstrated the failure of the superposition principle due to random variations in permeability in the "unsorted" NIST undulator. To deal with this issue, we measured both the magnetization of the REC blocks and the permeability of the NIST undulator's integrated vanadium permendur poles, and implemented a sorting criterion that minimized the pole-to-pole variations in permeability, so as to satisfy the criteria for realization of superposition on a nearest-neighbor basis. Though still imperfect, the computed spectrum of the radiation from the re-sorted and annealed NIST undulator is significantly superior to that of the original, unsorted device.
Huang, Feng-zhen; Yuan, Yan; Zhang, Xiu-bao; Wang, Qian; Zhou, Zhi-liang
2011-07-01
Phase correction is one of the key technologies in spectrum recovery for the Fourier transform imaging spectrometer. The present paper proposes a correction method based on a simulated annealing algorithm to calculate the phase error, which overcomes the disadvantage of existing methods that cannot correct an interferogram with noise. The method determines the phase optimum by controlling the phase decrease function, attaining the objective function value by correcting the interferogram data with random phase values generated in the phase range, and determining the objective function increment in accordance with the Metropolis criterion. Simulation results indicate that the optimized phase error is less than 0.5%, and that both the accuracy error and the stability error of the recovered relative spectrum are less than 1%, a great improvement compared with the existing algorithm. PMID:21942072
Mauldon, A.D.; Karasaki, K.; Martel, S.J.; Long, J.C.S.; Landsfield, M.; Mensch, A.; Vomvoris, S.
1993-11-01
One of the characteristics of flow and transport in fractured rock is that the flow may be largely confined to a poorly connected network of fractures. In order to represent this condition, Lawrence Berkeley Laboratory has been developing a new type of fracture hydrology model called an equivalent discontinuum model. In this model we represent the discontinuous nature of the problem through flow on a partially filled lattice. This is done through a statistical inverse technique called "simulated annealing." The fracture network model is "annealed" by continually modifying a base model, or "template," so that with each modification, the model behaves more and more like the observed system. This template is constructed using geological and geophysical data to identify the regions that possibly conduct fluid and the probable orientations of channels that conduct fluid. In order to see how the simulated annealing algorithm works, we have developed a synthetic case. In this case, the geometry of the fracture network is completely known, so that the results of annealing to steady state data can be evaluated absolutely. We also analyze field data from the Migration Experiment at the Grimsel Rock Laboratory in Switzerland. 28 refs., 14 figs., 3 tabs.
Multigrid hierarchical simulated annealing method for reconstructing heterogeneous media.
Pant, Lalit M; Mitra, Sushanta K; Secanell, Marc
2015-12-01
A reconstruction methodology based on different-phase-neighbor (DPN) pixel swapping and multigrid hierarchical annealing is presented. The method performs reconstructions by starting at a coarse image and successively refining it. The DPN information is used at each refinement stage to freeze interior pixels of preformed structures. This preserves the large-scale structures in refined images and also reduces the number of pixels to be swapped, thereby resulting in a decrease in the necessary computational time to reach a solution. Compared to conventional single-grid simulated annealing, this method was found to reduce the required computation time to achieve a reconstruction by around a factor of 70-90, with the potential of even higher speedups for larger reconstructions. The method is able to perform medium-sized (up to 300³ voxels) three-dimensional reconstructions with multiple correlation functions in 36-47 h. PMID:26764849
Discrete-State Simulated Annealing For Traveling-Wave Tube Slow-Wave Circuit Optimization
NASA Technical Reports Server (NTRS)
Wilson, Jeffrey D.; Bulson, Brian A.; Kory, Carol L.; Williams, W. Dan (Technical Monitor)
2001-01-01
Algorithms based on the global optimization technique of simulated annealing (SA) have proven useful in designing traveling-wave tube (TWT) slow-wave circuits for high RF power efficiency. The characteristic of SA that enables it to determine a globally optimized solution is its ability to accept non-improving moves in a controlled manner. In the initial stages of the optimization, the algorithm moves freely through configuration space, accepting most of the proposed designs. This freedom of movement allows non-intuitive designs to be explored rather than restricting the optimization to local improvement upon the initial configuration. As the optimization proceeds, the rate of acceptance of non-improving moves is gradually reduced until the algorithm converges to the optimized solution. The rate at which the freedom of movement is decreased is known as the annealing or cooling schedule of the SA algorithm. The main disadvantage of SA is that there is not a rigorous theoretical foundation for determining the parameters of the cooling schedule. The choice of these parameters is highly problem dependent, and the designer needs to experiment in order to determine values that will provide a good optimization in a reasonable amount of computational time. This experimentation can absorb a large amount of time, especially when the algorithm is being applied to a new type of design. In order to eliminate this disadvantage, a variation of SA known as discrete-state simulated annealing (DSSA) was recently developed. DSSA provides the theoretical foundation for a generic cooling schedule which is problem independent. Results of similar quality to SA can be obtained, but without the extra computational time required to tune the cooling parameters. Two algorithm variations based on DSSA were developed and programmed into a Microsoft Excel spreadsheet graphical user interface (GUI) to the two-dimensional nonlinear multisignal helix traveling-wave amplifier analysis program TWA3.
Stochastic annealing simulations of defect interactions among subcascades
Heinisch, H.L.; Singh, B.N.
1997-04-01
The effects of the subcascade structure of high energy cascades on the temperature dependencies of annihilation, clustering and free defect production are investigated. The subcascade structure is simulated by closely spaced groups of lower energy MD cascades. The simulation results illustrate the strong influence of the defect configuration existing in the primary damage state on subsequent intracascade evolution. Other significant factors affecting the evolution of the defect distribution are the large differences in mobility and stability of vacancy and interstitial defects and the rapid one-dimensional diffusion of small, glissile interstitial loops produced directly in cascades. Annealing simulations are also performed on high-energy, subcascade-producing cascades generated with the binary collision approximation and calibrated to MD results.
NASA Astrophysics Data System (ADS)
Deist, T. M.; Gorissen, B. L.
2016-02-01
High-dose-rate brachytherapy is a tumor treatment method where a highly radioactive source is brought in close proximity to the tumor. In this paper we develop a simulated annealing algorithm to optimize the dwell times at preselected dwell positions to maximize tumor coverage under dose-volume constraints on the organs at risk. Compared to existing algorithms, our algorithm has advantages in terms of speed and objective value and does not require an expensive general purpose solver. Its success mainly depends on exploiting the efficiency of matrix multiplication and a careful selection of the neighboring states. In this paper we outline its details and make an in-depth comparison with existing methods using real patient data.
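To illustrate the structure of such an optimizer, the hedged sketch below uses synthetic dose-rate matrices and a soft penalty in place of the paper's dose-volume constraints; the dose thresholds, penalty weight, and cooling constants are all invented. The key point from the abstract is that evaluating a candidate reduces to the matrix product dose = D @ t.

import numpy as np
rng = np.random.default_rng(0)

n_dwell, n_tumor, n_oar = 20, 200, 100
D_tumor = rng.uniform(0.0, 1.0, (n_tumor, n_dwell))    # synthetic dose-rate kernels
D_oar = rng.uniform(0.0, 0.3, (n_oar, n_dwell))

def objective(t):
    coverage = np.mean(D_tumor @ t >= 10.0)             # fraction of tumor covered
    violation = np.mean(np.maximum(D_oar @ t - 5.0, 0.0))  # OAR overdose penalty
    return coverage - violation                          # to be maximized

t = np.ones(n_dwell)
f, T = objective(t), 0.1
for _ in range(3000):
    cand = t.copy()
    i = rng.integers(n_dwell)
    cand[i] = max(0.0, cand[i] + rng.normal(0, 0.5))     # perturb one dwell time
    fc = objective(cand)
    if fc > f or rng.random() < np.exp((fc - f) / T):
        t, f = cand, fc
    T *= 0.999
print("coverage-minus-penalty objective:", round(float(f), 3))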
Simulated annealing and entanglement of formation for (n ⊗ m)-dimensional mixed states
NASA Astrophysics Data System (ADS)
Allende, S.; Altbir, D.; Retamal, J. C.
2015-08-01
The simulated annealing algorithm, a method commonly used to calculate properties of condensed-matter systems, is used to calculate entanglement of formation for higher-dimensional mixed states. The method relies on representing a mixed state as a pure state in an enlarged Hilbert space, and on the search for the extension minimizing the convex roof of entanglement of formation. Exact results are found, and predictions for system reservoir dynamics in higher dimensions are confirmed. These findings open the way for new applications of the method in quantum information.
Solving the dynamic berth allocation problem by simulated annealing
NASA Astrophysics Data System (ADS)
Lin, Shih-Wei; Ting, Ching-Jung
2014-03-01
Berth allocation, the first operation when vessels arrive at a port, is one of the major container port optimization problems. From both the port operator's and the ocean carriers' perspective, minimization of the time a ship spends at berth is a key goal of port operations. This article focuses on two versions of the dynamic berth allocation problem (DBAP): discrete and continuous cases. The first case assigns ships to a given set of berth positions; the second one permits them to be moored anywhere along the berth. Simulated annealing (SA) approaches are proposed to solve the DBAP. The proposed SAs are tested with numerical instances for different sizes from the literature. Experimental results show that the proposed SA can obtain the optimal solutions in all instances of discrete cases and update 27 best known solutions in the continuous case.
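The record above covers both discrete and continuous DBAP variants; the hedged sketch below treats only a toy discrete case, with invented arrival and handling times, FIFO service within each berth assumed, and a single-ship reassignment neighborhood.

import math, random
random.seed(2)

arrival = [0, 1, 2, 3, 4, 6, 7, 9]         # ship arrival times (invented)
handling = [5, 3, 6, 2, 4, 3, 5, 2]        # handling times (invented)
n_berths = 3

def total_time(assign):
    # serve each berth FIFO by arrival; sum each ship's time in port
    total = 0
    for b in range(n_berths):
        free = 0
        ships = sorted((i for i in range(len(arrival)) if assign[i] == b),
                       key=lambda i: arrival[i])
        for i in ships:
            start = max(free, arrival[i])
            free = start + handling[i]
            total += free - arrival[i]
    return total

assign = [random.randrange(n_berths) for _ in arrival]
f, T = total_time(assign), 10.0
for _ in range(4000):
    cand = assign[:]
    cand[random.randrange(len(cand))] = random.randrange(n_berths)  # move one ship
    fc = total_time(cand)
    if fc < f or random.random() < math.exp((f - fc) / T):
        assign, f = cand, fc
    T *= 0.999
print("berth assignment:", assign, "total time in port:", f)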
Hussain, Faraz; Jha, Sumit K; Jha, Susmit; Langmead, Christopher J
2014-01-01
Stochastic models are increasingly used to study the behaviour of biochemical systems. While the structure of such models is often readily available from first principles, unknown quantitative features of the model are incorporated into the model as parameters. Algorithmic discovery of parameter values from experimentally observed facts remains a challenge for the computational systems biology community. We present a new parameter discovery algorithm that uses simulated annealing, sequential hypothesis testing, and statistical model checking to learn the parameters in a stochastic model. We apply our technique to a model of glucose and insulin metabolism used for in-silico validation of artificial pancreata and demonstrate its effectiveness by developing a parallel CUDA-based implementation for parameter synthesis in this model. PMID:24989866
Formation Algorithms and Simulation Testbed
NASA Technical Reports Server (NTRS)
Wette, Matthew; Sohl, Garett; Scharf, Daniel; Benowitz, Edward
2004-01-01
Formation flying for spacecraft is a rapidly developing field that will enable a new era of space science. For one of its missions, the Terrestrial Planet Finder (TPF) project has selected a formation flying interferometer design to detect earth-like planets orbiting distant stars. In order to advance the technology needed for the TPF formation flying interferometer, the TPF project has been developing a distributed real-time testbed to demonstrate end-to-end operation of formation flying with TPF-like functionality and precision. This is the Formation Algorithms and Simulation Testbed (FAST). The FAST was conceived to bring out issues in timing, data fusion, inter-spacecraft communication, inter-spacecraft sensing and system-wide formation robustness. In this paper we describe the FAST and show results from a two-spacecraft formation scenario. The two-spacecraft simulation is the first time that precision end-to-end formation flying operation has been demonstrated in a distributed real-time simulation environment.
Design of SLM-constrained MACE filters using simulated annealing optimization
NASA Astrophysics Data System (ADS)
Khan, Ajmal; Rajan, P. Karivaratha
1993-10-01
Among the available filters for pattern recognition, the MACE filter produces the sharpest peak with very small sidelobes. However, when these filters are implemented using practical spatial light modulators (SLMs), because of the constrained nature of the amplitude and phase modulation characteristics of the SLM, the implementation is no longer optimal. The resulting filter response does not produce high accuracy in the recognition of the test images. In this paper, this deterioration in response is overcome by designing constrained MACE filters such that the filter is allowed to have only those values of phase-amplitude combination that can be implemented on a specified SLM. The design is carried out using the simulated annealing optimization technique. The algorithm developed and the results obtained from computer simulations of the designed filters are presented.
Freitez, Juan A.; Sanchez, Morella; Ruette, Fernando
2009-08-13
Application of simulated annealing (SA) and simplified GSA (SGSA) techniques for parameter optimization of the parametric quantum chemistry method CATIVIC was performed. A set of organic molecules was selected to test these techniques. Comparison of the algorithms was carried out for error function minimization with respect to experimental values. Results show that SGSA is more efficient than SA with respect to computer time. Accuracy is similar in both methods; however, there are important differences in the final set of parameters.
Retrieval of Surface and Subsurface Moisture of Bare Soil Using Simulated Annealing
NASA Astrophysics Data System (ADS)
Tabatabaeenejad, A.; Moghaddam, M.
2009-12-01
Soil moisture is of fundamental importance to many hydrological and biological processes. Soil moisture information is vital to understanding the cycling of water, energy, and carbon in the Earth system. Knowledge of soil moisture is critical to agencies concerned with weather and climate, runoff potential and flood control, soil erosion, reservoir management, water quality, agricultural productivity, drought monitoring, and human health. The need to monitor soil moisture on a global scale has motivated missions such as Soil Moisture Active and Passive (SMAP) [1]. Rough surface scattering models and remote sensing retrieval algorithms are essential in the study of soil moisture, because soil can be represented as a rough surface structure. Effects of soil moisture on the backscattered field have been studied since the 1960s, but soil moisture estimation remains a challenging problem and there is still a need for more accurate and more efficient inversion algorithms. It has been shown that the simulated annealing method is a powerful tool for inversion of the model parameters of rough surface structures [2]. The sensitivity of this method to measurement noise has also been investigated assuming a two-layer structure characterized by the layers' dielectric constants, layer thickness, and statistical properties of the rough interfaces [2]. However, since the moisture profile varies with depth, it is sometimes necessary to model the rough surface as a layered structure with a rough interface on top and a stratified structure below, where each layer is assumed to have a constant volumetric moisture content. In this work, we discretize the soil structure into several layers of constant moisture content to examine the effect of the subsurface profile on the backscattering coefficient. We will show that while the moisture profile could vary in deeper layers, these layers do not affect the scattered electromagnetic field significantly. Therefore, we can use just a few layers
Redesigning rain gauges network in Johor using geostatistics and simulated annealing
NASA Astrophysics Data System (ADS)
Aziz, Mohd Khairul Bazli Mohd; Yusof, Fadhilah; Daud, Zalina Mohd; Yusop, Zulkifli; Kasno, Mohammad Afif
2015-02-01
Recently, many rainfall network design techniques have been developed, discussed and compared by many researchers. Present-day hydrological studies require higher levels of accuracy from collected data. In numerous basins, the rain gauge stations are located without clear scientific understanding. In this study, an attempt is made to redesign the rain gauge network for Johor, Malaysia in order to meet the required level of accuracy preset by rainfall data users. The existing network of 84 rain gauges in Johor is optimized and redesigned into new locations by using rainfall, humidity, solar radiation, temperature and wind speed data collected during the monsoon season (November - February) of 1975 until 2008. This study used the combination of a geostatistics method (variance-reduction method) and simulated annealing as the optimization algorithm during the redesign process. The results show that the new rain gauge locations provide the minimum value of estimated variance. This shows that the combination of the geostatistics method (variance-reduction method) and simulated annealing is successful in the development of the new optimum rain gauge system.
An archived multi-objective simulated annealing for a dynamic cellular manufacturing system
NASA Astrophysics Data System (ADS)
Shirazi, Hossein; Kia, Reza; Javadian, Nikbakhsh; Tavakkoli-Moghaddam, Reza
2014-05-01
To design a group layout of a cellular manufacturing system (CMS) in a dynamic environment, a multi-objective mixed-integer non-linear programming model is developed. The model integrates cell formation, group layout and production planning (PP) as three interrelated decisions involved in the design of a CMS. This paper provides an extensive coverage of important manufacturing features used in the design of CMSs and enhances the flexibility of an existing model in handling the fluctuations of part demands more economically by adding machine depot and PP decisions. Two conflicting objectives to be minimized are the total costs and the imbalance of workload among cells. As the considered objectives in this model are in conflict with each other, an archived multi-objective simulated annealing (AMOSA) algorithm is designed to find Pareto-optimal solutions. Matrix-based solution representation, a heuristic procedure generating an initial and feasible solution and efficient mutation operators are the advantages of the designed AMOSA. To demonstrate the efficiency of the proposed algorithm, the performance of AMOSA is compared with an exact algorithm (i.e., ε-constraint method) solved by the GAMS software and a well-known evolutionary algorithm, namely NSGA-II for some randomly generated problems based on some comparison metrics. The obtained results show that the designed AMOSA can obtain satisfactory solutions for the multi-objective model.
Solving a one-dimensional cutting stock problem by simulated annealing and tabu search
NASA Astrophysics Data System (ADS)
Jahromi, Meghdad HMA; Tavakkoli-Moghaddam, Reza; Makui, Ahmad; Shamsi, Abbas
2012-10-01
A cutting stock problem is one of the main and classical problems in operations research and is modeled as an LP problem. Because of its NP-hard nature, finding an optimal solution in reasonable time is extremely difficult, or at least uneconomical. In this paper, two meta-heuristic algorithms, namely simulated annealing (SA) and tabu search (TS), are proposed and developed for this type of complex and large-sized problem. To evaluate the efficiency of these proposed approaches, several problems are solved using SA and TS, and then the related results are compared. The results show that the proposed SA gives better results in terms of objective function values than TS.
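A hedged sketch of one common SA encoding for this problem follows (the paper does not specify its encoding): the state is an ordering of the required pieces, decoded greedily by first-fit into stock bars, and the objective is the number of bars used. All data here are invented; the piece order is deliberately bad at the start so the swap neighborhood has room to improve it.

import math, random
random.seed(3)

stock_len = 9
pieces = [4] * 6 + [5] * 6          # invented demand; optimal packing pairs 4+5

def bars_used(order):
    bars = []                        # remaining length per open bar
    for p in (pieces[i] for i in order):
        for k, rem in enumerate(bars):       # first-fit placement
            if rem >= p:
                bars[k] -= p
                break
        else:
            bars.append(stock_len - p)       # open a new bar
    return len(bars)

order = list(range(len(pieces)))
f, T = bars_used(order), 2.0
for _ in range(3000):
    i, j = random.sample(range(len(order)), 2)
    cand = order[:]
    cand[i], cand[j] = cand[j], cand[i]      # swap two positions in the order
    fc = bars_used(cand)
    if fc < f or random.random() < math.exp((f - fc) / T):
        order, f = cand, fc
    T *= 0.999
print("bars used:", f, "(lower bound:", -(-sum(pieces) // stock_len), ")")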
Shape optimization of road tunnel cross-section by simulated annealing
NASA Astrophysics Data System (ADS)
Sobótka, Maciej; Pachnicz, Michał
2016-06-01
The paper concerns shape optimization of a tunnel excavation cross-section. The study incorporates the simulated annealing (SA) optimization procedure. The form of the cost function derives from the energetic optimality condition formulated in the authors' previous papers. The utilized algorithm takes advantage of the optimization procedure already published by the authors. Unlike other approaches presented in the literature, the one introduced in this paper takes into consideration the practical requirement of preserving a fixed clearance gauge. Itasca Flac software is utilized in the numerical examples. The optimal excavation shapes are determined for five different in situ stress ratios. This factor significantly affects the optimal topology of the excavation. The resulting shapes are elongated in the direction of the greater principal stress. Moreover, the obtained optimal shapes have smooth contours circumscribing the gauge.
OBJECT KINETIC MONTE CARLO SIMULATIONS OF CASCADE ANNEALING IN TUNGSTEN
Nandipati, Giridhar; Setyawan, Wahyu; Heinisch, Howard L.; Roche, Kenneth J.; Kurtz, Richard J.; Wirth, Brian D.
2014-03-31
The objective of this work is to study the annealing of primary cascade damage created by primary knock-on atoms (PKAs) of various energies, at various temperatures in bulk tungsten using the object kinetic Monte Carlo (OKMC) method.
NASA Astrophysics Data System (ADS)
White, Ronald P.; Mayne, Howard R.
2000-05-01
An annealing schedule, T(t), is the temperature as a function of time whose goal is to bring a system from some initial low-order state to a final high-order state. We use the probability in the lowest energy level as the order parameter, so that an ideally annealed system would have all its population in its ground state. We consider a model system comprised of discrete energy levels separated by activation barriers. We have carried out annealing calculations on this system for a range of system parameters. In particular, we considered the schedule as a function of the energy level spacing, of the height of the activation barriers, and, in some cases, as a function of the degeneracies of the levels. For a given set of physical parameters and maximum available time, t_m, we were able to obtain the optimal schedule by using a genetic algorithm (GA) approach. For the two-level system, analytic solutions are available, and were compared with the GA-optimized results. The agreement was essentially exact. We were able to identify systematic behaviors of the schedules and trends in final probabilities as a function of the parameters. We have also carried out Metropolis Monte Carlo (MMC) calculations on simple potential energy functions using the optimal schedules available from the model calculations. Agreement between the model and MMC calculations was excellent.
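The abstract's model can be made concrete with a hedged two-level sketch; the gap, barrier height, schedule shape, and Arrhenius rate form below are our assumptions, not the paper's values. We integrate the master equation for the ground-state population p(t) under a linear cooling schedule and read off the final order parameter.

import math

E, B = 1.0, 2.0                       # level spacing and activation barrier (invented)
t_m, steps = 100.0, 10000             # total annealing time and integration steps
dt = t_m / steps

p = 0.5                               # start with the population split evenly
for k in range(steps):
    T = max(1e-3, 2.0 * (1 - k / steps))      # linear cooling from T=2 toward 0
    k_down = math.exp(-B / T)                 # excited -> ground (over barrier B)
    k_up = math.exp(-(B + E) / T)             # ground -> excited (barrier B + gap E)
    p += dt * (k_down * (1 - p) - k_up * p)   # forward-Euler master equation step
    p = min(1.0, max(0.0, p))
print("final ground-state population:", round(p, 4))

Cooling too fast freezes p near its high-temperature value; a slower schedule lets the population track the shifting equilibrium toward the ground state, which is exactly the trade-off the optimal schedules in the study resolve.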
A deterministic annealing algorithm for approximating a solution of the min-bisection problem.
Dang, Chuangyin; Ma, Wei; Liang, Jiye
2009-01-01
The min-bisection problem is an NP-hard combinatorial optimization problem. In this paper an equivalent linearly constrained continuous optimization problem is formulated and an algorithm is proposed for approximating its solution. The algorithm is derived from the introduction of a logarithmic-cosine barrier function, where the barrier parameter behaves as temperature in an annealing procedure and decreases from a sufficiently large positive number to zero. The algorithm searches for a better solution in a feasible descent direction, which has a desired property that lower and upper bounds are always satisfied automatically if the step length is a number between zero and one. We prove that the algorithm converges to at least a local minimum point of the problem if a local minimum point of the barrier problem is generated for a sequence of descending values of the barrier parameter with a limit of zero. Numerical results show that the algorithm is much more efficient than two of the best existing heuristic methods for the min-bisection problem, Kernighan-Lin method with multiple starting points (MSKL) and multilevel graph partitioning scheme (MLGP). PMID:18995985
NASA Astrophysics Data System (ADS)
Chen, Zheng; Mi, Chunting Chris; Xia, Bing; You, Chenwen
2014-12-01
In this paper, an energy management method is proposed for a power-split plug-in hybrid electric vehicle (PHEV). Through analysis of the PHEV powertrain, a series of quadratic equations is employed to approximate the vehicle's fuel rate, using battery current as the input. Pontryagin's Minimum Principle (PMP) is introduced to find the battery current commands by solving the Hamiltonian function. A Simulated Annealing (SA) algorithm is applied to calculate the engine-on power and the maximum current coefficient. Moreover, the battery state of health (SOH) is introduced to extend the application of the proposed algorithm. Simulation results verified that the proposed algorithm can reduce fuel consumption compared to charge-depleting (CD) and charge-sustaining (CS) modes.
Fractal Landscape Algorithms for Environmental Simulations
NASA Astrophysics Data System (ADS)
Mao, H.; Moran, S.
2014-12-01
Natural science and geographical research are now able to take advantage of environmental simulations that more accurately test experimental hypotheses, resulting in deeper understanding. Experiments affected by the natural environment can benefit from 3D landscape simulations capable of simulating a variety of terrains and environmental phenomena. Such simulations can employ random terrain generation algorithms that dynamically simulate environments to test specific models against a variety of factors. Through the use of noise functions such as Perlin noise, Simplex noise, and the diamond-square algorithm, computers can generate simulations that model a variety of landscapes and ecosystems. This study shows how these algorithms work together to create realistic landscapes. By seeding values into the diamond-square algorithm, one can control the shape of the landscape. Perlin noise and Simplex noise are also used to simulate moisture and temperature. The smooth gradient created by coherent noise allows more realistic landscapes to be simulated. Terrain generation algorithms can be used in environmental studies and physics simulations. Potential studies that would benefit from simulations include the geophysical impact of flash floods or drought on a particular region and regional impacts on low-lying areas due to global warming and rising sea levels. Furthermore, terrain generation algorithms also serve as aesthetic tools to display landscapes (Google Earth) and simulate planetary landscapes. Hence, they can be used as tools to assist science education. Algorithms used to generate these natural phenomena provide scientists a different approach to analyzing our world. The random algorithms used in terrain generation contribute not only to generating the terrains themselves but are also capable of simulating weather patterns.
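For reference, here is a minimal sketch of the diamond-square algorithm mentioned above; the grid size, roughness decay, and corner seeding are illustrative choices. The diamond step averages the four corners of each square into its center, the square step averages edge neighbors into midpoints, and the random perturbation amplitude halves at each scale.

import random

def diamond_square(n, roughness=1.0, seed=0):
    """Generate a (2**n + 1) square heightmap with the diamond-square algorithm."""
    random.seed(seed)
    size = 2**n + 1
    h = [[0.0] * size for _ in range(size)]
    for i in (0, size - 1):               # seed the four corners
        for j in (0, size - 1):
            h[i][j] = random.uniform(-1, 1)
    step, scale = size - 1, roughness
    while step > 1:
        half = step // 2
        # diamond step: centers of squares
        for i in range(half, size, step):
            for j in range(half, size, step):
                avg = (h[i - half][j - half] + h[i - half][j + half] +
                       h[i + half][j - half] + h[i + half][j + half]) / 4
                h[i][j] = avg + random.uniform(-scale, scale)
        # square step: edge midpoints, averaging in-bounds neighbors
        for i in range(0, size, half):
            for j in range((i + half) % step, size, step):
                pts = [h[i + di][j + dj] for di, dj in
                       ((-half, 0), (half, 0), (0, -half), (0, half))
                       if 0 <= i + di < size and 0 <= j + dj < size]
                h[i][j] = sum(pts) / len(pts) + random.uniform(-scale, scale)
        step, scale = half, scale / 2     # refine the grid, damp the noise
    return h

terrain = diamond_square(5)
print(len(terrain), "x", len(terrain[0]), "heightmap generated")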
NASA Astrophysics Data System (ADS)
Theodorakos, I.; Zergioti, I.; Vamvakas, V.; Tsoukalas, D.; Raptis, Y. S.
2014-01-01
In this work, a picosecond diode-pumped solid-state laser and a nanosecond Nd:YAG laser have been used for the annealing and partial nano-crystallization of an amorphous silicon layer. These experiments were conducted as an alternative or complement to the plasma-enhanced chemical vapor deposition method for the fabrication of a micromorph tandem solar cell. The laser experimental work was combined with simulations of the annealing process, in terms of the evolution of the temperature distribution, in order to predetermine the optimum annealing conditions. The annealed material was studied as a function of several annealing parameters (wavelength, pulse duration, fluence), with respect to its structural properties, by X-ray diffraction, SEM, and micro-Raman techniques.
NASA Astrophysics Data System (ADS)
Gaillot, P.; Bardaine, T.; Lyon-Caen, H.
2004-12-01
In recent years, various automatic phase pickers based on the wavelet transform have been developed. The main motivation for using wavelet transforms is that they are excellent at finding the characteristics of transient signals, they have good time resolution at all periods, and they are easy to program for fast execution. Thus, the time-scale properties and flexibility of wavelets allow detection of P and S phases in a broad frequency range, making their utilization possible in various contexts. However, the direct application of an automatic picking program in a different context/network than the one for which it was initially developed quickly becomes tedious. In fact, independently of the strategy involved in automatic picking algorithms (window average, autoregressive, beamforming, optimization filtering, neural network), all developed algorithms use different parameters that depend on the objective of the seismological study, the region and the seismological network. Classically, these parameters are manually defined by trial and error or by a calibrated learning stage. In order to facilitate this laborious process, we have developed an automated method that provides optimal parameters for the picking programs. The set of parameters can be explored using simulated annealing, which is a generic name for a family of optimization algorithms based on the principle of stochastic relaxation. The optimization process amounts to systematically modifying an initial realization so as to decrease the value of the objective function, getting the realization acceptably close to the target statistics. Different formulations of the optimization problem (objective function) are discussed using (1) world seismicity data recorded by the French national seismic monitoring network (ReNass), (2) regional seismicity data recorded in the framework of the Corinth Rift Laboratory (CRL) experiment, (3) induced seismicity data from the gas field of Lacq (Western Pyrenees), and (4) micro
Fast simulated annealing inversion of surface waves on pavement using phase-velocity spectra
Ryden, N.; Park, C.B.
2006-01-01
The conventional inversion of surface waves depends on modal identification of measured dispersion curves, which can be ambiguous. It is possible to avoid mode-number identification and extraction by inverting the complete phase-velocity spectrum obtained from a multichannel record. We use the fast simulated annealing (FSA) global search algorithm to minimize the difference between the measured phase-velocity spectrum and that calculated from a theoretical layer model, including the field setup geometry. Results show that this algorithm can help one avoid getting trapped in local minima while searching for the best-matching layer model. The entire procedure is demonstrated on synthetic and field data for asphalt pavement. The viscoelastic properties of the top asphalt layer are taken into account, and the inverted asphalt stiffness as a function of frequency compares well with laboratory tests on core samples. The thickness and shear-wave velocity of the deeper embedded layers are resolved within 10% deviation from those values measured separately during pavement construction. The proposed method may be equally applicable to normal soil site investigation and in the field of ultrasonic testing of materials. © 2006 Society of Exploration Geophysicists.
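The FSA ingredients referred to above are commonly taken to be Cauchy-distributed jumps with the fast schedule T(k) = T0/(1 + k) of Szu and Hartley; the sketch below applies them to a stand-in multimodal objective (the real misfit compares measured and modeled phase-velocity spectra, which is not reproduced here).

import math, random
random.seed(4)

def misfit(x):
    # stand-in objective with many local minima (global minimum at x = 0)
    return x * x + 10 * (1 - math.cos(2 * math.pi * x))

x, T0 = 4.0, 5.0
f = misfit(x)
for k in range(5000):
    T = T0 / (1 + k)                               # fast (Cauchy) annealing schedule
    step = T * math.tan(math.pi * (random.random() - 0.5))  # Cauchy-distributed jump
    cand = x + step
    fc = misfit(cand)
    if fc < f or random.random() < math.exp((f - fc) / max(T, 1e-12)):
        x, f = cand, fc
print("x ~", round(x, 4), "misfit ~", round(f, 4))

The heavy-tailed Cauchy jumps occasionally propose long hops, which is what permits the fast 1/(1 + k) cooling to escape local minima that would trap a Gaussian-step Boltzmann annealer.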
Dynamical SCFT Simulations of Solvent Annealed Thin Films
NASA Astrophysics Data System (ADS)
Paradiso, Sean; Delaney, Kris; Ceniceros, Hector; Garcia-Cervera, Carlos; Fredrickson, Glenn
2014-03-01
Block copolymer thin films are ideal candidates for a broad range of technologies including rejection layers for ultrafiltration membranes, proton-exchange membranes in solar cells, optically active coatings, and lithographic masks for bit patterning storage media. Optimizing the performance of these materials often hinges on tuning the orientation and long-range order of the film's internal nanostructure. In response, solvent annealing techniques have been developed for their promise to afford additional flexibility in tuning thin film morphology, but pronounced processing history dependence and a dizzying parameter space have resulted in slow progress towards developing clear design rules for solvent annealing systems. In this talk, we will report recent theoretical progress in understanding the self assembly dynamics relevant to solvent-annealed and solution-cast block copolymer films. Emphasis will be placed on evaporation-induced ordering trends in both the slow and fast drying regimes for cylinder-forming block copolymers from initially ordered and disordered films, along with the role solvent selectivity plays in the ordering dynamics.
Empirical study of parallel LRU simulation algorithms
NASA Technical Reports Server (NTRS)
Carr, Eric; Nicol, David M.
1994-01-01
This paper reports on the performance of five parallel algorithms for simulating a fully associative cache operating under the LRU (Least-Recently-Used) replacement policy. Three of the algorithms are SIMD, and are implemented on the MasPar MP-2 architecture. Two other algorithms are parallelizations of an efficient serial algorithm on the Intel Paragon. One SIMD algorithm is quite simple, but its cost is linear in the cache size. The two other SIMD algorithms are more complex, but have costs that are independent of the cache size. Both the second and third SIMD algorithms compute all stack distances; the second SIMD algorithm is completely general, whereas the third SIMD algorithm presumes and takes advantage of bounds on the range of reference tags. Both MIMD algorithms implemented on the Paragon are general and compute all stack distances; they differ in one step that may affect their respective scalability. We assess the strengths and weaknesses of these algorithms as a function of problem size and characteristics, and compare their performance on traces derived from the execution of three SPEC benchmark programs.
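For context, the hedged sketch below shows the serial stack-distance computation that such simulators parallelize: under LRU, a reference hits in every cache of size C whenever its stack distance (its depth in the LRU stack) is at most C. The trace is invented.

def stack_distances(trace):
    stack = []                        # LRU stack; most-recent address last
    dists = []
    for addr in trace:
        if addr in stack:
            d = len(stack) - stack.index(addr)  # depth measured from the top
            stack.remove(addr)
        else:
            d = float("inf")          # cold miss: never referenced before
        stack.append(addr)            # addr becomes most recently used
        dists.append(d)
    return dists

trace = ["a", "b", "c", "a", "b", "b", "d", "a"]
print(stack_distances(trace))
# a fully associative LRU cache of size C hits every reference with distance <= C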
Identifying fracture-zone geometry using simulated annealing and hydraulic-connection data
Day-Lewis, F. D.; Hsieh, P.A.; Gorelick, S.M.
2000-01-01
A new approach is presented to condition geostatistical simulation of high-permeability zones in fractured rock to hydraulic-connection data. A simulated-annealing algorithm generates three-dimensional (3-D) realizations conditioned to borehole data, inferred hydraulic connections between packer-isolated borehole intervals, and an indicator (fracture zone or background-K bedrock) variogram model of spatial variability. We apply the method to data from the U.S. Geological Survey Mirror Lake Site in New Hampshire, where connected high-permeability fracture zones exert a strong control on fluid flow at the hundred-meter scale. Single-well hydraulic-packer tests indicate where permeable fracture zones intersect boreholes, and multiple-well pumping tests indicate the degree of hydraulic connection between boreholes. Borehole intervals connected by a fracture zone exhibit similar hydraulic responses, whereas intervals not connected by a fracture zone exhibit different responses. Our approach yields valuable insights into the 3-D geometry of fracture zones at Mirror Lake. Statistical analysis of the realizations yields maps of the probabilities of intersecting specific fracture zones with additional wells. Inverse flow modeling based on the assumption of equivalent porous media is used to estimate hydraulic conductivity and specific storage and to identify those fracture-zone geometries that are consistent with hydraulic test data.
Laser annealing and simulation of amorphous silicon thin films for solar cell applications
NASA Astrophysics Data System (ADS)
Theodorakos, I.; Raptis, Y. S.; Vamvakas, V.; Tsoukalas, D.; Zergioti, I.
2014-03-01
In this work, a picosecond DPSS laser and a nanosecond Nd:YAG laser have been used for the annealing and partial nanocrystallization of an amorphous silicon layer. These experiments were conducted in order to improve the characteristics of a micromorph tandem solar cell. The laser annealing was attempted at 1064 nm in order to obtain the desired crystallization depth and ratios. Preliminary annealing processes were tested with different annealing parameters, such as fluence, repetition rate, and number of pulses. Irradiations were applied in the sub-melt regime in order to prevent significant diffusion of p- and n-dopants within the structure. The laser experimental work was combined with simulations of the laser annealing process, in terms of the evolution of the temperature distribution, using the Synopsys Sentaurus Process TCAD software. The optimum annealing conditions for the two different pulse durations were determined. Experimentally determined optical properties of the samples, such as the absorption coefficient and reflectivity, were used for a more realistic simulation. From the simulation results, a temperature profile appropriate to yield the desired recrystallization was obtained for the case of ps pulses, which was verified by the experimental results. The annealed material was studied, with regard to its structural properties, by XRD, SEM, and micro-Raman techniques, providing consistent information on the characteristics of the nanocrystalline material produced by the laser annealing experiments. It was found that, with the use of ps pulses, the resulting polycrystalline region shows crystallization ratios similar to a PECVD-grown poly-silicon layer, with slightly larger nanocrystallite size.
Phase unwrapping algorithms in laser propagation simulation
NASA Astrophysics Data System (ADS)
Du, Rui; Yang, Lijia
2013-08-01
Current simulations of laser propagation in the atmosphere usually must deal with beams in strong turbulence; simulating the transmission via the Fourier transform can lose part of the phase information, leaving the phase of the beam as a 2-D array wrapped modulo 2π. An effective unwrapping algorithm is needed for continuous results and faster calculation. The unwrapping algorithms used in atmospheric propagation are similar to those used in radar or 3-D surface reconstruction, but not identical. In this article, three classic unwrapping algorithms are tried in wave-front reconstruction simulation: block least squares (BLS), mask-cut (MCUT), and Flynn's minimal discontinuity algorithm (FMD). Each algorithm was tested 100 times under six conditions: low (64x64), medium (128x128), and high (256x256) resolution phase arrays, each with and without noise. Comparing the results leads to the following conclusions. The BLS-based algorithm is the fastest, and its result is acceptable for low-resolution arrays without noise. The MCUT algorithm is more accurate, though it slows as the array resolution increases, and it is sensitive to noise, resulting in large-area errors. Flynn's algorithm has the best accuracy, but it occupies a large amount of memory during calculation. Finally, the article presents a new algorithm based on an Activity-on-Vertex (AOV) network, which builds a logical graph to cut the search space and then finds a minimal-discontinuity solution. The AOV algorithm is faster than MCUT on high-resolution phase arrays, with accuracy as good as the tested FMD.
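The 2-D algorithms compared above (BLS, MCUT, FMD) are substantially more involved, but the underlying wrapping artifact is easy to reproduce in one dimension; the numpy sketch below only illustrates what "wrapped modulo 2π" means and how the 2π offsets are restored:

```python
import numpy as np

# 1-D illustration of the wrapping problem: the true phase ramp exceeds pi,
# so the observed values jump by 2*pi at the fold points.
true_phase = np.linspace(0.0, 6.0 * np.pi, 200)
wrapped = np.angle(np.exp(1j * true_phase))   # values folded into (-pi, pi]
unwrapped = np.unwrap(wrapped)                # re-add 2*pi offsets at jumps
assert np.allclose(unwrapped, true_phase, atol=1e-8)
```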
An exact accelerated stochastic simulation algorithm
Mjolsness, Eric; Orendorff, David; Chatelain, Philippe; Koumoutsakos, Petros
2009-01-01
An exact method for stochastic simulation of chemical reaction networks, which accelerates the stochastic simulation algorithm (SSA), is proposed. The present “ER-leap” algorithm is derived from analytic upper and lower bounds on the multireaction probabilities sampled by SSA, together with rejection sampling and an adaptive multiplicity for reactions. The algorithm is tested on a number of well-quantified reaction networks and is found experimentally to be very accurate on test problems including a chaotic reaction network. At the same time ER-leap offers a substantial speedup over SSA with a simulation time proportional to the 2∕3 power of the number of reaction events in a Galton–Watson process. PMID:19368432
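ER-leap's bound-based rejection sampling is beyond a short sketch, but the exact baseline it accelerates, Gillespie's direct-method SSA, fits in a few lines. The birth-death example at the end is a toy stand-in for the paper's test networks:

```python
import numpy as np

def ssa_direct(x0, stoich, rates, t_end, seed=1):
    """Gillespie's direct-method SSA, the baseline that ER-leap accelerates.
    `stoich` holds one state-change vector per reaction; `rates` maps the
    current state to per-reaction propensities."""
    rng = np.random.default_rng(seed)
    t, x = 0.0, np.asarray(x0, float)
    path = [(0.0, x.tolist())]
    while t < t_end:
        a = rates(x)
        a0 = a.sum()
        if a0 <= 0.0:
            break                               # no reaction can fire
        t += rng.exponential(1.0 / a0)          # waiting time to next event
        j = rng.choice(len(a), p=a / a0)        # which reaction fires
        x = x + stoich[j]
        path.append((t, x.tolist()))
    return path

# Toy birth-death process: 0 -> X at rate 5, X -> 0 at rate 0.1 * X.
path = ssa_direct(
    x0=[10], stoich=np.array([[1], [-1]]),
    rates=lambda x: np.array([5.0, 0.1 * x[0]]), t_end=50.0)
```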
Optimization of pressurized water reactor shuffling by simulated annealing with heuristics
Stevens, J.G.; Smith, K.S.; Rempe, K.R.; Downar, T.J.
1995-09-01
Simulated-annealing optimization of reactor core loading patterns is implemented with support for design heuristics during candidate pattern generation. The SIMAN optimization module uses the advanced nodal method of SIMULATE-3 and the full cross-section detail of CASMO-3 to evaluate accurately the neutronic performance of each candidate, resulting in high-quality patterns. The use of heuristics within simulated annealing is explored. Heuristics improve the consistency of optimization results for both fast- and slow-annealing runs with no penalty from the exclusion of unusual candidates. Thus, the heuristic application of designer judgment during automated pattern generation is shown to be effective. The capability of the SIMAN module to find and evaluate families of loading patterns that satisfy design constraints and have good objective performance within practical run times is demonstrated. The use of automated evaluations of successive cycles to explore multicycle effects of design decisions is discussed.
NASA Technical Reports Server (NTRS)
Sohn, Andrew; Biswas, Rupak
1996-01-01
Solving the hard Satisfiability Problem is time consuming even for modest-sized problem instances. Solving the Random L-SAT Problem is especially difficult due to the ratio of clauses to variables. This report presents a parallel synchronous simulated annealing method for solving the Random L-SAT Problem on a large-scale distributed-memory multiprocessor. In particular, we use a parallel synchronous simulated annealing procedure, called Generalized Speculative Computation, which guarantees the same decision sequence as sequential simulated annealing. To demonstrate the performance of the parallel method, we have selected problem instances varying in size from 100-variables/425-clauses to 5000-variables/21,250-clauses. Experimental results on the AP1000 multiprocessor indicate that our approach can satisfy 99.9 percent of the clauses while giving almost a 70-fold speedup on 500 processors.
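The Generalized Speculative Computation machinery is parallel-hardware specific, but the sequential decision sequence it reproduces is ordinary simulated annealing over truth assignments, with the number of unsatisfied clauses as the energy. A small sketch under that reading (the clause encoding and parameters are illustrative, not taken from the report):

```python
import math
import random

def sa_sat(clauses, n_vars, t0=2.0, cooling=0.999, steps=20000, seed=3):
    """Sequential simulated annealing for random k-SAT. Energy = number of
    unsatisfied clauses; one variable flip per step. Literal v > 0 means
    variable v is true; v < 0 means its negation."""
    random.seed(seed)
    assign = [random.random() < 0.5 for _ in range(n_vars + 1)]  # 1-indexed

    def unsat(a):
        return sum(not any(a[v] if v > 0 else not a[-v] for v in c)
                   for c in clauses)

    e, t = unsat(assign), t0
    for _ in range(steps):
        v = random.randint(1, n_vars)
        assign[v] = not assign[v]                 # propose a single flip
        e_new = unsat(assign)
        if e_new <= e or random.random() < math.exp(-(e_new - e) / t):
            e = e_new                             # accept
        else:
            assign[v] = not assign[v]             # reject: undo the flip
        t *= cooling
        if e == 0:
            break                                 # all clauses satisfied
    return assign, e

n = 100
clauses = [[random.choice([-1, 1]) * v
            for v in random.sample(range(1, n + 1), 3)]
           for _ in range(425)]                   # the 100/425 instance size
assign, remaining = sa_sat(clauses, n)
```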
Genetic Algorithms for Digital Quantum Simulations
NASA Astrophysics Data System (ADS)
Las Heras, U.; Alvarez-Rodriguez, U.; Solano, E.; Sanz, M.
2016-06-01
We propose genetic algorithms, which are robust optimization techniques inspired by natural selection, to enhance the versatility of digital quantum simulations. In this sense, we show that genetic algorithms can be employed to increase the fidelity and optimize the resource requirements of digital quantum simulation protocols while adapting naturally to the experimental constraints. Furthermore, this method allows us to reduce not only digital errors but also experimental errors in quantum gates. Indeed, by adding ancillary qubits, we design a modular gate made out of imperfect gates, whose fidelity is larger than the fidelity of any of the constituent gates. Finally, we prove that the proposed modular gates are resilient against different gate errors.
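The paper's chromosomes encode quantum gate sequences, which is beyond a short example, but the GA machinery named in the abstract (selection, crossover, mutation) is generic. A minimal bitstring version, with a toy fitness function standing in for a gate-sequence infidelity:

```python
import random

def genetic_minimize(fitness, n_bits, pop=40, gens=100, p_mut=0.02, seed=7):
    """Minimal genetic algorithm: tournament selection, one-point
    crossover, bitwise mutation. A generic sketch, not the paper's
    gate-sequence encoding."""
    random.seed(seed)
    population = [[random.randint(0, 1) for _ in range(n_bits)]
                  for _ in range(pop)]
    for _ in range(gens):
        def tournament():
            a, b = random.sample(population, 2)
            return a if fitness(a) < fitness(b) else b
        children = []
        while len(children) < pop:
            p1, p2 = tournament(), tournament()
            cut = random.randrange(1, n_bits)       # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [b ^ (random.random() < p_mut)  # bitwise mutation
                     for b in child]
            children.append(child)
        population = children
    return min(population, key=fitness)

best = genetic_minimize(lambda bits: sum(bits), n_bits=32)  # toy: all-zeros
```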
Inversion of sonobuoy data from shallow-water sites with simulated annealing.
Lindwall, Dennis; Brozena, John
2005-02-01
An enhanced simulated annealing algorithm is used to invert sparsely sampled seismic data collected with sonobuoys to obtain seafloor geoacoustic properties at two littoral marine environments as well as for a synthetic data set. Inversion of field data from a 750-m water-depth site using a water-gun sound source found a good solution which included a pronounced subbottom reflector after 6483 iterations over seven variables. Field data from a 250-m water-depth site using an air-gun source required 35,421 iterations for a good inversion solution because 30 variables had to be solved for, including the shot-to-receiver offsets. The sonobuoy derived compressional wave velocity-depth (Vp-Z) models compare favorably with Vp-Z models derived from nearby, high-quality, multichannel seismic data. There are, however, substantial differences between seafloor reflection coefficients calculated from field models and seafloor reflection coefficients based on commonly used Vp regression curves (gradients). Reflection loss is higher at one field site and lower at the other than predicted from commonly used Vp gradients for terrigenous sediments. In addition, there are strong effects on reflection loss due to the subseafloor interfaces that are also not predicted by Vp gradients. PMID:15759701
Xiang, Hui; Ye, Yong; Ni, Linglin
2015-01-01
A stochastic multiproduct capacitated facility location problem involving a single supplier and multiple customers is investigated. Due to the stochastic demands, a reasonable amount of safety stock must be kept in the facilities to achieve suitable service levels, which results in increased inventory cost. Based on the assumption that all stochastic demands are normally distributed, a nonlinear mixed-integer programming model is proposed, whose objective is to minimize the total cost, including transportation cost, inventory cost, operation cost, and setup cost. A combined simulated annealing (CSA) algorithm is presented to solve the model, in which the outer-layer subalgorithm optimizes the facility location decision and the inner-layer subalgorithm optimizes the demand allocation based on the determined facility location decision. The results obtained with this approach show that the CSA is a robust and practical approach for solving a multiple-product problem, generating suboptimal facility location decisions and inventory policies. Meanwhile, we also found that transportation cost and demand deviation have the strongest influence on the optimal decision compared to the other factors. PMID:25834839
A hierarchical exact accelerated stochastic simulation algorithm
Orendorff, David; Mjolsness, Eric
2012-01-01
A new algorithm, “HiER-leap” (hierarchical exact reaction-leaping), is derived which improves on the computational properties of the ER-leap algorithm for exact accelerated simulation of stochastic chemical kinetics. Unlike ER-leap, HiER-leap utilizes a hierarchical or divide-and-conquer organization of reaction channels into tightly coupled “blocks” and is thereby able to speed up systems with many reaction channels. Like ER-leap, HiER-leap is based on the use of upper and lower bounds on the reaction propensities to define a rejection sampling algorithm with inexpensive early rejection and acceptance steps. But in HiER-leap, large portions of intra-block sampling may be done in parallel. An accept/reject step is used to synchronize across blocks. This method scales well when many reaction channels are present and has desirable asymptotic properties. The algorithm is exact, parallelizable and achieves a significant speedup over the stochastic simulation algorithm and ER-leap on certain problems. This algorithm offers a potentially important step towards efficient in silico modeling of entire organisms. PMID:23231214
Acoustic simulation in architecture with parallel algorithm
NASA Astrophysics Data System (ADS)
Li, Xiaohong; Zhang, Xinrong; Li, Dan
2004-03-01
To address the complexity of architectural environments and the real-time simulation of architectural acoustics, a parallel radiosity algorithm was developed. The distribution of sound energy in the scene is solved with this method. The impulse responses between sources and receivers in each frequency segment, calculated with multiple processes, are then combined into the whole frequency response. The numerical experiment shows that parallel computation can improve the acoustic simulation efficiency for complex scenes.
Stochastic annealing simulation of copper under neutron irradiation
Heinisch, H.L.; Singh, B.N.
1998-03-01
This report is a summary of a presentation made at ICFRM-8 on computer simulations of defect accumulation during irradiation of copper to low doses at room temperature. The simulation results are in good agreement with experimental data on defect cluster densities in copper irradiated in RTNS-II.
The systems biology simulation core algorithm
2013-01-01
Background With the increasing availability of high dimensional time course data for metabolites, genes, and fluxes, the mathematical description of dynamical systems has become an essential aspect of research in systems biology. Models are often encoded in formats such as SBML, whose structure is very complex and difficult to evaluate due to many special cases. Results This article describes an efficient algorithm to solve SBML models that are interpreted in terms of ordinary differential equations. We begin our consideration with a formal representation of the mathematical form of the models and explain all parts of the algorithm in detail, including several preprocessing steps. We provide a flexible reference implementation as part of the Systems Biology Simulation Core Library, a community-driven project providing a large collection of numerical solvers and a sophisticated interface hierarchy for the definition of custom differential equation systems. To demonstrate the capabilities of the new algorithm, it has been tested with the entire SBML Test Suite and all models of BioModels Database. Conclusions The formal description of the mathematics behind the SBML format facilitates the implementation of the algorithm within specifically tailored programs. The reference implementation can be used as a simulation backend for Java™-based programs. Source code, binaries, and documentation can be freely obtained under the terms of the LGPL version 3 from http://simulation-core.sourceforge.net. Feature requests, bug reports, contributions, or any further discussion can be directed to the mailing list simulation-core-development@lists.sourceforge.net. PMID:23826941
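The reference implementation is Java, but the central idea, interpreting the model as a right-hand-side function and handing it to a numerical integrator, can be shown in a few lines of Python/SciPy with a hand-written stand-in for an SBML-derived rate function:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hand-written stand-in for a generated SBML right-hand side:
# a two-species cascade S1 -> S2 -> (degraded), with assumed rate constants.
def rhs(t, x, k1=0.3, k2=0.1):
    s1, s2 = x
    return [-k1 * s1,            # S1 decays
            k1 * s1 - k2 * s2]   # S2 produced from S1, then degraded

sol = solve_ivp(rhs, t_span=(0.0, 50.0), y0=[10.0, 0.0],
                t_eval=np.linspace(0.0, 50.0, 101), method="LSODA")
```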
Obstacle Bypassing in Optimal Ship Routing Using Simulated Annealing
NASA Astrophysics Data System (ADS)
Kosmas, O. T.; Vlachos, D. S.; Simos, T. E.
2008-11-01
In this paper we discuss a variation on the problem of finding the shortest path between two points in optimal ship routing, where the path is not allowed to cross a set of obstacles. Our main goal is the construction of an appropriate algorithm, based on earlier work [16], for computing the shortest path between two points in the plane that avoids a set of polygonal obstacles.
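A standard way to realize such an obstacle-avoiding shortest path is to search a visibility graph whose nodes are the two endpoints plus the obstacle polygon vertices and whose edges are the segments that cross no obstacle. The sketch below hard-codes a tiny hypothetical visibility graph and runs plain Dijkstra on it; building the graph from real polygons (and the paper's annealing refinement) is omitted:

```python
import heapq

def dijkstra(graph, src, dst):
    """Plain Dijkstra over an explicit adjacency list {node: [(nbr, w)]}."""
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue                              # stale heap entry
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1], dist[dst]

# Hypothetical visibility graph: A and B are obstacle corner vertices.
visibility = {"start": [("A", 2.0), ("B", 3.5)],
              "A": [("goal", 2.5)],
              "B": [("goal", 0.5)],
              "goal": []}
route, length = dijkstra(visibility, "start", "goal")  # start-B-goal, 4.0
```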
Fast and robust microseismic event detection using very fast simulated annealing
NASA Astrophysics Data System (ADS)
Velis, Danilo R.; Sabbione, Juan I.; Sacchi, Mauricio D.
2013-04-01
The study of microseismic data has become an essential tool in many geoscience fields, including oil reservoir geophysics, mining, and CO2 sequestration. In hydraulic fracturing, microseismicity studies permit the characterization and monitoring of reservoir dynamics in order to optimize production and the fluid injection process itself. As the number of events is usually large and the signal-to-noise ratio is in general very low, fast, automated, and robust detection algorithms are required for most applications. Real-time functionality is also commonly needed to control the fluid injection in the field. Generally, events are located by means of grid search algorithms that rely on some approximate velocity model. These techniques are very effective and accurate, but computationally intensive when dealing with large three- or four-dimensional grids. Here, we present a fast and robust method that automatically detects and picks an event in 3C microseismic data without any input information about the velocity model. The detection is carried out by means of a very fast simulated annealing (VFSA) algorithm. To this end, we define an objective function that measures the energy of a potential microseismic event along the multichannel signal. This objective function is based on the stacked energy of the envelope of the signals, calculated within a predefined narrow time window that depends on the source position, receiver geometry, and velocity. Once an event has been detected, the source location can be estimated, in a second stage, by inverting the corresponding traveltimes using a standard technique, which would naturally require some knowledge of the velocity model. Since the proposed technique focuses on the detection of the microseismic events only, the velocity model is not required, leading to a fast algorithm that carries out the detection in real time. Besides, the strategy is applicable to data with very low signal-to-noise ratios, for it relies
NASA Astrophysics Data System (ADS)
Mori, Takaharu; Okamoto, Yuko
2009-10-01
Gramicidin A is a linear hydrophobic 15-residue peptide which consists of alternating D- and L-amino acids and forms a unique tertiary structure, called the β6.3-helix, to act as a cation-selective ion channel under natural conditions. In order to investigate the intrinsic ability of the gramicidin A monomer to form secondary structures, we performed folding simulations of gramicidin A using a simulated annealing molecular dynamics (MD) method in vacuum, mimicking the low-dielectric, homogeneous membrane environment. The initial conformation was a fully extended one. From the 200 different MD runs, we obtained a right-handed β4.4-helix as the lowest-potential-energy structure, and a left-handed β4.4-helix and right-handed and left-handed β6.3-helices as local-minimum energy states. These results are in accord with those of experiments on gramicidin A in homogeneous organic solvents. Our simulations showed a slight right-hand sense in the lower-energy conformations and a pronounced β-sheet-forming tendency throughout almost the entire sequence. In order to examine the stability of the obtained right-handed β6.3-helix and β4.4-helix structures in a more realistic membrane environment, we also performed all-atom MD simulations in explicit water, ion, and lipid molecules, starting from these β-helix structures. The results suggest that the β6.3-helix is more stable than the β4.4-helix in the inhomogeneous, explicit membrane environment, where the pore water and the hydrogen bonds between Trp side chains and lipid head groups play a role in further stabilizing the β6.3-helix conformation.
Machine Protection System algorithm compiler and simulator
White, G.R.; Sherwin, G.
1993-04-01
The Machine Protection System (MPS) component of the SLC's beam selection system, in which integrated current is continuously monitored and limited to safe levels through careful selection and feedback of the beam repetition rate, is described elsewhere in these proceedings. The novel decision-making mechanism by which that system can evaluate "safe levels" and choose an appropriate repetition rate in real time is described here. The algorithm that this mechanism uses to make its decision is written in text files and expressed in states of the accelerator and its devices, one file per accelerator region. Before being used, a file is "compiled" to a binary format which can be easily processed as a forward-chaining decision tree. It is processed by distributed microcomputers local to the accelerator regions. A parent algorithm evaluates all results and reports directly to the beam control microprocessor. Operators can test new algorithms, or changes they make to them, with an online graphical MPS simulator.
NASA Astrophysics Data System (ADS)
Louie, J. N.; Basler-Reeder, K.; Kent, G. M.; Pullammanappallil, S. K.
2015-12-01
Simultaneous joint seismic-gravity optimization improves P-wave velocity models in areas with sharp lateral velocity contrasts. Optimization is achieved using simulated annealing, a metaheuristic global optimization algorithm that does not require an accurate initial model. Balancing the seismic-gravity objective function is accomplished by a novel approach based on analysis of Pareto charts. Gravity modeling uses a newly developed convolution algorithm, while seismic modeling utilizes the highly efficient Vidale eikonal equation traveltime generation technique. Synthetic tests show that joint optimization improves velocity model accuracy and provides velocity control below the deepest headwave raypath. Detailed first arrival picking followed by trial velocity modeling remediates inconsistent data. We use a set of highly refined first arrival picks to compare results of a convergent joint seismic-gravity optimization to the Plotrefa™ and SeisOpt® Pro™ velocity modeling packages. Plotrefa™ uses a nonlinear least squares approach that is initial model dependent and produces shallow velocity artifacts. SeisOpt® Pro™ utilizes the simulated annealing algorithm and is limited to depths above the deepest raypath. Joint optimization increases the depth of constrained velocities, improving reflector coherency at depth. Kirchhoff prestack depth migrations reveal that joint optimization ameliorates shallow velocity artifacts caused by limitations in refraction ray coverage. Seismic and gravity data from the San Emidio Geothermal field of the northwest Basin and Range province demonstrate that joint optimization changes interpretation outcomes. The prior shallow-valley interpretation gives way to a deep valley model, while shallow antiformal reflectors that could have been interpreted as antiformal folds are flattened. Furthermore, joint optimization provides a clearer image of the rangefront fault. This technique can readily be applied to existing datasets and could
Computational algorithms for simulations in atmospheric optics.
Konyaev, P A; Lukin, V P
2016-04-20
A computer simulation technique for atmospheric and adaptive optics based on parallel programming is discussed. A parallel propagation algorithm is designed and a modified spectral-phase method for computer generation of 2D time-variant random fields is developed. Temporal power spectra of Laguerre-Gaussian beam fluctuations are considered as an example to illustrate the applications discussed. Implementation of the proposed algorithms using Intel MKL and IPP libraries and NVIDIA CUDA technology is shown to be very fast and accurate. The hardware system for the computer simulation is an off-the-shelf desktop with an Intel Core i7-4790K CPU operating at a turbo-speed frequency up to 5 GHz and an NVIDIA GeForce GTX-960 graphics accelerator with 1024 1.5 GHz processors. PMID:27140113
Hybrid General Pattern Search and Simulated Annealing for Industrial Production Planning Problems
NASA Astrophysics Data System (ADS)
Vasant, P.; Barsoum, N.
2010-06-01
In this paper, the hybridization of the GPS (General Pattern Search) method and SA (Simulated Annealing) is incorporated in the optimization process in order to search for the globally optimal solution for the fitness function and decision variables, as well as minimal computational CPU time. The real strength of the SA approach is tested on this case-study problem of industrial production planning, owing to SA's great advantage of easily escaping from local minima by accepting uphill moves through a probabilistic procedure in the final stages of the optimization process. Vasant [1], in his Ph.D. thesis, provided 16 different heuristic and meta-heuristic techniques for solving industrial production problems with nonlinear cubic objective functions, eight decision variables, and 29 constraints. In this paper, fuzzy technological problems are solved using hybrid techniques of general pattern search and simulated annealing. The simulation and computational results are compared with various other evolutionary techniques.
Delay Times From Clustered Multi-Channel Cross Correlation and Simulated Annealing
NASA Astrophysics Data System (ADS)
Creager, K. C.; Sambridge, M. S.
2004-12-01
Several techniques exist to estimate relative delay times of seismic phases based on the assumption that the waveforms observed at several stations can be expressed as a common waveform that has been time shifted and distorted by random uncorrelated noise. We explore the more general problem of estimating the relative delay times for regional or even global distributions of seismometers in cases where waveforms vary systematically across the array. The estimation of relative delay times is formulated as a global optimization of the weighted sum of squares of cross correlations of each seismogram pair evaluated at the corresponding difference in their relative delay times. As there are many local minima in this penalty function, a simulated annealing algorithm is used to obtain a solution. The weights depend strongly on the separation distance among seismogram pairs as well as a measure of the similarity of waveforms. Thus, seismograph pairs that are physically close to each other and have similar waveforms are expected to be well aligned, while those with dissimilar waveforms or large separation distances are severely down-weighted and thus need not be well aligned. As a result, noisy seismograms, which are not similar to other seismograms, are down-weighted so they do not adversely affect the relative delay times of other seismograms. Finally, natural clusters of seismograms are determined from the weight matrix. Examples of aligning a few hundred P and PKP waveforms from a broadband global array and from a mixed broadband and short-period continental-scale array will be shown. While this method has applications in many situations, it may be especially useful for arrays such as the EarthScope Bigfoot Array.
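The pairwise building block of that penalty function, a cross correlation evaluated at the candidate difference of relative delays, reduces in the noise-free two-trace case to picking the lag that maximizes the correlation. A numpy sketch of that reduction (the full method optimizes over all weighted pairs simultaneously, which is not reproduced here):

```python
import numpy as np

def relative_delay(x, y, dt=1.0):
    """Estimate the delay of y relative to x as the lag maximizing their
    cross correlation; the pairwise building block of the penalty
    function the annealing search optimizes over all station pairs."""
    cc = np.correlate(y, x, mode="full")
    lag = np.argmax(cc) - (len(x) - 1)     # samples by which y trails x
    return lag * dt

rng = np.random.default_rng(0)
w = rng.standard_normal(200)
x = np.concatenate([w, np.zeros(20)])
y = np.concatenate([np.zeros(20), w])      # same waveform, 20 samples later
assert relative_delay(x, y) == 20.0
```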
NASA Astrophysics Data System (ADS)
Wrożyna, Andrzej; Pernach, Monika; Kuziak, Roman; Pietrzyk, Maciej
2016-04-01
Due to their exceptional strength properties combined with good workability, Advanced High-Strength Steels (AHSS) are commonly used in the automotive industry. Manufacturing of these steels is a complex process which requires precise control of technological parameters during thermo-mechanical treatment. Design of these processes can be significantly improved by numerical models of phase transformations. Evaluating the predictive capabilities of such models, as far as their applicability to the simulation of thermal cycles for AHSS is concerned, was the objective of the paper. Two models were considered: the former was an upgrade of the JMAK equation, while the latter was an upgrade of the Leblond model. The models can be applied to any AHSS, though the examples quoted in the paper refer to Dual Phase (DP) steel. Three series of experimental simulations were performed. The first included various thermal cycles going beyond the limitations of continuous annealing lines; the objective was to validate the models' behavior under more complex cooling conditions. The second set of tests included experimental simulations of the thermal cycle characteristic of continuous annealing lines, to evaluate the capability of the models to properly describe phase transformations in this process. The third set included data from an industrial continuous annealing line. Validation and verification confirmed the good predictive capabilities of both models. Since it does not require application of the additivity rule, the upgrade of the Leblond model was selected as the better choice for simulation of industrial processes in AHSS production.
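For reference, the isothermal JMAK (Johnson-Mehl-Avrami-Kolmogorov) kernel that the first model upgrades gives the transformed phase fraction as X(t) = 1 - exp(-k t^n). A one-line implementation with illustrative (not fitted) coefficients:

```python
import numpy as np

def jmak_fraction(t, k, n):
    """Isothermal JMAK transformed fraction X(t) = 1 - exp(-k * t**n);
    the non-isothermal cycles in the paper require an upgraded form."""
    return 1.0 - np.exp(-k * np.power(t, n))

t = np.linspace(0.0, 300.0, 7)          # seconds
X = jmak_fraction(t, k=1e-5, n=2.0)     # illustrative k and n, not fitted
```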
NASA Astrophysics Data System (ADS)
Afanasiev, M.; Pratt, R. G.; Kamei, R.; McDowell, G.
2012-12-01
Crosshole seismic tomography has been used by Vale to provide geophysical images of mineralized massive sulfides in the Eastern Deeps deposit at Voisey's Bay, Labrador, Canada. To date, these data have been processed using traveltime tomography, and we seek to improve the resolution of these images by applying acoustic Waveform Tomography. Due to the computational cost of acoustic waveform modelling, local descent algorithms are employed in Waveform Tomography; due to non-linearity, an initial model is required which predicts first-arrival traveltimes to within a half-cycle of the lowest frequency used. Because seismic velocity anisotropy can be significant in hardrock settings, the initial model must quantify the anisotropy in order to meet the half-cycle criterion. In our case study, significant velocity contrasts between the target massive sulfides and the surrounding country rock led to difficulties in generating an accurate anisotropy model through traveltime tomography, and our starting model for Waveform Tomography failed the half-cycle criterion at large offsets. We formulate a new, semi-global approach for finding the best-fit 1-D elliptical anisotropy model using simulated annealing. Through random perturbations to Thomsen's ε parameter, we explore the L2 norm of the frequency-domain phase residuals in the space of potential anisotropy models: if a perturbation decreases the residuals, it is always accepted, but if a perturbation increases the residuals, it is accepted with probability P = exp(-(Ei-E)/T). This is the Metropolis criterion, where Ei is the value of the residuals at the current iteration, E is the value of the residuals for the previously accepted model, and T is a probability control parameter, which is decreased over the course of the simulation via a preselected cooling schedule. Convergence to the global minimum of the residuals is guaranteed only for infinitely slow cooling, but in practice good results are obtained from a variety
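The acceptance rule quoted above is the standard Metropolis criterion, and the 1-D search over the anisotropy parameter is straightforward to sketch. The residual function below is a hypothetical stand-in for the L2 norm of frequency-domain phase residuals, and the geometric cooling rate is illustrative:

```python
import math
import random

def metropolis_accept(e_new, e_old, t):
    """Metropolis rule: downhill moves are always taken, uphill moves
    with probability exp(-(E_i - E) / T)."""
    return e_new <= e_old or random.random() < math.exp(-(e_new - e_old) / t)

def residual(eps):
    # Hypothetical stand-in for the phase-residual norm; pretend the
    # best-fit epsilon is 0.12.
    return (eps - 0.12) ** 2

eps, e, t = 0.0, residual(0.0), 1.0
for step in range(2000):
    cand = eps + random.gauss(0.0, 0.05)   # random perturbation to epsilon
    if metropolis_accept(residual(cand), e, t):
        eps, e = cand, residual(cand)
    t *= 0.995                             # preselected cooling schedule
```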
Fast computation algorithms for speckle pattern simulation
Nascov, Victor; Samoilă, Cornel; Ursuţiu, Doru
2013-11-13
We present a series of efficient computation algorithms, usable for calculating light diffraction generally and for speckle pattern simulation in particular. We use mainly scalar diffraction theory in the form of the Rayleigh-Sommerfeld diffraction formula and its Fresnel approximation. Our algorithms are based on a special form of the convolution theorem and the Fast Fourier Transform. They evaluate the diffraction formula much faster than direct computation, and we have circumvented the restrictions on the relative sizes of the input and output domains encountered in commonly used procedures. Moreover, the input and output planes can be tilted relative to each other, and the output domain can be shifted off-axis.
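The core trick, evaluating the discretized diffraction integral as a convolution via the FFT, can be verified in a few lines. This sketch shows only plain linear 2-D convolution by the convolution theorem, not the authors' tilted-plane or shifted-output extensions:

```python
import numpy as np
from scipy.signal import convolve2d

def fft_convolve2d(a, b):
    """Linear 2-D convolution via the convolution theorem: zero-pad to
    the full output size, multiply the FFTs, invert."""
    s = (a.shape[0] + b.shape[0] - 1, a.shape[1] + b.shape[1] - 1)
    return np.fft.ifft2(np.fft.fft2(a, s) * np.fft.fft2(b, s))

# Cross-check against direct convolution on small random fields.
rng = np.random.default_rng(0)
a, b = rng.standard_normal((8, 8)), rng.standard_normal((16, 16))
assert np.allclose(fft_convolve2d(a, b).real, convolve2d(a, b))
```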
Optimizing the natural connectivity of scale-free networks using simulated annealing
NASA Astrophysics Data System (ADS)
Duan, Boping; Liu, Jing; Tang, Xianglong
2016-09-01
In real-world networks, the path between two nodes always plays a significant role in communication and transportation. In some cases, when one path fails, the two nodes can no longer communicate, so it is necessary to increase the number of alternative paths between nodes. In recent work, Wu et al. (2011) proposed the natural connectivity as a novel robustness measure of complex networks. The natural connectivity considers the redundancy of alternative paths in a network by computing the number of closed paths of all lengths. To enhance the robustness of networks in terms of natural connectivity, in this paper we propose a simulated annealing method to optimize the natural connectivity of scale-free networks without changing the degree distribution. The experimental results show that the simulated annealing method clearly outperforms other local search methods.
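The natural connectivity itself is cheap to evaluate from the adjacency spectrum, which is what makes it usable inside an annealing loop. A numpy sketch of the measure alone (the degree-preserving rewiring moves of the optimization are omitted):

```python
import numpy as np

def natural_connectivity(adj):
    """Natural connectivity of an undirected graph from its adjacency
    matrix: ln( (1/N) * sum_i exp(lambda_i) ), with lambda_i the
    adjacency eigenvalues; it grows with closed-path redundancy."""
    lam = np.linalg.eigvalsh(adj)            # eigenvalues (symmetric matrix)
    return np.log(np.mean(np.exp(lam)))

# 6-node ring as a toy example.
ring = np.roll(np.eye(6), 1, axis=1) + np.roll(np.eye(6), -1, axis=1)
print(natural_connectivity(ring))
```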
Proposal of a checking parameter in the simulated annealing method applied to the spin glass model
NASA Astrophysics Data System (ADS)
Yamaguchi, Chiaki
2016-02-01
We propose a checking parameter utilizing the breaking of the Jarzynski equality in the simulated annealing method using the Monte Carlo method. The parameter is based on the Jarzynski equality, and by using it one can detect whether the system has reached the global minimum of the free energy under gradual temperature reduction; one is thus able to investigate the efficiency of annealing schedules. We apply this parameter to the ±J Ising spin glass model; the application to the Gaussian Ising spin glass model is also mentioned. We argue that the breaking of the Jarzynski equality is induced by the system being trapped in local minima of the free energy. By performing Monte Carlo simulations of the ±J Ising spin glass model and of a glassy spin model proposed by Newman and Moore, we demonstrate the usefulness of this parameter.
Multiperiod cellular network design via price-influenced simulated annealing (PISA).
Menon, Syam; Amiri, Ali
2006-06-01
Cellular telecommunications systems tend to be more flexible than traditional ones. As a result, traditional approaches to telecommunications network design are often inappropriate for the design of cellular networks, and approaches that explicitly incorporate the increased flexibility into the design process need to be developed. This paper presents one such multiperiod cellular network design problem and solves it via a hybrid heuristic that incorporates ideas from linear programming (LP) and simulated annealing (SA). Extensive computational results comparing the performance of the heuristic with the lower bound obtained from the LP relaxation are presented. These results indicate that this price-influenced simulated annealing (PISA) procedure is extremely efficient, consistently providing solutions with average gaps of 0.30% or less in fewer than 30 s. PMID:16761813
Minimizing distortion and internal forces in truss structures by simulated annealing
NASA Technical Reports Server (NTRS)
Kincaid, Rex K.; Padula, Sharon L.
1990-01-01
Inaccuracies in the length of members and the diameters of joints of large space structures may produce unacceptable levels of surface distortion and internal forces. Here, two discrete optimization problems are formulated, one to minimize surface distortion (DSQRMS) and the other to minimize internal forces (FSQRMS). Both of these problems are based on the influence matrices generated by a small-deformation linear analysis. Good solutions are obtained for DSQRMS and FSQRMS through the use of a simulated annealing heuristic.
Application of fast simulated annealing to optimization of conformal radiation treatments.
Mageras, G S; Mohan, R
1993-01-01
Applications of simulated annealing to the optimization of radiation treatment plans, in which a set of beam weights is iteratively adjusted so as to minimize a cost function, have been motivated by its potential for finding the global or near-global minimum among multiple minima. However, the method has been found to be slow, requiring several tens of thousands of iterations to optimize 50 to 100 variables. A technique to improve the efficiency of finding a solution is reported, which is generally applicable to the optimization of continuous variables. In previous applications of simulated annealing to treatment-planning optimization, only one or two weights are varied each iteration. Our approach is to change all weights simultaneously, using random changes that are initially large, to coarsely sample the cost function, and are then reduced with iteration to probe finer structure. The performance of different methods is compared in optimizing a plan for treatment of the prostate, in which the search space consists of 54 noncoplanar beams and the cost function is based on tumor control and normal-tissue complication probabilities. The proposed method yields solutions with similar values of the cost function in only a fraction of the iterations required either by a fixed single-weight adjustment technique or by a method which combines the Nelder-Mead downhill simplex with simulated annealing. PMID:8350815
NASA Astrophysics Data System (ADS)
Ry, Rexha Verdhora; Nugraha, Andri Dian
2015-04-01
Observation of earthquakes is routinely used in tectonic activity monitoring, and also at local scales such as volcano-tectonic and geothermal activity monitoring. It is necessary to determine precise hypocenter locations; the process involves finding the hypocenter location that minimizes the error between the observed and the calculated travel times. When solving this nonlinear inverse problem, simulated annealing inversion can be applied as a global optimization method whose convergence is independent of the initial model. In this study, we developed our own program code applying adaptive simulated annealing inversion in the Matlab environment. We applied this method to determine earthquake hypocenters using several data cases from regional tectonic, volcano-tectonic, and geothermal settings. The travel times were calculated using the ray-tracing shooting method. We then compared the results with those of Geiger's method to analyze reliability. Our results show the hypocenter locations have smaller RMS errors compared to Geiger's results, which can be statistically associated with better solutions. The earthquake hypocenters also correlate well with the geological structure in the study area. We recommend using adaptive simulated annealing inversion to relocate hypocenters in order to obtain precise and accurate earthquake locations.
Validation of Sensor-Directed Spatial Simulated Annealing Soil Sampling Strategy.
Scudiero, Elia; Lesch, Scott M; Corwin, Dennis L
2016-07-01
Soil spatial variability has a profound influence on most agronomic and environmental processes at field and landscape scales, including site-specific management, vadose zone hydrology and transport, and soil quality. Mobile sensors are a practical means of mapping spatial variability because their measurements serve as a proxy for many soil properties, provided a sensor-soil calibration is conducted. A viable means of calibrating sensor measurements over soil properties is through linear regression modeling of sensor and target property data. In the present study, two sensor-directed, model-based, sampling scheme delineation methods were compared to validate recent applications of soil apparent electrical conductivity (EC)-directed spatial simulated annealing against the more established EC-directed response surface sampling design (RSSD) approach. A 6.8-ha study area near San Jacinto, CA, was surveyed for EC, and 30 soil sampling locations per sampling strategy were selected. Spatial simulated annealing and RSSD were compared for sensor calibration to a target soil property (i.e., salinity) and for evenness of spatial coverage of the study area, which is beneficial for mapping nontarget soil properties (i.e., those not correlated with EC). The results indicate that the linear modeling EC-salinity calibrations obtained from the two sampling schemes provided salinity maps characterized by similar errors. The maps of nontarget soil properties show similar errors across sampling strategies. The Spatial Simulated Annealing methodology is, therefore, validated, and its use in agronomic and environmental soil science applications is justified. PMID:27380070
Parallel algorithm strategies for circuit simulation.
Thornquist, Heidi K.; Schiek, Richard Louis; Keiter, Eric Richard
2010-01-01
Circuit simulation tools (e.g., SPICE) have become invaluable in the development and design of electronic circuits. However, they have been pushed to their performance limits in addressing circuit design challenges that come from the technology drivers of smaller feature scales and higher integration. Improving the performance of circuit simulation tools by exploiting new opportunities in widely available multi-processor architectures is a logical next step. Unfortunately, not all traditional simulation applications are inherently parallel, and quickly adapting mature application codes (even codes designed as parallel applications) to new parallel paradigms can be prohibitively difficult. In general, performance is influenced by many choices: hardware platform, runtime environment, languages and compilers used, algorithm choice and implementation, and more. In this complicated environment, the use of mini-applications (small self-contained proxies for real applications) is an excellent approach for rapidly exploring the parameter space of all these choices. In this report we present a multi-core performance study of Xyce, a transistor-level circuit simulation tool, and describe the future development of a mini-application for circuit simulation.
Assessment of a fuzzy based flood forecasting system optimized by simulated annealing
NASA Astrophysics Data System (ADS)
Reyhani Masouleh, Aida; Pakosch, Sabine; Disse, Markus
2010-05-01
Flood forecasting is an important tool for mitigating the harmful effects of floods. Among the many different approaches to forecasting, Fuzzy Logic (FL) is one that has been increasingly applied over the last decade. This method is principally based on the linguistic description of Rule Systems (RS); a RS is a specific combination of membership functions of input and output variables. Setting up the RS can be done either automatically or manually, and this choice can strongly influence the resulting rule systems. It is therefore the objective of this study to assess the influence that the parameters of an automated rule generation based on Simulated Annealing (SA) have on the resulting RS. The study area is the upper Main River area, located in the northern part of Bavaria, Germany. Data from the Mainleus gauge, with a catchment area of 1165 km2, were investigated for the whole period between 1984 and 2004. The highest observed discharge, 357 m3/s, was recorded in 1995. The input arguments of the FL model were daily precipitation, forecasted precipitation, antecedent precipitation index, temperature, and melting rate. The FL model of this study has one output variable, daily discharge, and was independently set up for three different forecast lead times, namely one, two, and three days ahead. In total, each RS comprised 55 rules, and all input and output variables were represented by five sets of trapezoidal and triangular fuzzy numbers. Simulated Annealing, a global optimization algorithm that converges toward an optimum solution, was applied to optimize the RSs in this study. In order to assess the influence of its parameters (number of iterations, temperature decrease rate, initial value for generating random numbers, initial temperature, and two others), they were individually varied while keeping the others fixed. With each of the resulting parameter sets, a fully automatic SA was applied to obtain optimized fuzzy rule systems for flood forecasting. Evaluation of the performance of the
Efficient algorithms for wildland fire simulation
NASA Astrophysics Data System (ADS)
Kondratenko, Volodymyr Y.
In this dissertation, we develop multiple-source shortest path algorithms and examine their importance in real-world problems such as wildfire modeling. The theoretical basis and its implementation in the Weather Research and Forecasting (WRF) model coupled with the fire spread code SFIRE (the WRF-SFIRE model) are described. We present a data assimilation method that gives the fire spread model the ability to start the fire simulation from an observed fire perimeter instead of an ignition point. While the model is running, the fire state in the model changes in accordance with newly arriving data by data assimilation. As the fire state changes, the atmospheric state (which is strongly affected by heat flux) does not stay consistent with the fire state. The main difficulty of this methodology occurs in coupled fire-atmosphere models, because once the fire state is modified to match a given starting perimeter, the atmospheric circulation is no longer in sync with it. One possible solution to this problem is the formation of an artificial ignition-time history from an earlier fire state, which is later used to replay the fire progression to the new perimeter with the proper heat fluxes fed into the atmosphere, so that the fire-induced circulation is established. In this work, we develop efficient algorithms that start from the fire arrival times given at a set of points (called a perimeter) and create the artificial fire ignition time and fire spread rate history. Different algorithms were developed to suit possible demands of the user, such as implementation in parallel programming, minimization of the required number of iterations and memory use, and use of the rate of spread as a time-dependent variable. For the algorithms that deal with a homogeneous rate of spread, it was proven that the fire arrival times they produce are optimal. It was also shown that starting from an arbitrary initial state the algorithms have
Guijarro, María; Pajares, Gonzalo; Herrera, P. Javier
2009-01-01
The increasing capability of high-resolution airborne imaging sensors, including those on board Unmanned Aerial Vehicles, demands automatic solutions for processing, either on-line or off-line, the huge amounts of image data sensed during flights. The classification of natural spectral signatures in images is one potential application. The current trend in classification is toward the combination of simple classifiers. In this paper we propose a combined strategy based on the Deterministic Simulated Annealing (DSA) framework. The simple classifiers used are the well-tested supervised parametric Bayesian estimator and Fuzzy Clustering. DSA is an optimization approach which minimizes an energy function; its main contribution is its ability to avoid local minima during the optimization process thanks to the annealing scheme. It outperforms the simple classifiers used in the combination and several combined strategies, including a scheme based on fuzzy cognitive maps and an optimization approach based on the Hopfield neural network paradigm. PMID:22399989
Computer simulations of randomly branching polymers: annealed versus quenched branching structures
NASA Astrophysics Data System (ADS)
Rosa, Angelo; Everaers, Ralf
2016-08-01
We present computer simulations of three systems of randomly branching polymers in d = 3 dimensions: ideal trees and self-avoiding trees with annealed and quenched connectivities. In all cases, we performed a detailed analysis of tree connectivities, spatial conformations, and statistical properties of linear paths on trees, and compared the results to the corresponding predictions of Flory theory. We confirm that, overall, the theory correctly predicts that trees with quenched ideal connectivity exhibit less overall swelling in good solvent than corresponding trees with annealed connectivity, even though they are more strongly stretched at the path level. At the same time, we emphasize the inadequacy of Flory theory in predicting the behaviour of other, equally relevant, observables such as contact probabilities between tree nodes. We show, then, that contact probabilities can be aptly characterized by introducing a novel critical exponent, θ_path, which accounts for how they decay as a function of the node-to-node path distance on the tree.
Clutter discrimination algorithm simulation in pulse laser radar imaging
NASA Astrophysics Data System (ADS)
Zhang, Yan-mei; Li, Huan; Guo, Hai-chao; Su, Xuan; Zhu, Fule
2015-10-01
Pulse laser radar imaging performance is greatly influenced by different kinds of clutter, and various algorithms have been developed to mitigate it. However, estimating the performance of a new algorithm is difficult. Here, a simulation model for evaluating clutter discrimination algorithms is presented. This model consists of laser pulse emission, clutter jamming, laser pulse reception, and target image production. Additionally, a hardware platform was set up to gather clutter data reflected by ground and trees; the logged data serve as the clutter-jamming input in the simulation model. The hardware platform includes a laser diode, a laser detector, and a high-sample-rate data logging circuit. The laser diode transmits short laser pulses (40 ns FWHM) at a 12.5 kHz pulse rate and a 905 nm wavelength. An analog-to-digital converter chip integrated in the sampling circuit works at 250 megasamples per second. The simulation model and the hardware platform together constitute a clutter discrimination algorithm simulation system. Using this system, after analyzing the logged clutter data, a new compound pulse detection algorithm was developed. This new algorithm combines a matched filter with constant fraction discrimination (CFD): the laser echo pulse signal is first processed by the matched filter, then by CFD; finally, clutter jamming from ground and trees is discriminated and the target image is produced. Laser radar images are simulated using the CFD algorithm, the matched filter algorithm, and the new algorithm respectively. Simulation results demonstrate that the new algorithm achieves the best target imaging performance in mitigating clutter reflected by ground and trees.
An advanced dispatch simulator with advanced dispatch algorithm
Kafka, R.J.; Fink, L.H.; Balu, N.J.; Crim, H.G.
1989-01-01
This paper reports on an interactive automatic generation control (AGC) simulator. Improved and timely information regarding fossil fired plant performance is potentially useful in the economic dispatch of system generating units. Commonly used economic dispatch algorithms are not able to take full advantage of this information. The dispatch simulator was developed to test and compare economic dispatch algorithms which might be able to show improvement over standard economic dispatch algorithms if accurate unit information were available. This dispatch simulator offers substantial improvements over previously available simulators. In addition, it contains an advanced dispatch algorithm which shows control and performance advantages over traditional dispatch algorithms for both plants and electric systems.
Feedback algorithm for simulation of multi-segmented cracks
Chady, T.; Napierala, L.
2011-06-23
In this paper, a method for obtaining a three-dimensional crack model from a radiographic image is discussed. A genetic algorithm aiming at a close simulation of the crack's shape is presented. Results obtained with the genetic algorithm are compared to those achieved in the authors' previous work. The described algorithm has been tested on both simulated and real-life cracks.
Efficient algorithms for distributed simulation and related problems
Kumar, D.
1987-01-01
This thesis presents efficient algorithms for distributed simulation and for the related problems of termination detection and sequential simulation. Distributed simulation algorithms are presented that apply to special classes of systems and require almost no overhead messages. By contrast, previous distributed simulation algorithms, although applicable to the general class of any discrete-event system, usually require too many overhead messages. First, a simple distributed simulation algorithm with nearly zero overhead messages is defined for simulating feedforward systems. An approximate method is developed to predict its performance in simulating a class of feedforward queuing networks, and the performance of the scheme is evaluated on specific subclasses of these networks. It is shown that the scheme offers high performance for serial-parallel networks. Next, another distributed simulation scheme is defined for a class of distributed systems whose topologies may have cycles. One important problem in devising distributed simulation algorithms is the efficient detection of termination; with this in mind, a class of termination-detection algorithms using markers is devised. Finally, a new sequential simulation algorithm is developed, based on a distributed one. This algorithm often reduces the event-list manipulations of traditional event-list-driven simulation.
Simulation results for the Viterbi decoding algorithm
NASA Technical Reports Server (NTRS)
Batson, B. H.; Moorehead, R. W.; Taqvi, S. Z. H.
1972-01-01
Concepts involved in determining the performance of coded digital communications systems are introduced. The basic concepts of convolutional encoding and decoding are summarized, and hardware implementations of sequential and maximum likelihood decoders are described briefly. Results of parametric studies of the Viterbi decoding algorithm are summarized. Bit error probability is chosen as the measure of performance and is calculated, by using digital computer simulations, for various encoder and decoder parameters. Results are presented for code rates of one-half and one-third, for constraint lengths of 4 to 8, for both hard-decision and soft-decision bit detectors, and for several important systematic and nonsystematic codes. The effect of decoder block length on bit error rate also is considered, so that a more complete estimate of the relationship between performance and decoder complexity can be made.
NASA Astrophysics Data System (ADS)
Nandipati, Giridhar; Setyawan, Wahyu; Heinisch, Howard L.; Roche, Kenneth J.; Kurtz, Richard J.; Wirth, Brian D.
2015-07-01
The results of object kinetic Monte Carlo (OKMC) simulations of the annealing of primary cascade damage in bulk tungsten using a comprehensive database of cascades obtained from molecular dynamics (Setyawan et al.) are described as a function of primary knock-on atom (PKA) energy at temperatures of 300, 1025 and 2050 K. An increase in SIA clustering coupled with a decrease in vacancy clustering with increasing temperature, in addition to the disparate mobilities of SIAs versus vacancies, causes an interesting effect of temperature on cascade annealing. The annealing efficiency (the ratio of the number of defects after and before annealing) exhibits an inverse U-shape curve as a function of temperature. The capabilities of the newly developed OKMC code KSOME (kinetic simulations of microstructure evolution) used to carry out these simulations are described.
NASA Astrophysics Data System (ADS)
Kumar, Pushpendra; Huber, Patrick
2016-04-01
The discovery of porous silicon formation in silicon substrates in 1956, during the electro-polishing of crystalline Si in hydrofluoric acid (HF), triggered large-scale investigation of porous silicon formation and of the changes in its physical and chemical properties under thermal and chemical treatment. A nitrogen sorption study is used to investigate the effect of thermal annealing on electrochemically etched mesoporous silicon (PS). The PS was thermally annealed from 200°C to 800°C for 1 hr in the presence of air. It was shown that the pore diameter and porosity of PS vary with annealing temperature. The experimentally obtained adsorption/desorption isotherms show hysteresis typical of capillary condensation in porous materials. A simulation study based on the Saam and Cole model was performed and compared with the experimentally observed sorption isotherms to study the physics behind the formation of hysteresis. We discuss the shape of the hysteresis loops in the framework of the morphology of the layers. The different adsorption and desorption behavior of nitrogen in PS as a function of pore diameter is discussed in terms of the formation of concave menisci inside the pore space, which was shown to be related to the induced pressure as the pore diameter is reduced from 7.2 nm to 3.4 nm.
Application of genetic algorithms to autopiloting in aerial combat simulation
NASA Astrophysics Data System (ADS)
Kim, Dai Hyun; Erwin, Daniel A.; Kostrzewski, Andrew A.; Kim, Jeongdal; Savant, Gajendra D.
1998-10-01
An autopilot algorithm that controls a fighter aircraft in simulated aerial combat is presented. A fitness function, whose arguments are the control settings of the simulated fighter, is continuously maximized by a fuzzified genetic algorithm. Results are presented for one-to-one combat simulated on a personal computer. Generalization to many-to-many combat is discussed.
A parallel algorithm for implicit depletant simulations
NASA Astrophysics Data System (ADS)
Glaser, Jens; Karas, Andrew S.; Glotzer, Sharon C.
2015-11-01
We present an algorithm to simulate the many-body depletion interaction between anisotropic colloids in an implicit way, integrating out the degrees of freedom of the depletants, which we treat as an ideal gas. Because the depletant particles are statistically independent and the depletion interaction is short-ranged, depletants are randomly inserted in parallel into the excluded volume surrounding a single translated and/or rotated colloid. A configurational bias scheme is used to enhance the acceptance rate. The method is validated and benchmarked both on multi-core processors and on graphics processing units for the case of hard spheres, hemispheres, and discoids. With depletants, we report novel cluster phases in which hemispheres first assemble into spheres, which then form ordered hcp/fcc lattices. For systems of colloid packing fraction ϕc < 0.50, the method is significantly faster than any method that tracks depletants explicitly and lacks cluster moves, and it additionally enables simulation of the fluid-solid transition.
Barakat, M T; Dean, P M
1990-09-01
This paper considers some of the landscape problems encountered in matching molecules by simulated annealing. Although the method is in theory ergodic, the global minimum of the objective function is not always found. Factors inherent in the molecular data that lead the trajectory of the minimization away from its optimal route are analysed. Segments composed of the Cα atoms of dihydrofolate reductase are used as test data. The evolution of a reverse-ordering landscape problem is examined in detail. Where such patterns in the data could lead to incorrect matches, the problem can in part be circumvented by assigning an initial random ordering to the molecules. PMID:2280267
Neighbourhood generation mechanism applied in simulated annealing to job shop scheduling problems
NASA Astrophysics Data System (ADS)
Cruz-Chávez, Marco Antonio
2015-11-01
This paper presents a neighbourhood generation mechanism for job shop scheduling problems (JSSPs). To obtain a feasible neighbour with the generation mechanism, it is only necessary to permute an adjacent pair of operations in a schedule of the JSSP. If there is no slack time between the adjacent pair of operations that is permuted, then it is proven, through theory and experimentation, that the new neighbour (schedule) generated is feasible. It is demonstrated that the neighbourhood generation mechanism is very efficient and effective in simulated annealing.
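A minimal sketch of the adjacent-pair move, assuming a schedule represented as per-machine operation lists (this representation is our own choice for illustration, not necessarily the paper's). Per the abstract, the move provably yields a feasible schedule when there is no slack time between the permuted pair; a real implementation would test that condition before accepting the neighbour.

```python
import random

def adjacent_swap_neighbour(schedule):
    """Generate a JSSP neighbour by permuting one adjacent pair of
    operations on a randomly chosen machine.  `schedule` maps
    machine -> list of (job, op) in processing order."""
    neighbour = {m: ops[:] for m, ops in schedule.items()}
    machine = random.choice(
        [m for m, ops in neighbour.items() if len(ops) > 1])
    ops = neighbour[machine]
    i = random.randrange(len(ops) - 1)
    ops[i], ops[i + 1] = ops[i + 1], ops[i]   # permute the adjacent pair
    return neighbour
```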
NASA Astrophysics Data System (ADS)
Piazolo, Sandra; Montagnat, Maurine; Prakash, Abhishek; Borthwick, Verity; Evans, Lynn; Griera, Albert; Bons, Paul D.; Svahnberg, Henrik; Prior, David J.
2015-04-01
Understanding the influence of the pre-existing microstructure on subsequent microstructural development is pivotal for the correct interpretation of rocks and ice that stayed at high homologous temperatures over a significant period of time. The microstructural behaviour of these materials through time has an important bearing on the interpretation of characteristics such as grain size, for example when using grain size statistics to detect former high-strain zones that remain at high temperature but low stress. We present a coupled experimental and modelling approach to better understand the evolution of recrystallization characteristics as a function of deformation-annealing time paths in materials with high viscoplastic anisotropy, e.g. polycrystalline ice and magnesium alloys. Deformation microstructures such as crystal bending, subgrain boundaries, and grain size variation significantly influence the deformation and annealing behaviour of crystalline materials. For numerical simulations we utilize the microdynamic modelling platform Elle (www.elle.ws), taking local microstructural evolution into account to simulate the following processes: recovery within grains, rotational recrystallization, grain boundary migration, and nucleation. We first test the validity of the numerical simulations against experiments, and then use the model to interpret microstructural features in natural examples. In-situ experiments are performed on laboratory-grown and deformed ice and magnesium alloy. Our natural example is a deformed, then recrystallized, anorthosite from SW Greenland. The presented approach can be applied to many other minerals and crystalline materials.
NASA Astrophysics Data System (ADS)
Mori, Takaharu; Okamoto, Yuko
2008-03-01
Gramicidin A is a hydrophobic 15-residue peptide with alternating D- and L-amino acids, and it forms various conformations depending on its environment. For example, gramicidin A adopts a random coil or helical conformations, such as the 4.4-helix, the 6.3-helix, and the double-stranded helix, in organic solvents. To investigate the structural and dynamical properties of gramicidin A in water and in a hydrophobic environment, we performed molecular dynamics simulated annealing simulations with implicit solvent based on a generalized Born model. From the simulations, it was found that gramicidin A has a strong tendency to form a random-coil structure in water, while in the hydrophobic environment it becomes compact and can fold into right- and left-handed conformations of β-helix structures. We discuss the folding mechanism of the β-helix conformation of gramicidin A.
NASA Astrophysics Data System (ADS)
Afanasiev, Michael V.; Pratt, R. Gerhard; Kamei, Rie; McDowell, Glenn
2014-12-01
We successfully apply the semi-global inverse method of simulated annealing to determine the best-fitting 1-D anisotropy model for use in acoustic frequency domain waveform tomography. Our forward problem is based on a numerical solution of the frequency domain acoustic wave equation, and we minimize wavefield phase residuals through random perturbations to a 1-D vertically varying anisotropy profile. Both real and synthetic examples are presented in order to demonstrate and validate the approach. For the real data example, we processed and inverted a cross-borehole data set acquired by Vale Technology Development (Canada) Ltd. in the Eastern Deeps deposit, located in Voisey's Bay, Labrador, Canada. The inversion workflow comprises the full suite of acquisition, data processing, starting model building through traveltime tomography, simulated annealing and finally waveform tomography. Waveform tomography is a high resolution method that requires an accurate starting model. A cycle-skipping issue observed in our initial starting model was hypothesized to be due to an erroneous anisotropy model from traveltime tomography. This motivated the use of simulated annealing as a semi-global method for anisotropy estimation. We initially tested the simulated annealing approach on a synthetic data set based on the Voisey's Bay environment; these tests were successful and led to the application of the simulated annealing approach to the real data set. Similar behaviour was observed in the anisotropy models obtained through traveltime tomography in both the real and synthetic data sets, where simulated annealing produced an anisotropy model which solved the cycle-skipping issue. In the real data example, simulated annealing led to a final model that compares well with the velocities independently estimated from borehole logs. By comparing the calculated ray paths and wave paths, we attributed the failure of anisotropic traveltime tomography to the breakdown of the ray
NASA Astrophysics Data System (ADS)
Florakis, A.; Vandervorst, W.; Janssens, T.; Rosseel, E.; Douhard, B.; Delmotte, J.; Cornagliotti, E.; Baert, K.; Posthuma, N.; Poortmans, J.
2012-11-01
The use of ion implantation to form local doping profiles in solar cells has regained interest, as it simplifies the manufacturing process, is litho-free, and high efficiency levels can be obtained [1]. However, residual damage after post annealing can lead to increased saturation current values (J0e), so optimization of the annealing process is required. A full simulation of the anneal treatment and its impact on profile and oxide formation was developed, based on the Synopsys Process TCAD software, for the boron doping case. The validity of the models and parameter set was experimentally verified through SIMS (Secondary Ion Mass Spectroscopy) profiles, sheet resistance and ellipsometry measurements.
The Use of Simulated Annealing in Chromosome Reconstruction Experiments Based on Binary Scoring
Cuticchia, A. J.; Arnold, J.; Timberlake, W. E.
1992-01-01
We present a method of combinatorial optimization, simulated annealing, to order clones in a library with respect to their position along a chromosome. This ordering method relies on scoring each clone for the presence or absence of specific target sequences, thereby assigning a digital signature to each clone. Specifically, we consider the hybridization of oligonucleotide probes to a clone to constitute the signature. Since the degree of clonal overlap is reflected in the similarity of the signatures, it is possible to construct maps by minimizing the differences in signatures across a reconstructed chromosome. Our simulations show that with as few as 30 probes and a clonal density of 4.5 genome equivalents, it is possible to assemble a small eukaryotic chromosome into 33 contiguous blocks of clones (contigs). With higher clonal densities and more probes, this number can be reduced to fewer than 5 contigs per chromosome. PMID:1427046
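The core optimization can be sketched as a standard simulated annealing over clone permutations, with the cost measuring how much neighbouring signatures differ. This is a generic reconstruction of the stated idea, not the authors' code; the schedule constants are arbitrary.

```python
import math
import random

def signature_cost(order, sigs):
    """Sum of Hamming distances between neighbouring clones' signatures;
    overlapping clones have similar signatures, so a good order has low cost."""
    return sum(sum(a != b for a, b in zip(sigs[order[i]], sigs[order[i + 1]]))
               for i in range(len(order) - 1))

def anneal_clone_order(sigs, t0=5.0, cool=0.999, steps=20000):
    """Anneal over permutations of clones; `sigs` is a list of
    equal-length 0/1 probe-hybridization signatures."""
    order = list(range(len(sigs)))
    random.shuffle(order)
    cost, t = signature_cost(order, sigs), t0
    for _ in range(steps):
        i, j = random.sample(range(len(order)), 2)
        order[i], order[j] = order[j], order[i]            # propose a swap
        new = signature_cost(order, sigs)
        if new <= cost or random.random() < math.exp(-(new - cost) / t):
            cost = new                                     # accept the move
        else:
            order[i], order[j] = order[j], order[i]        # undo the swap
        t *= cool                                          # cool the system
    return order, cost
```

A production version would recompute only the cost terms touched by the swap instead of the full sum.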
A simulation algorithm for ultrasound liver backscattered signals.
Zatari, D; Botros, N; Dunn, F
1995-11-01
In this study, we present a simulation algorithm for the backscattered ultrasound signal from liver tissue. The algorithm simulates backscattered signals from normal liver and three different liver abnormalities. The performance of the algorithm has been tested by statistically comparing the simulated signals with corresponding signals obtained from a previous in vivo study. To verify that the simulated signals can be classified correctly we have applied a classification technique based on an artificial neural network. The acoustic features extracted from the spectrum over a 2.5 MHz bandwidth are the attenuation coefficient and the change of speed of sound with frequency (dispersion). Our results show that the algorithm performs satisfactorily. Further testing of the algorithm is conducted by the use of a data acquisition and analysis system designed by the authors, where several simulated signals are stored in memory chips and classified according to their abnormalities. PMID:8560631
A simulated annealing approach to schedule optimization for the SES facility
NASA Technical Reports Server (NTRS)
Mcmahon, Mary Beth; Dean, Jack
1992-01-01
The Shuttle Engineering Simulator (SES) is a facility which houses the software and hardware for a variety of simulation systems. The simulators include the Autonomous Remote Manipulator, the Manned Maneuvering Unit, Orbiter/Space Station docking, and shuttle entry and landing. The SES simulators are used by various groups throughout NASA. For example, astronauts use the SES to practice maneuvers with the shuttle equipment; programmers use the SES to test flight software; and engineers use the SES for design and analysis studies. Due to its high demand, the SES is busy twenty-four hours a day and seven days a week. Scheduling the facility is a problem that is constantly growing and changing with the addition of new equipment. Currently a number of small independent programs have been developed to help solve the problem, but the long-term answer lies in finding a flexible, integrated system that provides the user with the ability to create, optimize, and edit the schedule. COMPASS is an interactive and highly flexible scheduling system. However, until recently COMPASS did not provide any optimization features. This paper describes the simulated annealing extension to COMPASS. It now allows the user to interweave schedule creation, revision, and optimization. This practical approach was necessary in order to satisfy the operational requirements of the SES.
Atmospheric channel for bistatic optical communication: simulation algorithms
NASA Astrophysics Data System (ADS)
Belov, V. V.; Tarasenkov, M. V.
2015-11-01
Three algorithms for the statistical simulation of the impulse response (IR) of the atmospheric optical communication channel are considered: the local-estimate algorithm, the double-local-estimate algorithm, and an algorithm proposed by us. Using the example of a homogeneous molecular atmosphere, it is demonstrated that the double-local-estimate algorithm and the proposed algorithm are more efficient than the local-estimate algorithm; for small optical path lengths the proposed algorithm is more efficient, while for large optical path lengths the double-local-estimate algorithm is more efficient. Using the proposed algorithm, the communication quality is estimated for a particular case of the atmospheric channel under conditions of intermediate turbidity. The communication quality is characterized by the maximum IR, the time of the maximum IR, the integral IR, and the bandwidth of the communication channel. Calculations of these criteria demonstrate that communication is most efficient when the point of intersection of the directions toward the source and the receiver is closest to the source point.
Pharmacokinetic modeling of dynamic MR images using a simulated annealing-based optimization
NASA Astrophysics Data System (ADS)
Sawant, Amit R.; Reece, John H.; Reddick, Wilburn E.
2000-04-01
The aim of this work was to use dynamic contrast-enhanced MR image (DEMRI) data to generate 'parameter images' which provide functional information about contrast agent access in bone sarcoma. A simulated annealing based technique was applied to optimize the parameters of a pharmacokinetic model describing the kinetics of the tissue response during and after intravenous infusion of a paramagnetic contrast medium, Gd-DTPA. Optimization was performed on a pixel-by-pixel basis so as to minimize the sum of squared deviations of the calculated values from the values obtained experimentally during dynamic contrast-enhanced MR imaging. A cost function based on a priori information was introduced during the annealing procedure to ensure that the values obtained were within the expected ranges. The optimized parameters were used in the model to generate parameter images, which reveal functional information that is normally not visible in conventional Gd-DTPA-enhanced MR images. This functional information, during and upon completion of pre-operative chemotherapy, is useful in predicting the probability of disease-free survival.
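The a priori cost term can be illustrated as a simple range penalty added to the sum of squared deviations; any simulated annealing driver can then minimize it per pixel. Everything here is a hypothetical placeholder: `model` stands for an unspecified pharmacokinetic enhancement curve, and the bounds and penalty weight are invented.

```python
def penalized_sse(params, t_obs, y_obs, model, bounds, penalty=1e3):
    """Sum of squared deviations between a pharmacokinetic `model`
    (a hypothetical callable returning enhancement at time t) and the
    measured enhancement, plus a quadratic penalty whenever a parameter
    leaves its a priori expected range."""
    sse = sum((model(t, params) - y) ** 2 for t, y in zip(t_obs, y_obs))
    for p, (lo, hi) in zip(params, bounds):
        if p < lo:
            sse += penalty * (lo - p) ** 2      # below the expected range
        elif p > hi:
            sse += penalty * (p - hi) ** 2      # above the expected range
    return sse
```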
Simulated annealing in networks for computing possible arrangements for red and green cones
NASA Technical Reports Server (NTRS)
Ahumada, Albert J., Jr.
1987-01-01
Attention is given to network models in which each of the cones of the retina is given a provisional color at random, and the cones then determine the colors of their neighbors through an iterative process. A symmetric-structure spin-glass model has allowed arrays to be generated ranging from completely random arrangements of red and green to arrays with approximately as much disorder as the parafoveal cones. Simulated annealing has also been added to the process in an attempt to generate color arrangements with greater regularity, and hence more revealing moiré patterns, than the arrangements yielded by quenched spin-glass processes. Attention is given to the perceptual implications of these results.
Song, Jingwei; He, Jiaying; Zhu, Menghua; Tan, Debao; Zhang, Yu; Ye, Song; Shen, Dingtao; Zou, Pengfei
2014-01-01
A simulated annealing (SA) based variable-weighted forecast model is proposed to combine and weight a local chaotic model, an artificial neural network (ANN), and a partial least squares support vector machine (PLS-SVM) to build a more accurate forecast model. The hybrid model was built and its multistep-ahead prediction ability tested on daily MSW generation data from Seattle, Washington, the United States. The hybrid forecast model proved to produce more accurate and reliable results, and to degrade less in longer predictions, than the three individual models. The average one-week-ahead prediction error has been reduced from 11.21% (chaotic model), 12.93% (ANN), and 12.94% (PLS-SVM) to 9.38%, and the five-week average from 13.02% (chaotic model), 15.69% (ANN), and 15.92% (PLS-SVM) to 11.27%. PMID:25301508
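A hedged sketch of the variable-weight idea: anneal over the weight simplex to minimize a forecast error such as MAPE. The proposal distribution, schedule, and error metric here are illustrative choices, not necessarily those of the paper.

```python
import math
import random

def mape(y_true, y_pred):
    """Mean absolute percentage error; assumes nonzero targets."""
    return sum(abs(t - p) / abs(t) for t, p in zip(y_true, y_pred)) / len(y_true)

def anneal_weights(preds, y_true, t0=1.0, cool=0.995, steps=5000):
    """Search the weight simplex for the combination of model forecasts
    (rows of `preds`) minimizing MAPE on a validation series."""
    k, n = len(preds), len(y_true)
    combo = lambda w: [sum(w[m] * preds[m][i] for m in range(k))
                       for i in range(n)]
    w = [1.0 / k] * k
    cost, t = mape(y_true, combo(w)), t0
    for _ in range(steps):
        cand = [max(1e-9, wi + random.gauss(0, 0.05)) for wi in w]
        s = sum(cand)
        cand = [wi / s for wi in cand]            # stay on the simplex
        c = mape(y_true, combo(cand))
        if c <= cost or random.random() < math.exp(-(c - cost) / t):
            w, cost = cand, c                     # accept the candidate
        t *= cool
    return w, cost
```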
Prediction of cutting force in turning of UD-GFRP using mathematical model and simulated annealing
NASA Astrophysics Data System (ADS)
Gupta, Meenu; Gill, Surinder Kumar
2012-12-01
Glass fiber reinforced plastic (GFRP) composites are considered an alternative to heavy exotic materials, and accordingly the need for accurate machining of composites has increased enormously. The cutting force obtained during machining is an important aspect. The present investigation deals with the study and development of a cutting force prediction model for the machining of unidirectional glass fiber reinforced plastic (UD-GFRP) composite, using regression modeling and optimization by simulated annealing. The process parameters considered include cutting speed, feed rate, and depth of cut. The values predicted by the radial cutting force model are compared with experimental values, and the predictions are quite close to the experimental results. The influences of the different parameters in machining UD-GFRP composite have been analyzed.
On combining support vector machines and simulated annealing in stereovision matching.
Pajares, Gonzalo; de la Cruz, Jesús M
2004-08-01
This paper outlines a method for solving the stereovision matching problem using edge segments as the primitives. In stereovision matching, the following constraints are commonly used: epipolar, similarity, smoothness, ordering, and uniqueness. We propose a new strategy in which such constraints are sequentially combined. The goal is to achieve high performance in terms of correct matches by combining several strategies. The contributions of this paper are threefold: the development of a similarity measure through a support vector machine classification approach; the transformation of the smoothness, ordering, and epipolar constraints into an energy function, minimized through a simulated annealing approach, whose minimum value corresponds to a good matching solution; and the introduction of specific conditions to overcome violations of the smoothness and ordering constraints. The performance of the proposed method is illustrated by comparative analysis against some recent global matching methods. PMID:15462432
Application of simulated annealing to solve multi-objectives for aggregate production planning
NASA Astrophysics Data System (ADS)
Atiya, Bayda; Bakheet, Abdul Jabbar Khudhur; Abbas, Iraq Tereq; Bakar, Mohd. Rizam Abu; Soon, Lee Lai; Monsi, Mansor Bin
2016-06-01
Aggregate production planning (APP) is one of the most significant and complicated problems in production planning. It aims to set overall production levels for each product category to meet fluctuating or uncertain future demand, and to make decisions concerning hiring, firing, overtime, subcontracting, and inventory levels. In this paper, we present a simulated annealing (SA) approach to multi-objective linear programming for solving APP; SA is considered a good tool for imprecise optimization problems. The proposed model minimizes total production and workforce costs. In this study, the proposed SA is compared with particle swarm optimization (PSO). The results show that the proposed SA is effective in reducing total production costs and requires minimal time.
Simulated annealing approach to vascular structure with application to the coronary arteries
Keelan, Jonathan; Chung, Emma M. L.; Hague, James P.
2016-01-01
Do the complex processes of angiogenesis during organism development ultimately lead to a near optimal coronary vasculature in the organs of adult mammals? We examine this hypothesis using a powerful and universal method, built on physical and physiological principles, for the determination of globally energetically optimal arterial trees. The method is based on simulated annealing, and can be used to examine arteries in hollow organs with arbitrary tissue geometries. We demonstrate that the approach can generate in silico vasculatures which closely match porcine anatomical data for the coronary arteries on all length scales, and that the optimized arterial trees improve systematically as computational time increases. The method presented here is general, and could in principle be used to examine the arteries of other organs. Potential applications include improvement of medical imaging analysis and the design of vascular trees for artificial organs. PMID:26998317
A splitting algorithm for Vlasov simulation with filamentation filtration
NASA Technical Reports Server (NTRS)
Klimas, A. J.; Farrell, W. M.
1994-01-01
A Fourier-Fourier transformed version of the splitting algorithm for simulating solutions of the Vlasov-Poisson system of equations is introduced. It is shown that with the inclusion of filamentation filtration in this transformed algorithm it is both faster and more stable than the standard splitting algorithm. It is further shown that in a scalar computer environment this new algorithm is approximately equal in speed and far less noisy than its particle-in-cell counterpart. It is conjectured that in a multiprocessor environment the filtered splitting algorithm would be faster while producing more precise results.
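The Fourier-Fourier splitting idea can be illustrated on a single Strang-split step with a prescribed electric field; a real Vlasov-Poisson code would update the field self-consistently each step and would keep the distribution in transformed space throughout rather than transforming back and forth as done here for clarity. The Gaussian damping in velocity-frequency space is a stand-in for the paper's filamentation filtration.

```python
import numpy as np

def vlasov_split_step(f, x, v, E, dt, filter_width=None):
    """One Strang-split step for f(x, v) on uniform periodic grids:
    half free-streaming, full acceleration under a prescribed field
    E(x), half free-streaming, with all shifts applied as phase factors
    in Fourier space.  Toy sketch: no self-consistent Poisson solve."""
    kx = 2 * np.pi * np.fft.fftfreq(x.size, x[1] - x[0])
    kv = 2 * np.pi * np.fft.fftfreq(v.size, v[1] - v[0])
    # Half step in x: f(x, v) <- f(x - v*dt/2, v)
    f = np.fft.ifft(np.fft.fft(f, axis=0) *
                    np.exp(-1j * np.outer(kx, v) * dt / 2), axis=0).real
    # Full step in v: f(x, v) <- f(x, v - E(x)*dt)
    fv = np.fft.fft(f, axis=1) * np.exp(-1j * np.outer(E, kv) * dt)
    if filter_width is not None:
        fv *= np.exp(-(kv / filter_width) ** 2)   # damp fine filaments
    f = np.fft.ifft(fv, axis=1).real
    # Second half step in x
    f = np.fft.ifft(np.fft.fft(f, axis=0) *
                    np.exp(-1j * np.outer(kx, v) * dt / 2), axis=0).real
    return f
```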
Heavy Tails in the Distribution of Time to Solution for Classical and Quantum Annealing.
Steiger, Damian S; Rønnow, Troels F; Troyer, Matthias
2015-12-01
For many optimization algorithms the time to solution depends not only on the problem size but also on the specific problem instance and may vary by many orders of magnitude. It is then necessary to investigate the full distribution and especially its tail. Here, we analyze the distributions of annealing times for simulated annealing and simulated quantum annealing (by path integral quantum Monte Carlo simulation) for random Ising spin glass instances. We find power-law distributions with very heavy tails, corresponding to extremely hard instances, but far broader distributions, and thus worse performance for hard instances, for simulated quantum annealing than for simulated annealing. Fast, nonadiabatic, annealing schedules can improve the performance of simulated quantum annealing for very hard instances by many orders of magnitude. PMID:26684103
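When time-to-solution distributions are heavy-tailed, the tail index can be estimated from the largest samples; the Hill estimator below is one standard diagnostic. This is a rough illustrative variant, with the cutoff k chosen by hand rather than by a principled selection rule.

```python
import numpy as np

def hill_tail_index(tts, k=100):
    """Rough Hill estimate of the power-law tail index alpha from the
    k largest time-to-solution samples: alpha ~ 1 / mean(log(x / x_k)),
    where x_k is the k-th largest sample."""
    top = np.sort(np.asarray(tts, dtype=float))[-k:]
    return 1.0 / np.mean(np.log(top / top[0]))
```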
Adaptive mesh and algorithm refinement using direct simulation Monte Carlo
Garcia, A.L.; Bell, J.B.; Crutchfield, W.Y.; Alder, B.J.
1999-09-01
Adaptive mesh and algorithm refinement (AMAR) embeds a particle method within a continuum method at the finest level of an adaptive mesh refinement (AMR) hierarchy. The coupling between the particle region and the overlaying continuum grid is algorithmically equivalent to that between the fine and coarse levels of AMR. Direct simulation Monte Carlo (DSMC) is used as the particle algorithm embedded within a Godunov-type compressible Navier-Stokes solver. Several examples are presented and compared with purely continuum calculations.
Duality quantum algorithm efficiently simulates open quantum systems
Wei, Shi-Jie; Ruan, Dong; Long, Gui-Lu
2016-01-01
Because of inevitable coupling with the environment, nearly all practical quantum systems are open systems, whose evolution is not necessarily unitary. In this paper, we propose a duality quantum algorithm for simulating the Hamiltonian evolution of an open quantum system. In contrast to the unitary evolution in a usual quantum computer, the evolution operator in a duality quantum computer is a linear combination of unitary operators. In this duality quantum algorithm, the time evolution of the open quantum system is realized by Kraus operators, which are naturally implemented in a duality quantum computer. The algorithm has two distinct advantages over existing quantum simulation algorithms with unitary evolution operations. First, its query complexity is O(d³), in contrast to O(d⁴) for existing unitary simulation algorithms, where d is the dimension of the open quantum system. Second, by using a truncated Taylor series of the evolution operators, the duality quantum algorithm provides an exponential improvement in precision over previous unitary simulation algorithms. PMID:27464855
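The Kraus-operator evolution that the algorithm implements is easy to state numerically: ρ' = Σ_k K_k ρ K_k†. The sketch below applies amplitude-damping Kraus operators to a qubit density matrix; the decomposition of each K_k into a linear combination of unitaries (the actual content of the duality algorithm) is not shown.

```python
import numpy as np

def apply_kraus(rho, kraus_ops):
    """Evolve a density matrix under a completely positive map:
    rho' = sum_k K_k rho K_k^dagger."""
    return sum(k @ rho @ k.conj().T for k in kraus_ops)

# Example: amplitude damping with decay probability gamma.
gamma = 0.3
k0 = np.array([[1, 0], [0, np.sqrt(1 - gamma)]])
k1 = np.array([[0, np.sqrt(gamma)], [0, 0]])
rho = np.array([[0.0, 0.0], [0.0, 1.0]])      # excited state |1><1|
print(apply_kraus(rho, [k0, k1]))             # population moves to |0><0|
```

Running this prints diag(0.3, 0.7): a fraction gamma of the excited-state population has decayed to the ground state, as expected for one application of the channel.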
Architecture and algorithm of a circuit simulator
NASA Astrophysics Data System (ADS)
Marranghello, Norian; Damiani, Furio
1990-11-01
Software-based circuit simulators have achieved a ten-fold speed improvement over the last 15 years. Despite this, they are not fast enough to cost-effectively deal with current VLSI circuits. In this paper we describe the current status of the ABACUS circuit simulator project, which takes advantage of both dedicated hardware to speed up circuit simulation and a new methodology in which each parallel processor behaves like a circuit element.
A fast recursive algorithm for molecular dynamics simulation
NASA Technical Reports Server (NTRS)
Jain, A.; Vaidehi, N.; Rodriguez, G.
1993-01-01
The present recursive algorithm for solving molecular systems' dynamical equations of motion employs internal variable models that reduce such simulations' computation time by an order of magnitude, relative to Cartesian models. Extensive use is made of spatial operator methods recently developed for analysis and simulation of the dynamics of multibody systems. A factor-of-450 speedup over the conventional O(N-cubed) algorithm is demonstrated for the case of a polypeptide molecule with 400 residues.
3D face recognition using simulated annealing and the surface interpenetration measure.
Queirolo, Chauã C; Silva, Luciano; Bellon, Olga R P; Segundo, Maurício Pamplona
2010-02-01
This paper presents a novel automatic framework to perform 3D face recognition. The proposed method uses a Simulated Annealing-based approach (SA) for range image registration with the Surface Interpenetration Measure (SIM), as similarity measure, in order to match two face images. The authentication score is obtained by combining the SIM values corresponding to the matching of four different face regions: circular and elliptical areas around the nose, forehead, and the entire face region. Then, a modified SA approach is proposed taking advantage of invariant face regions to better handle facial expressions. Comprehensive experiments were performed on the FRGC v2 database, the largest available database of 3D face images composed of 4,007 images with different facial expressions. The experiments simulated both verification and identification systems and the results compared to those reported by state-of-the-art works. By using all of the images in the database, a verification rate of 96.5 percent was achieved at a False Acceptance Rate (FAR) of 0.1 percent. In the identification scenario, a rank-one accuracy of 98.4 percent was achieved. To the best of our knowledge, this is the highest rank-one score ever achieved for the FRGC v2 database when compared to results published in the literature. PMID:20075453
Radar simulation program upgrade and algorithm development
NASA Technical Reports Server (NTRS)
Britt, Charles L.
1991-01-01
The NASA Radar Simulation Program is a comprehensive calculation of the expected output of an airborne coherent pulse Doppler radar system viewing a low level microburst along or near the approach path. Inputs to the program include the radar system parameters and data files that contain the characteristics of the microbursts to be simulated, the ground clutter map, and the discrete target data base which provides a simulation of the moving ground clutter. For each range bin, the simulation calculates the received signal amplitude level by integrating the product of the antenna gain pattern and the scattering source amplitude and phase of a spherical shell volume segment defined by the pulse width, radar range, and ground plane intersection. A series of in-phase and quadrature pulses are generated and stored for further processing if desired. In addition, various signal processing techniques are used to derive the simulated velocity and hazard measurements, and store them for use in plotting and display programs.
An assessment of 'shuffle algorithm' collision mechanics for particle simulations
NASA Technical Reports Server (NTRS)
Feiereisen, William J.; Boyd, Iain D.
1991-01-01
Among the algorithms for collision mechanics used at present, the 'shuffle algorithm' of Baganoff (McDonald and Baganoff, 1988; Baganoff and McDonald, 1990) not only allows efficient vectorization but also discretizes the possible outcomes of a collision. To assess the applicability of the shuffle algorithm, a simulation was performed of flows in monoatomic gases, and the calculated characteristics of shock waves were compared with those obtained using a commonly employed isotropic scattering law. It is shown that, in general, the shuffle algorithm adequately represents the collision mechanics in cases where the goal of the calculations is mean profiles of density and temperature.
Fully explicit algorithms for fluid simulation
NASA Astrophysics Data System (ADS)
Clausen, Jonathan
2011-11-01
Computing hardware is trending towards distributed, massively parallel architectures in order to achieve high computational throughput. For example, Intrepid at Argonne uses 163,840 cores, and next-generation machines, such as Sequoia at Lawrence Livermore, will use over one million cores. Harnessing the increasingly parallel nature of computational resources will require algorithms that scale efficiently on these architectures. The advent of GPU-based computation will serve to accelerate this behavior, as a single GPU contains hundreds of processor "cores." Explicit algorithms avoid the communication associated with a linear solve, so the parallel scalability of these algorithms is typically high. This work will explore the efficiency and accuracy of three explicit solution methodologies for the Navier-Stokes equations: traditional artificial compressibility schemes, the lattice-Boltzmann method, and the recently proposed kinetically reduced local Navier-Stokes equations [Borok, Ansumali, and Karlin (2007)]. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
Extrapolated gradientlike algorithms for molecular dynamics and celestial mechanics simulations.
Omelyan, I P
2006-09-01
A class of symplectic algorithms is introduced to integrate the equations of motion in many-body systems. The algorithms are derived on the basis of an advanced gradientlike decomposition approach. Its main advantage over the standard gradient scheme is the avoidance of time-consuming evaluations of force gradients by force extrapolation without any loss of precision. As a result, the efficiency of the integration improves significantly. The algorithms obtained are analyzed and optimized using an error-function theory. The best among them are tested in actual molecular dynamics and celestial mechanics simulations for comparison with well-known nongradient and gradient algorithms such as the Störmer-Verlet, Runge-Kutta, Cowell-Numerov, Forest-Ruth, Suzuki-Chin, and others. It is demonstrated that for moderate and high accuracy, the extrapolated algorithms should be considered as the most efficient for the integration of motion in molecular dynamics simulations. PMID:17025782
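For reference, the Störmer-Verlet scheme used as a baseline in the comparison is the simplest member of this family of splitting integrators; a minimal sketch follows. The function and parameter names are our own.

```python
def stormer_verlet(q, p, force, dt, steps, mass=1.0):
    """Baseline symplectic (velocity) Verlet integrator: half kick,
    drift, half kick.  The extrapolated gradientlike schemes in the
    paper are higher-order refinements of splittings like this one."""
    f = force(q)
    for _ in range(steps):
        p = p + 0.5 * dt * f          # half kick
        q = q + dt * p / mass         # drift
        f = force(q)                  # one force evaluation per step
        p = p + 0.5 * dt * f          # half kick
    return q, p
```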
Barca, E; Castrignanò, A; Buttafuoco, G; De Benedetto, D; Passarella, G
2015-07-01
Soil survey is generally time-consuming, labor-intensive, and costly. Optimization of the sampling scheme allows one to reduce the number of sampling points without decreasing, or even while increasing, the accuracy of the investigated attribute. Maps of bulk soil electrical conductivity (ECa) recorded with electromagnetic induction (EMI) sensors can be used effectively to direct soil sampling design for assessing the spatial variability of soil moisture. A protocol using a field-scale bulk ECa survey has been applied in an agricultural field in the Apulia region (southeastern Italy). Spatial simulated annealing was used to optimize the spatial soil sampling scheme, taking into account sampling constraints, field boundaries, and preliminary observations. Three optimization criteria were used: the first (minimization of the mean of the shortest distances, MMSD) optimizes the spreading of the point observations over the entire field by minimizing the expected distance between an arbitrarily chosen point and its nearest observation; the second (minimization of the weighted mean of the shortest distances, MWMSD) is a weighted version of the MMSD that uses the digital gradient of the gridded ECa data as the weighting function; and the third (mean of the average ordinary kriging variance, MAOKV) minimizes the mean kriging estimation variance of the target variable. The last criterion uses the variogram model of soil water content estimated in a previous trial. The procedures, or a combination of them, were tested and compared in a real case. Simulated annealing was implemented in the software MSANOS, which is able to define or redesign any sampling scheme by increasing or decreasing the original sampling locations. The output consists of the computed sampling scheme, the convergence time, and the cooling law, which can be an invaluable support to the process of sampling design. The proposed approach has found the optimal solution in a reasonable computation time. The
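The MMSD criterion is straightforward to state in code: average, over a dense set of points covering the field, the distance from each point to its nearest proposed sampling location. The sketch below is a generic reconstruction, not the MSANOS implementation; the weighted (MWMSD) variant would multiply the shortest distances by normalized ECa-gradient weights before averaging.

```python
import numpy as np

def mmsd(candidate_xy, domain_xy):
    """Mean of the shortest distances criterion.  `candidate_xy` is the
    (Ns, 2) array of proposed sampling locations; `domain_xy` is an
    (Ng, 2) array of points densely covering the field.  Spatial
    simulated annealing moves sampling points to minimize this value."""
    d = np.linalg.norm(domain_xy[:, None, :] - candidate_xy[None, :, :],
                       axis=2)              # (Ng, Ns) pairwise distances
    return d.min(axis=1).mean()             # nearest-sample distance, averaged
```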
Mathematical foundation of quantum annealing
Morita, Satoshi; Nishimori, Hidetoshi
2008-12-15
Quantum annealing is a generic name for quantum algorithms that use quantum-mechanical fluctuations to search for the solution of an optimization problem. It shares its basic idea with quantum adiabatic evolution, studied actively in quantum computation. The present paper reviews the mathematical and theoretical foundations of quantum annealing. In particular, theorems are presented for convergence conditions of quantum annealing to the target optimal state after an infinite-time evolution following the Schroedinger or stochastic (Monte Carlo) dynamics. It is proved that the same asymptotic behavior of the control parameter guarantees convergence for both the Schroedinger dynamics and the stochastic dynamics, in spite of the essential difference between these two types of dynamics. Also described are prescriptions to reduce errors in the final approximate solution obtained after a long but finite dynamical evolution of quantum annealing. It is shown there that errors can be reduced significantly by an ingenious choice of annealing schedule (the time dependence of the control parameter) without qualitatively compromising computational complexity. A review is given of the derivation of the convergence condition for classical simulated annealing from the viewpoint of quantum adiabaticity, using a classical-quantum mapping.
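For comparison, the classical result referenced at the end of the abstract is the well-known logarithmic cooling condition of Geman and Geman (1984); the quantum annealing conditions derived in the paper play the analogous role for the schedule of the transverse-field control parameter.

```latex
% Classical simulated annealing converges in probability to the ground
% state under the logarithmic temperature schedule
T(t) \;\ge\; \frac{c}{\ln(t+2)}, \qquad t = 0, 1, 2, \ldots
% where the constant c is of the order of the largest relevant
% energy barrier of the instance.
```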
Performance of a parallel algorithm for standard cell placement on the Intel Hypercube
NASA Technical Reports Server (NTRS)
Jones, Mark; Banerjee, Prithviraj
1987-01-01
A parallel simulated annealing algorithm for standard cell placement on the Intel Hypercube is presented. A novel tree broadcasting strategy is used extensively for updating cell locations in the parallel environment. Studies on the performance of the algorithm on example industrial circuits show that it is faster and gives better final placement results than uniprocessor simulated annealing algorithms.
Selecting Magnet Lamination Recipes Using the Method of Simulated Annealing
NASA Astrophysics Data System (ADS)
Russell, A. D.; Baiod, R.; Brown, B. C.; Harding, D. J.; Martin, P. S.
1997-05-01
The Fermilab Main Injector project is building 344 dipoles using more than 7000 tons of steel. Budget and logistical constraints required that steel production, lamination stamping and magnet fabrication proceed in parallel. There were significant run-to-run variations in the magnetic properties of the steel (Martin, P.S., et al., Variations in the Steel Properties and the Excitation Characteristics of FMI Dipoles, this conference). The large lamination size (>0.5 m coil opening) resulted in variations of gap height due to differences in stress relief in the steel after stamping. To minimize magnet-to-magnet strength and field shape variations the laminations were shuffled based on the available magnetic and mechanical data and assigned to magnets using a computer program based on the method of simulated annealing. The lamination sets selected by the program have produced magnets which easily satisfy the design requirements. Variations of the average magnet gap are an order of magnitude smaller than the variations in lamination gaps. This paper discusses observed gap variations, the program structure and the strength uniformity results.
An interactive system for creating object models from range data based on simulated annealing
Hoff, W.A.; Hood, F.W.; King, R.H.
1997-05-01
In hazardous applications such as remediation of buried waste and dismantlement of radioactive facilities, robots are an attractive solution. Sensing to recognize and locate objects is a critical need for robotic operations in unstructured environments. An accurate 3-D model of objects in the scene is necessary for efficient high level control of robots. Drawing upon concepts from supervisory control, the authors have developed an interactive system for creating object models from range data, based on simulated annealing. Site modeling is a task that is typically performed using purely manual or autonomous techniques, each of which has inherent strengths and weaknesses. However, an interactive modeling system combines the advantages of both manual and autonomous methods, to create a system that has high operator productivity as well as high flexibility and robustness. The system is unique in that it can work with very sparse range data, tolerate occlusions, and tolerate cluttered scenes. The authors have performed an informal evaluation with four operators on 16 different scenes, and have shown that the interactive system is superior to either manual or automatic methods in terms of task time and accuracy.
Equilibrium properties of transition-metal ion-argon clusters via simulated annealing
NASA Technical Reports Server (NTRS)
Asher, Robert L.; Micha, David A.; Brucat, Philip J.
1992-01-01
The geometrical structures of M(+) (Ar)n ions, with n = 1-14, have been studied by the minimization of a many-body potential surface with a simulated annealing procedure. The minimization method is justified for finite systems through the use of an information theory approach. It is carried out for eight potential-energy surfaces constructed with two- and three-body terms parametrized from experimental data and ab initio results. The potentials should be representative of clusters of argon atoms with first-row transition-metal monocations of varying size. The calculated geometries for M(+) = Co(+) and V(+) possess radial shells with small (ca. 4-8) first-shell coordination number. The inclusion of an ion-induced-dipole-ion-induced-dipole interaction between argon atoms raises the energy and generally lowers the symmetry of the cluster by promoting incomplete shell closure. Rotational constants as well as electric dipole and quadrupole moments are quoted for the Co(+) (Ar)n and V(+) (Ar)n predicted structures.
Optimal pumping from Palmela water supply wells (Portugal) using simulated annealing
NASA Astrophysics Data System (ADS)
Fragoso, Teresa; Cunha, Maria Da Conceição; Lobo-Ferreira, João P.
2009-12-01
Aquifer systems are an important part of an integrated water resources management plan as foreseen in the European Union’s Water Framework Directive (2000). The sustainable development of these systems demands the use of all available techniques capable of handling the multidisciplinary features of the problems involved. The formulation and resolution of an optimization model is described for a planning and management problem based on the Palmela aquifer (Portugal), developed to supply a given number of demand centres. This problem is solved using one of the latest optimization techniques, the simulated annealing heuristic method, designed to find the optimal solutions while avoiding falling into local optimums. The solution obtained, providing the wells location and the corresponding pumped flows to supply each centre, are analysed taking into account the objective function components and the constraints. It was found that the operation cost is the biggest share of the final cost, and the choice of wells is greatly affected by this fact. Another conclusion is that the solution takes advantage of the economies of scale, that is, it points toward drilling a large capacity well even if this increases the investment cost, rather than drilling several wells, which together will increase the operation costs.
Daylighting simulation: methods, algorithms, and resources
Carroll, William L.
1999-12-01
This document presents work conducted as part of Subtask C, "Daylighting Design Tools", Subgroup C2, "New Daylight Algorithms", of the IEA SHC Task 21 and the ECBCS Program Annex 29 "Daylight in Buildings". The search for and collection of daylighting analysis methods and algorithms led to two important observations. First, there is a wide range of needs for different types of methods to produce a complete analysis tool. These include: geometry; light modeling; characterization of the natural illumination resource; materials and component properties and representations; and usability issues (interfaces, interoperability, representation of analysis results, etc.). Second, very advantageously, there have been rapid advances in many basic methods in these areas, due to other forces. They are in part driven by: the commercial computer graphics community (commerce, entertainment); the lighting industry; architectural rendering and visualization for projects; and academia (course materials, research). This has led to a very rich set of information resources with direct applicability to the small daylighting analysis community, and much of this information is in fact available online. Because much of the information about methods and algorithms is now online, an innovative reporting strategy was used: the core formats are electronic, used to produce a printed form only secondarily. The electronic forms include both online WWW pages and a downloadable .PDF file with the same appearance and content, and both include live primary and indirect links to actual information sources on the WWW. In most cases, little additional commentary is provided regarding the information links or citations that are provided. This in turn allows the report to be very concise; the links are expected to speak for themselves. The report consists of only about 10+ pages, with about 100+ primary links but potentially thousands of indirect links. For purposes of
Huang Jinfan; Bartell, Lawrence S.
2012-01-15
Molecular dynamics simulations of solid-state recrystallization and grain growth in iron nanoparticles containing 1436 atoms were carried out. During the relaxation of supercooled liquid drops and during thermal annealing of the solids they froze to, changes in disorder were followed by monitoring changes in energy and the migration of grain boundaries. All 27 polycrystalline nanoparticles, which were generated with different grain boundaries, were observed to recrystallize into single crystals during annealing, with larger grains consuming the smaller ones. In particular, two sets of solid particles, designated A and B, each with two grains, were treated to generate 18 members of each set with different thermal histories. This provided small ensembles (of 18 members each) from which the rates at which the larger grain engulfed the smaller one could be determined. The rate was higher the smaller the degree of misorientation between the grains, a result contrary to the general rule based on published experiments, but the reason was clear: crystal A, which happened to have a somewhat lower angle of misorientation, also had a higher population of defects, as confirmed by its higher energy, and accordingly its driving force to recrystallize was greater. Although the mechanism of recrystallization is commonly called nucleation, our results, which probe the system on an atomic scale, were not able to identify nuclei unequivocally. By contrast, our technique can and does reveal nuclei in the freezing of liquids and in transformations from one solid phase to another. An alternative rationale for a nucleation-like process in our results is proposed. Graphical abstract: time dependence of energy per atom during the quenching of liquid iron nanoparticles A-C; nanoparticle C freezes directly into a single crystal, while A and B freeze to two-grain solids that eventually recrystallize into single crystals.
Piloted simulation of an on-board trajectory optimization algorithm
NASA Technical Reports Server (NTRS)
Price, D. B.; Calise, A. J.; Moerder, D. D.
1981-01-01
This paper will describe a real time piloted simulation of algorithms designed for on-board computation of time-optimal intercept trajectories for an F-8 aircraft. The algorithms, which were derived using singular perturbation theory, generate commands that are displayed to the pilot on flight director needles on the 8-ball. By flying the airplane so as to zero the horizontal and vertical needles, the pilot flies an approximation to a time-optimal intercept trajectory. The various display and computation modes that are available will be described and results will be presented illustrating the performance of the algorithms with a pilot in the loop.
Concluding Report: Quantitative Tomography Simulations and Reconstruction Algorithms
Aufderheide, M B; Martz, H E; Slone, D M; Jackson, J A; Schach von Wittenau, A E; Goodman, D M; Logan, C M; Hall, J M
2002-02-01
In this report we describe the original goals and final achievements of this Laboratory Directed Research and Development project. The Quantitative Tomography Simulations and Reconstruction Algorithms project (99-ERD-015) was funded as a multi-directorate, three-year effort to advance the state of the art in radiographic simulation and tomographic reconstruction by improving simulation and including this simulation in the tomographic reconstruction process. Goals were to improve the accuracy of radiographic simulation, and to couple advanced radiographic simulation tools with a robust, many-variable optimization algorithm. In this project, we were able to demonstrate accuracy in X-ray simulation at the 2% level, which is an improvement of roughly a factor of 5 in accuracy, and we successfully coupled our simulation tools with the Constrained Conjugate Gradient (CCG) optimization algorithm, allowing reconstructions that include spectral effects and blurring. Another result of the project was the assembly of a low-scatter X-ray imaging facility for use in nondestructive evaluation applications. We conclude with a discussion of future work.
Searching for quantum speedup in quasistatic quantum annealers
NASA Astrophysics Data System (ADS)
Amin, Mohammad H.
2015-11-01
We argue that a quantum annealer at very long annealing times is likely to experience a quasistatic evolution, returning a final population that is close to a Boltzmann distribution of the Hamiltonian at a single (freeze-out) point during the annealing. Such a system is expected to correlate with classical algorithms that return the same equilibrium distribution. These correlations do not mean that the evolution of the system is classical or can be simulated by these algorithms. The computation time extracted from such a distribution reflects the equilibrium behavior with no information about the underlying quantum dynamics. This makes the search for quantum speedup problematic.
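To make the freeze-out picture concrete, the sketch below (our illustration, not from the paper) evaluates the exact Boltzmann distribution of a small Ising Hamiltonian at a single assumed freeze-out temperature; the problem instance and the temperature value are placeholders.

    import itertools, math

    def boltzmann_distribution(h, J, T):
        """Exact Boltzmann distribution of a small Ising Hamiltonian
        H(s) = sum_i h[i]*s[i] + sum_{i<j} J[(i,j)]*s[i]*s[j], s[i] in {-1,+1}."""
        n = len(h)
        states = list(itertools.product([-1, 1], repeat=n))
        energies = [sum(h[i] * s[i] for i in range(n))
                    + sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())
                    for s in states]
        weights = [math.exp(-E / T) for E in energies]
        Z = sum(weights)  # partition function
        return {s: w / Z for s, w in zip(states, weights)}

    # Example: a 3-spin chain at an assumed freeze-out temperature T = 0.5.
    probs = boltzmann_distribution(h=[0.1, -0.2, 0.05],
                                   J={(0, 1): -1.0, (1, 2): -1.0},
                                   T=0.5)

In the quasistatic regime the paper describes, the final readout statistics of the annealer would resemble `probs` for the Hamiltonian at the freeze-out point, regardless of the quantum dynamics that produced them.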
Automated integration of genomic physical mapping data via parallel simulated annealing
Slezak, T.
1994-06-01
The Human Genome Center at the Lawrence Livermore National Laboratory (LLNL) is nearing closure on a high-resolution physical map of human chromosome 19. We have built automated tools to assemble 15,000 fingerprinted cosmid clones into 800 contigs with minimal spanning paths identified. These islands are being ordered, oriented, and spanned by a variety of other techniques including: fluorescence in situ hybridization (FISH) at 3 levels of resolution, EcoRI restriction fragment mapping across all contigs, and a multitude of different hybridization and PCR techniques to link cosmid, YAC, BAC, PAC, and P1 clones. The FISH data provide us with partial order and distance data as well as orientation. We made the observation that map builders need a much rougher presentation of data than do map readers; the former wish to see raw data since these can expose errors or interesting biology. We further noted that by ignoring our length and distance data we could simplify our problem into one that could be readily attacked with optimization techniques. The data integration problem could then be seen as an M x N ordering of our N cosmid clones which "intersect" M larger objects, by defining "intersection" to mean either contig/map membership or hybridization results. Clearly, the goal of making an integrated map is now to rearrange the N cosmid clone "columns" such that the number of gaps on the object "rows" is minimized. Our FISH partially-ordered cosmid clones provide us with a set of constraints that cannot be violated by the rearrangement process. We solved the optimization problem via simulated annealing performed on a network of 40+ Unix machines in parallel, using a server/client model built on explicit socket calls. For current maps we can create a map in about 4 hours on the parallel net versus 4+ days on a single workstation. Our biologists are now using this software on a daily basis to guide their efforts toward final closure.
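A minimal serial sketch of the column-ordering annealing described above, assuming a binary membership matrix, a simple swap move, and a geometric cooling schedule; the move set, schedule, and constraint hook (`allowed`) are our assumptions, not LLNL's production code.

    import math, random

    def count_gaps(order, rows):
        """Gaps summed over rows: a gap separates maximal runs of 1s,
        so each row contributes (number of runs of 1s) - 1."""
        total = 0
        for row in rows:
            runs, prev = 0, 0
            for col in order:
                if row[col] and not prev:
                    runs += 1
                prev = row[col]
            total += max(runs - 1, 0)
        return total

    def anneal_order(rows, n_cols, allowed=lambda order: True,
                     T=5.0, cooling=0.999, steps=100000):
        """Metropolis annealing over column permutations; `allowed`
        encodes hard constraints (e.g. a FISH-derived partial order)."""
        order = list(range(n_cols))
        cost = count_gaps(order, rows)
        for _ in range(steps):
            i, j = random.sample(range(n_cols), 2)
            order[i], order[j] = order[j], order[i]
            if allowed(order):
                new_cost = count_gaps(order, rows)
                if new_cost <= cost or random.random() < math.exp((cost - new_cost) / T):
                    cost = new_cost
                else:
                    order[i], order[j] = order[j], order[i]  # reject: undo swap
            else:
                order[i], order[j] = order[j], order[i]      # constraint violation: undo
            T *= cooling
        return order, cost

The paper's contribution is running this kind of kernel in parallel across a network of workstations; the serial version only shows the objective and acceptance rule.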
NASA Astrophysics Data System (ADS)
Rucker, Dale F.; Ferré, Ty P. A.
2005-07-01
The automated inversion of water content profiles from first arrival travel time data collected with zero-offset borehole ground penetrating radar is discussed. The inversion algorithm sets out to find the water content profile that minimizes a least-squares objective function representing the difference between the modeled and measured first arrival travel time. Ray-tracing analysis is used to determine the travel time for direct and critically refracted paths to identify the first arrival travel time. This automated method offers improvement over a previously presented graphical solution that considers both direct and critical refractions. Specifically, this approach can identify thinner layers and allow for the incorporation of uncertainty in the travel time measurements to determine the depth-specific uncertainty of the inferred water content profile through multiple simulations using a stochastic approach.
Energy conserving continuum algorithms for kinetic & gyrokinetic simulations of plasmas
NASA Astrophysics Data System (ADS)
Hakim, A.; Hammett, G. W.; Shi, E.; Stoltzfus-Dueck, T.
2015-11-01
We present high-order, energy conserving, continuum algorithms for the solution of gyrokinetic equations for use in edge turbulence simulations. The distribution function is evolved with a discontinuous Galerkin scheme, while the fields are evolved with a continuous finite-element method. These algorithms work for a general, possibly non-canonical, Poisson bracket operator and conserve energy exactly. Benchmark simulations with ETG turbulence in 3X/2V are shown, as well as initial applications of the algorithms to turbulence in a simplified SOL geometry. Sheath boundary conditions with recycling and secondary electron emission are implemented, and a Lenard-Bernstein collision operator is included. Extension of these algorithms to the full Vlasov-Maxwell equations is presented. It is shown that with a particular choice of numerical fluxes the total (particle+field) energy is conserved. Algorithms are implemented in a flexible and open-source framework, Gkeyll, which also includes fluid models, allowing potential hybrid simulations of various plasma problems. Supported by the Max-Planck/Princeton Center for Plasma Physics, and DOE Contract DE-AC02-09CH11466.
Final Technical Report "Multiscale Simulation Algorithms for Biochemical Systems"
Petzold, Linda R.
2012-10-25
Biochemical systems are inherently multiscale and stochastic. In microscopic systems formed by living cells, the small numbers of reactant molecules can result in dynamical behavior that is discrete and stochastic rather than continuous and deterministic. An analysis tool that respects these dynamical characteristics is the stochastic simulation algorithm (SSA, Gillespie, 1976), a numerical simulation procedure that is essentially exact for chemical systems that are spatially homogeneous or well stirred. Despite recent improvements, as a procedure that simulates every reaction event, the SSA is necessarily inefficient for most realistic problems. There are two main reasons for this, both arising from the multiscale nature of the underlying problem: (1) stiffness, i.e. the presence of multiple timescales, the fastest of which are stable; and (2) the need to include in the simulation both species that are present in relatively small quantities and should be modeled by a discrete stochastic process, and species that are present in larger quantities and are more efficiently modeled by a deterministic differential equation (or at some scale in between). This project has focused on the development of fast and adaptive algorithms, and the fundamental theory upon which they must be based, for the multiscale simulation of biochemical systems. Areas addressed by this project include: (1) theoretical and practical foundations for accelerated discrete stochastic simulation (tau-leaping); (2) dealing with stiffness (fast reactions) in an efficient and well-justified manner in discrete stochastic simulation; (3) development of adaptive multiscale algorithms for spatially homogeneous discrete stochastic simulation; (4) development of high-performance SSA algorithms.
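The SSA referred to here is Gillespie's direct method; a minimal self-contained version follows, with an illustrative birth-death system (the rate values are placeholders).

    import math, random

    def ssa_direct(x, stoich, rates, t_end):
        """Gillespie direct method: x is the state vector, stoich a list of
        state-change vectors, rates[j](x) the propensity of reaction j."""
        t, path = 0.0, [(0.0, list(x))]
        while t < t_end:
            a = [r(x) for r in rates]
            a0 = sum(a)
            if a0 == 0.0:
                break                                   # no reaction can fire
            t += -math.log(random.random()) / a0        # exponential waiting time
            target, acc, j = random.random() * a0, 0.0, 0
            while acc + a[j] < target:                  # select reaction channel
                acc += a[j]
                j += 1
            x = [xi + d for xi, d in zip(x, stoich[j])]
            path.append((t, list(x)))
        return path

    # Illustrative birth-death system: 0 -> S at rate 2.0, S -> 0 at rate 0.1*S.
    path = ssa_direct(x=[10], stoich=[[+1], [-1]],
                      rates=[lambda x: 2.0, lambda x: 0.1 * x[0]], t_end=100.0)

Because every reaction event is simulated individually, the cost grows with the total number of firings, which is the inefficiency the project's tau-leaping and hybrid methods address.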
Analysis of a simulation algorithm for direct brain drug delivery
Rosenbluth, Kathryn Hammond; Eschermann, Jan Felix; Mittermeyer, Gabriele; Thomson, Rowena; Mittermeyer, Stephan; Bankiewicz, Krystof S.
2011-01-01
Convection enhanced delivery (CED) achieves targeted delivery of drugs with a pressure-driven infusion through a cannula placed stereotactically in the brain. This technique bypasses the blood brain barrier and gives precise distributions of drugs, minimizing off-target effects of compounds such as viral vectors for gene therapy or toxic chemotherapy agents. The exact distribution is affected by the cannula positioning, flow rate and underlying tissue structure. This study presents an analysis of a simulation algorithm for predicting the distribution using baseline MRI images acquired prior to inserting the cannula. The MRI images included diffusion tensor imaging (DTI) to estimate the tissue properties. The algorithm was adapted for the devices and protocols identified for upcoming trials and validated with direct MRI visualization of Gadolinium in 20 infusions in non-human primates. We found strong agreement between the size and location of the simulated and gadolinium volumes, demonstrating the clinical utility of this surgical planning algorithm. PMID:21945468
Parallel conjugate gradient algorithms for manipulator dynamic simulation
NASA Technical Reports Server (NTRS)
Fijany, Amir; Scheld, Robert E.
1989-01-01
Parallel conjugate gradient algorithms for the computation of multibody dynamics are developed for the specialized case of a robot manipulator. For an n-dimensional positive-definite linear system, the Classical Conjugate Gradient (CCG) algorithm is guaranteed to converge in n iterations, each with a computation cost of O(n); this leads to a total computational cost of O(n^2) on a serial processor. A conjugate gradient algorithm is presented that provides greater efficiency by using a preconditioner, which reduces the number of iterations required, and by exploiting parallelism, which reduces the cost of each iteration. Two Preconditioned Conjugate Gradient (PCG) algorithms are proposed which respectively use a diagonal and a tridiagonal matrix, composed of the diagonal and tridiagonal elements of the mass matrix, as preconditioners. Parallel algorithms are developed to compute the preconditioners and their inverses in O(log2 n) steps using n processors. A parallel algorithm is also presented which, on the same architecture, achieves a computational time of O(log2 n) for each iteration. Simulation results for a seven-degree-of-freedom manipulator are presented. Variants of the proposed algorithms are also developed which can be efficiently implemented on the Robot Mathematics Processor (RMP).
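A serial sketch of the diagonal (Jacobi) preconditioning idea discussed above; the paper's contribution is the O(log2 n) parallel evaluation of these steps, which this NumPy version does not attempt.

    import numpy as np

    def pcg_jacobi(A, b, tol=1e-10, max_iter=None):
        """Conjugate gradient with a diagonal (Jacobi) preconditioner:
        solves A x = b for a symmetric positive-definite matrix A."""
        n = len(b)
        max_iter = max_iter or n
        Minv = 1.0 / np.diag(A)           # preconditioner: inverse of diag(A)
        x = np.zeros(n)
        r = b - A @ x                     # initial residual
        z = Minv * r                      # preconditioned residual
        p = z.copy()
        rz = r @ z
        for _ in range(max_iter):
            Ap = A @ p
            alpha = rz / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            if np.linalg.norm(r) < tol:
                break
            z = Minv * r
            rz_new = r @ z
            p = z + (rz_new / rz) * p     # new search direction
            rz = rz_new
        return x

For a manipulator mass matrix, the diagonal (or tridiagonal) preconditioner clusters the eigenvalues so that far fewer than n iterations are typically needed.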
Simulating and Synthesizing Substructures Using Neural Network and Genetic Algorithms
NASA Technical Reports Server (NTRS)
Liu, Youhua; Kapania, Rakesh K.; VanLandingham, Hugh F.
1997-01-01
The feasibility of simulating and synthesizing substructures by computational neural network models is illustrated by investigating a statically indeterminate beam, using both 1-D and 2-D plane-stress modelling. The beam can be decomposed into two cantilevers with free-end loads. By training neural networks to simulate the cantilever responses to different loads, the original beam problem can be solved as a match-up between two subsystems under compatible interface conditions. Genetic algorithms are successfully used to solve the match-up problem. Simulated results are found to be in good agreement with the analytical or FEM solutions.
Recycling random numbers in the stochastic simulation algorithm.
Yates, Christian A; Klingbeil, Guido
2013-03-01
The stochastic simulation algorithm (SSA) was introduced by Gillespie and in a different form by Kurtz. Since its original formulation there have been several attempts at improving the efficiency and hence the speed of the algorithm. We briefly discuss some of these methods before outlining our own simple improvement, the recycling direct method (RDM), and demonstrating that it is capable of increasing the speed of most stochastic simulations. The RDM involves the statistically acceptable recycling of random numbers in order to reduce the computational cost associated with their generation, and it is compatible with several of the pre-existing improvements on the original SSA. Our improvement is also sufficiently simple (one additional line of code) that we hope it will be adopted by both trained mathematical modelers and experimentalists wishing to simulate their model systems. PMID:23485273
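The recycling idea can be illustrated as follows: conditional on the channel selected, the unused fraction of the selection variate is again uniform on (0,1) and may be reused in the next step. The sketch below is our reading of the approach, not the authors' code.

    import math, random

    def rdm_step(a):
        """One step in the spirit of a recycling direct method.
        `a` is the propensity list. Returns (tau, j, r_recycled): given that
        the selection variate landed in channel j's sub-interval, the leftover
        fraction (target - acc)/a[j] is conditionally Uniform(0,1), so it can
        stand in for a freshly generated uniform in the following step."""
        a0 = sum(a)
        tau = -math.log(random.random()) / a0   # waiting time
        r2 = random.random()
        target, acc, j = r2 * a0, 0.0, 0
        while acc + a[j] < target:              # locate the selected channel
            acc += a[j]
            j += 1
        r_recycled = (target - acc) / a[j]      # rescale leftover into (0,1)
        return tau, j, r_recycled

The saving is one random-number generation per event, which is significant when propensity evaluation is cheap and the generator is comparatively expensive.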
A spectral unaveraged algorithm for free electron laser simulations
Andriyash, I.A.; Lehe, R.; Malka, V.
2015-02-01
We propose and discuss a numerical method to model electromagnetic emission from oscillating relativistic charged particles and its coherent amplification. The developed technique is well suited for free electron laser simulations, but it may also be useful for a wider range of physical problems involving resonant field–particle interactions. The algorithm integrates the unaveraged coupled equations for the particles and the electromagnetic fields in a discrete spectral domain. Using this algorithm, it is possible to perform full three-dimensional or axisymmetric simulations of short-wavelength amplification. In this paper we describe the method and its implementation, and we present examples of free electron laser simulations, comparing the results with the ones provided by commonly known free electron laser codes.
NASA Astrophysics Data System (ADS)
Zhang, Pengpeng
The Leksell Gamma Knife® (LGK) is a tool for providing accurate stereotactic radiosurgical treatment of brain lesions, especially tumors. Currently, the treatment planning team "forward" plans radiation treatment parameters while viewing a series of 2D MR scans. This primarily manual process is cumbersome and time consuming because of the difficulty of visualizing the large search space for the radiation parameters (i.e., shot overlap, number, location, size, and weight). I hypothesize that a computer-aided "inverse" planning procedure that utilizes tumor geometry and treatment goals could significantly improve the planning process and therapeutic outcome of LGK radiosurgery. My basic observation is that the treatment team is best at identifying the location of the lesion and prescribing a lethal, yet safe, radiation dose, while the treatment planning computer is best at determining both the 3D tumor geometry and the optimal LGK shot parameters necessary to deliver a desirable dose pattern to the tumor while sparing adjacent normal tissue. My treatment planning procedure asks the neurosurgeon to identify the tumor and critical structures in MR images and the oncologist to prescribe a tumoricidal radiation dose. Computer assistance begins with geometric modeling of the 3D tumor's medial axis properties, using a new algorithm, a Gradient-Phase Plot (G-P Plot) decomposition of the tumor object's medial axis. I have found that medial axis seeding, while insufficient in most cases to produce an acceptable treatment plan, greatly reduces the solution space for Guided Evolutionary Simulated Annealing (GESA) treatment plan optimization by specifying an initial estimate for shot number, size, and location, but not weight. These estimates are used to generate multiple initial plans, which become the seed plans for GESA. The shot location and weight parameters evolve and compete in the GESA procedure. The GESA objective function optimizes tumor irradiation (i.e., as close to
Hao, Ge-Fei; Xu, Wei-Fang; Yang, Sheng-Gang; Yang, Guang-Fu
2015-01-01
Protein and peptide structure predictions are of paramount importance for understanding their functions, as well as the interactions with other molecules. However, the use of molecular simulation techniques to directly predict the peptide structure from the primary amino acid sequence is always hindered by the rough topology of the conformational space and the limited simulation time scale. We developed here a new strategy, named Multiple Simulated Annealing-Molecular Dynamics (MSA-MD) to identify the native states of a peptide and miniprotein. A cluster of near native structures could be obtained by using the MSA-MD method, which turned out to be significantly more efficient in reaching the native structure compared to continuous MD and conventional SA-MD simulation. PMID:26492886
Sobel, E.; Lange, K.; O`Connell, J.R.
1996-12-31
Haplotyping is the logical process of inferring gene flow in a pedigree based on phenotyping results at a small number of genetic loci. This paper formalizes the haplotyping problem and suggests four algorithms for haplotype reconstruction. These algorithms range from exhaustive enumeration of all haplotype vectors to combinatorial optimization by simulated annealing. Application of the algorithms to published genetic analyses shows that manual haplotyping is often erroneous. Haplotyping is employed in screening pedigrees for phenotyping errors and in positional cloning of disease genes from conserved haplotypes in population isolates. 26 refs., 6 figs., 3 tabs.
Coalescent simulation in continuous space: algorithms for large neighbourhood size.
Kelleher, J; Etheridge, A M; Barton, N H
2014-08-01
Many species have an essentially continuous distribution in space, in which there are no natural divisions between randomly mating subpopulations. Yet, the standard approach to modelling these populations is to impose an arbitrary grid of demes, adjusting deme sizes and migration rates in an attempt to capture the important features of the population. Such indirect methods are required because of the failure of the classical models of isolation by distance, which have been shown to have major technical flaws. A recently introduced model of extinction and recolonisation in two dimensions solves these technical problems, and provides a rigorous technical foundation for the study of populations evolving in a spatial continuum. The coalescent process for this model is simply stated, but direct simulation is very inefficient for large neighbourhood sizes. We present efficient and exact algorithms to simulate this coalescent process for arbitrary sample sizes and numbers of loci, and analyse these algorithms in detail. PMID:24910324
SMMR Simulator radiative transfer calibration model. 2: Algorithm development
NASA Technical Reports Server (NTRS)
Link, S.; Calhoon, C.; Krupp, B.
1980-01-01
Passive microwave measurements performed from Earth orbit can be used to provide global data on a wide range of geophysical and meteorological phenomena. A Scanning Multichannel Microwave Radiometer (SMMR) is being flown on the Nimbus-G satellite. The SMMR Simulator duplicates the frequency bands utilized in the spacecraft instruments through an amalgam of radiometer systems. The algorithm developed utilizes data from the fall 1978 NASA CV-990 Nimbus-G underflight test series and subsequent laboratory testing.
Sampling of general correlators in worm-algorithm based simulations
NASA Astrophysics Data System (ADS)
Rindlisbacher, Tobias; Åkerlund, Oscar; de Forcrand, Philippe
2016-08-01
Using the complex ϕ⁴ model as a prototype for a system which is simulated by a worm algorithm, we show that not only the charged correlator ⟨ϕ*(x)ϕ(y)⟩, but also more general correlators such as ⟨|ϕ(x)||ϕ(y)|⟩ or ⟨arg(ϕ(x)) arg(ϕ(y))⟩, as well as condensates like ⟨|ϕ|⟩, can be measured at every step of the Monte Carlo evolution of the worm instead of on closed-worm configurations only. The method generalizes straightforwardly to other systems simulated by worms, such as spin or sigma models.
Large Eddy Simulations using Lattice Boltzmann algorithms. Final report
Serling, J.D.
1993-09-28
This report contains the results of a study performed to implement eddy-viscosity models for Large-Eddy-Simulations (LES) into Lattice Boltzmann (LB) algorithms for simulating fluid flows. This implementation requires modification of the LB method of simulating the incompressible Navier-Stokes equations to allow simulation of the filtered Navier-Stokes equations with some subgrid model for the Reynolds stress term. We demonstrate that the LB method can indeed be used for LES by simply locally adjusting the value of the BGK relaxation time to obtain the desired eddy-viscosity. Thus, many forms of eddy-viscosity models including the standard Smagorinsky model or the Dynamic model may be implemented using LB algorithms. Since underresolved LB simulations often lead to instability, the LES model actually serves to stabilize the method. An alternative method of ensuring stability is presented which requires that entropy increase during the collision step of the LB method. Thus, an alternative collision operator is locally applied if the entropy becomes too low. This stable LB method then acts as an LES scheme that effectively introduces its own eddy viscosity to damp short wavelength oscillations.
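The local adjustment of the BGK relaxation time amounts to adding a Smagorinsky eddy viscosity to the molecular viscosity. Assuming the common lattice-units relation nu = (tau - 1/2)/3 (as in D2Q9/D3Q19 BGK schemes) and illustrative constants, a sketch:

    def smagorinsky_tau(nu0, strain_rate_mag, cs_const=0.17, delta=1.0):
        """Local BGK relaxation time for LES in lattice units.
        nu_t = (Cs * Delta)^2 * |S| is the Smagorinsky eddy viscosity;
        tau = 3 * (nu0 + nu_t) + 1/2 recovers the total viscosity in the
        standard BGK scheme. Cs and Delta here are illustrative values;
        in practice |S| can be obtained locally from the non-equilibrium
        momentum flux, keeping the update entirely local."""
        nu_t = (cs_const * delta) ** 2 * strain_rate_mag
        return 3.0 * (nu0 + nu_t) + 0.5

    # Example: molecular viscosity 0.01 (lattice units), local |S| = 0.05.
    tau = smagorinsky_tau(0.01, 0.05)

Because tau is raised wherever the resolved strain rate is large, the eddy viscosity damps short-wavelength oscillations, which is why the LES model also stabilizes under-resolved LB runs as the report notes.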
An improved sink particle algorithm for SPH simulations
NASA Astrophysics Data System (ADS)
Hubber, D. A.; Walch, S.; Whitworth, A. P.
2013-04-01
Numerical simulations of star formation frequently rely on the implementation of sink particles: (a) to avoid expending computational resource on the detailed internal physics of individual collapsing protostars, (b) to derive mass functions, binary statistics and clustering kinematics (and hence to make comparisons with observation), and (c) to model radiative and mechanical feedback; sink particles are also used in other contexts, for example to represent accreting black holes in galactic nuclei. We present a new algorithm for creating and evolving sink particles in smoothed particle hydrodynamic (SPH) simulations, which appears to represent a significant improvement over existing algorithms - particularly in situations where sinks are introduced after the gas has become optically thick to its own cooling radiation and started to heat up by adiabatic compression. (i) It avoids spurious creation of sinks. (ii) It regulates the accretion of matter on to a sink so as to mitigate non-physical perturbations in the vicinity of the sink. (iii) Sinks accrete matter, but the associated angular momentum is transferred back to the surrounding medium. With the new algorithm - and modulo the need to invoke sufficient resolution to capture the physics preceding sink formation - the properties of sinks formed in simulations are essentially independent of the user-defined parameters of sink creation, or the number of SPH particles used.
Motion Cueing Algorithm Modification for Improved Turbulence Simulation
NASA Technical Reports Server (NTRS)
Ercole, Anthony V.; Cardullo, Frank M.; Zaychik, Kirill; Kelly, Lon C.; Houck, Jacob
2009-01-01
Atmospheric turbulence cueing produced by flight simulator motion systems has been less than satisfactory because the turbulence profiles have been attenuated by the motion cueing algorithms. Cardullo and Ellor initially addressed this problem by directly porting the turbulence model output to the motion system. Reid and Robinson addressed the problem by employing a parallel aircraft model, which is only stimulated by the turbulence inputs and adding a filter specially designed to pass the higher turbulence frequencies. There have been advances in motion cueing algorithm development at the Man-Machine Systems Laboratory, at SUNY Binghamton. In particular, the system used to generate turbulence cues has been studied. The Reid approach, implemented by Telban and Cardullo, was employed to augment the optimal motion cueing algorithm installed at the NASA LaRC Simulation Laboratory, driving the Visual Motion Simulator. In this implementation, the output of the primary flight channel was added to the output of the turbulence channel and then sent through a non-linear cueing filter. The cueing filter is an adaptive filter; therefore, it is not desirable for the output of the turbulence channel to be augmented by this type of filter. The likelihood of the signal becoming divergent was also an issue in this design. After testing on-site it became apparent that the architecture of the turbulence algorithm was generating unacceptable cues. As mentioned above, this cueing algorithm comprised a filter that was designed to operate at low bandwidth. Therefore, the turbulence was also filtered, augmenting the cues generated by the model. If any filtering is to be done to the turbulence, it will utilize a filter with a much higher bandwidth, above the frequencies produced by the aircraft response to turbulence. The authors have developed an implementation wherein only the signal from the primary flight channel passes through the nonlinear cueing filter. This paper discusses three
Massively parallel algorithms for trace-driven cache simulations
NASA Technical Reports Server (NTRS)
Nicol, David M.; Greenberg, Albert G.; Lubachevsky, Boris D.
1991-01-01
Trace-driven cache simulation is central to computer design. A trace is a very long sequence of reference lines from main memory. At the t-th instant, reference x_t is hashed into a set of cache locations, the contents of which are then compared with x_t. If x_t is not present in the cache at the t-th instant, it is said to be a miss and is loaded into the cache set, possibly forcing the replacement of some other memory line and making x_t present for the (t+1)-st instant. The problem of parallel simulation of a subtrace of N references directed to a C-line cache set is considered, with the aim of determining which references are misses and related statistics. A simulation method is presented for the Least Recently Used (LRU) policy which, regardless of the set size C, runs in time O(log N) using N processors on the exclusive-read, exclusive-write (EREW) parallel model. A simpler LRU simulation algorithm is given that runs in O(C log N) time using N/log N processors. Timings are presented of the second algorithm's implementation on the MasPar MP-1, a machine with 16384 processors. A broad class of reference-based line replacement policies is considered, which includes LRU as well as the Least Frequently Used and Random replacement policies. A simulation method is presented for any such policy that, on any trace of length N directed to a C-line set, runs in O(C log N) time with high probability using N processors on the EREW model. The algorithms are simple, have very little space overhead, and are well suited for SIMD implementation.
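For reference, the serial LRU computation that these parallel algorithms accelerate is straightforward; a minimal version for a single C-line set:

    def lru_misses(trace, C):
        """Serial LRU simulation of a single C-line cache set:
        returns, for each reference x_t, whether it was a miss."""
        stack = []                       # most-recently-used line at the front
        misses = []
        for x in trace:
            if x in stack:
                stack.remove(x)
                misses.append(False)     # hit: promote to front below
            else:
                misses.append(True)      # miss: load x, evicting if full
                if len(stack) == C:
                    stack.pop()          # evict least-recently-used line
            stack.insert(0, x)
        return misses

    # Example: a 2-line set.
    print(lru_misses(["a", "b", "a", "c", "b"], C=2))
    # [True, True, False, True, True]

The serial loop is inherently sequential in t, which is what makes the O(log N) EREW formulation above non-obvious.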
Okamoto, Y
1994-04-01
Monte Carlo simulated annealing is applied to the tertiary structure prediction of a 17-residue synthetic peptide, which is known by experiment to exhibit high helical content at low pH. Two dielectric models are considered: a sigmoidal distance-dependent dielectric function and a constant dielectric function (epsilon = 2). Starting from completely random initial conformations, our simulations for both dielectric models at low pH gave many helical conformations. The obtained low-energy conformations are compared with the nuclear Overhauser effect spectroscopy cross-peak data for both main chain and side chains, and it is shown that the results for the sigmoidal dielectric function are in remarkable agreement with the experimental data. The results predict the existence of two disjoint helices around residues 5-9 and 11-16, while NMR experiments imply significant alpha-helix content between residues 5 and 14. Simulations at high pH, on the other hand, hardly gave a helical conformation, which is also in accord with the experiment. These findings indicate that when side chains are charged, electrostatic interactions due to these charges play a major role in helix stability. Our results are compared with the previous 500 ps molecular dynamics simulations of the same peptide. It is argued that simulated annealing is superior to molecular dynamics in two respects: (1) direct folding of an alpha-helix from completely random initial conformations is possible for the former, whereas only unfolding of an alpha-helix can be studied by the latter; (2) while both methods predict high helix content for low pH, the results for high pH agree with experiment (low helix content) only for the former method. PMID:8186363
An Initial Examination for Verifying Separation Algorithms by Simulation
NASA Technical Reports Server (NTRS)
White, Allan L.; Neogi, Natasha; Herencia-Zapana, Heber
2012-01-01
An open question in algorithms for aircraft is what can be validated by simulation where the simulation shows that the probability of undesirable events is below some given level at some confidence level. The problem is including enough realism to be convincing while retaining enough efficiency to run the large number of trials needed for high confidence. The paper first proposes a goal based on the number of flights per year in several regions. The paper examines the probabilistic interpretation of this goal and computes the number of trials needed to establish it at an equivalent confidence level. Since any simulation is likely to consider the algorithms for only one type of event and there are several types of events, the paper examines under what conditions this separate consideration is valid. This paper is an initial effort, and as such, it considers separation maneuvers, which are elementary but include numerous aspects of aircraft behavior. The scenario includes decisions under uncertainty since the position of each aircraft is only known to the other by broadcasting where GPS believes each aircraft to be (ADS-B). Each aircraft operates under feedback control with perturbations. It is shown that a scenario three or four orders of magnitude more complex is feasible. The question of what can be validated by simulation remains open, but there is reason to be optimistic.
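Under the usual zero-failure binomial argument, the trial count has a simple closed form: N failure-free trials demonstrate a per-trial failure probability below p at confidence c when (1 - p)^N <= 1 - c. A sketch of the arithmetic (our own, consistent with this setup as we read it):

    import math

    def trials_needed(p, confidence):
        """Smallest N with (1 - p)**N <= 1 - confidence: the number of
        independent failure-free trials needed to claim the per-trial
        failure probability is below p at the given confidence."""
        return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - p))

    # Example: demonstrating p < 1e-7 per flight at 99% confidence
    # requires roughly 4.6e7 failure-free simulated flights.
    print(trials_needed(1e-7, 0.99))

The steep growth of this count with smaller p is exactly why the paper emphasizes keeping each trial efficient while preserving enough realism.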
Algorithm for Simulating Atmospheric Turbulence and Aeroelastic Effects on Simulator Motion Systems
NASA Technical Reports Server (NTRS)
Ercole, Anthony V.; Cardullo, Frank M.; Kelly, Lon C.; Houck, Jacob A.
2012-01-01
Atmospheric turbulence produces high frequency accelerations in aircraft, typically greater than the response to pilot input. Motion system equipped flight simulators must present cues representative of the aircraft response to turbulence in order to maintain the integrity of the simulation. Currently, turbulence motion cueing produced by flight simulator motion systems has been less than satisfactory because the turbulence profiles have been attenuated by the motion cueing algorithms. This report presents a new turbulence motion cueing algorithm, referred to as the augmented turbulence channel. As in the previous turbulence algorithms, the output of the channel augments only the vertical degree of freedom of motion. This algorithm employs a parallel aircraft model and an optional high bandwidth cueing filter. Simulation of aeroelastic effects is also an area where frequency content must be preserved by the cueing algorithm. The current aeroelastic implementation uses a similar secondary channel that supplements the primary motion cue. Two studies were conducted using the NASA Langley Visual Motion Simulator and Cockpit Motion Facility to evaluate the effect of the turbulence channel and aeroelastic model on pilot control input. Results indicate that the pilot is better correlated with the aircraft response when the augmented channel is in place.
Parametric Quantum Search Algorithm as Quantum Walk: A Quantum Simulation
NASA Astrophysics Data System (ADS)
Ellinas, Demosthenes; Konstandakis, Christos
2016-02-01
Parametric quantum search algorithm (PQSA) is a form of quantum search that results from relaxing the unitarity of the original algorithm. PQSA can naturally be cast in the form of a quantum walk (QW), by means of the formalism of oracle algebra. This is due to the fact that the completely positive trace-preserving search map used by PQSA admits a unitarization (unitary dilation) a la quantum walk, at the expense of introducing an auxiliary quantum coin-qubit space. The ensuing QW describes a process of spiral motion, chosen to be driven by two unitary Kraus generators that generate planar rotations of the Bloch vector around an axis. The quadratic acceleration of quantum search translates into an equivalent quadratic saving in the number of coin qubits in the QW analogue. The Hamiltonian operator associated with the QW model is obtained and is shown to represent a multi-particle, long-range interacting quantum system that simulates parametric search. Finally, the relation of the PQSA-QW simulator to the QW search algorithm is elucidated.
A New Simulation Algorithm Combining Fluid and Kinetic Properties
NASA Astrophysics Data System (ADS)
Larson, David; Hewett, Dennis
2007-11-01
Complex Particle Kinetics (CPK) [1,2] uses particles with internal degrees of freedom in an effort to simulate the transition between continuum and kinetic dynamics. Recent work [3] has provided a new path towards extending the adaptive particle capabilities of CPK. The resulting algorithm bridges the gap between fluid and kinetic regimes. The method uses an ensemble of macro-particles with a Gaussian spatial profile and a Maxwellian velocity distribution to represent particle distributions in phase space. In addition to the standard PIC quantities of location, drift velocity, mass, and charge, the macro-particles also carry width, thermal velocity, and an internal velocity. The particle shape, internal velocity, and drift velocity respond to internal and external forces. The particles can contract, expand, rotate, and pass through one another. The algorithm allows arbitrary collisionality and functions effectively in the collision-dominated limit. We will present details of the algorithm as well as the results from several simulations. [1] D. W. Hewett, J. Comp. Phys. 189 (2003). [2] D. J. Larson, J. Comp. Phys. 188 (2003). [3] C. Gauger, et al., SIAM J. Numer. Anal. 37 (2000).
Li, Yulan; Hu, Shenyang Y.; Montgomery, Robert O.; Gao, Fei; Sun, Xin
2013-05-15
Fission gas bubbles are among the evolving microstructural features that affect thermomechanical properties such as thermal conductivity, gas release, volume swelling, and cracking in operating nuclear fuels. Therefore, a fundamental understanding of gas bubble evolution kinetics is essential for predicting the thermodynamic properties and performance changes of fuels. In this work, a generic phase-field model was developed to describe the evolution kinetics of intra-granular fission gas bubbles in UO2 fuels under post-irradiation thermal annealing conditions. The free energy functional and model parameters are evaluated from atomistic simulations and experiments. The critical nucleus size of the gas bubbles and the gas bubble evolution were simulated. A linear relationship between logarithmic bubble number density and logarithmic mean bubble diameter is predicted, which is in good agreement with experimental data.
Nakos, J.T.; Rosinski, S.T.; Acton, R.U.
1994-11-01
The objective of this work was to provide experimental heat transfer boundary condition and reactor pressure vessel (RPV) section thermal response data that can be used to benchmark computer codes that simulate thermal annealing of RPVs. This specific project was designed to provide the Electric Power Research Institute (EPRI) with experimental data that could be used to support the development of a thermal annealing model. A secondary benefit is to provide additional experimental data (e.g., thermal response of the concrete reactor cavity wall) that could be of use in an annealing demonstration project. The setup comprised a heater assembly, a 1.2 m × 1.2 m × 17.1 cm thick [4 ft × 4 ft × 6.75 in] section of an RPV (A533B ferritic steel with stainless steel cladding), a mockup of the "mirror" insulation between the RPV and the concrete reactor cavity wall, and a 25.4 cm [10 in] thick concrete wall, 2.1 m × 2.1 m [10 ft × 10 ft] square. Experiments were performed at temperature heat-up/cooldown rates of 7, 14, and 28°C/hr [12.5, 25, and 50°F/hr] as measured on the heated face. A peak temperature of 454°C [850°F] was maintained on the heated face until the concrete wall temperature reached equilibrium. Results are most representative of those RPV locations where the heat transfer would be one-dimensional. Temperature was measured at multiple locations on the heated and unheated faces of the RPV section and the concrete wall. Incident heat flux was measured on the heated face, and absorbed heat flux estimates were generated from temperature measurements and an inverse heat conduction code. Through-wall temperature differences, concrete wall temperature response, and the heat flux absorbed into and incident on the RPV surface are presented. All of these data are useful to modelers developing codes to simulate RPV annealing.
Self-adaptive genetic algorithms with simulated binary crossover.
Deb, K; Beyer, H G
2001-01-01
Self-adaptation is an essential feature of natural evolution. However, in the context of function optimization, self-adaptation features of evolutionary search algorithms have been explored mainly with evolution strategy (ES) and evolutionary programming (EP). In this paper, we demonstrate the self-adaptive feature of real-parameter genetic algorithms (GAs) using a simulated binary crossover (SBX) operator and without any mutation operator. The connection between the working of self-adaptive ESs and real-parameter GAs with the SBX operator is also discussed. Thereafter, the self-adaptive behavior of real-parameter GAs is demonstrated on a number of test problems commonly used in the ES literature. The remarkable similarity in the working principle of real-parameter GAs and self-adaptive ESs shown in this study suggests the need for emphasizing further studies on self-adaptive GAs. PMID:11382356
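The SBX operator itself has a compact closed form; a single-variable sketch follows (the distribution index eta = 2 is an illustrative choice, and variable-bounds handling is omitted):

    import random

    def sbx(p1, p2, eta=2.0):
        """Simulated binary crossover for one real variable. The spread
        factor beta is drawn so that its density mimics the spread of
        single-point binary crossover; larger eta concentrates children
        nearer the parents, which is what enables self-adaptive behavior."""
        u = random.random()
        if u <= 0.5:
            beta = (2.0 * u) ** (1.0 / (eta + 1.0))
        else:
            beta = (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta + 1.0))
        c1 = 0.5 * ((1.0 + beta) * p1 + (1.0 - beta) * p2)
        c2 = 0.5 * ((1.0 - beta) * p1 + (1.0 + beta) * p2)
        return c1, c2

Because the spread of the children scales with the distance between the parents, population diversity itself controls the effective search step, giving the mutation-free self-adaptation the paper demonstrates.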
Simulating mesoscopic reaction-diffusion systems using the Gillespie algorithm.
Bernstein, David
2005-04-01
We examine an application of the Gillespie algorithm to simulating spatially inhomogeneous reaction-diffusion systems in mesoscopic volumes such as cells and microchambers. The method involves discretizing the chamber into elements and modeling the diffusion of chemical species by the movement of molecules between neighboring elements. These transitions are expressed in the form of a set of reactions which are added to the chemical system. The derivation of the rates of these diffusion reactions is by comparison with a finite volume discretization of the heat equation on an unevenly spaced grid. The diffusion coefficient of each species is allowed to be inhomogeneous in space, including discontinuities. The resulting system is solved by the Gillespie algorithm using the fast direct method. We show that in an appropriate limit the method reproduces exact solutions of the heat equation for a purely diffusive system and the nonlinear reaction-rate equation describing the cubic autocatalytic reaction. PMID:15903653
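In this construction, each pair of neighboring elements contributes two first-order "diffusion reactions"; on an even 1-D grid the finite-volume comparison gives a jump rate of D/h^2. A sketch under those simplifying assumptions (uniform spacing, single species):

    def diffusion_reactions(n_cells, D, h):
        """Turn 1-D diffusion into first-order jump 'reactions' between
        neighboring elements: on an even grid the per-molecule jump rate
        is D / h**2. Returns (stoichiometry, rate_constant) pairs over the
        vector of per-cell copy numbers."""
        k = D / h ** 2
        reactions = []
        for i in range(n_cells - 1):
            fwd = [0] * n_cells
            fwd[i], fwd[i + 1] = -1, +1      # molecule jumps from cell i to i+1
            bwd = [0] * n_cells
            bwd[i + 1], bwd[i] = -1, +1      # molecule jumps from cell i+1 to i
            reactions.append((fwd, k))
            reactions.append((bwd, k))
        return reactions

    # The propensity of a jump out of cell i is k * n_i, so these reactions
    # can be appended to the chemical system and run by the standard SSA.

Spatially varying or discontinuous diffusion coefficients, as treated in the paper, change only the per-edge rate constants, not the structure of the reaction list.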
An algorithm for protein engineering: simulations of recursive ensemble mutagenesis.
Arkin, A P; Youvan, D C
1992-01-01
An algorithm for protein engineering, termed recursive ensemble mutagenesis, has been developed to produce diverse populations of phenotypically related mutants whose members differ in amino acid sequence. This method uses a feedback mechanism to control successive rounds of combinatorial cassette mutagenesis. Starting from partially randomized "wild-type" DNA sequences, a highly parallel search of sequence space for peptides fitting an experimenter's criteria is performed. Each iteration uses information gained from the previous rounds to search the space more efficiently. Simulations of the technique indicate that, under a variety of conditions, the algorithm can rapidly produce a diverse population of proteins fitting specific criteria. In the experimental analog, genetic selection or screening applied during recursive ensemble mutagenesis should force the evolution of an ensemble of mutants to a targeted cluster of related phenotypes. PMID:1502200
Fast Particle Pair Detection Algorithms for Particle Simulations
NASA Astrophysics Data System (ADS)
Iwai, T.; Hong, C.-W.; Greil, P.
New algorithms with O(N) complexity have been developed for fast particle-pair detection in particle simulations such as the discrete element method (DEM) and molecular dynamics (MD). They exhibit robustness against broad particle size distributions when compared with conventional boxing methods. Nearly uniform calculation speeds are achieved for particle size distributions ranging from mono-sized to 1:10, whereas the linked-cell method results in calculation times more than 20 times longer. The basic algorithm, level-boxing, uses a variable search range for each particle. The advanced method, multi-level boxing, employs multiple cell layers to reduce the particle size discrepancy. Another method, indexed-level boxing, reduces the size of the cell arrays by introducing a hash procedure to access the cell array, and is effective for sparse particle systems with a large number of particles.
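For contrast with the level-boxing schemes, a minimal uniform-grid (linked-cell style) candidate-pair search in 2-D is sketched below; its dependence on a single cell size tied to the largest interaction range is exactly the weakness that broad size distributions expose.

    from collections import defaultdict

    def grid_pairs(positions, radii, cell):
        """Uniform-grid pair detection in 2-D: hash particles into square
        cells of size `cell`, then test each particle only against the 3x3
        neighborhood of its cell. Efficient when `cell` matches the largest
        particle diameter; with a broad size distribution that cell becomes
        crowded with small particles and the cost degrades."""
        grid = defaultdict(list)
        for idx, (x, y) in enumerate(positions):
            grid[(int(x // cell), int(y // cell))].append(idx)
        pairs = []
        for (cx, cy), members in grid.items():
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    for j in grid.get((cx + dx, cy + dy), []):
                        for i in members:
                            if i < j:          # count each pair once
                                xi, yi = positions[i]
                                xj, yj = positions[j]
                                if (xi - xj) ** 2 + (yi - yj) ** 2 <= (radii[i] + radii[j]) ** 2:
                                    pairs.append((i, j))
        return pairs

The level-boxing family avoids this degradation by letting the search range (and, in the multi-level variant, the cell layer) follow each particle's own size.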
NASA Astrophysics Data System (ADS)
Balin Talamba, D.; Higy, C.; Joerin, C.; Musy, A.
The paper presents an application concerning hydrological modelling for the Haute-Mentue catchment, located in western Switzerland. A simplified version of Topmodel, developed in a LabVIEW programming environment, was applied with the aim of modelling the hydrological processes on this catchment. Previous research carried out in this region outlined the importance of environmental tracers in studying the hydrological behaviour, and important knowledge has been accumulated during this period concerning the mechanisms responsible for runoff generation. In conformity with the theoretical constraints, Topmodel was applied to a Haute-Mentue sub-catchment where tracing experiments showed consistently low contributions of soil water during flood events. The model was applied for two humid periods in 1998. First, the model was calibrated to provide the best estimates of total runoff. However, the simulated components (groundwater and rapid flow) deviated far from the reality indicated by the tracing experiments. Thus, a new calibration was performed including additional information given by the environmental tracing. The calibration of the model was done by using simulated annealing (SA) techniques, which are easy to implement and statistically allow for convergence to a global minimum. The only problem is that the method is time- and computer-consuming. To improve this, a version of SA was used which is known as very fast simulated annealing (VFSA). The principles are the same as for the SA technique: the random search is guided by a certain probability distribution and the acceptance criterion is the same as for SA, but VFSA allows for better taking into account the range of variation of each parameter. Practice with Topmodel showed that the energy function has different sensitivities along different dimensions of the parameter space. The VFSA algorithm allows differentiated search in relation with the
Performance of a parallel algorithm for standard cell placement on the Intel Hypercube
NASA Technical Reports Server (NTRS)
Jones, Mark; Banerjee, Prithviraj
1987-01-01
A parallel simulated annealing algorithm for standard cell placement that is targeted to run on the Intel Hypercube is presented. A tree broadcasting strategy that is used extensively in our algorithm for updating cell locations in the parallel environment is presented. Studies on the performance of our algorithm on example industrial circuits show that it is faster and gives better final placement results than the uniprocessor simulated annealing algorithms.
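The serial kernel being parallelized is the standard Metropolis accept/reject loop over cell moves; a minimal 1-D illustration with a span-based wirelength cost follows (cost model, move set, and schedule are placeholders, and the hypercube tree broadcast is only indicated in comments).

    import math, random

    def wirelength(pos, nets):
        """Span-based cost in 1-D: each net contributes the spread of the
        slot positions of its cells (a crude half-perimeter analogue)."""
        return sum(max(pos[c] for c in net) - min(pos[c] for c in net)
                   for net in nets)

    def place_anneal(n_cells, nets, T=10.0, cooling=0.9995, n_moves=50000):
        """Serial Metropolis kernel for cell placement: swap two cells' slots.
        In the parallel algorithm, this kernel runs on every hypercube node
        and accepted cell locations are broadcast over a spanning tree."""
        pos = list(range(n_cells))                     # slot of each cell
        cost = wirelength(pos, nets)
        for _ in range(n_moves):
            a, b = random.sample(range(n_cells), 2)
            pos[a], pos[b] = pos[b], pos[a]
            new_cost = wirelength(pos, nets)
            if new_cost <= cost or random.random() < math.exp((cost - new_cost) / T):
                cost = new_cost                        # accept the swap
            else:
                pos[a], pos[b] = pos[b], pos[a]        # reject: undo the swap
            T *= cooling
        return pos, cost

    # Example: 6 cells, three 2-pin nets.
    pos, cost = place_anneal(6, nets=[[0, 3], [1, 4], [2, 5]])

The parallel difficulty the paper addresses is that concurrently accepted moves on different nodes can interact, which is why the tree broadcast of updated cell locations matters.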
Concurrent Algorithm For Particle-In-Cell Simulations
NASA Technical Reports Server (NTRS)
Liewer, Paulett C.; Decyk, Viktor K.
1990-01-01
General Concurrent Particle-in-Cell (GCPIC) algorithm used to implement particle-in-cell (PIC) computer codes on concurrent processors, with separate decompositions for the particle-motion and field calculations. PIC codes simulate motions of individual plasma particles (ions and electrons) under influence of electromagnetic fields generated by particles themselves. Used to study variety of nonlinear problems in plasma physics, including magnetic and inertial fusion, plasmas in outer space, propagation of electron and ion beams, free-electron lasers, and particle accelerators.
Direct simulation Monte Carlo method with a focal mechanism algorithm
NASA Astrophysics Data System (ADS)
Rachman, Asep Nur; Chung, Tae Woong; Yoshimoto, Kazuo; Yun, Sukyoung
2015-01-01
To simulate the observation of the radiation pattern of an earthquake, the direct simulation Monte Carlo (DSMC) method is modified by implementing a focal mechanism algorithm. We compare the results of the modified DSMC method (DSMC-2) with those of the original DSMC method (DSMC-1). DSMC-2 shows results that are as reliable as, or more reliable than, those of DSMC-1 for events with 12 or more recording stations, by weighting twice for hypocentral distances of less than 80 km. Not only the number of stations but also other factors, such as rough topography, the magnitude of the event, and the analysis method, influence the reliability of DSMC-2. The most reliable DSMC-2 results are obtained with the best azimuthal coverage by the largest number of stations. The DSMC-2 method requires shorter time steps and a larger number of particles than DSMC-1 to capture a sufficient number of arrived particles in the small-sized receiver.
Constant-complexity stochastic simulation algorithm with optimal binning
Sanft, Kevin R.; Othmer, Hans G.
2015-08-21
At the molecular level, biochemical processes are governed by random interactions between reactant molecules, and the dynamics of such systems are inherently stochastic. When the copy numbers of reactants are large, a deterministic description is adequate, but when they are small, such systems are often modeled as continuous-time Markov jump processes that can be described by the chemical master equation. Gillespie’s Stochastic Simulation Algorithm (SSA) generates exact trajectories of these systems, but the amount of computational work required for each step of the original SSA is proportional to the number of reaction channels, leading to computational complexity that scales linearly with the problem size. The original SSA is therefore inefficient for large problems, which has prompted the development of several alternative formulations with improved scaling properties. We describe an exact SSA that uses a table data structure with event time binning to achieve constant computational complexity with respect to the number of reaction channels for weakly coupled reaction networks. We present a novel adaptive binning strategy and discuss optimal algorithm parameters. We compare the computational efficiency of the algorithm to existing methods and demonstrate excellent scaling for large problems. This method is well suited for generating exact trajectories of large weakly coupled models, including those that can be described by the reaction-diffusion master equation that arises from spatially discretized reaction-diffusion processes.
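The table structure can be pictured as a calendar-queue-like container: putative event times are hashed into bins of fixed width, and each step pops the earliest event from the first nonempty bin. The sketch below conveys that flavor only; it is not the authors' data structure, and the adaptive choice of bin width that the paper describes is omitted.

    from collections import defaultdict

    class EventBins:
        """Event-time binning sketch: times are hashed into bins of width w;
        popping scans forward to the first nonempty bin. With w matched to
        the event density, the expected cost per operation is O(1),
        independent of the number of reaction channels. Assumes new events
        are never earlier than the current bin, as holds in an SSA where
        events are scheduled at or after the current time."""
        def __init__(self, w):
            self.w = w
            self.bins = defaultdict(list)
            self.current = 0                      # index of earliest live bin
        def push(self, t, event):
            self.bins[int(t / self.w)].append((t, event))
        def pop_earliest(self):
            while not self.bins.get(self.current):
                self.current += 1                 # scan to first nonempty bin
            bin_ = self.bins[self.current]
            i = min(range(len(bin_)), key=lambda k: bin_[k][0])
            return bin_.pop(i)

Only the handful of events sharing a bin are ever compared, which is where the constant per-step cost comes from; the adaptive strategy in the paper keeps bin occupancy near that sweet spot as propensities change.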
Modifications to Axially Symmetric Simulations Using New DSMC (2007) Algorithms
NASA Technical Reports Server (NTRS)
Liechty, Derek S.
2008-01-01
Several modifications aimed at improving physical accuracy are proposed for solving axially symmetric problems building on the DSMC (2007) algorithms introduced by Bird. Originally developed to solve nonequilibrium, rarefied flows, the DSMC method is now regularly used to solve complex problems over a wide range of Knudsen numbers. These new algorithms include features such as nearest neighbor collisions excluding the previous collision partners, separate collision and sampling cells, automatically adaptive variable time steps, a modified no-time counter procedure for collisions, and discontinuous and event-driven physical processes. Axially symmetric solutions require radial weighting for the simulated molecules since the molecules near the axis represent fewer real molecules than those farther away from the axis due to the difference in volume of the cells. In the present methodology, these radial weighting factors are continuous, linear functions that vary with the radial position of each simulated molecule. It is shown that how one defines the number of tentative collisions greatly influences the mean collision time near the axis. The method by which the grid is treated for axially symmetric problems also plays an important role near the axis, especially for scalar pressure. A new method to treat how the molecules are traced through the grid is proposed to alleviate the decrease in scalar pressure at the axis near the surface. Also, a modification to the duplication buffer is proposed to vary the duplicated molecular velocities while retaining the molecular kinetic energy and axially symmetric nature of the problem.
Data fusion through simulated annealing of registered range and reflectance images
Beckerman, M.; Sweeney, F.J.
1993-06-01
In this paper we present results of a study of registered range and reflectance images acquired using a prototype amplitude-modulated CW laser radar. Ranging devices such as laser radars represent new technologies which are being applied in aerospace, nuclear and other hazardous environments where remote inspections, 3D identifications and measurements are required. However, data acquired using devices of this type may contain non-stationary, signal-dependent noise, range-reflectance crosstalk and low-reflectance range artifacts. Low level fusion algorithms play an essential role in achieving reliable performance by handling the complex noise, systematic errors and artifacts. The objective of our study is the development of a stochastic fusion algorithm which takes as its input the registered image pair and produces as its output a reliable description of the underlying physical scene in terms of locally smooth surfaces separated by well-defined depth discontinuities. To construct the algorithm we model each image as a set of coupled Markov random fields representing pixel and several orders of line processes. Within this framework we (i) impose local smoothness constraints, introducing a simple linearity property in place of the usual sums over clique potentials; (ii) fuse the range and reflectance images through line process couplings, and (iii) use nonstationary, signal-dependent variances, adaptive thresholding, and a form of Markov natural selection. We show that the resulting algorithm yields reliable results even in worst-case scenarios.
NASA Astrophysics Data System (ADS)
Bagheri Tolabi, Hajar; Hosseini, Rahil; Shakarami, Mahmoud Reza
2016-06-01
This article presents a novel hybrid optimization approach for a nonlinear controller of a distribution static compensator (DSTATCOM). The DSTATCOM is connected to a distribution system with the distributed generation units. The nonlinear control is based on partial feedback linearization. Two proportional-integral-derivative (PID) controllers regulate the voltage and track the output in this control system. In the conventional scheme, the trial-and-error method is used to determine the PID controller coefficients. This article uses a combination of a fuzzy system, simulated annealing (SA) and intelligent water drops (IWD) algorithms to optimize the parameters of the controllers. The obtained results reveal that the response of the optimized controlled system is effectively improved by finding a high-quality solution. The results confirm that using the tuning method based on the fuzzy-SA-IWD can significantly decrease the settling and rising times, the maximum overshoot and the steady-state error of the voltage step response of the DSTATCOM. The proposed hybrid tuning method for the partial feedback linearizing (PFL) controller achieved better regulation of the direct current voltage for the capacitor within the DSTATCOM. Furthermore, in the event of a fault the proposed controller tuned by the fuzzy-SA-IWD method showed better performance than the conventional controller or the PFL controller without optimization by the fuzzy-SA-IWD method with regard to both fault duration and clearing times.
Verifying Algorithms for Autonomous Aircraft by Simulation Generalities and Example
NASA Technical Reports Server (NTRS)
White, Allan L.
2010-01-01
An open question in Air Traffic Management is what procedures can be validated by simulation where the simulation shows that the probability of undesirable events is below the required level at some confidence level. The problem is including enough realism to be convincing while retaining enough efficiency to run the large number of trials needed for high confidence. The paper first examines the probabilistic interpretation of a typical requirement by a regulatory agency and computes the number of trials needed to establish the requirement at an equivalent confidence level. Since any simulation is likely to consider only one type of event and there are several types of events, the paper examines under what conditions this separate consideration is valid. The paper establishes a separation algorithm at the required confidence level where the aircraft operates under feedback control and is subject to perturbations. There is a discussion where it is shown that a scenario three or four orders of magnitude more complex is feasible. The question of what can be validated by simulation remains open, but there is reason to be optimistic.
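A standard way to size such a campaign: if n independent trials produce zero undesirable events, the event probability can be claimed below p_max at confidence C once (1 - p_max)^n <= 1 - C. The sketch below computes this bound; it is an assumption about the general setup, not the paper's exact formulation.

```python
import math

def trials_for_zero_failures(p_max, confidence):
    """Smallest number of independent trials with zero observed failures
    needed to claim, at the given confidence, that the true failure
    probability is below p_max: solve (1 - p_max)**n <= 1 - confidence."""
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - p_max))

# Validating a 1e-9 requirement at 95% confidence needs ~3e9 trials,
# matching the small-p "rule of three" approximation n ~ 3 / p_max.
print(trials_for_zero_failures(1e-9, 0.95))
```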
On constructing optimistic simulation algorithms for the discrete event system specification
Nutaro, James J
2008-01-01
This article describes a Time Warp simulation algorithm for discrete event models that are described in terms of the Discrete Event System Specification (DEVS). The article shows how the total state transition and total output function of a DEVS atomic model can be transformed into an event processing procedure for a logical process. A specific Time Warp algorithm is constructed around this logical process, and it is shown that the algorithm correctly simulates a DEVS coupled model that consists entirely of interacting atomic models. The simulation algorithm is presented abstractly; it is intended to provide a basis for implementing efficient and scalable parallel algorithms that correctly simulate DEVS models.
Jacob, Dayee; Raben, Adam; Sarkar, Abhirup; Grimm, Jimm; Simpson, Larry
2008-11-01
Purpose: To perform an independent validation of an anatomy-based inverse planning simulated annealing (IPSA) algorithm in obtaining superior target coverage and reducing the dose to the organs at risk. Method and Materials: In a recent prostate high-dose-rate brachytherapy protocol study by the Radiation Therapy Oncology Group (0321), our institution treated 20 patients between June 1, 2005 and November 30, 2006. These patients had received a high-dose-rate boost dose of 19 Gy to the prostate, in addition to an external beam radiotherapy dose of 45 Gy with intensity-modulated radiotherapy. Three-dimensional dosimetry was obtained for the following optimization schemes in the Plato Brachytherapy Planning System, version 14.3.2, using the same dose constraints for all the patients treated during this period: anatomy-based IPSA optimization, geometric optimization, and dose point optimization. Dose-volume histograms were generated for the planning target volume and organs at risk for each optimization method, from which the volume receiving at least 75% of the dose (V75%) for the rectum and bladder, volume receiving at least 125% of the dose (V125%) for the urethra, and total volume receiving the reference dose (V100%) and volume receiving 150% of the dose (V150%) for the planning target volume were determined. The dose homogeneity index and conformal index for the planning target volume for each optimization technique were compared. Results: Despite suboptimal needle position in some implants, the IPSA algorithm was able to comply with the tight Radiation Therapy Oncology Group dose constraints for 90% of the patients in this study. In contrast, the compliance was only 30% for dose point optimization and only 5% for geometric optimization. Conclusions: Anatomy-based IPSA optimization proved to be the superior technique and also the fastest for reducing the dose to the organs at risk without compromising the target coverage.
Li, Yang; Li, JiaHao; Liu, BaiXin
2015-10-28
Nucleation is one of the most essential transformation paths in phase transition and exerts a significant influence on the crystallization process. Molecular dynamics simulations were performed to investigate the atomic-scale nucleation mechanisms of NiTi metallic glasses upon devitrification at various temperatures (700 K, 750 K, 800 K, and 850 K). Our simulations reveal that at 700 K and 750 K, nucleation is polynuclear with high nucleation density, while at 800 K it is mononuclear. The underlying nucleation mechanisms have been clarified, manifesting that nucleation can be induced either by the initial ordered clusters (IOCs) or by the other precursors of nuclei evolved directly from the supercooled liquid. IOCs and other precursors stem from the thermal fluctuations of bond orientational order in supercooled liquids during the quenching process and during the annealing process, respectively. The simulation results not only elucidate the underlying nucleation mechanisms varied with temperature, but also unveil the origin of nucleation. These discoveries offer new insights into the devitrification mechanism of metallic glasses. PMID:26414845
Algorithm design for a gun simulator based on image processing
NASA Astrophysics Data System (ADS)
Liu, Yu; Wei, Ping; Ke, Jun
2015-08-01
In this paper, an algorithm is designed for shooting games under strong background light. Six LEDs are uniformly distributed on the edge of a game machine screen. They are located at the four corners and in the middle of the top and the bottom edges. Three LEDs are lit in the odd frames, and the other three are lit in the even frames. A simulator is furnished with one camera, which is used to obtain the image of the LEDs by applying inter-frame differencing between the even and odd frames. In the resulting images, the six LEDs appear as six bright spots. To obtain the LEDs' coordinates rapidly, we propose a method based on the area of the bright spots. After calibrating the camera based on a pinhole model, four equations can be found using the relationship between the image coordinate system and the world coordinate system with perspective transformation. The center point of the image of the LEDs is taken to be the virtual shooting point. The perspective transformation matrix is applied to the coordinate of the center point. Then we can obtain the virtual shooting point's coordinate in the world coordinate system. When a game player shoots a target about two meters away, using the method discussed in this paper, the calculated coordinate error is less than ten mm. We can obtain 65 coordinate results per second, which meets the requirement of a real-time system. This proves the algorithm is reliable and effective.
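The inter-frame differencing and area-based spot detection can be sketched with plain numpy as below; the names, threshold, and area bound are illustrative, and the calibration and perspective-transform steps of the paper are not reproduced.

```python
import numpy as np

def led_centroids(odd_frame, even_frame, threshold=50, min_area=4):
    """Find bright LED spots by inter-frame differencing, then return the
    centroid (x, y) of each connected bright region via flood fill."""
    diff = np.abs(odd_frame.astype(np.int32) - even_frame.astype(np.int32))
    mask = diff > threshold
    visited = np.zeros_like(mask, dtype=bool)
    rows, cols = mask.shape
    centroids = []
    for r in range(rows):
        for c in range(cols):
            if mask[r, c] and not visited[r, c]:
                stack, pixels = [(r, c)], []      # flood-fill one spot
                visited[r, c] = True
                while stack:
                    y, x = stack.pop()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny, nx] and not visited[ny, nx]):
                            visited[ny, nx] = True
                            stack.append((ny, nx))
                if len(pixels) >= min_area:       # area test rejects specks
                    ys, xs = zip(*pixels)
                    centroids.append((sum(xs) / len(xs), sum(ys) / len(ys)))
    return centroids
```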
An algorithm to build mock galaxy catalogues using MICE simulations
NASA Astrophysics Data System (ADS)
Carretero, J.; Castander, F. J.; Gaztañaga, E.; Crocce, M.; Fosalba, P.
2015-02-01
We present a method to build mock galaxy catalogues starting from a halo catalogue that uses halo occupation distribution (HOD) recipes as well as the subhalo abundance matching (SHAM) technique. Combining both prescriptions we are able to push the absolute magnitude of the resulting catalogue to fainter luminosities than using just the SHAM technique and can interpret our results in terms of the HOD modelling. We optimize the method by populating friends-of-friends dark matter haloes extracted from the Marenostrum Institut de Ciències de l'Espai dark matter simulations with galaxies and comparing the resulting catalogues to observational constraints. Our resulting mock galaxy catalogues manage to reproduce the observed local galaxy luminosity function and the colour-magnitude distribution as observed by the Sloan Digital Sky Survey. They also reproduce the observed galaxy clustering properties as a function of luminosity and colour. In order to achieve that, the algorithm also includes scatter in the halo mass-galaxy luminosity relation derived from direct SHAM and a modified Navarro-Frenk-White mass density profile to place satellite galaxies in their host dark matter haloes. Improving on general usage of the HOD that fits the clustering for given magnitude limited samples, our catalogues are constructed to fit observations at all luminosities considered and therefore for any luminosity subsample. Overall, our algorithm is an economic procedure for obtaining galaxy mock catalogues down to faint magnitudes that are necessary to understand and interpret galaxy surveys.
Simulating Future GPS Clock Scenarios with Two Composite Clock Algorithms
NASA Technical Reports Server (NTRS)
Suess, Matthias; Matsakis, Demetrios; Greenhall, Charles A.
2010-01-01
Using the GPS Toolkit, the GPS constellation is simulated using 31 satellites (SV) and a ground network of 17 monitor stations (MS). At every 15-minute measurement epoch, the monitor stations measure the time signals of all satellites above a parameterized elevation angle. Once a day, the station and satellite clocks are estimated. The first composite clock (B) is based on the Brown algorithm and is now used by GPS. The second one (G) is based on the Greenhall algorithm. The performance of the B and G composite clocks is investigated using three ground-clock models. Model C simulates the current GPS configuration, in which all stations are equipped with cesium clocks, except for masers at the USNO and Alternate Master Clock (AMC) sites. Model M is an improved situation in which every station is equipped with active hydrogen masers. Finally, Models F and O are future scenarios in which the USNO and AMC stations are equipped with fountain clocks instead of masers. Model F uses a rubidium fountain, while Model O uses a more precise but futuristic optical fountain. Each model is evaluated using three performance metrics. The timing-related user range error with all satellites available is the first performance index (PI1). The second performance index (PI2) relates to the stability of the broadcast GPS system time itself. The third performance index (PI3) evaluates the stability of the time scales computed by the two composite clocks. A distinction is made between the "Signal-in-Space" accuracy and that available through a GNSS receiver.
Simulations and measurements of annealed pyrolytic graphite-metal composite baseplates
NASA Astrophysics Data System (ADS)
Streb, F.; Ruhl, G.; Schubert, A.; Zeidler, H.; Penzel, M.; Flemmig, S.; Todaro, I.; Squatrito, R.; Lampke, T.
2016-03-01
We investigated the usability of anisotropic materials as inserts in aluminum-matrix-composite baseplates for typical high performance power semiconductor modules using finite-element simulations and transient plane source measurements. For simulations, several physical modules can be used, which are suitable for different thermal boundary conditions. By comparing different modules and options of heat transfer we found non-isothermal simulations to be closest to reality for temperature distribution at the surface of the heat sink. We optimized the geometry of the graphite inserts for best heat dissipation and based on these results evaluated the thermal resistance of a typical power module using calculation time optimized steady-state simulations. Here we investigated the influence of thermal contact conductance (TCC) between metal matrix and inserts on the heat dissipation. We found improved heat dissipation compared to the plain metal baseplate for a TCC of 200 kW/m²/K and above. To verify the simulations we evaluated cast composite baseplates with two different insert geometries and measured their averaged lateral thermal conductivity using a transient plane source (HotDisk) technique at room temperature. For the composite baseplate we achieved local improvements in heat dissipation compared to the plain metal baseplate.
A graph-based watershed merging using fuzzy C-means and simulated annealing for image segmentation
NASA Astrophysics Data System (ADS)
Vadiveloo, Mogana; Abdullah, Rosni; Rajeswari, Mandava
2015-12-01
In this paper, we have addressed the issue of over-segmented regions produced by watershed segmentation by merging the regions using a global feature. The global feature information is obtained by clustering the image in its feature space using Fuzzy C-Means (FCM) clustering. The over-segmented regions produced by performing watershed on the gradient of the image are then mapped to this global information in the feature space. Further, the global feature information is optimized using Simulated Annealing (SA). The optimal global feature information is used to derive the similarity criterion to merge the over-segmented watershed regions, which are represented by a region adjacency graph (RAG). The proposed method has been tested on a digital brain phantom simulated dataset to segment white matter (WM), gray matter (GM) and cerebrospinal fluid (CSF) soft tissue regions. The experiments showed that the proposed method performs statistically better than immersion watershed, merging an average of 95.242% of the regions, and yields an average accuracy improvement of 8.850% in comparison with RAG-based immersion watershed merging using global and local features.
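For orientation, the FCM step that produces this kind of global feature information assigns each sample a fuzzy membership in every cluster. A self-contained numpy sketch follows; the fuzzifier m = 2 and the numerical guard are illustrative choices, and the SA optimization and RAG merging of the paper are not shown.

```python
import numpy as np

def fcm_memberships(data, centers, m=2.0):
    """One fuzzy C-means membership update:
    u[i, k] = 1 / sum_j (d_ik / d_ij) ** (2 / (m - 1)),
    where d_ik is the distance from sample i to cluster center k."""
    d = np.linalg.norm(data[:, None, :] - centers[None, :, :], axis=2)
    d = np.maximum(d, 1e-12)               # guard against zero distances
    ratio = (d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))
    return 1.0 / ratio.sum(axis=2)         # rows sum to 1 across clusters
```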
Plenoptic camera image simulation for reconstruction algorithm verification
NASA Astrophysics Data System (ADS)
Schwiegerling, Jim
2014-09-01
Plenoptic cameras have emerged in recent years as a technology for capturing light field data in a single snapshot. A conventional digital camera can be modified with the addition of a lenslet array to create a plenoptic camera. Two distinct camera forms have been proposed in the literature. The first has the camera image focused onto the lenslet array. The lenslet array is placed over the camera sensor such that each lenslet forms an image of the exit pupil onto the sensor. The second plenoptic form has the lenslet array relaying the image formed by the camera lens to the sensor. We have developed a raytracing package that can simulate images formed by a generalized version of the plenoptic camera. Several rays from each sensor pixel are traced backwards through the system to define a cone of rays emanating from the entrance pupil of the camera lens. Objects that lie within this cone are integrated to lead to a color and exposure level for that pixel. To speed processing, three-dimensional objects are approximated as a series of planes at different depths. Repeating this process for each pixel in the sensor leads to a simulated plenoptic image on which different reconstruction algorithms can be tested.
Robotic space simulation integration of vision algorithms into an orbital operations simulation
NASA Technical Reports Server (NTRS)
Bochsler, Daniel C.
1987-01-01
In order to successfully plan and analyze future space activities, computer-based simulations of activities in low earth orbit will be required to model and integrate vision and robotic operations with vehicle dynamics and proximity operations procedures. The orbital operations simulation (OOS) is configured and enhanced as a testbed for robotic space operations. Vision integration algorithms are being developed in three areas: preprocessing, recognition, and attitude/attitude rates. The vision program (Rice University) was modified for use in the OOS. Systems integration testing is now in progress.
NASA Astrophysics Data System (ADS)
Behrman, Elizabeth; Steck, James
We propose and develop a new quantum algorithm, whereby a quantum system can learn to anneal to a desired ground state. We demonstrate successful learning of entanglement for a two-qubit system, then bootstrap to larger systems. We also show that the method is robust to noise and decoherence.
Understanding disordered systems through numerical simulation and algorithm development
NASA Astrophysics Data System (ADS)
Sweeney, Sean Michael
Disordered systems arise in many physical contexts. Not all matter is uniform, and impurities or heterogeneities can be modeled by fixed random disorder. Numerous complex networks also possess fixed disorder, leading to applications in transportation systems, telecommunications, social networks, and epidemic modeling, to name a few. Due to their random nature and power law critical behavior, disordered systems are difficult to study analytically. Numerical simulation can help overcome this hurdle by allowing for the rapid computation of system states. In order to get precise statistics and extrapolate to the thermodynamic limit, large systems must be studied over many realizations. Thus, innovative algorithm development is essential in order to reduce the memory or running time requirements of simulations. This thesis presents a review of disordered systems, as well as a thorough study of two particular systems through numerical simulation, algorithm development and optimization, and careful statistical analysis of scaling properties. Chapter 1 provides a thorough overview of disordered systems, the history of their study in the physics community, and the development of techniques used to study them. Topics of quenched disorder, phase transitions, the renormalization group, criticality, and scale invariance are discussed. Several prominent models of disordered systems are also explained. Lastly, analysis techniques used in studying disordered systems are covered. In Chapter 2, minimal spanning trees on critical percolation clusters are studied, motivated in part by an analytic perturbation expansion by Jackson and Read that I check against numerical calculations. This system has a direct mapping to the ground state of the strongly disordered spin glass. We compute the path length fractal dimension of these trees in dimensions d = {2, 3, 4, 5} and find our results to be compatible with the analytic results suggested by Jackson and Read. In Chapter 3, the random bond Ising
Setyawan, Wahyu; Nandipati, Giridhar; Roche, Kenneth J.; Heinisch, Howard L.; Wirth, Brian D.; Kurtz, Richard J.
2015-07-01
Molecular dynamics simulations have been used to generate a comprehensive database of surviving defects due to displacement cascades in bulk tungsten. Twenty-one data points of primary knock-on atom (PKA) energies ranging from 100 eV (sub-threshold energy) to 100 keV (~780×E_d, where E_d = 128 eV is the average displacement threshold energy) have been completed at 300 K, 1025 K and 2050 K. Within this range of PKA energies, two regimes of power-law energy-dependence of the defect production are observed. A distinct power-law exponent characterizes the number of Frenkel pairs produced within each regime. The two regimes intersect at a transition energy which occurs at approximately 250×E_d. The transition energy also marks the onset of the formation of large self-interstitial atom (SIA) clusters (size 14 or more). The observed defect clustering behavior is asymmetric, with SIA clustering increasing with temperature, while the vacancy clustering decreases. This asymmetry increases with temperature such that at 2050 K (~0.5 T_m) practically no large vacancy clusters are formed, meanwhile large SIA clusters appear in all simulations. The implication of such asymmetry on the long-term defect survival and damage accumulation is discussed. In addition, <100>{110} SIA loops are observed to form directly in the highest energy cascades, while vacancy <100> loops are observed to form at the lowest temperature and highest PKA energies, although the appearance of both the vacancy and SIA loops with Burgers vector of <100> type is relatively rare.
Holak, T.A.; Gronenborn, A.M.; Clore, G.M.; Engstroem, A.; Kraulis, P.J.; Lindeberg, G.; Bennich, H.; Jones, T.A.
1988-10-04
The solution conformation of the antibacterial polypeptide cecropin A from the Cecropia moth is investigated by nuclear magnetic resonance (NMR) spectroscopy under conditions where it adopts a fully ordered structure, as judged by previous circular dichroism studies. By use of a combination of two-dimensional NMR techniques the ¹H NMR spectrum of cecropin A is completely assigned. A set of 243 approximate interproton distance restraints is derived from nuclear Overhauser enhancement (NOE) measurements. These, together with 32 restraints for the 16 intrahelical hydrogen bonds identified on the basis of the pattern of short-range NOEs, form the basis of a three-dimensional structure determination by dynamical simulated annealing. The calculations are carried out starting from three initial structures, an α-helix, an extended β-strand, and a mixed α/β structure. Seven independent structures are computed from each starting structure by using different random number seeds for the assignment of the initial velocities. Analysis of the 21 converged structures indicates that there are two helical regions, extending from residues 5 to 21 and from residues 24 to 37, which are very well defined in terms of both atomic root mean square differences and backbone torsion angles. The long axes of the two helices lie in two planes which are at an angle of 70-100° to each other. The orientation of the helices within these planes, however, cannot be determined due to the paucity of NOEs between the two helices.
Han, Zong-wei; Huang, Wei; Luo, Yun; Zhang, Chun-di; Qi, Da-cheng
2015-03-01
Taking the soil organic matter in eastern Zhongxiang County, Hubei Province, as the research object, thirteen sample sets from different regions were arranged around the road network, and their spatial configuration was optimized by a simulated annealing approach. The topographic factors of these thirteen sample sets, including slope, plane curvature, profile curvature, topographic wetness index, stream power index and sediment transport index, were extracted by terrain analysis. Based on the results of the optimization, a multiple linear regression model with the topographic factors as independent variables was built. At the same time, a multilayer perceptron model based on the neural network approach was implemented. A comparison between these two models was then carried out. The results revealed that the proposed approach was practicable for optimizing a soil sampling scheme. The optimal configuration was capable of capturing soil-landscape knowledge exactly, and its accuracy was better than that of the original samples. This study designed a sampling configuration to study the soil attribute distribution by referring to the spatial layout of the road network, historical samples, and digital elevation data, which provides an effective means as well as a theoretical basis for determining a sampling configuration and displaying the spatial distribution of soil organic matter with low cost and high efficiency. PMID:26211074
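A multiple linear regression of the kind described, with the six topographic factors as predictors, reduces to ordinary least squares. The sketch below uses randomly generated stand-in data, since the study's measurements are not available here.

```python
import numpy as np

def fit_terrain_regression(X, y):
    """Fit soil organic matter (y) against topographic factors (columns of
    X, e.g. slope, curvatures, wetness index, stream power index, sediment
    transport index) by ordinary least squares with an intercept."""
    A = np.column_stack([np.ones(len(X)), X])    # prepend intercept column
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coeffs                                # [intercept, b1, ..., b6]

rng = np.random.default_rng(0)
X = rng.normal(size=(13, 6))                     # 13 sample sets, 6 factors
y = X @ rng.normal(size=6) + rng.normal(scale=0.1, size=13)
print(fit_terrain_regression(X, y))
```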
NASA Technical Reports Server (NTRS)
Farhat, Nabil H.
1987-01-01
Self-organization and learning are distinctive features of neural nets and processors that set them apart from conventional approaches to signal processing. They lead to self-programmability, which alleviates the problem of programming complexity in artificial neural nets. In this paper, architectures for partitioning an optoelectronic analog of a neural net into distinct layers with a prescribed interconnectivity pattern to enable stochastic learning by simulated annealing in the context of a Boltzmann machine are presented. Stochastic learning is of interest because of its relevance to the role of noise in biological neural nets. Practical considerations and methodologies for appreciably accelerating stochastic learning in such a multilayered net are described. These include the use of parallel optical computing of the global energy of the net, the use of fast nonvolatile programmable spatial light modulators to realize fast plasticity, optical generation of random number arrays, and an adaptive noisy thresholding scheme that also makes stochastic learning more biologically plausible. The findings reported predict optoelectronic chips that can be used in the realization of optical learning machines.
NASA Astrophysics Data System (ADS)
Dafflon, B.; Barrash, W.
2012-05-01
Estimation of the three-dimensional (3-D) distribution of hydrologic properties and related uncertainty is key to improved predictions of hydrologic processes in the subsurface. However, it is difficult to gain high-quality and high-density hydrologic information from the subsurface. In this regard a promising strategy is to use high-resolution geophysical data (that are relatively sensitive to variations of a hydrologic parameter of interest) to supplement direct hydrologic information from measurements in wells (e.g., logs, vertical profiles) and then generate stochastic simulations of the distribution of the hydrologic property conditioned on the hydrologic and geophysical data. In this study we develop and apply this strategy for a 3-D field experiment in the heterogeneous aquifer at the Boise Hydrogeophysical Research Site, and we evaluate how much benefit the geophysical data provide. We run high-resolution 3-D conditional simulations of porosity with both simulated-annealing-based and Bayesian sequential approaches using information from multiple intersecting crosshole ground-penetrating radar (GPR) velocity tomograms and neutron porosity logs. The benefit of using GPR data is assessed by investigating their ability, when included in conditional simulation, to predict porosity log data withheld from the simulation. Results show that the use of crosshole GPR data can significantly improve the estimation of the porosity spatial distribution and reduce the associated uncertainty compared to using only well log measurements for the estimation. The amount of benefit depends primarily on the strength of the petrophysical relation between the GPR and porosity data, the variability of this relation throughout the investigated site, and lateral structural continuity at the site.
Material growth in thermoelastic continua: Theory, algorithmics, and simulation
NASA Astrophysics Data System (ADS)
Vignes, Chet Monroe
Within the medical community, there has been increasing interest in understanding material growth in biomaterials. Material growth is the capability of a biomaterial to gain or lose mass. This research interest is driven by the host of health implications and medical problems related to this unique biomaterial property. Health providers are keen to understand the role of growth in healing and recovery so that surgical techniques, medical procedures, and physical therapy may be designed and implemented to stimulate healing and minimize recovery time. With this motivation, research seeks to identify and model mechanisms of material growth as well as growth-inducing factors in biomaterials. To this end, a theoretical formulation of stress-induced volumetric material growth in thermoelastic continua is developed. The theory derives, without the classical continuum mechanics assumption of mass conservation, the balance laws governing the mechanics of solids capable of growth. Also, a proposed extension of classical thermodynamic theory provides a foundation for developing general constitutive relations. The theory is consistent in the sense that classical thermoelastic continuum theory is embedded as a special case. Two growth mechanisms, a kinematic and a constitutive contribution, coupled in the most general case of growth, are identified. This identification allows for the commonly employed special cases of density-preserving growth and volume-preserving growth to be easily recovered. In the theory, material growth is regulated by a three-surface activation criterion and corresponding flow rules. A simple model for rate-independent finite growth is proposed based on this formulation. The associated algorithmic implementation, including a method for solving the underlying differential/algebraic equations for growth, is examined in the context of an implicit finite element method. Selected numerical simulations are presented that showcase the predictive capacity of the
A parallel algorithm for switch-level timing simulation on a hypercube multiprocessor
NASA Technical Reports Server (NTRS)
Rao, Hariprasad Nannapaneni
1989-01-01
The parallel approach to speeding up simulation is studied, specifically the simulation of digital LSI MOS circuitry on the Intel iPSC/2 hypercube. The simulation algorithm is based on RSIM, an event driven switch-level simulator that incorporates a linear transistor model for simulating digital MOS circuits. Parallel processing techniques based on the concepts of Virtual Time and rollback are utilized so that portions of the circuit may be simulated on separate processors, in parallel for as large an increase in speed as possible. A partitioning algorithm is also developed in order to subdivide the circuit for parallel processing.
Li, Xiang; Zhang, Pengpeng; Brisman, Ronald; Kutcher, Gerald
2005-07-01
Studies suggest that clinical outcomes are improved in repeat trigeminal neuralgia (TN) Gamma Knife radiosurgery if a different part of the nerve from the previous radiosurgery is treated. The MR images taken in the first and repeat radiosurgery need to be coregistered to map the first radiosurgery volume onto the second treatment planning image. We propose a fully automatic and robust three-dimensional (3-D) mutual information- (MI-) based registration method engineered by a simulated annealing (SA) optimization technique. Commonly, Powell's method and the Downhill simplex (DS) method are the most popular for optimizing the MI objective function in medical image registration applications. However, due to the nonconvex property of the MI function, the robustness of those two methods is questionable, especially for our cases, where only 28 slices of MR T1 images were utilized. Our SA method obtained successful registration results for all the 41 patients recruited in this study. On the other hand, Powell's method and the DS method failed to provide satisfactory registration for 11 patients and 9 patients, respectively. The overlapping volume ratio (OVR) is defined to quantify the degree of the partial volume overlap between the first and second MR scan. Statistical results from a logistic regression procedure demonstrated that the probability of a success of Powell's method tends to decrease as OVR decreases. Rigid registration with Powell's or the DS method is not suitable for the TN radiosurgery application, where OVR is likely to be low. In summary, our experimental results demonstrated that the MI-based registration method with the SA optimization technique is a robust and reliable option when the number of slices in the imaging study is limited. PMID:16121594
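Mutual information between two registered images is commonly estimated from their joint intensity histogram, and the SA optimizer then searches the transform parameters that maximize it. A minimal numpy sketch of the MI estimate follows; the bin count is an arbitrary choice and this is not the authors' implementation.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Estimate MI between two registered images from the joint histogram:
    MI = sum_xy p(x, y) * log(p(x, y) / (p(x) * p(y)))."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)        # marginal over columns
    py = pxy.sum(axis=0, keepdims=True)        # marginal over rows
    nz = pxy > 0                               # avoid log(0) terms
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```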
Developments in Human Centered Cueing Algorithms for Control of Flight Simulator Motion Systems
NASA Technical Reports Server (NTRS)
Houck, Jacob A.; Telban, Robert J.; Cardullo, Frank M.
1997-01-01
The authors conducted further research with cueing algorithms for control of flight simulator motion systems. A variation of the so-called optimal algorithm was formulated using simulated aircraft angular velocity input as a basis. Models of the human vestibular sensation system, i.e. the semicircular canals and otoliths, are incorporated within the algorithm. Comparisons of angular velocity cueing responses showed a significant improvement over a formulation using angular acceleration input. Results also compared favorably with the coordinated adaptive washout algorithm, yielding similar results for angular velocity cues while eliminating false cues and reducing the tilt rate for longitudinal cues. These results were confirmed in piloted tests on the current motion system at NASA-Langley, the Visual Motion Simulator (VMS). Proposed future developments by the authors in cueing algorithms are revealed. The new motion system, the Cockpit Motion Facility (CMF), where the final evaluation of the cueing algorithms will be conducted, is also described.
D-leaping: Accelerating stochastic simulation algorithms for reactions with delays
Bayati, Basil; Chatelain, Philippe; Koumoutsakos, Petros
2009-09-01
We propose a novel, accelerated algorithm for the approximate stochastic simulation of biochemical systems with delays. The present work extends existing accelerated algorithms by distributing, in a time adaptive fashion, the delayed reactions so as to minimize the computational effort while preserving their accuracy. The accuracy of the present algorithm is assessed by comparing its results to those of the corresponding delay differential equations for a representative biochemical system. In addition, the fluctuations produced from the present algorithm are comparable to those from an exact stochastic simulation with delays. The algorithm is used to simulate biochemical systems that model oscillatory gene expression. The results indicate that the present algorithm is competitive with existing works for several benchmark problems while it is orders of magnitude faster for certain systems of biochemical reactions.
Enhanced quasi-static PIC simulation with pipelining algorithm for e-cloud instability
NASA Astrophysics Data System (ADS)
Feng, Bing; Huang, Chengkun; Decyk, Viktor; Mori, Warren; Muggli, Patric; Katsouleas, Tom
2008-11-01
Simulating the electron cloud effect on a beam that circulates thousands of turns in circular machines is highly computationally demanding. A novel algorithm, the pipelining algorithm, is applied to the fully parallelized quasi-static particle-in-cell code QuickPIC to overcome the limit on the maximum number of processors that can be used for each time step. The pipelining algorithm divides the processors into subgroups; each subgroup focuses on a different partition of the beam and performs its calculation in series. With this novel algorithm, the accuracy of the simulation is preserved, and the speed of the simulation is improved by one order of magnitude when more than 10^2 processors are used. Long term simulation results for the CERN-LHC and the Main Injector at FNAL from QuickPIC with the pipelining algorithm are presented. This work is supported by SciDAC and the US Department of Energy.
NASA Technical Reports Server (NTRS)
Krosel, S. M.; Milner, E. J.
1982-01-01
The application of predictor-corrector integration algorithms developed for the digital parallel processing environment is investigated. The algorithms are implemented and evaluated through the use of a software simulator which provides an approximate representation of the parallel processing hardware. Test cases which focus on the use of the algorithms are presented, and a specific application using a linear model of a turbofan engine is considered. Results are presented showing the effects of integration step size and the number of processors on simulation accuracy. Real time performance, interprocessor communication, and algorithm startup are also discussed.
A fast and efficient algorithm for Slater determinant updates in quantum Monte Carlo simulations
Nukala, Phani K. V. V.; Kent, P. R. C.
2009-05-28
We present an efficient low-rank updating algorithm for updating the trial wave functions used in quantum Monte Carlo (QMC) simulations. The algorithm is based on low-rank updating of the Slater determinants. In particular, the computational complexity of the algorithm is O(kN) during the kth step compared to traditional algorithms that require O(N²) computations, where N is the system size. For single determinant trial wave functions the new algorithm is faster than the traditional O(N²) Sherman-Morrison algorithm for up to O(N) updates. For multideterminant configuration-interaction-type trial wave functions of M+1 determinants, the new algorithm is significantly more efficient, saving both O(MN²) work and O(MN²) storage. The algorithm enables more accurate and significantly more efficient QMC calculations using configuration-interaction-type wave functions.
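The traditional baseline referred to here is the Sherman-Morrison rank-1 update, (A + uv^T)^{-1} = A^{-1} - (A^{-1}u v^T A^{-1}) / (1 + v^T A^{-1}u), which refreshes an N x N inverse in O(N²) instead of O(N³). A small numpy illustration of that classical identity follows; the paper's O(kN) low-rank scheme itself is not reproduced.

```python
import numpy as np

def sherman_morrison_update(A_inv, u, v):
    """Return (A + u v^T)^{-1} given A^{-1}, in O(N^2) operations."""
    Au = A_inv @ u
    vA = v @ A_inv
    denom = 1.0 + v @ Au       # must be nonzero for the update to exist
    return A_inv - np.outer(Au, vA) / denom

# check against a direct O(N^3) inversion on a well-conditioned matrix
rng = np.random.default_rng(1)
N = 5
A = rng.normal(size=(N, N)) + N * np.eye(N)
u, v = rng.normal(size=N), rng.normal(size=N)
assert np.allclose(sherman_morrison_update(np.linalg.inv(A), u, v),
                   np.linalg.inv(A + np.outer(u, v)))
```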
Fast combinatorial optimization using generalized deterministic annealing
NASA Astrophysics Data System (ADS)
Acton, Scott T.; Ghosh, Joydeep; Bovik, Alan C.
1993-08-01
Generalized Deterministic Annealing (GDA) is a useful new tool for computing fast multi-state combinatorial optimization of difficult non-convex problems. By estimating the stationary distribution of simulated annealing (SA), GDA yields equivalent solutions to practical SA algorithms while providing a significant speed improvement. Using the standard GDA, the computational time of SA may be reduced by an order of magnitude, and, with a new implementation improvement, Windowed GDA, the time improvements reach two orders of magnitude with a trivial compromise in solution quality. The fast optimization of GDA has enabled expeditious computation of complex nonlinear image enhancement paradigms, such as the Piecewise Constant (PICO) regression examples used in this paper. To validate our analytical results, we apply GDA to the PICO regression problem and compare the results to other optimization methods. Several full image examples are provided that show successful PICO image enhancement using GDA in the presence of both Laplacian and Gaussian additive noise.
Synchronization Algorithms for Co-Simulation of Power Grid and Communication Networks
Ciraci, Selim; Daily, Jeffrey A.; Agarwal, Khushbu; Fuller, Jason C.; Marinovici, Laurentiu D.; Fisher, Andrew R.
2014-09-11
The ongoing modernization of power grids consists of integrating them with communication networks in order to achieve robust and resilient control of grid operations. To understand the operation of the new smart grid, one approach is to use simulation software. Unfortunately, current power grid simulators at best utilize inadequate approximations to simulate communication networks, if at all. Cooperative simulation of specialized power grid and communication network simulators promises to more accurately reproduce the interactions of real smart grid deployments. However, co-simulation is a challenging problem. A co-simulation must manage the exchange of information, including the synchronization of simulator clocks, between all simulators while maintaining adequate computational performance. This paper describes two new conservative algorithms for reducing the overhead of time synchronization, namely Active Set Conservative and Reactive Conservative. We provide a detailed analysis of their performance characteristics with respect to the current state of the art including both conservative and optimistic synchronization algorithms. In addition, we provide guidelines for selecting the appropriate synchronization algorithm based on the requirements of the co-simulation. The newly proposed algorithms are shown to achieve as much as 14% and 63% improvement, respectively, over the existing conservative algorithm.
Model predictive driving simulator motion cueing algorithm with actuator-based constraints
NASA Astrophysics Data System (ADS)
Garrett, Nikhil J. I.; Best, Matthew C.
2013-08-01
The simulator motion cueing problem has been considered extensively in the literature; approaches based on linear filtering and optimal control have been presented and shown to perform reasonably well. More recently, model predictive control (MPC) has been considered as a variant of the optimal control approach; MPC is perhaps an obvious candidate for motion cueing due to its ability to deal with constraints, in this case the platform workspace boundary. This paper presents an MPC-based cueing algorithm that, unlike other algorithms, uses the actuator positions and velocities as the constraints. The result is a cueing algorithm that can make better use of the platform workspace whilst ensuring that its bounds are never exceeded. The algorithm is shown to perform well against the classical cueing algorithm and an algorithm previously proposed by the authors, both in simulation and in tests with human drivers.
An advanced dispatch simulator with advanced dispatch algorithm
Kafka, R.J.; Crim, H.G., Jr.; Fink, L.H.; Balu, N.J.
1989-10-01
This article describes the development of an automatic generation control algorithm, which is capable of using accurate real-time unit data and has control performance advantages over existing algorithms. Utilities use automatic generation control to match total generation and total load at minimum cost. Since it is impractical to measure total load directly, it is determined from total generation for a control area, the total tie-line flow error, and a component proportional to the frequency error. One part of the AGC system then assigns this total generation requirement to all the generators in the control area in an economic manner by using an economic dispatch algorithm. Another part of the AGC system keeps track of each generator and attempts to correct individual unit errors and total system errors that can be caused by unit response problems, normal changes in system load, metering errors, and system disturbances.
Zhang, Hongjun; Zhang, Rui; Li, Yong; Zhang, Xuliang
2014-01-01
Service oriented modeling and simulation are hot issues in the field of modeling and simulation, and there is a need to call service resources when a simulation task workflow is running. How to optimize the service resource allocation to ensure that the task is completed effectively is an important issue in this area. In the military modeling and simulation field, it is important to improve the probability of success and timeliness of the simulation task workflow. Therefore, this paper proposes an optimization algorithm for multipath service resource parallel allocation, in which a multipath service resource parallel allocation model is built and a multiple chains coding scheme quantum optimization algorithm is used for optimization and solution. The multiple chains coding scheme quantum optimization algorithm extends the parallel search space to improve search efficiency. Through simulation experiments, this paper investigates the effect of different optimization algorithms, service allocation strategies, and path numbers on the probability of success of the simulation task workflow, and the simulation results show that the optimization algorithm for multipath service resource parallel allocation is an effective method to improve the probability of success and timeliness of the simulation task workflow. PMID:24963506
Improved delay-leaping simulation algorithm for biochemical reaction systems with delays
NASA Astrophysics Data System (ADS)
Yi, Na; Zhuang, Gang; Da, Liang; Wang, Yifei
2012-04-01
In biochemical reaction systems dominated by delays, the simulation speed of the stochastic simulation algorithm depends on the size of the wait queue. As a result, it is important to control the size of the wait queue to improve the efficiency of the simulation. An improved accelerated delay stochastic simulation algorithm for biochemical reaction systems with delays, termed the improved delay-leaping algorithm, is proposed in this paper. The update method for the wait queue is effective in reducing the size of the queue as well as shortening the storage and access time, thereby accelerating the simulation speed. Numerical simulation on two examples indicates that this method not only achieves significantly better efficiency than existing methods but can also be widely applied in biochemical reaction systems with delays.
An algorithm for fast DNS cavitating flows simulations using homogeneous mixture approach
NASA Astrophysics Data System (ADS)
Žnidarčič, A.; Coutier-Delgosha, O.; Marquillie, M.; Dular, M.
2015-12-01
A new algorithm for fast DNS cavitating flows simulations is developed. The algorithm is based on the projection method in the form of Kim and Moin. A homogeneous mixture approach with a transport equation for the vapour volume fraction is used to model cavitation, and various cavitation models can be used. An influence matrix and a matrix diagonalisation technique enable fast parallel computations.
NASA Astrophysics Data System (ADS)
Wu, Zhi-Xu; Bai, Hua; Cui, Xiang-Qun
2015-05-01
The wavefront measuring range and recovery precision of a curvature sensor can be improved by an intensity compensation algorithm. However, in a focal system with a fast f-number, especially a telescope with a large field of view, the accuracy of this algorithm cannot meet the requirements. A theoretical analysis of the corrected intensity compensation algorithm in a focal system with a fast f-number is first introduced, and afterwards the mathematical equations used in this algorithm are given. The corrected result is then verified through simulation. The method used by the simulation can be described as follows. First, the curvature signal from a focal system with a fast f-number is simulated by Monte Carlo ray tracing; then the wavefront result is calculated by the inner loop of the FFT wavefront recovery algorithm and the outer loop of the intensity compensation algorithm. Comparing the intensity compensation algorithm of an ideal system with the corrected intensity compensation algorithm reveals that the recovery precision of the curvature sensor can be greatly improved by the corrected intensity compensation algorithm. Supported by the National Natural Science Foundation of China.
DESIGNING SUSTAINABLE PROCESSES WITH SIMULATION: THE WASTE REDUCTION (WAR) ALGORITHM
The WAR Algorithm, a methodology for determining the potential environmental impact (PEI) of a chemical process, is presented with modifications that account for the PEI of the energy consumed within that process. From this theory, four PEI indexes are used to evaluate the envir...
Monte Carlo algorithm for simulating fermions on Lefschetz thimbles
NASA Astrophysics Data System (ADS)
Alexandru, Andrei; Başar, Gökçe; Bedaque, Paulo
2016-01-01
A possible solution of the notorious sign problem preventing direct Monte Carlo calculations for systems with nonzero chemical potential is to deform the integration region in the complex plane to a Lefschetz thimble. We investigate this approach for a simple fermionic model. We introduce an easy-to-implement Monte Carlo algorithm to sample the dominant thimble. Our algorithm relies only on the integration of the gradient flow in the numerically stable direction, which gives it a distinct advantage over the other proposed algorithms. We demonstrate the stability and efficiency of the algorithm by applying it to an exactly solvable fermionic model and compare our results with the analytical ones. We report a very good agreement for a certain region in the parameter space where the dominant contribution comes from a single thimble, including a region where standard methods suffer from a severe sign problem. However, we find that there are also regions in the parameter space where the contribution from multiple thimbles is important, even in the continuum limit.
Optical simulation of quantum algorithms using programmable liquid-crystal displays
Puentes, Graciana; La Mela, Cecilia; Ledesma, Silvia; Iemmi, Claudio; Paz, Juan Pablo; Saraceno, Marcos
2004-04-01
We present a scheme to perform an all optical simulation of quantum algorithms and maps. The main components are lenses to efficiently implement the Fourier transform and programmable liquid-crystal displays to introduce space dependent phase changes on a classical optical beam. We show how to simulate Deutsch-Jozsa and Grover's quantum algorithms using essentially the same optical array programmed in two different ways.
Piloted simulation of an algorithm for onboard control of time-optimal intercept
NASA Technical Reports Server (NTRS)
Price, D. B.; Calise, A. J.; Moerder, D. D.
1985-01-01
A piloted simulation of algorithms for onboard computation of trajectories for time-optimal intercept of a moving target by an F-8 aircraft is described. The algorithms, which use singular perturbation techniques, generate commands in the cockpit. By centering the horizontal and vertical needles, the pilot flies an approximation to a time-optimal intercept trajectory. Example simulations are shown, and statistical data on the pilot's performance when presented with different display and computation modes are described.
Temporal Gillespie Algorithm: Fast Simulation of Contagion Processes on Time-Varying Networks
Vestergaard, Christian L.; Génois, Mathieu
2015-01-01
Stochastic simulations are one of the cornerstones of the analysis of dynamical processes on complex networks, and are often the only accessible way to explore their behavior. The development of fast algorithms is paramount to allow large-scale simulations. The Gillespie algorithm can be used for fast simulation of stochastic processes, and variants of it have been applied to simulate dynamical processes on static networks. However, its adaptation to temporal networks remains non-trivial. We here present a temporal Gillespie algorithm that solves this problem. Our method is applicable to general Poisson (constant-rate) processes on temporal networks, stochastically exact, and up to multiple orders of magnitude faster than traditional simulation schemes based on rejection sampling. We also show how it can be extended to simulate non-Markovian processes. The algorithm is easily applicable in practice, and as an illustration we detail how to simulate both Poissonian and non-Markovian models of epidemic spreading. Namely, we provide pseudocode and its implementation in C++ for simulating the paradigmatic Susceptible-Infected-Susceptible and Susceptible-Infected-Recovered models and a Susceptible-Infected-Recovered model with non-constant recovery rates. For empirical networks, the temporal Gillespie algorithm is here typically from 10 to 100 times faster than rejection sampling. PMID:26517860
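For orientation, the static-network Gillespie direct method that the temporal algorithm builds on can be written compactly for SIS dynamics. The sketch below is a naive reference version: recomputing the S-I edge list at every event is exactly the kind of cost that optimized variants such as the temporal Gillespie algorithm avoid, and all names and rate conventions here are illustrative.

```python
import random

def gillespie_sis(neighbors, infected, beta, mu, t_max, rng=random):
    """Gillespie direct method for SIS on a static network.
    neighbors: dict node -> list of contact nodes; beta: infection rate
    per S-I edge; mu: recovery rate per infected node."""
    infected = set(infected)
    t, trajectory = 0.0, [(0.0, len(infected))]
    while infected and t < t_max:
        si_edges = [(i, j) for i in infected for j in neighbors[i]
                    if j not in infected]
        total_rate = mu * len(infected) + beta * len(si_edges)
        t += rng.expovariate(total_rate)       # exponential waiting time
        if rng.random() < mu * len(infected) / total_rate:
            infected.remove(rng.choice(sorted(infected)))  # recovery event
        else:
            infected.add(rng.choice(si_edges)[1])          # infection event
        trajectory.append((t, len(infected)))
    return trajectory
```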
Adaptive Sampling Algorithms for Probabilistic Risk Assessment of Nuclear Simulations
Diego Mandelli; Dan Maljovec; Bei Wang; Valerio Pascucci; Peer-Timo Bremer
2013-09-01
Nuclear simulations are often computationally expensive, time-consuming, and high-dimensional with respect to the number of input parameters. Thus exploring the space of all possible simulation outcomes is infeasible using finite computing resources. During simulation-based probabilistic risk analysis, it is important to discover the relationship between a potentially large number of input parameters and the output of a simulation using as few simulation trials as possible. This is a typical context for performing adaptive sampling where a few observations are obtained from the simulation, a surrogate model is built to represent the simulation space, and new samples are selected based on the model constructed. The surrogate model is then updated based on the simulation results of the sampled points. In this way, we attempt to gain the most information possible with a small number of carefully selected sampled points, limiting the number of expensive trials needed to understand features of the simulation space. We analyze the specific use case of identifying the limit surface, i.e., the boundaries in the simulation space between system failure and system success. In this study, we explore several techniques for adaptively sampling the parameter space in order to reconstruct the limit surface. We focus on several adaptive sampling schemes. First, we seek to learn a global model of the entire simulation space using prediction models or neighborhood graphs and extract the limit surface as an iso-surface of the global model. Second, we estimate the limit surface by sampling in the neighborhood of the current estimate based on topological segmentations obtained locally. Our techniques draw inspirations from topological structure known as the Morse-Smale complex. We highlight the advantages and disadvantages of using a global prediction model versus local topological view of the simulation space, comparing several different strategies for adaptive sampling in both
NASA Astrophysics Data System (ADS)
Droeger, J.; Burchard, M.; Lattard, D.
2011-12-01
Amorphous silicates of olivine and pyroxene composition are thought to be common constituents of circumstellar, interstellar, and interplanetary dust. In proto-planetary discs amorphous dust crystallizes essentially as a result of thermal annealing. The present project aims at deciphering the kinetics of crystallization of pyroxene in proto-planetary dust on the basis of experiments on amorphous thin films. The thin films are deposited on Si-wafers (111) by pulsed laser deposition (PLD). The thin films are completely amorphous, chemically homogeneous (on the MgSiO3 composition) and have a continuous and flat surface. They are subsequently annealed for 1 to 216 h at 1073 K and 1098 K in a vertical quench furnace and drop-quenched on a copper block. To monitor the progress of crystallization, the samples are characterized by AFM and SEM imaging and IR spectroscopy. After short annealing durations (1 to 12 h) AFM and SE imaging reveal small shallow polygonal features (diameter 0.5-1 μm; height 2-3 nm) evenly distributed on the otherwise flat surface of the thin films. These shallow features are no longer visible after about 3 h at 1098 K, resp. >12 h at 1073 K. Meanwhile, two further types of features appear: small protruding pyramids and slightly depressed spherolites. The order of appearance of these features depends on temperature, but both persist and steadily grow with increasing annealing duration. Their sizes can reach about 12 μm. From TEM investigations on annealed thin films of the Mg2SiO4 composition we know that these features represent crystalline sites, which can be surrounded by a still amorphous matrix (Oehm et al. 2010). A quantitative evaluation of the size of the features will give insights into the progress of crystallization. IR spectra of the unprocessed thin films show only broad bands. In contrast, bands characteristic of crystalline enstatite are clearly recognizable in annealed samples, e.g. after 12 h at 1078 K. Small bands can also be assigned to
A Linac Simulation Code for Macro-Particles Tracking and Steering Algorithm Implementation
Sun, Yipeng
2012-05-03
In this paper, a linac simulation code written in Fortran90 is presented and several simulation examples are given. The code is optimized to implement linac alignment and steering algorithms and to evaluate accelerator errors such as RF phase and acceleration-gradient errors, and quadrupole and BPM misalignments. It can track a single particle or a bunch of particles through standard linear-accelerator elements such as quadrupoles, RF cavities, dipole correctors and drift spaces. A one-to-one steering algorithm and a global alignment (steering) algorithm are implemented in this code.
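As a hedged illustration of the one-to-one steering idea mentioned above (not the paper's Fortran90 implementation): each corrector is set so that the reading of its paired downstream BPM is driven to zero, given an assumed orbit response coefficient. The names and numbers below are hypothetical.

```python
import numpy as np

# Hypothetical one-to-one steering: corrector k is set to zero the reading of
# its paired downstream BPM k, given response[k], the assumed orbit response
# (metres of BPM offset per radian of corrector kick).
def one_to_one_steering(bpm_readings, response, gain=1.0):
    return [-gain * x / r for x, r in zip(bpm_readings, response)]

bpms = np.array([0.5e-3, -0.2e-3, 0.1e-3])  # measured offsets (m), made up
resp = np.array([2.0, 1.5, 1.8])            # assumed responses (m/rad)
print(one_to_one_steering(bpms, resp))      # corrector kicks (rad)
```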
Performance of Thorup's Shortest Path Algorithm for Large-Scale Network Simulation
NASA Astrophysics Data System (ADS)
Sakumoto, Yusuke; Ohsaki, Hiroyuki; Imase, Makoto
In this paper, we investigate the performance of Thorup's algorithm by comparing it to Dijkstra's algorithm for large-scale network simulations. One of the challenges toward the realization of large-scale network simulations is efficiently finding shortest paths in a graph with N vertices and M edges. The time complexity of solving a single-source shortest path (SSSP) problem with Dijkstra's algorithm with a binary heap (DIJKSTRA-BH) is O((M+N)log N). A more sophisticated algorithm, Thorup's algorithm, has been proposed. The original version of Thorup's algorithm (THORUP-FR) has a time complexity of O(M+N). A simplified version of Thorup's algorithm (THORUP-KL) has a time complexity of O(Mα(N)+N), where α(N) is the functional inverse of the Ackermann function. In this paper, we compare the performances (i.e., execution time and memory consumption) of THORUP-KL and DIJKSTRA-BH, since it is known that THORUP-FR is at least ten times slower than Dijkstra's algorithm with a Fibonacci heap. We find that (1) THORUP-KL is almost always faster than DIJKSTRA-BH for large-scale network simulations, and (2) the performances of THORUP-KL and DIJKSTRA-BH deviate from their time complexities due to the presence of the memory cache in the microprocessor.
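For reference, a minimal sketch of the DIJKSTRA-BH baseline benchmarked above, using Python's heapq as the binary heap; the toy graph is illustrative.

```python
import heapq

# Minimal Dijkstra SSSP with a binary heap (the DIJKSTRA-BH baseline);
# graph maps u -> [(v, w), ...] with non-negative edge weights w.
def dijkstra(graph, source):
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

g = {0: [(1, 2), (2, 5)], 1: [(2, 1)], 2: []}
print(dijkstra(g, 0))  # {0: 0, 1: 2, 2: 3}
```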
Gotway, C.A.; Rutherford, B.M.
1993-09-01
Stochastic simulation has been suggested as a viable method for characterizing the uncertainty associated with the prediction of a nonlinear function of a spatially-varying parameter. Geostatistical simulation algorithms generate realizations of a random field with specified statistical and geostatistical properties. A nonlinear function is evaluated over each realization to obtain an uncertainty distribution of a system response that reflects the spatial variability and uncertainty in the parameter. Crucial management decisions, such as potential regulatory compliance of proposed nuclear waste facilities and optimal allocation of resources in environmental remediation, are based on the resulting system response uncertainty distribution. Many geostatistical simulation algorithms have been developed to generate the random fields, and each algorithm will produce fields with different statistical properties. These different properties will result in different distributions for system response, and potentially, different managerial decisions. The statistical properties of the resulting system response distributions are not completely understood, nor is the ability of the various algorithms to generate response distributions that adequately reflect the associated uncertainty. This paper reviews several of the algorithms available for generating random fields. Algorithms are compared in a designed experiment using seven exhaustive data sets with different statistical and geostatistical properties. For each exhaustive data set, a number of realizations are generated using each simulation algorithm. The realizations are used with each of several deterministic transfer functions to produce a cumulative uncertainty distribution function of a system response. The uncertainty distributions are then compared to the single value obtained from the corresponding exhaustive data set.
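A hedged sketch of the workflow just described, under simplifying assumptions: an unconditional Gaussian field with an exponential covariance is generated by Cholesky factorization, a made-up nonlinear transfer function is evaluated over each realization, and the resulting system-response uncertainty distribution is summarized.

```python
import numpy as np

# Generate realizations of a Gaussian random field (exponential covariance,
# via Cholesky), push each through a made-up nonlinear transfer function,
# and summarize the resulting system-response uncertainty distribution.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
cov = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.2)
L = np.linalg.cholesky(cov + 1e-10 * np.eye(len(x)))

def transfer(field):
    # illustrative nonlinear response, e.g. flow through exp(field)
    return 1.0 / np.mean(np.exp(field))

responses = [transfer(L @ rng.standard_normal(len(x))) for _ in range(1000)]
print(np.percentile(responses, [5, 50, 95]))  # response uncertainty
```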
NASA Astrophysics Data System (ADS)
Telban, Robert J.
While the performance of flight simulator motion system hardware has advanced substantially, the development of the motion cueing algorithm, the software that transforms simulated aircraft dynamics into realizable motion commands, has not kept pace. To address this, new human-centered motion cueing algorithms were developed. A revised "optimal algorithm" uses time-invariant filters developed by optimal control, incorporating human vestibular system models. The "nonlinear algorithm" is a novel approach that is also formulated by optimal control but can be updated in real time. It incorporates a new integrated visual-vestibular perception model that includes both visual and vestibular sensation and the interaction between the stimuli. Its time-varying control law requires the matrix Riccati equation to be solved in real time by a neurocomputing approach. Preliminary pilot testing led to the optimal algorithm incorporating a new otolith model, which produced improved motion cues. The nonlinear algorithm's vertical mode produced a motion cue with a time-varying washout, sustaining small cues for longer durations and washing out large cues more quickly than the optimal algorithm. The inclusion of the integrated perception model improved the responses to longitudinal and lateral cues. False cues observed with the NASA adaptive algorithm were absent. Because the resulting sensation was judged unsatisfactory, an augmented turbulence cue was added to the vertical mode for both the optimal and nonlinear algorithms. The relative effectiveness of the algorithms in simulating aircraft maneuvers was assessed with an eleven-subject piloted performance test conducted on the NASA Langley Visual Motion Simulator (VMS). Two methods, the quasi-objective NASA Task Load Index (TLX) and power spectral density analysis of pilot control inputs, were used to assess pilot workload. TLX analysis reveals, in most cases, less workload and variation among pilots with the nonlinear algorithm. Control input
NASA Astrophysics Data System (ADS)
Salter, Bill Jean, Jr.
Purpose. The advent of new, so-called IVth-generation external beam radiation therapy treatment machines (e.g., Scanditronix's MM50 Racetrack Microtron) has raised the question of how the capabilities of these new machines might be exploited to produce extremely conformal dose distributions. Such machines can produce electron energies as high as 50 MeV and, owing to their scanned-beam delivery of electron treatments, can modulate intensity and even energy within a broad field. Materials and methods. Two patients with 'challenging' tumor geometries were selected from the patient archives of the Cancer Therapy and Research Center (CTRC) in San Antonio, Texas. The treatment scheme that was tested allowed for twelve energy- and intensity-modulated beams, equi-spaced about the patient; only intensity was modulated for the photon treatment. The elementary beams, incident from any of the twelve allowed directions, were assumed parallel, and the elementary electron beams were modeled by elementary beam data. The arrangement of elementary beam energies and/or intensities was optimized by Szu-Hartley fast simulated annealing. Optimized treatment plans were determined for each patient using both the high-energy, intensity- and energy-modulated electron (HIEME) modality and the 6 MV photon modality. The 'quality' of rival plans was scored using three different, popular objective functions: Root Mean Square (RMS), Maximize Dose Subject to Dose and Volume Limitations (MDVL; Morrill et al.), and Probability of Uncomplicated Tumor Control (PUTC) methods. The scores of the two optimized treatments (i.e., HIEME and intensity-modulated photons) were compared to the score of the conventional plan with which the patient was actually treated. Results. The first patient evaluated presented a deeply located target volume partially surrounding the spinal cord. A healthy right kidney was immediately adjacent to the tumor volume, separated
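The optimization step above relies on Szu-Hartley fast simulated annealing; a minimal one-dimensional sketch follows, assuming the standard Cauchy visiting distribution and the fast cooling schedule T(k) = T0/(1+k). The objective below is a stand-in, not a treatment-plan score.

```python
import math, random

# Fast simulated annealing (Szu-Hartley flavour): Cauchy-distributed moves
# with the fast cooling schedule T(k) = T0 / (1 + k) and Metropolis
# acceptance. The objective is an illustrative stand-in.
def fsa(objective, x0, t0=1.0, steps=5000):
    x, fx = x0, objective(x0)
    best, fbest = x, fx
    for k in range(steps):
        t = t0 / (1.0 + k)                                       # fast schedule
        y = x + t * math.tan(math.pi * (random.random() - 0.5))  # Cauchy step
        fy = objective(y)
        if fy < fx or random.random() < math.exp(-(fy - fx) / max(t, 1e-12)):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
    return best, fbest

print(fsa(lambda v: (v - 3.0) ** 2 + math.sin(5.0 * v), x0=0.0))
```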
On the rejection-based algorithm for simulation and analysis of large-scale reaction networks
NASA Astrophysics Data System (ADS)
Thanh, Vo Hong; Zunino, Roberto; Priami, Corrado
2015-06-01
Stochastic simulation for in silico studies of large biochemical networks requires a great amount of computational time. We recently proposed a new exact simulation algorithm, called the rejection-based stochastic simulation algorithm (RSSA) [Thanh et al., J. Chem. Phys. 141(13), 134116 (2014)], to improve simulation performance by postponing and collapsing propensity updates as much as possible. In this paper, we analyze the performance of this algorithm in detail and improve it for simulating large-scale biochemical reaction networks. We also present a new algorithm, called simultaneous RSSA (SRSSA), which generates many independent trajectories simultaneously for the analysis of biochemical behavior. SRSSA improves simulation performance by utilizing a single data structure across simulations to select reaction firings and form trajectories. The memory requirement for building and storing the data structure is thus independent of the number of trajectories. The updating of the data structure, when needed, is performed collectively in a single operation across the simulations. The trajectories generated by SRSSA are exact and independent of each other by virtue of the rejection-based mechanism. We test our new improvements on real biological systems with a wide range of reaction networks to demonstrate their applicability and efficiency.
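A hedged sketch of RSSA's central rejection step, under simplifying assumptions: propensities are bracketed by given lower/upper bounds, a candidate reaction is drawn from the upper bounds, and the exact propensity is evaluated only when the quick accept test is inconclusive. The full algorithm's firing-time bookkeeping is reduced to a single tentative step here.

```python
import math, random

# Central rejection step of an RSSA-like scheme: a_low/a_high bracket the
# exact propensities; a candidate is drawn from the upper bounds and the
# exact propensity (eval_exact) is computed only when the quick test fails.
def rssa_step(a_low, a_high, eval_exact):
    a0_high = sum(a_high)
    while True:
        r, j, acc = random.random() * a0_high, 0, a_high[0]
        while acc < r:                 # candidate j ~ a_high[j] / a0_high
            j += 1
            acc += a_high[j]
        u = random.random() * a_high[j]
        if u <= a_low[j] or u <= eval_exact(j):   # quick accept, else exact
            tau = -math.log(random.random()) / a0_high
            return j, tau              # accepted reaction, tentative step

print(rssa_step([0.9, 1.8], [1.1, 2.2], eval_exact=lambda j: [1.0, 2.0][j]))
```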
A Contextual Fire Detection Algorithm for Simulated HJ-1B Imagery
Qian, Yonggang; Yan, Guangjian; Duan, Sibo; Kong, Xiangsheng
2009-01-01
The HJ-1B satellite, launched on September 6, 2008, is one of the small satellites in the constellation for disaster prediction and monitoring. In this paper, HJ-1B imagery containing fires of various sizes and temperatures in a wide range of terrestrial biomes and climates was simulated, including the RED, NIR, MIR and TIR channels. Based on the MODIS version 4 contextual algorithm and the characteristics of the HJ-1B sensor, a contextual fire detection algorithm was proposed and tested using the simulated HJ-1B data. It was evaluated by the probability of fire detection and of false alarm as functions of fire temperature and fire area. Results indicate that when the simulated fire area is larger than 45 m2 and the simulated fire temperature is higher than 800 K, the algorithm has a high probability of detection; if the simulated fire area is smaller than 10 m2, the fire may be detected only when the simulated fire temperature exceeds 900 K. For fire areas of about 100 m2, the proposed algorithm has a higher detection probability than the MODIS product. Finally, the omission and commission errors, which are important factors affecting the performance of this algorithm, were evaluated. It has been demonstrated that HJ-1B satellite data are more sensitive to smaller and cooler fires than MODIS or AVHRR data, and the improved capabilities of HJ-1B data will offer a fine opportunity for fire detection. PMID:22399950
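A minimal sketch in the spirit of the MODIS-style contextual test referenced above: a pixel is flagged when its MIR brightness temperature stands out against background-window statistics. The window size and thresholds are illustrative assumptions, not the paper's tuned values.

```python
import numpy as np

# Contextual fire test: flag a pixel when its MIR brightness temperature
# exceeds the local background statistics.
def contextual_fire_mask(t_mir, win=7, k=3.5, dt_min=6.0):
    pad = win // 2
    padded = np.pad(t_mir, pad, mode="reflect")
    mask = np.zeros_like(t_mir, dtype=bool)
    for i in range(t_mir.shape[0]):
        for j in range(t_mir.shape[1]):
            w = padded[i:i + win, j:j + win]
            mask[i, j] = t_mir[i, j] > w.mean() + max(k * w.std(), dt_min)
    return mask

scene = 300.0 + np.random.default_rng(1).normal(0.0, 1.0, (64, 64))
scene[32, 32] = 360.0                        # implanted hot pixel
print(contextual_fire_mask(scene)[32, 32])   # True
```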
Efficient photoheating algorithms in time-dependent photoionization simulations
NASA Astrophysics Data System (ADS)
Lee, Kai-Yan; Mellema, Garrelt; Lundqvist, Peter
2016-02-01
We present an extension to the time-dependent photoionization code C2-RAY to calculate photoheating in an efficient and accurate way. In C2-RAY, the thermal calculation demands relatively small time-steps for accurate results. We describe two novel methods to reduce the computational cost associated with small time-steps, namely an adaptive time-step algorithm and an asynchronous evolution approach. The adaptive time-step algorithm determines an optimal time-step for the next computational step. It uses a fast ray-tracing scheme to quickly locate the relevant cells for this determination and uses only these cells for the calculation of the time-step. Asynchronous evolution allows different cells to evolve with different time-steps. The asynchronized clocks of the cells are synchronized at the times when outputs are produced. By evolving only the cells that require short time-steps with these short time-steps, instead of imposing them on the whole grid, the computational cost of the calculation can be substantially reduced. We show that our methods work well for several cosmologically relevant test problems and validate our results by comparing them to those of another time-dependent photoionization code.
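A toy sketch of an adaptive time-step rule of the kind described above, assuming the step is limited so that no cell's temperature changes by more than a chosen relative tolerance; the tolerance, cap and cell values are made up, and the fast candidate-cell pre-selection is omitted.

```python
# Limit the global step so no cell's temperature changes by more than a
# chosen relative tolerance; tolerance, dt_max and the cell values are
# made-up illustrations (in K and K/s).
def adaptive_timestep(temps, heating_rates, rel_tol=0.1, dt_max=1.0e13):
    dt = dt_max
    for T, dTdt in zip(temps, heating_rates):
        if dTdt != 0.0:
            dt = min(dt, rel_tol * T / abs(dTdt))
    return dt

print(adaptive_timestep([1.0e4, 2.0e4], [5.0e-9, -1.0e-10]))
```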
A fast algorithm for the simulation of arterial pulse waves
NASA Astrophysics Data System (ADS)
Du, Tao; Hu, Dan; Cai, David
2016-06-01
One-dimensional models have been widely used in studies of the propagation of blood pulse waves in large arterial trees. Under the periodic driving of the heartbeat, traditional numerical methods, such as the Lax-Wendroff method, are employed to obtain asymptotically periodic solutions at large times. However, these methods are severely constrained by the CFL condition due to the large pulse wave speed. In this work, we develop a new numerical algorithm to overcome this constraint. First, we reformulate the model system of pulse wave propagation using a set of Riemann variables and derive a new form of boundary conditions at the inlet, the outlets, and the bifurcation points of the arterial tree. The new form of the boundary conditions enables us to design a convergent iterative method to enforce them. Then, after exchanging the spatial and temporal coordinates of the model system, we apply the Lax-Wendroff method in the exchanged coordinate system, which turns the large pulse wave speed from a liability into a benefit, to solve the wave equation in each artery of the model arterial system. Our numerical studies show that the new algorithm is stable and performs ∼15 times faster than the traditional implementation of the Lax-Wendroff method under the requirement that the relative numerical error of blood pressure be smaller than one percent, which is much smaller than the modeling error.
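For orientation, a minimal Lax-Wendroff update for the linear advection equation u_t + c u_x = 0, the building block referenced above; the CFL constraint |c|Δt/Δx ≤ 1 visible here is exactly what the paper's coordinate-exchange reformulation is designed to sidestep.

```python
import numpy as np

# One Lax-Wendroff step for u_t + c u_x = 0 on a periodic grid; stability
# needs the CFL condition |c| dt / dx <= 1.
def lax_wendroff_step(u, c, dt, dx):
    nu = c * dt / dx
    up, um = np.roll(u, -1), np.roll(u, 1)
    return u - 0.5 * nu * (up - um) + 0.5 * nu**2 * (up - 2.0 * u + um)

x = np.linspace(0.0, 1.0, 200, endpoint=False)
dx = x[1] - x[0]
u = np.exp(-200.0 * (x - 0.3) ** 2)
for _ in range(100):
    u = lax_wendroff_step(u, c=1.0, dt=0.8 * dx, dx=dx)
print(float(u.max()))  # pulse advected to x ~ 0.7, slightly dispersed
```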
Microwave holography of large reflector antennas - Simulation algorithms
NASA Technical Reports Server (NTRS)
Rahmat-Samii, Y.
1985-01-01
The performance of large reflector antennas can be improved by identifying the location and magnitude of their surface distortions and correcting them. To determine the accuracy of the constructed surface profiles, simulation studies are used that incorporate the effects of both systematic and random distortions, particularly the effects of displaced surface panels. In this paper, different simulation models are investigated, emphasizing a model based on the vector diffraction analysis of a curved reflector with displaced panels. The simulated far-field patterns are then used to reconstruct the location and amount of displacement of the surface panels by employing a fast Fourier transform/iterative procedure. The sensitivity of the microwave holography technique to the number of far-field sample points, the level of distortions, polarization, illumination taper, etc., is also examined.
A process-based algorithm for simulating terraces in SWAT
Technology Transfer Automated Retrieval System (TEKTRAN)
Terraces in crop fields are one of the most important soil and water conservation measures that affect runoff and erosion processes in a watershed. In large hydrological programs such as the Soil and Water Assessment Tool (SWAT), terrace effects are simulated by adjusting the slope length and the US...
Simulating Multivariate Nonnormal Data Using an Iterative Algorithm
ERIC Educational Resources Information Center
Ruscio, John; Kaczetow, Walter
2008-01-01
Simulating multivariate nonnormal data with specified correlation matrices is difficult. One especially popular method is Vale and Maurelli's (1983) extension of Fleishman's (1978) polynomial transformation technique to multivariate applications. This requires the specification of distributional moments and the calculation of an intermediate…
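A hedged sketch of the Vale-Maurelli/Fleishman idea: correlated standard normals are passed through the cubic Y = a + bZ + cZ² + dZ³. The coefficients below are illustrative only (chosen so the mean stays zero and the variance stays near one); solving for coefficients that hit target skew and kurtosis, and the intermediate-correlation adjustment the full method requires, are omitted.

```python
import numpy as np

# Correlated standard normals pushed through Fleishman's cubic
# Y = a + b Z + c Z^2 + d Z^3. Coefficients are illustrative (a = -c keeps
# the mean at zero); they are NOT solved for specific target moments.
rng = np.random.default_rng(0)
target = np.array([[1.0, 0.5], [0.5, 1.0]])
Z = rng.multivariate_normal([0.0, 0.0], target, size=100_000)

a, b, c, d = -0.1, 0.95, 0.1, 0.01
Y = a + b * Z + c * Z**2 + d * Z**3     # elementwise cubic transform
print(np.corrcoef(Y.T))                 # near, but not exactly, 0.5
```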
Box length search algorithm for molecular simulation of systems containing periodic structures.
Schultz, A J; Hall, C K; Genzer, J
2004-01-22
We have developed a box length search algorithm to efficiently find the appropriate box dimensions for constant-volume molecular simulation of periodic structures. The algorithm works by finding the box lengths that equalize the pressure in each direction while maintaining constant total volume. Maintaining the volume at a fixed value ensures that quantitative comparisons can be made between simulation and experimental, theoretical or other simulation results for systems that are incompressible or nearly incompressible. We test the algorithm on a system of phase-separated block copolymers that has a preferred box length in one dimension. We also describe and test a Monte Carlo algorithm that allows the box lengths to change while maintaining constant volume. We find that the box length search algorithm converges at least two orders of magnitude more quickly than the variable box length Monte Carlo method. Although the box length search algorithm is not ergodic, it successfully finds the box length that minimizes the free energy of the system. We verify this by examining the free energy as determined by the Monte Carlo simulation. PMID:15268341
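A toy sketch of a box-length search of the kind described above, assuming a relaxation rule that expands the high-pressure axes and rescales to keep the total volume fixed; press() stands in for a simulation's measured directional pressures, and the gain is an assumption.

```python
import numpy as np

# Nudge each box length toward equalizing directional pressures while
# rescaling to keep total volume fixed.
def box_length_search(lengths, press, gain=0.05, iters=200):
    L = np.asarray(lengths, dtype=float)
    V = np.prod(L)
    for _ in range(iters):
        p = press(L)
        L *= 1.0 + gain * (p - p.mean()) / abs(p.mean())  # grow high-P axes
        L *= (V / np.prod(L)) ** (1.0 / 3.0)              # restore volume
    return L

press = lambda L: np.array([2.0, 1.0, 1.0]) / L   # toy anisotropic pressure
print(box_length_search([1.0, 1.0, 1.0], press))  # ~ [1.587, 0.794, 0.794]
```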
NASA Astrophysics Data System (ADS)
Endres, F.; Steinmann, P.
2014-12-01
Molecular dynamics (MD) simulations of ferroelectric materials have improved tremendously over the last few decades. Specifically, the core-shell model has been commonly used for the simulation of ferroelectric materials such as barium titanate. However, due to the computational cost of MD, the calculation of ferroelectric hysteresis behaviour, and especially of the stress-strain relation, has been a computationally intensive task. In this work, a molecular statics algorithm, similar to a finite element method for nonlinear trusses, has been implemented. From this, an algorithm to calculate the stress-dependent continuum deformation of a discrete particle system, such as a ferroelectric crystal, has been devised. Molecular statics algorithms for the atomistic simulation of ferroelectric materials have been described previously. However, in contrast to the prior literature, the algorithm proposed in this work is also capable of efficiently computing the macroscopic ferroelectric butterfly hysteresis behaviour. The advocated algorithm is therefore able to calculate the piezoelectric effect as well as the converse piezoelectric effect simultaneously on atomistic and continuum length scales. Barium titanate has been simulated using the core-shell model to validate the developed algorithm.
Some Algorithms For Simulating Size-resolved Aerosol Dynamics Models
NASA Astrophysics Data System (ADS)
Debry, E.; Sportisse, B.
The objective of this presentation is to show some algorithms used to solve aerosol dynamics in 3D dispersion models. INTRODUCTION Gas-phase pollution has been widely studied and some models are now available. The situation is quite different with respect to atmospheric aerosols. However, atmospheric particulate matter significantly influences atmospheric properties such as the radiative balance, cloud formation and gas pollutant concentrations (gas-to-particle conversion), and has an impact on human health. As aerosol properties (optical, hygroscopic, toxicological) depend mainly on size, it is important to be able to follow the aerosol (or particle) size distribution (PSD) over time. The PSD is modified by physical processes such as coagulation, condensation or evaporation, nucleation and removal. Aerosol dynamics is usually modeled by the well-known General Dynamics Equation (GDE) [1]. MODELS Several models already exist to solve this equation. Multi-modal models are widely used [2] [3] because of the few parameters needed, but the GDE is then solved only for its moments and the PSD is assumed to remain log-normal. By contrast, size-resolved models discretize the aerosol size spectrum into several bins and solve the GDE within each one. This can be done either by solving each process separately (splitting), for example coagulation with the well-known "size-binning" algorithms [4] and condensation as an advection equation on the PSD [5], or by coupling all processes, as finite-element [6] and stochastic methods [7] allow. Stochastic algorithms may not be competitive with deterministic ones with respect to computation time, but they provide reference solutions useful for validating more operational codes on realistic cases, since analytic solutions of the GDE exist only for academic cases. REFERENCES [1] Seinfeld, J.H. and Pandis, S.N. Atmospheric chemistry and
Wu, Wei; Jiang, Fangming
2013-06-15
We adapt the simulated annealing approach for reconstruction of the 3D microstructure of a LiCoO2 cathode from a commercial Li-ion battery. The real size distribution curve of LiCoO2 particles is applied to regulate the reconstruction process. By discretizing a 40 × 40 × 40 μm cathode volume into 8,000,000 numerical cubes, the cathode, comprising three individual phases, 1) LiCoO2 as active material, 2) pores or electrolyte, and 3) additives (polyvinylidene fluoride + carbon black), is reconstructed. The microstructural statistical properties required in the reconstruction process are extracted from 2D focused ion beam/scanning electron microscopy images or obtained by analyzing the powder mixture used to make the cathode. Characterization of the reconstructed cathode gives important structural and transport properties, including the two-point correlation functions, volume-specific surface area between phases, tortuosity and geometrical connectivity of the individual phases. Highlights: • Simulated annealing approach is adapted for 3D reconstruction of a LiCoO2 cathode. • The real size distribution of LiCoO2 particles is applied in the reconstruction process. • The reconstructed cathode accords with the real one in important statistical properties. • Effective electrode-characterization approaches have been established. • Extensive characterization gives important structural properties, e.g., tortuosity.
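A hedged 2D sketch of simulated-annealing reconstruction in this spirit (Yeong-Torquato style): voxel labels are swapped, preserving phase fractions, and swaps are accepted or rejected by the mismatch of a target statistic. A row-wise two-point correlation and a two-phase image stand in for the paper's 3D, three-phase problem.

```python
import numpy as np

# Yeong-Torquato-style annealing on a 2D two-phase image: swap two voxels of
# different phase and accept/reject on the mismatch of a row-wise two-point
# correlation against the target's.
rng = np.random.default_rng(0)

def s2(img, rmax=10):  # two-point correlation of phase 1 along rows
    return np.array([(img * np.roll(img, r, axis=1)).mean() for r in range(rmax)])

target = (rng.random((32, 32)) < 0.4).astype(int)             # "real" medium
img = rng.permutation(target.ravel()).reshape(target.shape)   # same fractions
t_s2, T = s2(target), 1e-3
for _ in range(20000):
    i1, i2 = rng.integers(0, img.size, 2)
    a, b = img.flat[i1], img.flat[i2]
    if a == b:
        continue
    e0 = ((s2(img) - t_s2) ** 2).sum()
    img.flat[i1], img.flat[i2] = b, a
    e1 = ((s2(img) - t_s2) ** 2).sum()
    if e1 > e0 and rng.random() > np.exp((e0 - e1) / T):
        img.flat[i1], img.flat[i2] = a, b   # reject: undo the swap
    T *= 0.9999                             # geometric cooling
print(abs(s2(img) - t_s2).max())
```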
An Event-Driven Hybrid Molecular Dynamics and Direct Simulation Monte Carlo Algorithm
Donev, A; Garcia, A L; Alder, B J
2007-07-30
A novel algorithm is developed for the simulation of polymer chains suspended in a solvent. The polymers are represented as chains of hard spheres tethered by square wells and interact with the solvent particles through hard-core potentials. The algorithm uses event-driven molecular dynamics (MD) for the simulation of the polymer chain and the interactions between the chain beads and the surrounding solvent particles. The interactions between the solvent particles themselves are not treated deterministically as in event-driven algorithms; rather, the momentum and energy exchange in the solvent is determined stochastically using the Direct Simulation Monte Carlo (DSMC) method. The coupling between the solvent and the solute is consistently represented at the particle level; however, unlike full MD simulations of both the solvent and the solute, the spatial structure of the solvent is ignored. The algorithm is described in detail and applied to the study of the dynamics of a polymer chain tethered to a hard wall and subjected to uniform shear. The algorithm closely reproduces full MD simulations with two orders of magnitude greater efficiency. The results do not confirm the existence of periodic (cycling) motion of the polymer chain.
NASA Astrophysics Data System (ADS)
Romano, Paul Kollath
Monte Carlo particle transport methods are being considered as a viable option for high-fidelity simulation of nuclear reactors. While Monte Carlo methods offer several potential advantages over deterministic methods, there are a number of algorithmic shortcomings that would prevent their immediate adoption for full-core analyses. In this thesis, algorithms are proposed both to ameliorate the degradation in parallel efficiency typically observed for large numbers of processors and to offer a means of decomposing the large tally data that will be needed for reactor analysis. A nearest-neighbor fission bank algorithm was proposed and subsequently implemented in the OpenMC Monte Carlo code. A theoretical analysis of the communication pattern shows that the expected cost is O(√N), whereas traditional fission bank algorithms are O(N) at best. The algorithm was tested on two supercomputers, the Intrepid Blue Gene/P and the Titan Cray XK7, and demonstrated nearly linear parallel scaling up to 163,840 processor cores on a full-core benchmark problem. An algorithm for reducing the network communication arising from tally reduction was analyzed and implemented in OpenMC. The proposed algorithm groups only particle histories on a single processor into batches for tally purposes; in doing so it prevents all network communication for tallies until the very end of the simulation. The algorithm was tested, again on a full-core benchmark, and shown to reduce network communication substantially. A model was developed to predict the impact of load imbalances on the performance of domain-decomposed simulations. The analysis demonstrated that load imbalances in domain-decomposed simulations arise from two distinct phenomena: non-uniform particle densities and non-uniform spatial leakage. The dominant performance penalty for domain decomposition was shown to come from these physical effects rather than from insufficient network bandwidth or high latency. The model predictions were verified with
Stochastic algorithms for the analysis of numerical flame simulations
Bell, John B.; Day, Marcus S.; Grcar, Joseph F.; Lijewski, Michael J.
2004-04-26
Recent progress in simulation methodologies and high-performance parallel computers has made it possible to perform detailed simulations of multidimensional reacting flow phenomena using comprehensive kinetics mechanisms. As simulations become larger and more complex, it becomes increasingly difficult to extract useful information from the numerical solution, particularly regarding the interactions of the chemical reaction and diffusion processes. In this paper we present a new diagnostic tool for the analysis of numerical simulations of reacting flow. Our approach is based on recasting an Eulerian flow solution in a Lagrangian frame. Unlike a conventional Lagrangian viewpoint that follows the evolution of a volume of the fluid, we instead follow specific chemical elements, e.g., carbon or nitrogen, as they move through the system. From this perspective an "atom" is part of some molecule of a species that is transported through the domain by advection and diffusion. Reactions cause the atom to shift from one chemical host species to another, and the subsequent transport of the atom is given by the movement of the new species. We represent these processes using a stochastic particle formulation that treats advection deterministically and models diffusion and chemistry as stochastic processes. In this paper, we discuss the numerical issues in detail and demonstrate that an ensemble of stochastic trajectories can accurately capture key features of the continuum solution. The capabilities of this diagnostic are then demonstrated by applications studying the modulation of carbon chemistry during a vortex-flame interaction and the role of cyano chemistry in NOx production for a steady diffusion flame.
Parallel algorithms for simulating continuous time Markov chains
NASA Technical Reports Server (NTRS)
Nicol, David M.; Heidelberger, Philip
1992-01-01
We have previously shown that the mathematical technique of uniformization can serve as the basis of synchronization for the parallel simulation of continuous-time Markov chains. This paper reviews the basic method and compares five different methods based on uniformization, evaluating their strengths and weaknesses as a function of problem characteristics. The methods vary in their use of optimism, logical aggregation, communication management, and adaptivity. Performance evaluation is conducted on the Intel Touchstone Delta multiprocessor, using up to 256 processors.
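A minimal sketch of simulation by uniformization, the technique underlying the methods compared above: a CTMC with rate matrix Q is driven by a Poisson clock of uniform rate Λ ≥ max_i |Q_ii|, with self-loops absorbing the leftover rate; synchronizing many chains on a shared clock is what enables parallelization. The two-state chain is illustrative.

```python
import random

# Uniformization: a CTMC with rate matrix Q is driven by a Poisson clock of
# uniform rate lam >= max_i |Q[i][i]|; self-loops absorb the leftover rate.
def uniformized_step(state, Q, lam):
    rates = list(Q[state])
    rates[state] = lam + Q[state][state]   # self-loop keeps total rate = lam
    r, acc = random.random() * lam, 0.0
    for j, q in enumerate(rates):
        acc += q
        if r < acc:
            return j
    return state

Q = [[-2.0, 2.0], [1.0, -1.0]]   # two-state chain, illustrative
lam, s, t = 2.0, 0, 0.0
while t < 10.0:
    t += random.expovariate(lam)           # shared Poisson event times
    s = uniformized_step(s, Q, lam)
print(s)
```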
Advanced time integration algorithms for dislocation dynamics simulations of work hardening
NASA Astrophysics Data System (ADS)
Sills, Ryan B.; Aghaei, Amin; Cai, Wei
2016-05-01
Efficient time integration is a necessity for dislocation dynamics simulations of work hardening to achieve experimentally relevant strains. In this work, an efficient time integration scheme using a high order explicit method with time step subcycling and a newly-developed collision detection algorithm are evaluated. First, time integrator performance is examined for an annihilating Frank–Read source, showing the effects of dislocation line collision. The integrator with subcycling is found to significantly out-perform other integration schemes. The performance of the time integration and collision detection algorithms is then tested in a work hardening simulation. The new algorithms show a 100-fold speed-up relative to traditional schemes. Subcycling is shown to improve efficiency significantly while maintaining an accurate solution, and the new collision algorithm allows an arbitrarily large time step size without missing collisions.
Gui, Zhipeng; Yu, Manzhu; Yang, Chaowei; Jiang, Yunfeng; Chen, Songqing; Xia, Jizhe; Huang, Qunying; Liu, Kai; Li, Zhenlong; Hassan, Mohammed Anowarul; Jin, Baoxuan
2016-01-01
Dust storms have serious disastrous impacts on the environment, human health, and assets. The development and application of dust storm models have contributed significantly to better understanding and predicting the distribution, intensity and structure of dust storms. However, dust storm simulation is a data- and computing-intensive process. To improve computing performance, high-performance computing has been widely adopted by dividing the entire study area into multiple subdomains and allocating each subdomain to different computing nodes in a parallel fashion. Inappropriate allocation may introduce imbalanced task loads and unnecessary communications among computing nodes. Therefore, allocation is a key factor that may impact the efficiency of the parallel process. An allocation algorithm is expected to consider the computing cost and communication cost for each computing node so as to minimize total execution time and reduce overall communication cost for the entire simulation. This research introduces three algorithms to optimize the allocation by considering spatial and communication constraints: 1) an Integer Linear Programming (ILP) based algorithm from a combinatorial optimization perspective; 2) a K-Means and Kernighan-Lin combined heuristic algorithm (K&K) integrating geometric and coordinate-free methods by merging local and global partitioning; 3) an automatic seeded region growing based geometric and local partitioning algorithm (ASRG). The performance and effectiveness of the three algorithms are compared based on different factors. Further, we adopt the K&K algorithm for the experiment of dust model simulation with the non-hydrostatic mesoscale model (NMM-dust) and compare its performance with the MPI default sequential allocation. The results demonstrate that the K&K method significantly improves simulation performance with better subdomain allocation. This method can also be adopted for other relevant atmospheric and numerical
Predicting patchy particle crystals: variable box shape simulations and evolutionary algorithms.
Bianchi, Emanuela; Doppelbauer, Günther; Filion, Laura; Dijkstra, Marjolein; Kahl, Gerhard
2012-06-01
We consider several patchy particle models that have been proposed in the literature and investigate their candidate crystal structures in a systematic way. We compare two different algorithms for predicting crystal structures: (i) an approach based on Monte Carlo simulations in the isobaric-isothermal ensemble and (ii) an optimization technique based on ideas of evolutionary algorithms. We show that the two methods are equally successful and provide consistent results on crystalline phases of patchy particle systems. PMID:22697525
Grover search algorithm with Rydberg-blockaded atoms: quantum Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Petrosyan, David; Saffman, Mark; Mølmer, Klaus
2016-05-01
We consider the Grover search algorithm implementation for a quantum register of size N = 2^k using k (or k+1) microwave- and laser-driven Rydberg-blockaded atoms, following the proposal by Mølmer et al (2011 J. Phys. B 44 184016). We suggest some simplifications for the microwave and laser couplings, and analyze the performance of the algorithm for up to k = 4 multilevel atoms under realistic experimental conditions using quantum stochastic (Monte Carlo) wavefunction simulations.
A Parallel, Finite-Volume Algorithm for Large-Eddy Simulation of Turbulent Flows
NASA Technical Reports Server (NTRS)
Bui, Trong T.
1999-01-01
A parallel, finite-volume algorithm has been developed for large-eddy simulation (LES) of compressible turbulent flows. This algorithm includes piecewise linear least-square reconstruction, trilinear finite-element interpolation, Roe flux-difference splitting, and second-order MacCormack time marching. Parallel implementation is done using the message-passing programming model. In this paper, the numerical algorithm is described. To validate the numerical method for turbulence simulation, LES of fully developed turbulent flow in a square duct is performed for a Reynolds number of 320 based on the average friction velocity and the hydraulic diameter of the duct. Direct numerical simulation (DNS) results are available for this test case, and the accuracy of this algorithm for turbulence simulations can be ascertained by comparing the LES solutions with the DNS results. The effects of grid resolution, upwind numerical dissipation, and subgrid-scale dissipation on the accuracy of the LES are examined. Comparison with DNS results shows that the standard Roe flux-difference splitting dissipation adversely affects the accuracy of the turbulence simulation. For accurate turbulence simulations, only 3-5 percent of the standard Roe flux-difference splitting dissipation is needed.
Cobb, J.W.; Leboeuf, J.N.
1994-10-01
The authors present a particle algorithm to extend simulation capabilities for plasma-based materials processing reactors. The orbit integrator uses a syncopated leap-frog algorithm in cylindrical coordinates, which maintains second-order accuracy and minimizes computational complexity. Plasma source terms are accumulated orbit-consistently, directly in the frequency and azimuthal-mode domains. Finally, the authors discuss the numerical analysis of this algorithm. Orbit consistency greatly reduces the computational cost for a given level of precision, and the computational cost is independent of the degree of time-scale separation.
A conflict-free, path-level parallelization approach for sequential simulation algorithms
NASA Astrophysics Data System (ADS)
Rasera, Luiz Gustavo; Machado, Péricles Lopes; Costa, João Felipe C. L.
2015-07-01
Pixel-based simulation algorithms are the most widely used geostatistical technique for characterizing the spatial distribution of natural resources. However, sequential simulation does not scale well for stochastic simulation on very large grids, which are now commonly found in many petroleum, mining, and environmental studies. With the availability of multiple-processor computers, there is an opportunity to develop parallelization schemes for these algorithms to increase their performance and efficiency. Here we present a conflict-free, path-level parallelization strategy for sequential simulation. The method consists of partitioning the simulation grid into a set of groups of nodes and delegating all available processors for simulation of multiple groups of nodes concurrently. An automated classification procedure determines which groups are simulated in parallel according to their spatial arrangement in the simulation grid. The major advantage of this approach is that it does not require conflict resolution operations, and thus allows exact reproduction of results. Besides offering a large performance gain when compared to the traditional serial implementation, the method provides efficient use of computational resources and is generic enough to be adapted to several sequential algorithms.
Simulation of Biochemical Pathway Adaptability Using Evolutionary Algorithms
Bosl, W J
2005-01-26
The systems approach to genomics seeks quantitative and predictive descriptions of cells and organisms. However, both the theoretical and experimental methods necessary for such studies still need to be developed. We are far from understanding even the simplest collective behavior of biomolecules, cells or organisms. A key aspect of all biological problems, including environmental microbiology, the evolution of infectious diseases, and the adaptation of cancer cells, is the evolvability of genomes. This is particularly important for Genomes to Life missions, which tend to focus on the prospect of engineering microorganisms to achieve desired goals in environmental remediation, climate change mitigation, and energy production. All of these will require quantitative tools for understanding the evolvability of organisms. Laboratory biodefense goals will need quantitative tools for predicting complicated host-pathogen interactions and finding counter-measures. In this project, we seek to develop methods to simulate how external and internal signals cause the genetic apparatus to adapt and organize to produce complex biochemical systems that achieve survival. This project is specifically directed toward building a computational methodology for simulating the adaptability of genomes. It investigated the feasibility of using a novel quantitative approach to studying the adaptability of genomes and biochemical pathways. This effort was intended to be the preliminary part of a larger, long-term effort between key leaders in computational and systems biology at Harvard University and LLNL, with Dr. Bosl as the lead PI. Scientific goals for the long-term project include the development and testing of new hypotheses to explain the observed adaptability of yeast biochemical pathways when the myosin-II gene is deleted, and the development of a novel data-driven evolutionary computation as a way to connect exploratory computational simulation with hypothesis
Foam flooding reservoir simulation algorithm improvement and application
NASA Astrophysics Data System (ADS)
Wang, Yining; Wu, Xiaodong; Wang, Ruihe; Lai, Fengpeng; Zhang, Hanhan
2014-05-01
As one of the important enhanced oil recovery (EOR) technologies, foam flooding is being used more and more widely in oilfield development. In order to describe and predict foam flooding, researchers at home and abroad have established a number of mathematical models of foam flooding (mechanistic, empirical and semi-empirical models). Empirical models require less data and are convenient to apply, but their accuracy is limited. The aggregate equilibrium model can describe foam generation, bursting and coalescence through mechanistic study, but it is very difficult to characterize accurately. This study considers the effects of critical water saturation, critical foaming-agent concentration and critical oil saturation on the sealing ability of foam, as well as the effect of oil saturation on the resistance factor used to obtain the gas-phase relative permeability; the results are calibrated against laboratory tests, so the accuracy is higher. Both conceptual reservoir-development simulations and practical field application show that the calculation is more accurate.
A sweep algorithm for massively parallel simulation of circuit-switched networks
NASA Technical Reports Server (NTRS)
Gaujal, Bruno; Greenberg, Albert G.; Nicol, David M.
1992-01-01
A new massively parallel algorithm is presented for simulating large asymmetric circuit-switched networks, controlled by a randomized-routing policy that includes trunk-reservation. A single instruction multiple data (SIMD) implementation is described, and corresponding experiments on a 16384 processor MasPar parallel computer are reported. A multiple instruction multiple data (MIMD) implementation is also described, and corresponding experiments on an Intel iPSC/860 parallel computer, using 16 processors, are reported. By exploiting parallelism, our algorithm increases the possible execution rate of such complex simulations by as much as an order of magnitude.
Cao, Yang; Gillespie, Dan; Petzold, Linda
2005-07-01
In this paper, we introduce a multiscale stochastic simulation algorithm (MSSA) which makes use of Gillespie's stochastic simulation algorithm (SSA) together with a new stochastic formulation of the partial equilibrium assumption (PEA). This method is much more efficient than SSA alone. It works even with a very small population of fast species. Implementation details are discussed, and an application to the modeling of the heat shock response of E. coli is presented which demonstrates the excellent efficiency and accuracy obtained with the new method.
The Moment Condensed History Algorithm for Monte Carlo Electron Transport Simulations
Tolar, D R; Larsen, E W
2001-02-27
We introduce a new Condensed History algorithm for the Monte Carlo simulation of electron transport. To obtain more accurate simulations, the new algorithm preserves the mean position and the variance in the mean position exactly for electrons that have traveled a given path length and are traveling in a given direction. This is accomplished by deriving the zeroth-, first-, and second-order spatial moments of the Spencer-Lewis equation and employing this information directly in the Condensed History process. Numerical calculations demonstrate the advantages of our method over standard Condensed History methods.
An adaptive multi-level simulation algorithm for stochastic biological systems
NASA Astrophysics Data System (ADS)
Lester, C.; Yates, C. A.; Giles, M. B.; Baker, R. E.
2015-01-01
Discrete-state, continuous-time Markov models are widely used in the modeling of biochemical reaction networks. Their complexity often precludes analytic solution, and we rely on stochastic simulation algorithms (SSA) to estimate system statistics. The Gillespie algorithm is exact, but computationally costly as it simulates every single reaction. As such, approximate stochastic simulation algorithms such as the tau-leap algorithm are often used. Potentially computationally more efficient, the system statistics generated suffer from significant bias unless tau is relatively small, in which case the computational time can be comparable to that of the Gillespie algorithm. The multi-level method [Anderson and Higham, "Multi-level Monte Carlo for continuous time Markov chains, with applications in biochemical kinetics," SIAM Multiscale Model. Simul. 10(1), 146-179 (2012)] tackles this problem. A base estimator is computed using many (cheap) sample paths at low accuracy. The bias inherent in this estimator is then reduced using a number of corrections. Each correction term is estimated using a collection of paired sample paths where one path of each pair is generated at a higher accuracy compared to the other (and so more expensive). By sharing random variables between these paired paths, the variance of each correction estimator can be reduced. This renders the multi-level method very efficient as only a relatively small number of paired paths are required to calculate each correction term. In the original multi-level method, each sample path is simulated using the tau-leap algorithm with a fixed value of τ. This approach can result in poor performance when the reaction activity of a system changes substantially over the timescale of interest. By introducing a novel adaptive time-stepping approach where τ is chosen according to the stochastic behaviour of each sample path, we extend the applicability of the multi-level method to such cases. We demonstrate the
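A minimal sketch of a single tau-leap step, the building block of the multi-level estimator discussed above: each channel fires a Poisson(a_j τ) number of times and the state advances through the stoichiometry matrix. The birth-death system is illustrative, and the negativity guards of production codes are omitted.

```python
import numpy as np

# One tau-leap step: each channel fires Poisson(a_j * tau) times and the
# state advances through the stoichiometry matrix.
rng = np.random.default_rng(0)

def tau_leap_step(x, tau, propensities, stoich):
    k = rng.poisson(propensities(x) * tau)   # firing counts per channel
    return x + stoich.T @ k

stoich = np.array([[1], [-1]])               # birth: +1, death: -1
prop = lambda x: np.array([5.0, 0.1 * x[0]])
x = np.array([10])
for _ in range(100):
    x = tau_leap_step(x, 0.1, prop, stoich)
print(x)                                     # fluctuates near 5.0 / 0.1 = 50
```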
Quantum algorithms for spin models and simulable gate sets for quantum computation
NASA Astrophysics Data System (ADS)
van den Nest, M.; Dür, W.; Raussendorf, R.; Briegel, H. J.
2009-11-01
We present simple mappings between classical lattice models and quantum circuits, which provide a systematic formalism to obtain quantum algorithms to approximate partition functions of lattice models in certain complex-parameter regimes. For example, we present an efficient quantum algorithm for the six-vertex model as well as a two-dimensional Ising-type model. We show that classically simulating these (complex-parameter) spin models is as hard as simulating universal quantum computation, i.e., BQP complete (BQP denotes bounded-error quantum polynomial time). Furthermore, our mappings provide a framework to obtain efficiently simulable quantum gate sets from exactly solvable classical models. For example, we show that the simulability of Valiant's match gates can be recovered by using the solvability of the free-fermion eight-vertex model.
NASA Astrophysics Data System (ADS)
Jones, G.; Gée, Ch.; Truchetet, F.
2007-01-01
In the context of precision agriculture, we present a robust and automatic method, based on simulated images, for evaluating the efficiency of crop/weed discrimination algorithms in terms of the inter-row weed infestation rate. Two steps are required to simulate these images: 1) modeling of a crop field from the spatial distribution of plants (crop and weed); 2) projection of the created field through an optical system to simulate photographing. An application is then proposed investigating the accuracy and robustness of a crop/weed discrimination algorithm combining line detection (Hough transform) and plant discrimination (crop and weeds). The accuracy of the weed infestation rate estimate for each image is calculated by direct comparison to the initial weed infestation rate of the simulated images. It reveals a performance better than 85%.
Comparative Study of Algorithms for the Numerical Simulation of Lattice QCD
Luz, Fernando H. P.; Mendes, Tereza
2010-11-12
Large-scale numerical simulations are the prime method for a nonperturbative study of QCD from first principles. Although the lattice simulation of the pure-gauge (or quenched-QCD) case may be performed very efficiently on parallel machines, there are several additional difficulties in the simulation of the full-QCD case, i.e. when dynamical quark effects are taken into account. We discuss the main aspects of full-QCD simulations, describing the most common algorithms. We present a comparative analysis of performance for two versions of the hybrid Monte Carlo method (the so-called R and RHMC algorithms), as provided in the MILC software package. We consider two degenerate flavors of light quarks in the staggered formulation, having in mind the case of finite-temperature QCD.
NASA Technical Reports Server (NTRS)
Dagum, Leonardo
1989-01-01
The data parallel implementation of a particle simulation for hypersonic rarefied flow described by Dagum associates a single parallel data element with each particle in the simulation. The simulated space is divided into discrete regions called cells containing a variable and constantly changing number of particles. The implementation requires a global sort of the parallel data elements so as to arrange them in an order that allows immediate access to the information associated with cells in the simulation. Described here is a very fast algorithm for performing the necessary ranking of the parallel data elements. The performance of the new algorithm is compared with that of the microcoded instruction for ranking on the Connection Machine.
NASA Astrophysics Data System (ADS)
Panzarino, Jason F.; Rupert, Timothy J.
2014-03-01
Atomistic simulations have become a powerful tool in materials research due to the extremely fine spatial and temporal resolution provided by such techniques. To understand the fundamental principles that govern material behavior at the atomic scale and to connect directly to experimental work, it is necessary to quantify the microstructure of materials simulated with atomistics. Specifically, quantitative tools for identifying crystallites, their crystallographic orientation, and overall sample texture do not currently exist. Here, we develop a post-processing algorithm capable of characterizing such features while also documenting their evolution during a simulation. In addition, the data are presented in a way that parallels the visualization methods used in traditional experimental techniques. The utility of this algorithm is illustrated by analyzing several types of simulation cells that are commonly found in the atomistic modeling literature, but it could also be applied to a variety of other atomistic studies that require precise identification and tracking of microstructure.
Chen, Zaigao; Wang, Jianguo; Wang, Yue; Qiao, Hailiang; Zhang, Dianhui; Guo, Weijie
2013-11-15
An optimal design method for high-power microwave sources using particle simulation and parallel genetic algorithms is presented in this paper. The output power of the high-power microwave device, simulated with the fully electromagnetic particle simulation code UNIPIC, is taken as the fitness function, and a float-encoded genetic algorithm is used to optimize the device. Using this method, we encode the heights of the non-uniform slow-wave structure in a relativistic backward wave oscillator (RBWO) and optimize the parameters on massively parallel processors. Simulation results demonstrate that the optimal parameters of the non-uniform slow-wave structure in the RBWO can be obtained, and the output microwave power increases by 52.6% after the device is optimized.
Scheduling Earth Observing Satellites with Evolutionary Algorithms
NASA Technical Reports Server (NTRS)
Globus, Al; Crawford, James; Lohn, Jason; Pryor, Anna
2003-01-01
We hypothesize that evolutionary algorithms can effectively schedule coordinated fleets of Earth observing satellites. The constraints are complex and the bottlenecks are not well understood, a condition where evolutionary algorithms are often effective. This is, in part, because evolutionary algorithms require only that one can represent solutions, modify solutions, and evaluate solution fitness. To test the hypothesis we have developed a representative set of problems, produced optimization software (in Java) to solve them, and run experiments comparing techniques. This paper presents initial results of a comparison of several evolutionary and other optimization techniques; namely the genetic algorithm, simulated annealing, squeaky wheel optimization, and stochastic hill climbing. We also compare separate satellite vs. integrated scheduling of a two satellite constellation. While the results are not definitive, tests to date suggest that simulated annealing is the best search technique and integrated scheduling is superior.
Kinetic simulation of fiber amplifier based on parallelizable and bidirectional algorithm
NASA Astrophysics Data System (ADS)
Chen, Haihuan; Yang, Huanbi; Wu, Wenhan
2015-10-01
Simulating light waves that propagate in opposite directions in a fiber requires handling extremely large volumes of data when sequential, unidirectional methods are employed, in which the simulation runs in a coordinate system that moves along with the light waves. Therefore, an alternative simulation algorithm should be used when calculating counter-propagating light waves. The parallelizable and bidirectional (PB) algorithm matches the light waves in the time domain instead of the space domain, requires no iteration, and permits efficient parallelization on multiple processors. The PB method has been proposed to calculate the propagation of a dispersing Gaussian pulse and a bit stream in fibers. However, the PB method also has clear advantages when simulating pulses in fiber laser amplifiers, which has not yet been investigated in detail. In this paper, we simulate pulses in a rare-earth-ion-doped fiber amplifier. The influence of pump power, signal power, repetition rate, pulse width, and fiber length on the amplifier's output average power, peak power, pulse energy, and pulse shape is investigated. The results indicate that the PB method is effective when simulating high-power amplification of pulses in a fiber amplifier. Furthermore, nonlinear effects can be added to the simulation conveniently. The work in this paper provides a more economic and efficient method to simulate power amplification in fiber lasers.
A new second-order integration algorithm for simulating mechanical dynamic systems
NASA Technical Reports Server (NTRS)
Howe, R. M.
1989-01-01
A new integration algorithm which has the simplicity of Euler integration but exhibits second-order accuracy is described. In fixed-step numerical integration of differential equations for mechanical dynamic systems the method represents displacement and acceleration variables at integer step times and velocity variables at half-integer step times. Asymptotic accuracy of the algorithm is twice that of trapezoidal integration and ten times that of second-order Adams-Bashforth integration. The algorithm is also compatible with real-time inputs when used for a real-time simulation. It can be used to produce simulation outputs at double the integration frame rate, i.e., at both half-integer and integer frame times, even though it requires only one evaluation of state-variable derivatives per integration step. The new algorithm is shown to be especially effective in the simulation of lightly-damped structural modes. Both time-domain and frequency-domain accuracy comparisons with traditional integration methods are presented. Stability of the new algorithm is also examined.
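The variable placement the abstract describes (positions and accelerations at integer steps, velocities at half-integer steps, one derivative evaluation per step) resembles a staggered leapfrog scheme. The sketch below shows that pattern on a toy undamped oscillator; it illustrates the general idea under those assumptions, not Howe's exact algorithm with its real-time input handling.

```python
import numpy as np

def staggered_integrate(accel, x0, v0, dt, n_steps):
    # Positions (and accelerations) live at integer steps, velocities at
    # half-integer steps; one acceleration evaluation per step.
    x = np.empty(n_steps + 1)
    x[0] = x0
    v = v0 + 0.5 * dt * accel(x0)      # offset velocity to t = dt/2
    for n in range(n_steps):
        x[n + 1] = x[n] + dt * v       # integer-step position update
        v += dt * accel(x[n + 1])      # half-step velocity update
    return x

# Toy test: an undamped oscillator integrated over one period
omega = 2.0 * np.pi
x = staggered_integrate(lambda q: -omega**2 * q, x0=1.0, v0=0.0,
                        dt=1e-3, n_steps=1000)
print(x[-1])   # close to cos(omega * 1.0) = 1.0, with second-order error
```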
Hierarchical tree algorithm for collisional N-body simulations on GRAPE
NASA Astrophysics Data System (ADS)
Fukushige, Toshiyuki; Kawai, Atsushi
2016-06-01
We present an implementation of the hierarchical tree algorithm on the individual timestep algorithm (the Hermite scheme) for collisional N-body simulations, running on the GRAPE-9 system, a special-purpose hardware accelerator for gravitational many-body simulations. Such a combination of the tree algorithm and the individual timestep algorithm was not easy on the previous GRAPE system, mainly because its memory addressing scheme was limited to sequential access to a full set of particle data. The present GRAPE-9 system has an indirect memory addressing unit and a particle memory large enough to store all the particle data and also the tree node data. The indirect memory addressing unit stores interaction lists for the tree algorithm, which are constructed on the host computer, and, according to the interaction lists, the force pipelines calculate only the interactions necessary. In our implementation, the interaction calculations are significantly reduced compared to the direct N² summation in the original Hermite scheme. For example, we achieve a speedup of about a factor of 30 (equivalent to about 17 teraflops) over the Hermite scheme for a simulation of an N = 10⁶ system, using hardware with a peak speed of 0.6 teraflops for the Hermite scheme.
Navier-Stokes simulation of wind-tunnel flow using LU-ADI factorization algorithm
NASA Technical Reports Server (NTRS)
Obayashi, Shigeru; Fujii, Kozo; Gavali, Sharad
1988-01-01
The three-dimensional Navier-Stokes solution code using the LU-ADI factorization algorithm was employed to simulate the workshop test cases of transonic flow past a wing model in a wind tunnel and in free air. The effect of the tunnel walls is well demonstrated by the present simulations. An Amdahl 1200 supercomputer with 128 Mbytes of main memory was used for these computations.
Simulation of a navigator algorithm for a low-cost GPS receiver
NASA Technical Reports Server (NTRS)
Hodge, W. F.
1980-01-01
The analytical structure of an existing navigator algorithm for a low-cost Global Positioning System receiver is described in detail to facilitate its implementation on in-house digital computers and real-time simulators. The material presented includes a simulation of GPS pseudorange measurements, based on a two-body representation of the NAVSTAR spacecraft orbits, and a four-component model of the receiver bias errors. A simpler test for loss of pseudorange measurements due to spacecraft shielding is also noted.
Sensitivity of CO2 Simulation in a GCM to the Convective Transport Algorithms
NASA Technical Reports Server (NTRS)
Zhu, Z.; Pawson, S.; Collatz, G. J.; Gregg, W. W.; Kawa, S. R.; Baker, D.; Ott, L.
2014-01-01
Convection plays an important role in the transport of heat, moisture and trace gases. In this study, we simulated CO2 concentrations with an atmospheric general circulation model (GCM). Three different convective transport algorithms were used. One is a modified Arakawa-Shubert scheme that was native to the GCM; two others used in two off-line chemical transport models (CTMs) were added to the GCM here for comparison purposes. Advanced CO2 surface fluxes were used for the simulations. The results were compared to a large quantity of CO2 observation data. We find that the simulation results are sensitive to the convective transport algorithms. Overall, the three simulations are quite realistic and similar to each other in the remote marine regions, but are significantly different in some land regions with strong fluxes, such as the Amazon and Siberia, during the convection seasons. Large biases against CO2 measurements are found in these regions in the control run, which uses the original GCM. The simulation with the simple diffusive algorithm is better. The difference between the two simulations is related to their very different convective transport speeds.
A piloted simulator evaluation of a ground-based 4D descent advisor algorithm
NASA Technical Reports Server (NTRS)
Green, Steven M.; Davis, Thomas J.; Erzberger, Heinz
1987-01-01
A ground-based, four-dimensional (4D) descent-advisor algorithm is under development at NASA Ames Research Center. The algorithm combines detailed aerodynamic, propulsive, and atmospheric models with an efficient numerical integration scheme to generate 4D descent advisories. This paper investigates the ability of the 4D descent advisor algorithm to provide adequate control of arrival time for aircraft not equipped with on-board 4D guidance systems. A piloted simulation was conducted to determine the precision with which the descent advisor could predict the 4D trajectories of typical straight-in descents flown by airline pilots under different wind conditions. The effects of errors in the estimation of wind and initial aircraft weight were also studied. A description of the descent advisor as well as the results of the simulation studies are presented.
A piloted simulator evaluation of a ground-based 4-D descent advisor algorithm
NASA Technical Reports Server (NTRS)
Davis, Thomas J.; Green, Steven M.; Erzberger, Heinz
1990-01-01
A ground-based, four-dimensional (4D) descent-advisor algorithm is under development at NASA-Ames. The algorithm combines detailed aerodynamic, propulsive, and atmospheric models with an efficient numerical integration scheme to generate 4D descent advisories. The ability of the 4D descent advisor algorithm to provide adequate control of arrival time for aircraft not equipped with on-board 4D guidance systems is investigated. A piloted simulation was conducted to determine the precision with which the descent advisor could predict the 4D trajectories of typical straight-in descents flown by airline pilots under different wind conditions. The effects of errors in the estimation of wind and initial aircraft weight were also studied. A description of the descent advisor as well as the results of the simulation studies are presented.
A novel algorithm for non-bonded-list updating in molecular simulations.
Maximova, Tatiana; Keasar, Chen
2006-06-01
Simulations of molecular systems typically handle interactions within non-bonded pairs. Generating and updating a list of these pairs can be the most time-consuming part of energy calculations for large systems. Thus, efficient non-bonded list processing can speed up the energy calculations significantly. While the asymptotic complexity of current algorithms (namely O(N), where N is the number of particles) is probably the lowest possible, a wide space for optimization is still left. This article offers a heuristic extension to previously suggested grid-based algorithms. We show that, when the average particle movements are slow, simulation time can be reduced considerably. The proposed algorithm has been implemented in the DistanceMatrix class of the molecular modeling package MESHI. MESHI is freely available at
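For readers unfamiliar with the grid-based baseline this heuristic extends, the sketch below builds a non-bonded pair list with a cell (grid) decomposition: bin particles into cells of edge at least the cutoff, then test distances only within the same or adjacent cells. This is a plain serial illustration with hypothetical names, not MESHI's DistanceMatrix implementation, and it ignores periodic boundaries.

```python
import numpy as np
from collections import defaultdict

def build_pair_list(pos, cutoff):
    # Bin particles into cubic cells of edge >= cutoff, then test distances
    # only within the same or adjacent cells: O(N) work for bounded density.
    cell_of = np.floor(pos / cutoff).astype(int)
    cells = defaultdict(list)
    for i, c in enumerate(map(tuple, cell_of)):
        cells[c].append(i)
    pairs = []
    for (cx, cy, cz), members in cells.items():
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dz in (-1, 0, 1):
                    for j in cells.get((cx + dx, cy + dy, cz + dz), ()):
                        for i in members:
                            if i < j and np.linalg.norm(pos[i] - pos[j]) < cutoff:
                                pairs.append((i, j))   # each pair counted once
    return pairs

rng = np.random.default_rng(1)
pos = rng.uniform(0.0, 10.0, (200, 3))   # 200 particles in a 10x10x10 box
print(len(build_pair_list(pos, cutoff=2.5)))
```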
R-leaping: Accelerating the stochastic simulation algorithm by reaction leaps
NASA Astrophysics Data System (ADS)
Auger, Anne; Chatelain, Philippe; Koumoutsakos, Petros
2006-08-01
A novel algorithm is proposed for the acceleration of the exact stochastic simulation algorithm by a predefined number of reaction firings (R-leaping) that may occur across several reaction channels. In the present approach, the numbers of reaction firings follow correlated binomial distributions, and the sampling procedure is independent of any permutation of the reaction channels. This enables the algorithm to efficiently handle large systems with disparate rates, providing substantial computational savings in certain cases. Several mechanisms for controlling the accuracy and the appearance of negative species are described. The advantages and drawbacks of R-leaping are assessed by simulations on a number of benchmark problems and the results are discussed in comparison with established methods.
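For context, the exact baseline that R-leaping accelerates is the stochastic simulation algorithm; a minimal direct-method sketch follows, applied to a toy two-reaction isomerization system (an illustrative example, not from the paper). R-leaping itself replaces the one-reaction-at-a-time loop with binomially sampled batches of firings.

```python
import numpy as np

def ssa_direct(x0, stoich, propensity, t_end, seed=0):
    """Exact stochastic simulation (Gillespie direct method): sample the time
    to the next reaction from an exponential, then pick which channel fires."""
    rng = np.random.default_rng(seed)
    t, x, traj = 0.0, np.array(x0, float), [(0.0, list(x0))]
    while t < t_end:
        a = propensity(x)              # propensity of each reaction channel
        a0 = a.sum()
        if a0 <= 0:
            break                      # no reaction can fire
        t += rng.exponential(1.0 / a0)           # time to next reaction
        mu = rng.choice(len(a), p=a / a0)        # which channel fires
        x += stoich[mu]
        traj.append((t, x.copy()))
    return traj

# Toy example: isomerization A <-> B with rates k1, k2
k1, k2 = 1.0, 0.5
stoich = np.array([[-1, 1], [1, -1]])
prop = lambda x: np.array([k1 * x[0], k2 * x[1]])
print(ssa_direct([100, 0], stoich, prop, t_end=5.0)[-1])
```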
Simulation of biochemical reactions with time-dependent rates by the rejection-based algorithm
Thanh, Vo Hong; Priami, Corrado
2015-08-07
We address the problem of simulating biochemical reaction networks with time-dependent rates and propose a new algorithm based on our rejection-based stochastic simulation algorithm (RSSA) [Thanh et al., J. Chem. Phys. 141(13), 134116 (2014)]. The computation for selecting next reaction firings by our time-dependent RSSA (tRSSA) is computationally efficient. Furthermore, the generated trajectory is exact by exploiting the rejection-based mechanism. We benchmark tRSSA on different biological systems with varying forms of reaction rates to demonstrate its applicability and efficiency. We reveal that for nontrivial cases, the selection of reaction firings in existing algorithms introduces approximations because the integration of reaction rates is very computationally demanding and simplifying assumptions are introduced. The selection of the next reaction firing by our approach is easier while preserving the exactness.
Enhancing Plasma Wakefield and E-cloud Simulation Performance Using a Pipelining Algorithm
NASA Astrophysics Data System (ADS)
Feng, B.; Huang, C.; Decyk, V.; Mori, W. B.; Katsouleas, T.; Muggli, P.
2006-11-01
Modeling long timescale propagation of beams in plasma wakefield accelerators at the energy frontier and in electron clouds in circular accelerators such as CERN-LHC requires faster and more efficient simulation codes. Simply increasing the number of processors does not scale beyond one-fifth of the number of cells in the decomposition direction. A pipelining algorithm applied on the fully parallelized code QuickPIC is suggested to overcome this limit. The pipelining algorithm uses many groups of processors and optimizes the job allocation on the processors in parallel computing. With the new algorithm, it is possible to use on the order of 10² groups of processors, expanding the scale and speed of simulations with QuickPIC by a similar factor.
Enhancing plasma wakefield and e-cloud simulation performance using a pipelining algorithm
NASA Astrophysics Data System (ADS)
Feng, Bing; Katsouleas, Tom; Huang, Chengkun; Decyk, Viktor; Mori, Warren B.
2006-10-01
Modeling long timescale propagation of beams in plasma wakefield accelerators at the energy frontier and in electron clouds in circular accelerators such as CERN-LHC requires a faster and more efficient simulation code. Simply increasing the number of processors does not scale beyond one-fifth of the number of cells in the decomposition direction. A pipelining algorithm applied to the fully parallel code QuickPIC is suggested to overcome this limit. The pipelining algorithm uses many groups of processors and optimizes the job allocation on the processors in parallel computing. With the new algorithm, it is possible to use on the order of 100 groups of processors, expanding the scale and speed of simulations with QuickPIC by a similar factor.
Simulating the time-dependent Schrödinger equation with a quantum lattice-gas algorithm
NASA Astrophysics Data System (ADS)
Prezkuta, Zachary; Coffey, Mark
2007-03-01
Quantum computing algorithms promise remarkable improvements in speed or memory for certain applications. Currently, the Type II (or hybrid) quantum computer is the most feasible to build. This consists of a large number of small Type I (pure) quantum computers that compute with quantum logic, but communicate with nearest neighbors in a classical way. The arrangement thus formed is suitable for computations that execute a quantum lattice gas algorithm (QLGA). We report QLGA simulations for both the linear and nonlinear time-dependent Schrödinger equation. These simulations demonstrate the stable, efficient, and at least second-order convergent properties of the algorithm. The simulation capability provides a computational tool for applications in nonlinear optics, superconducting and superfluid materials, Bose-Einstein condensates, and elsewhere.
Efficient parallel algorithm for statistical ion track simulations in crystalline materials
NASA Astrophysics Data System (ADS)
Jeon, Byoungseon; Grønbech-Jensen, Niels
2009-02-01
We present an efficient parallel algorithm for statistical Molecular Dynamics simulations of ion tracks in solids. The method is based on the Rare Event Enhanced Domain following Molecular Dynamics (REED-MD) algorithm, which has been successfully applied to studies of, e.g., ion implantation into crystalline semiconductor wafers. We discuss strategies for parallelizing the method, and we settle on a host-client polling scheme in which multiple asynchronous client processors continuously feed results to the host, which, in turn, distributes the resulting feedback information to the clients. This real-time feedback consists of, e.g., cumulative damage information or statistics updates necessary for the cloning in the rare-event algorithm. We finally demonstrate the algorithm for radiation effects in a nuclear oxide fuel, and we show that the balanced parallel approach achieves high parallel efficiency in multiple-processor configurations.
SIMULATION OF AEROSOL DYNAMICS: A COMPARATIVE REVIEW OF ALGORITHMS USED IN AIR QUALITY MODELS
A comparative review of algorithms currently used in air quality models to simulate aerosol dynamics is presented. This review addresses coagulation, condensational growth, nucleation, and gas/particle mass transfer. Two major approaches are used in air quality models to repres...
A three-dimensional spectral algorithm for simulations of transition and turbulence
NASA Technical Reports Server (NTRS)
Zang, T. A.; Hussaini, M. Y.
1985-01-01
A spectral algorithm for simulating three-dimensional, incompressible, parallel shear flows is described. It applies to the channel, to the parallel boundary layer, and to other shear flows with one wall-bounded and two periodic directions. Representative applications to the channel and to the heated boundary layer are presented.
Evaluation of effective-stress-function algorithm for nuclear fuel simulation
Kim, H. C.; Yang, Y. S.; Koo, Y. H.
2013-07-01
In a pressurized water reactor (PWR), the mechanical integrity of nuclear fuel is the most critical issue, as the fuel is an important barrier against fission products being released into the environment. The integrity of the zirconium cladding that surrounds the uranium oxide can be threatened during off-normal operation owing to pellet-cladding mechanical interaction (PCMI). To analyze fuel and cladding behavior during off-normal operation, the fuel performance code should perform inelastic analysis in two- or three-dimensional calculations. In this paper, the effective-stress-function (ESF) algorithm, based on a two-dimensional FE module, has been implemented to simulate the inelastic behavior of the cladding with stability and accuracy. The ESF algorithm solves the governing equations of the inelastic constitutive behavior by calculating the zero of the appropriate effective-stress function. To verify the accuracy of the ESF algorithm for inelastic analysis, a code-to-code benchmark was performed using the commercial FE code ANSYS 13.0. To demonstrate the stability and convergence of the implemented algorithm, the number of iterations in the ESF algorithm was compared with that of a sequential algorithm for an inelastic problem. The evaluation results demonstrate that the implemented ESF algorithm improves the efficiency of the computation without loss of accuracy for inelastic analysis. (authors)
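As a one-dimensional caricature of the effective-stress-function idea, finding the zero of a scalar function to enforce the inelastic constitutive law, the sketch below performs a radial-return step with linear isotropic hardening. Every quantity and the hardening law are assumptions made for illustration; the paper's ESF algorithm handles the full two-dimensional FE case.

```python
from scipy.optimize import brentq

def radial_return(sigma_trial, G, sigma_y0, H, eps_p):
    """Return-mapping sketch: find the plastic multiplier as the zero of a
    scalar effective-stress function (linear isotropic hardening assumed)."""
    f = lambda dl: sigma_trial - 3.0 * G * dl - (sigma_y0 + H * (eps_p + dl))
    if f(0.0) <= 0.0:
        return 0.0                                  # elastic: trial state admissible
    return brentq(f, 0.0, sigma_trial / (3.0 * G))  # bracketed zero of the ESF

# Example (MPa-like illustrative values): an overstressed trial state is
# relaxed back onto the yield surface by the computed plastic increment.
dl = radial_return(sigma_trial=500.0, G=80e3, sigma_y0=300.0, H=1e3, eps_p=0.0)
print(dl)
```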
Quantum Annealing at Google: Recent Learnings and Next Steps
NASA Astrophysics Data System (ADS)
Neven, Hartmut
Recently we studied optimization problems with rugged energy landscapes that featured tall and narrow energy barriers separating energy minima. We found that for a crafted problem of this kind, called the weak-strong cluster glass, the D-Wave 2X processor achieves a significant advantage in runtime scaling relative to Simulated Annealing (SA). For instances with 945 variables, this results in a time-to-99%-success-probability 10⁹ times shorter than SA running on a single core. When comparing to the Quantum Monte Carlo (QMC) algorithm we only observe a pre-factor advantage, but the pre-factor is large, about 10⁶ for an implementation on a single core. We should note that we expect QMC to scale like physical quantum annealing only for problems for which the tunneling transitions can be described by a dominant purely imaginary instanton. We expect these findings to carry over to other problems with similar energy landscapes. A class of practical interest are k-th order binary optimization problems. We studied 4-spin problems using numerical methods and found again that simulated quantum annealing has better scaling than SA. This leaves us with a final step to achieve a wall clock speedup of practical relevance. We need to develop an annealing architecture that supports embedding of k-th order binary optimization in a manner that preserves the runtime advantage seen prior to embedding.
Quantum algorithm for simulating the dynamics of an open quantum system
Wang Hefeng; Ashhab, S.; Nori, Franco
2011-06-15
In the study of open quantum systems, one typically obtains the decoherence dynamics by solving a master equation. The master equation is derived using knowledge of some basic properties of the system, the environment, and their interaction: One basically needs to know the operators through which the system couples to the environment and the spectral density of the environment. For a large system, it could become prohibitively difficult to even write down the appropriate master equation, let alone solve it on a classical computer. In this paper, we present a quantum algorithm for simulating the dynamics of an open quantum system. On a quantum computer, the environment can be simulated using ancilla qubits with properly chosen single-qubit frequencies and with properly designed coupling to the system qubits. The parameters used in the simulation are easily derived from the parameters of the system + environment Hamiltonian. The algorithm is designed to simulate Markovian dynamics, but it can also be used to simulate non-Markovian dynamics provided that this dynamics can be obtained by embedding the system of interest into a larger system that obeys Markovian dynamics. We estimate the resource requirements for the algorithm. In particular, we show that for sufficiently slow decoherence a single ancilla qubit could be sufficient to represent the entire environment, in principle.
Algorithm for Building a Spectrum for NREL's One-Sun Multi-Source Simulator: Preprint
Moriarty, T.; Emery, K.; Jablonski, J.
2012-06-01
Historically, the tools used at NREL to compensate for the difference between a reference spectrum and a simulator spectrum have been well-matched reference cells and the application of a calculated spectral mismatch correction factor, M. This paper describes the algorithm for adjusting the spectrum of a 9-channel fiber-optic-based solar simulator with a uniform beam size of 9 cm square at 1-sun. The combination of this algorithm and the One-Sun Multi-Source Simulator (OSMSS) hardware reduces NREL's current vs. voltage measurement time for a typical three-junction device from man-days to man-minutes. These time savings may be significantly greater for devices with more junctions.
NASA Technical Reports Server (NTRS)
Emmitt, G. D.; Wood, S. A.; Morris, M.
1990-01-01
Lidar Atmospheric Wind Sounder (LAWS) Simulation Models (LSM) were developed to evaluate the potential impact of global wind observations on the basic understanding of the Earth's atmosphere and on the predictive skills of current forecast models (GCM and regional scale). Fully integrated top to bottom LAWS Simulation Models for global and regional scale simulations were developed. The algorithm development incorporated the effects of aerosols, water vapor, clouds, terrain, and atmospheric turbulence into the models. Other additions include a new satellite orbiter, signal processor, line of sight uncertainty model, new Multi-Paired Algorithm and wind error analysis code. An atmospheric wind field library containing control fields, meteorological fields, phenomena fields, and new European Center for Medium Range Weather Forecasting (ECMWF) data was also added. The LSM was used to address some key LAWS issues and trades such as accuracy and interpretation of LAWS information, data density, signal strength, cloud obscuration, and temporal data resolution.
A Monolithic Algorithm for High Reynolds Number Fluid-Structure Interaction Simulations
NASA Astrophysics Data System (ADS)
Lieberknecht, Erika; Sheldon, Jason; Pitt, Jonathan
2013-11-01
Simulations of fluid-structure interaction problems with high Reynolds number flows are typically approached with partitioned algorithms that leverage the robustness of traditional finite volume method based CFD techniques for flows of this nature. However, such partitioned algorithms are subject to many sub-iterations per simulation time-step, which substantially increases the computational cost when a tightly coupled solution is desired. To address this issue, we present a finite element method based monolithic algorithm for fluid-structure interaction problems with high Reynolds number flow. The use of a monolithic algorithm will potentially reduce the computational cost during each time-step, but requires that all of the governing equations be simultaneously cast in a single Arbitrary Lagrangian-Eulerian (ALE) frame of reference and subjected to the same discretization strategy. The formulation for the fluid solution is stabilized by implementing a Streamline-Upwind Petrov-Galerkin (SUPG) method, and a projection method for equal-order interpolation of all of the solution unknowns; numerical and programming details are discussed. Preliminary convergence studies and numerical investigations are presented to demonstrate the algorithm's robustness and performance. The authors acknowledge support for this project from the Applied Research Laboratory Eric Walker Graduate Fellowship Program.
NASA Astrophysics Data System (ADS)
Coleman, Shawn P.; Sichani, Mehrdad M.; Spearot, Douglas E.
2014-03-01
Electron and x-ray diffraction are well-established experimental methods used to explore the atomic scale structure of materials. In this work, a computational algorithm is developed to produce virtual electron and x-ray diffraction patterns directly from atomistic simulations. This algorithm advances beyond previous virtual diffraction methods by using a high-resolution mesh of reciprocal space that eliminates the need for a priori knowledge of the crystal structure being modeled or other assumptions concerning the diffraction conditions. At each point on the reciprocal space mesh, the diffraction intensity is computed via explicit computation of the structure factor equation. To construct virtual selected-area electron diffraction patterns, a hemispherical slice of the reciprocal lattice mesh lying on the surface of the Ewald sphere is isolated and viewed along a specified zone axis. X-ray diffraction line profiles are created by binning the intensity of each reciprocal lattice point by its associated scattering angle, effectively mimicking powder diffraction conditions. The virtual diffraction algorithm is sufficiently generic to be applied to atomistic simulations of any atomic species. In this article, the capability and versatility of the virtual diffraction algorithm are exhibited by presenting findings from atomistic simulations of <100> symmetric tilt Ni grain boundaries, nanocrystalline Cu models, and a heterogeneous interface formed between α-Al2O3 (0001) and γ-Al2O3 (111).
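The core computation at each mesh point is the kinematic structure factor. The sketch below evaluates it on an arbitrary set of reciprocal-space points for a toy simple-cubic crystal, assuming a single atomic species with a constant form factor (real implementations use angle- and species-dependent scattering factors); the lattice parameter and helper names are illustrative.

```python
import numpy as np

def diffraction_intensity(positions, k_points, f_atom=1.0):
    """Kinematic diffraction intensity I(k) = |sum_j f_j exp(2 pi i k.r_j)|^2
    evaluated on an arbitrary reciprocal-space mesh (constant form factor)."""
    phase = np.exp(2j * np.pi * positions @ k_points.T)   # (N_atoms, N_k)
    F = f_atom * phase.sum(axis=0)                        # structure factor
    return (F * F.conj()).real / len(positions)           # per-atom intensity

# Toy example: a small simple-cubic crystal probed along one reciprocal axis
a = 3.52   # lattice parameter in angstroms (illustrative value)
grid = np.stack(np.meshgrid(*[np.arange(5) * a] * 3), -1).reshape(-1, 3)
k = np.zeros((200, 3))
k[:, 0] = np.linspace(0.02, 0.40, 200)   # scan k_x in 1/angstrom, skipping k = 0
I = diffraction_intensity(grid, k)
print(k[np.argmax(I), 0], 1.0 / a)       # Bragg peak near k_x = 1/a = 0.284
```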
Comparison of Reconstruction and Control algorithms on the ESO end-to-end simulator OCTOPUS
NASA Astrophysics Data System (ADS)
Montilla, I.; Béchet, C.; Lelouarn, M.; Correia, C.; Tallon, M.; Reyes, M.; Thiébaut, É.
Extremely Large Telescopes are very challenging with regard to their Adaptive Optics requirements. Their diameters, the specifications demanded by the science for which they are being designed, and the planned use of Extreme Adaptive Optics systems imply a huge increase in the number of degrees of freedom in the deformable mirrors. It is necessary to study new reconstruction algorithms to implement the real-time control in Adaptive Optics at the required speed. We have studied the performance, applied to the case of the European ELT, of three different algorithms: the matrix-vector multiplication (MVM) algorithm, considered as a reference; the Fractal Iterative Method (FrIM); and the Fourier Transform Reconstructor (FTR). The algorithms have been tested on ESO's OCTOPUS software, which simulates the atmosphere, the deformable mirror, the sensor, and the closed-loop control. The MVM is the default reconstruction and control method implemented in OCTOPUS, but it scales as O(N²) operations per loop, so it is not considered a fast algorithm for wave-front reconstruction and control on an Extremely Large Telescope. The two other methods are the fast algorithms studied in the E-ELT Design Study. The performance, as well as the response in the presence of noise and under various atmospheric conditions, has been compared using a Single Conjugate Adaptive Optics configuration for a 42 m diameter ELT with a total of 5402 actuators. These comparisons, made on a common simulator, make it possible to weigh the pros and cons of the various methods, and give us a better understanding of the type of reconstruction algorithm that an ELT demands.
Fawley, William M.
2002-03-25
We discuss the underlying reasoning behind and the details of the numerical algorithm used in the GINGER free-electron laser (FEL) simulation code to load the initial shot noise microbunching on the electron beam. In particular, we point out that there are some additional subtleties that must be observed for multi-dimensional codes but are not necessary for one-dimensional formulations. Moreover, requiring that the higher harmonics of the microbunching also be properly initialized with the correct statistics leads to additional complexities. We present some numerical results, including the predicted incoherent, spontaneous emission, as tests of the shot noise algorithm's correctness.
DSMC moving-boundary algorithms for simulating MEMS geometries with opening and closing gaps.
Gallis, Michail A.; Rader, Daniel John; Torczynski, John Robert
2010-06-01
Moving-boundary algorithms for the Direct Simulation Monte Carlo (DSMC) method are investigated for a microbeam that moves toward and away from a parallel substrate. The simpler but analogous one-dimensional situation of a piston moving between two parallel walls is investigated using two moving-boundary algorithms. In the first, molecules are reflected rigorously from the moving piston by performing the reflections in the piston frame of reference. In the second, molecules are reflected approximately from the moving piston by moving the piston and subsequently moving all molecules and reflecting them from the moving piston at its new or old position.
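In one dimension, the rigorous treatment reduces to reflecting the molecule's normal velocity in the piston's frame of reference, which collapses to v' = 2*u_piston - v. A minimal sketch with illustrative values follows; it shows the frame-transformation idea, not the paper's DSMC implementation.

```python
def reflect_off_piston(v, u_piston):
    # Rigorous specular reflection from a moving wall: transform the normal
    # velocity into the piston frame, flip it, and transform back.
    g = v - u_piston          # molecular velocity in the piston frame
    return -g + u_piston      # net effect in 1-D: v' = 2*u_piston - v

# A molecule moving at -300 m/s hits a piston advancing at +50 m/s:
print(reflect_off_piston(v=-300.0, u_piston=50.0))   # -> 400.0 m/s
```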
The small-voxel tracking algorithm for simulating chemical reactions among diffusing molecules
Gillespie, Daniel T.; Gillespie, Carol A.; Seitaridou, Effrosyni
2014-12-21
Simulating the evolution of a chemically reacting system using the bimolecular propensity function, as is done by the stochastic simulation algorithm and its reaction-diffusion extension, entails making statistically inspired guesses as to where the reactant molecules are at any given time. Those guesses will be physically justified if the system is dilute and well-mixed in the reactant molecules. Otherwise, an accurate simulation will require the extra effort and expense of keeping track of the positions of the reactant molecules as the system evolves. One molecule-tracking algorithm that pays careful attention to the physics of molecular diffusion is the enhanced Green's function reaction dynamics (eGFRD) of Takahashi, Tănase-Nicola, and ten Wolde [Proc. Natl. Acad. Sci. U.S.A. 107, 2473 (2010)]. We introduce here a molecule-tracking algorithm that has the same theoretical underpinnings and strategic aims as eGFRD, but a different implementation procedure. Called the small-voxel tracking algorithm (SVTA), it combines the well known voxel-hopping method for simulating molecular diffusion with a novel procedure for rectifying the unphysical predictions of the diffusion equation on the small spatiotemporal scale of molecular collisions. Indications are that the SVTA might be more computationally efficient than eGFRD for the problematic class of non-dilute systems. A widely applicable, user-friendly software implementation of the SVTA has yet to be developed, but we exhibit some simple examples which show that the algorithm is computationally feasible and gives plausible results.
Algorithm for simulation of quantum many-body dynamics using dynamical coarse-graining
NASA Astrophysics Data System (ADS)
Khasin, M.; Kosloff, R.
2010-04-01
An algorithm for simulation of quantum many-body dynamics having su(2) spectrum-generating algebra is developed. The algorithm is based on the idea of dynamical coarse-graining. The original unitary dynamics of the target observables—the elements of the spectrum-generating algebra—is simulated by a surrogate open-system dynamics, which can be interpreted as weak measurement of the target observables, performed on the evolving system. The open-system state can be represented by a mixture of pure states, localized in the phase space. The localization reduces the scaling of the computational resources with the Hilbert-space dimension n by a factor n^{3/2}(ln n)^{-1} compared to conventional sparse-matrix methods. The guidelines for the choice of parameters for the simulation are presented and the scaling of the computational resources with the Hilbert-space dimension of the system is estimated. The algorithm is applied to the simulation of the dynamics of systems of 2×10⁴ and 2×10⁶ cold atoms in a double-well trap, described by the two-site Bose-Hubbard model.
Binomial tau-leap spatial stochastic simulation algorithm for applications in chemical kinetics.
Marquez-Lago, Tatiana T; Burrage, Kevin
2007-09-14
In cell biology, cell signaling pathway problems are often tackled with deterministic temporal models, well-mixed stochastic simulators, and/or hybrid methods. But, in fact, three-dimensional stochastic spatial modeling of reactions happening inside the cell is needed in order to fully understand these cell signaling pathways. This is because noise effects, low molecular concentrations, and spatial heterogeneity can all affect the cellular dynamics. However, there are ways in which important effects can be accounted for without going to the extent of using highly resolved spatial simulators (such as single-particle software), hence reducing the overall computation time significantly. We present a new coarse-grained, modified version of the next subvolume method that allows the user to consider both diffusion and reaction events over relatively long simulation time spans as compared with the original method and other commonly used fully stochastic computational methods. Benchmarking of the simulation algorithm was performed through comparison with the next subvolume method and well-mixed models (MATLAB), as well as stochastic particle reaction and transport simulations (CHEMCELL, Sandia National Laboratories). Additionally, we construct a model based on a set of chemical reactions in the epidermal growth factor receptor pathway. For this particular application and a bistable chemical system example, we analyze and outline the advantages of our presented binomial tau-leap spatial stochastic simulation algorithm, in terms of efficiency and accuracy, in scenarios of both molecular homogeneity and heterogeneity. PMID:17867731
NASA Astrophysics Data System (ADS)
Vijay Alagappan, A.; Narasimha Rao, K. V.; Krishna Kumar, R.
2015-02-01
Tyre models are a prerequisite for any vehicle dynamics simulation. Tyre models range from the simplest mathematical models that consider only the cornering stiffness to a complex set of formulae. Among all the steady-state tyre models that are in use today, the Magic Formula tyre model is unique and most popular. Though the Magic Formula tyre model is widely used, obtaining the model coefficients from either the experimental or the simulation data is not straightforward due to its nonlinear nature and the presence of a large number of coefficients. A common procedure used for this extraction is the least-squares minimisation that requires considerable experience for initial guesses. Various researchers have tried different algorithms, namely, gradient and Newton-based methods, differential evolution, artificial neural networks, etc. The issues involved in all these algorithms are setting bounds or constraints, sensitivity of the parameters, the features of the input data such as the number of points, noisy data, experimental procedure used such as slip angle sweep or tyre measurement (TIME) procedure, etc. The extracted Magic Formula coefficients are affected by these variants. This paper highlights the issues that are commonly encountered in obtaining these coefficients with different algorithms, namely, least-squares minimisation using trust region algorithms, Nelder-Mead simplex, pattern search, differential evolution, particle swarm optimisation, cuckoo search, etc. A key observation is that not all the algorithms give the same Magic Formula coefficients for a given data. The nature of the input data and the type of the algorithm decide the set of the Magic Formula tyre model coefficients.
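To make the extraction problem concrete, the sketch below fits the basic four-coefficient Magic Formula to synthetic noisy data with a bounded least-squares solver. The coefficient values, bounds, and initial guess are illustrative assumptions; as the paper stresses, different bounds and starting points can lead the solver to different coefficient sets.

```python
import numpy as np
from scipy.optimize import least_squares

def magic_formula(p, slip):
    """Basic Magic Formula: y = D sin(C arctan(Bx - E(Bx - arctan(Bx))))."""
    B, C, D, E = p
    bx = B * slip
    return D * np.sin(C * np.arctan(bx - E * (bx - np.arctan(bx))))

# Synthetic "measured" lateral-force data from known coefficients plus noise
rng = np.random.default_rng(2)
true_p = np.array([10.0, 1.9, 1.0, 0.97])
slip = np.linspace(-0.3, 0.3, 80)                 # slip angle in radians
data = magic_formula(true_p, slip) + rng.normal(0, 0.01, slip.size)

# Bounded least-squares extraction of the B, C, D, E coefficients
fit = least_squares(lambda p: magic_formula(p, slip) - data,
                    x0=[5.0, 1.5, 0.8, 0.5],
                    bounds=([0.1, 0.5, 0.1, -2.0], [30.0, 3.0, 2.0, 2.0]))
print(fit.x)   # close to true_p for this easy synthetic case
```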
New conformational search method using genetic algorithm and knot theory for proteins.
Sakae, Y; Hiroyasu, T; Miki, M; Okamoto, Y
2011-01-01
We have proposed parallel simulated annealing using genetic crossover as a powerful conformational search method for finding the global minimum-energy structures of protein systems. The simulated annealing using genetic crossover method, which incorporates the attractive features of simulated annealing and the genetic algorithm, is useful for finding a minimum potential-energy conformation of protein systems. However, when we perform simulations using this method, we often find obviously unnatural stable conformations, which have "knots" in the string of the amino-acid sequence. Therefore, we combined knot theory with our simulated annealing using genetic crossover method in order to exclude knotted conformations from the conformational search space. We applied this improved method to protein G, which has 56 amino acids. As a result, we could perform simulations that avoid knot conformations. PMID:21121049
A pencil beam algorithm for intensity modulated proton therapy derived from Monte Carlo simulations.
Soukup, Martin; Fippel, Matthias; Alber, Markus
2005-11-01
A pencil beam algorithm as a component of an optimization algorithm for intensity modulated proton therapy (IMPT) is presented. The pencil beam algorithm is tuned to the special accuracy requirements of IMPT, where in heterogeneous geometries both the position and distortion of the Bragg peak and the lateral scatter pose problems which are amplified by the spot weight optimization. Heterogeneity corrections are implemented by a multiple raytracing approach using fluence-weighted sub-spots. In order to derive nuclear interaction corrections, Monte Carlo simulations were performed. The contribution of long ranged products of nuclear interactions is taken into account by a fit to the Monte Carlo results. Energy-dependent stopping power ratios are also implemented. Scatter in optional beam line accessories such as range shifters or ripple filters is taken into account. The collimator can also be included, but without additional scattering. Finally, dose distributions are benchmarked against Monte Carlo simulations, showing 3%/1 mm agreement for simple heterogeneous phantoms. In the case of more complicated phantoms, principal shortcomings of pencil beam algorithms are evident. The influence of these effects on IMPT dose distributions is shown in clinical examples. PMID:16237243
Optimized simulations of Olami-Feder-Christensen systems using parallel algorithms
NASA Astrophysics Data System (ADS)
Dominguez, Rachele; Necaise, Rance; Montag, Eric
The sequential nature of the Olami-Feder-Christensen (OFC) model for earthquake simulations limits the benefits of parallel computing approaches because of the frequent communication required between processors. We developed a parallel version of the OFC algorithm for multi-core processors. Our data, even for relatively small system sizes and low numbers of processors, indicates that increasing the number of processors provides significantly faster simulations; producing more efficient results than previous attempts that used network-based Beowulf clusters. Our algorithm optimizes performance by exploiting the multi-core processor architecture, minimizing communication time in contrast to the networked Beowulf-cluster approaches. Our multi-core algorithm is the basis for a new algorithm using GPUs that will drastically increase the number of processors available. Previous studies incorporating realistic structural features of faults into OFC models have revealed spatial and temporal patterns observed in real earthquake systems. The computational advances presented here will allow for studying interacting networks of faults, rather than individual faults, further enhancing our understanding of the relationship between the earth's structure and the triggering process. Support for this project comes from the Chenery Research Fund, the Rashkind Family Endowment, the Walter Williams Craigie Teaching Endowment, and the Schapiro Undergraduate Research Fellowship.
NASA Astrophysics Data System (ADS)
Deng, Guobao; Zhu, Xiangping
2015-02-01
The development of decoding algorithms for two-dimensional cross-strip anode image readouts for applications in UV astronomy is described. We present results from a Monte Carlo simulation using the GEANT4 toolkit; the results show that when the cross-strip anode period is 0.5 mm and the electrode width is 0.4 mm, the spatial resolution is better than 5 μm and the temporal resolution of event detection can be as low as 100 ps. The influences of the cross-strip detector parameters, such as the anode period, the width of the anode fingers (electrodes), the width of the charge footprint at the anode (determined by the distance and the field between the MCP and the anode), the gain of the MCP, and the equivalent noise charge (ENC), are also discussed. The decoding algorithms and simulation results can be useful for the design and performance improvement of future photon-counting imaging detectors for UV astronomy.
An improved real-time endovascular guidewire position simulation using shortest path algorithm.
Qiu, Jianpeng; Qu, Zhiyi; Qiu, Haiquan; Zhang, Xiaomin
2016-09-01
In this study, we propose a new graph-theoretical method to simulate guidewire paths inside the carotid artery. The minimum-energy guidewire path can be obtained by applying a shortest-path algorithm, such as Dijkstra's algorithm for graphs, based on the principle of minimal total energy. Experiments on three phantoms were used for validation, revealing that the simulated and real guidewires overlap completely for the first and second phantoms. In addition, 95 % of the third phantom overlaps completely, and the remaining 5 % closely coincides. The results demonstrate that our method achieves 87 and 80 % improvements for the first and third phantoms under the same conditions, respectively. Furthermore, a 91 % improvement was obtained for the second phantom under the condition of reduced graph-construction complexity. PMID:26467345
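A minimal sketch of the graph-theoretical idea: discretize the lumen into nodes, weight each edge by an energy increment, and run Dijkstra's algorithm so the shortest path approximates the minimum-energy guidewire configuration. The toy graph and weights below are hypothetical, not the paper's vessel model.

```python
import heapq

def dijkstra(graph, src, dst):
    """Dijkstra's shortest-path search on a weighted directed graph."""
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue                     # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [], dst                 # walk predecessors back to the source
    while node != src:
        path.append(node)
        node = prev[node]
    return [src] + path[::-1], dist[dst]

# Toy lumen graph: edge weights stand in for bending-energy increments
g = {"A": [("B", 1.0), ("C", 2.5)], "B": [("C", 1.0), ("D", 3.0)], "C": [("D", 1.0)]}
print(dijkstra(g, "A", "D"))   # (['A', 'B', 'C', 'D'], 3.0)
```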
Hybrid Algorithms for Fuzzy Reverse Supply Chain Network Design
Che, Z. H.; Chiang, Tzu-An; Kuo, Y. C.
2014-01-01
In consideration of capacity constraints, fuzzy defect ratio, and fuzzy transport loss ratio, this paper attempts to establish an optimized decision model for production planning and distribution of a multiphase, multiproduct reverse supply chain, which addresses defects returned to original manufacturers, and in addition develops hybrid algorithms such as Particle Swarm Optimization-Genetic Algorithm (PSO-GA), Genetic Algorithm-Simulated Annealing (GA-SA), and Particle Swarm Optimization-Simulated Annealing (PSO-SA) for solving the optimized model. Through a case study of a multiphase, multiproduct reverse supply chain network, this paper demonstrates the suitability of the optimized decision model and the applicability of the algorithms. Finally, the hybrid algorithms showed excellent solving capability when compared with the original GA and PSO methods. PMID:24892057
A Fourier analysis for a fast simulation algorithm. [for switching converters
NASA Technical Reports Server (NTRS)
King, Roger J.
1988-01-01
This paper presents a derivation of compact expressions for the Fourier series analysis of the steady-state solution of a typical switching converter. The modeling procedure for the simulation and the steady-state solution is described, and some desirable traits for its matrix exponential subroutine are discussed. The Fourier analysis algorithm was tested on a phase-controlled parallel-loaded resonant converter, providing an experimental confirmation.
A Comprehensive Study of Three Delay Compensation Algorithms for Flight Simulators
NASA Technical Reports Server (NTRS)
Guo, Liwen; Cardullo, Frank M.; Houck, Jacob A.; Kelly, Lon C.; Wolters, Thomas E.
2005-01-01
This paper summarizes a comprehensive study of three predictors used for compensating the transport delay in a flight simulator: the McFarland, Adaptive, and State Space Predictors. The paper presents proof that the stochastic approximation algorithm can achieve the best compensation among all four adaptive predictors, and intensively investigates the relationship between the state space predictor's compensation quality and its reference model. Piloted simulation tests show that the adaptive predictor and the state space predictor can achieve better compensation of transport delay than the McFarland predictor.
NASA Astrophysics Data System (ADS)
Tichý, Vladimír; Hudec, René; Němcová, Šárka
2016-06-01
The algorithm presented is intended mainly for lobster eye optics. This type of optics (and some similar types) allows for a simplification of the classical ray-tracing procedure, which would otherwise require a great many rays to be simulated. The method presented simulates only a few rays; therefore it is extremely efficient. Moreover, to simplify the equations, a specific mathematical formalism is used. Only a few simple equations are used, so the program code can be simple as well. The paper also outlines how to apply the method to some other reflective optical systems.
NASA Astrophysics Data System (ADS)
Shein, Norman P.
A nonselective Rayleigh fading channel model using a time-variant complex multiplier z(t) is considered. Performing a Monte Carlo simulation of this channel requires samples of z(t) with appropriate correlation (fading power spectrum). For an important f⁻⁴ spectrum, there is a simple digital implementation that generates uniformly spaced samples. However, many communications systems have faded signals which appear only intermittently at the receiver. Nonuniformly spaced samples are better suited to a simulation of this situation. The author presents an algorithm for efficiently generating nonuniformly spaced correlated samples which have a specified f⁻⁴ power spectrum.
Comparison of optimization algorithms in intensity-modulated radiation therapy planning
NASA Astrophysics Data System (ADS)
Kendrick, Rachel
Intensity-modulated radiation therapy is used to conform the radiation dose more closely to the target while avoiding healthy tissue. Planning programs employ optimization methods to search for the best fluence of each photon beam, and therefore to create the best treatment plan. The Computational Environment for Radiotherapy Research (CERR), a program written in MATLAB, was used to examine some commonly used algorithms for one 5-beam plan. Algorithms include the genetic algorithm, quadratic programming, pattern search, constrained nonlinear optimization, simulated annealing, the optimization method used in Varian EclipseTM, and some hybrids of these. Quadratic programming, simulated annealing, and a quadratic/simulated annealing hybrid were also separately compared using different prescription doses. The results of each dose-volume histogram as well as the visual dose color wash were used to compare the plans. CERR's built-in quadratic programming provided the best overall plan, but avoidance of the organ-at-risk was rivaled by other programs. Hybrids of quadratic programming with some of these algorithms suggest the possibility of better planning programs, as shown by the improved quadratic/simulated annealing plan when compared to the simulated annealing algorithm alone. Further experimentation will be done to improve cost functions and computational time.
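For reference, a generic simulated-annealing loop with Metropolis acceptance over nonnegative fluence weights is sketched below on a toy quadratic dose objective. It is not CERR's optimizer; the dose-influence matrix, prescription, and all parameter values are random stand-in data chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

def anneal(cost, x0, step=0.05, t0=1.0, cooling=0.995, n_iter=20000):
    """Generic simulated annealing with the Metropolis acceptance criterion."""
    x, c = x0.copy(), cost(x0)
    best_x, best_c = x.copy(), c
    t = t0
    for _ in range(n_iter):
        cand = np.clip(x + rng.normal(0, step, x.shape), 0, None)  # fluences >= 0
        cc = cost(cand)
        if cc < c or rng.uniform() < np.exp((c - cc) / t):  # Metropolis test
            x, c = cand, cc
            if c < best_c:
                best_x, best_c = x.copy(), c
        t *= cooling                                        # geometric cooling
    return best_x, best_c

# Toy quadratic dose objective: D @ w should match a prescription d_presc
D = rng.uniform(0, 1, (30, 10))      # stand-in dose-influence matrix
d_presc = rng.uniform(1, 2, 30)
cost = lambda w: np.sum((D @ w - d_presc) ** 2)
w, c = anneal(cost, x0=np.ones(10))
print(c)
```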
NASA Astrophysics Data System (ADS)
Jokar, Ali; Godarzi, Ali Abbasi; Saber, Mohammad; Shafii, Mohammad Behshad
2016-01-01
In this paper, a novel approach is presented to simulate and optimize pulsating heat pipes (PHPs). The pulsating heat pipe setup used in this work was designed and constructed for this study. Given the lack of a general mathematical model for exact analysis of PHPs, a method based on nature-inspired algorithms has been applied for simulation and optimization. The simulator consists of a multilayer perceptron neural network, which is trained on experimental results obtained from our PHP setup. The results show that the complex behavior of PHPs can be successfully described by the nonlinear structure of this simulator. The input variables of the neural network are the input heat flux to the evaporator (q″), the filling ratio (FR), and the inclination angle (IA); its output is the thermal resistance of the PHP. Finally, based on the simulation results and considering the heat pipe's operating constraints, the optimum operating point of the system is obtained using a genetic algorithm (GA). The experimental results show that the optimum FR (38.25 %), input heat flux to the evaporator (39.93 W), and IA (55°) obtained from the GA are acceptable.
Synthetic line-of-sight algorithms for hardware-in-the-loop simulations
NASA Astrophysics Data System (ADS)
Richard, Henri; Lowman, Alan; Ballard, Gary
2005-05-01
During the flight of guided submunitions, translation of the missile with respect to the designated aimpoint causes a rotation of the Line-of-Sight (LOS) in inertial space. Large transmit arrays or five-axis CARCO tables are used to provide True LOS (TLOS) for in-band simulations. Both of these TLOS approaches have cost or fidelity issues for RF seekers. Typically, RF Hardware-in-the-Loop (HWIL) simulations of these guided submunitions are mounted on a Three Axes Rotational Flight Simulator (TARFS), which is not capable of translation, and utilize a transmit array of 2 to 3 seeker beamwidths. This necessitates using a Synthetic Line-of-Sight (SLOS) algorithm with the TARFS in order to maintain the proper line-of-sight orientation during all phases of flight, which typically include widely varying LOS motion. This paper presents a simple explanation of TLOS and SLOS (TARFS) geometry and the seamless boresight/target SLOS algorithm utilized in AMRDEC's RF4 facility for a test-article flight profile. In conclusion, this paper summarizes the current state of SLOS algorithms utilized at AMRDEC and the challenges and possible solutions envisioned in the near future.
Blocking Moving Window algorithm: Conditioning multiple-point simulations to hydrogeological data
NASA Astrophysics Data System (ADS)
Alcolea, Andres; Renard, Philippe
2010-08-01
Connectivity constraints and measurements of state variables contain valuable information on aquifer architecture. Multiple-point (MP) geostatistics allow one to simulate aquifer architectures presenting a predefined degree of global connectivity. In this context, connectivity data are often disregarded. The conditioning to state variables is usually carried out by minimizing a suitable objective function (i.e., solving an inverse problem). However, the discontinuous nature of lithofacies distributions and of the corresponding objective function discourages the use of traditional sensitivity-based inversion techniques. This work presents the Blocking Moving Window (BMW) algorithm, aimed at overcoming these limitations by conditioning MP simulations to hydrogeological data such as connectivity and heads. The BMW evolves iteratively until convergence: (1) MP simulation of lithofacies from geological/geophysical data and connectivity constraints, where only a random portion of the domain (the blocking moving window, whose size is user-defined) is simulated at every iteration; (2) population of hydraulic properties at the intrafacies scale; (3) simulation of state variables; and (4) acceptance or rejection of the MP simulation depending on the quality of the fit to measured state variables. The outcome is a stack of MP simulations that (1) resemble a prior geological model depicted by a training image, (2) honor lithological data and connectivity constraints, (3) correlate with geophysical data, and (4) fit available measurements of state variables well. We analyze the performance of the algorithm on a 2-D synthetic example. Results show that (1) the size of the blocking moving window controls the behavior of the BMW, (2) conditioning to state variable data dramatically enhances the initial simulation (which accounts for geological/geophysical data only), and (3) connectivity constraints speed up the convergence but do not enhance the stack if the number of iterations
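A minimal, self-contained sketch of the BMW accept/reject loop is given below. The MP engine is replaced by a random categorical draw and the groundwater flow solver by a linear smoothing operator, so only the algorithmic skeleton (steps 1-4 above) follows the paper; every numeric choice is an assumption.

```python
# Toy stand-in for the Blocking Moving Window iteration: random re-simulation of a
# window, property population, forward simulation, and accept/reject on head fit.
import numpy as np

rng = np.random.default_rng(1)
N, W = 32, 8                                  # grid size and (user-defined) window size
K = {0: 1e-5, 1: 1e-3}                        # hypothetical facies -> hydraulic conductivity

def forward(facies):                          # toy "head" field: smoothed log-conductivity
    logk = np.log10(np.vectorize(K.get)(facies))
    return (logk + np.roll(logk, 1, 0) + np.roll(logk, 1, 1)) / 3.0

truth = (rng.random((N, N)) < 0.3).astype(int)
obs = forward(truth)                          # synthetic observations of state variables

model = (rng.random((N, N)) < 0.3).astype(int)
best_fit = np.sqrt(np.mean((forward(model) - obs) ** 2))
for it in range(2000):
    proposal = model.copy()
    i, j = rng.integers(0, N - W, 2)          # (1) random blocking moving window
    proposal[i:i+W, j:j+W] = (rng.random((W, W)) < 0.3).astype(int)  # stand-in MP simulation
    fit = np.sqrt(np.mean((forward(proposal) - obs) ** 2))           # (2)-(3) properties + states
    if fit < best_fit:                        # (4) accept only if the head fit improves
        model, best_fit = proposal, fit
print("final RMSE:", best_fit)
```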
First application of quantum annealing to IMRT beamlet intensity optimization
NASA Astrophysics Data System (ADS)
Nazareth, Daryl P.; Spaans, Jason D.
2015-05-01
Optimization methods are critical to radiation therapy. A new technology, quantum annealing (QA), employs novel hardware and software techniques to address various discrete optimization problems in many fields. We report on the first application of quantum annealing to the process of beamlet intensity optimization for IMRT. We apply recently-developed hardware which natively exploits quantum mechanical effects for improved optimization. The new algorithm, called QA, is most similar to simulated annealing, but relies on natural processes to directly minimize a system’s free energy. A simple quantum system is slowly evolved into a classical system representing the objective function. If the evolution is sufficiently slow, there are probabilistic guarantees that a global minimum will be located. To apply QA to IMRT-type optimization, two prostate cases were considered. A reduced number of beamlets were employed, due to the current QA hardware limitations. The beamlet dose matrices were computed using CERR and an objective function was defined based on typical clinical constraints, including dose-volume objectives, which result in a complex non-convex search space. The objective function was discretized and the QA method was compared to two standard optimization methods, simulated annealing and Tabu search, run on a conventional computing cluster. Based on several runs, the average final objective function value achieved by the QA was 16.9 for the first patient, compared with 10.0 for Tabu and 6.7 for the simulated annealing (SA) method. For the second patient, the values were 70.7 for the QA, 120.0 for Tabu and 22.9 for the SA. The QA algorithm required 27-38% of the time required by the other two methods. In this first application of hardware-enabled QA to IMRT optimization, its performance is comparable to Tabu search, but less effective than the SA in terms of final objective function values. However, its speed was 3-4 times faster than the other two methods
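For readers unfamiliar with the classical baseline, the following is a hedged sketch of simulated annealing over a discretized beamlet-intensity vector with a generic quadratic dose objective. The dose matrix, prescription, discretization, and cooling schedule are illustrative assumptions, not the clinical setup used in the study.

```python
# Simulated annealing over discretized beamlet intensities; all numbers are toy values.
import numpy as np

rng = np.random.default_rng(7)
D = rng.random((200, 30))                        # toy dose matrix: voxels x beamlets
d_rx = np.full(200, 10.0)                        # toy prescription dose per voxel
levels = np.linspace(0.0, 2.0, 21)               # discretized beamlet intensity levels

def cost(w):                                     # quadratic deviation from prescription
    return np.sum((D @ w - d_rx) ** 2)

w = rng.choice(levels, size=30)
c, T = cost(w), 1000.0
for step in range(20000):
    w_new = w.copy()
    w_new[rng.integers(30)] = rng.choice(levels) # perturb a single beamlet intensity
    c_new = cost(w_new)
    if c_new < c or rng.random() < np.exp((c - c_new) / T):
        w, c = w_new, c_new                      # Metropolis acceptance
    T *= 0.9995                                  # geometric cooling schedule
print("final objective value:", round(c, 2))
```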
NASA Technical Reports Server (NTRS)
Jain, A.; Man, G. K.
1993-01-01
This paper describes the Dynamics Algorithms for Real-Time Simulation (DARTS) real-time hardware-in-the-loop dynamics simulator for the National Aeronautics and Space Administration's Cassini spacecraft. The spacecraft model consists of a central flexible body with a number of articulated rigid-body appendages. The demanding performance requirements of the spacecraft control system require the use of a high-fidelity simulator for control system design and testing. The DARTS algorithm provides a new algorithmic and hardware approach to the solution of this hardware-in-the-loop simulation problem. It is based upon efficient spatial-algebra dynamics for flexible multibody systems. A parallel and vectorized version of this algorithm is implemented on a low-cost multiprocessor computer to meet the simulation timing requirements.
A fast algorithm for voxel-based deterministic simulation of X-ray imaging
NASA Astrophysics Data System (ADS)
Li, Ning; Zhao, Hua-Xia; Cho, Sang-Hyun; Choi, Jung-Gil; Kim, Myoung-Hee
2008-04-01
Deterministic methods based on ray tracing are known as a powerful alternative to the Monte Carlo approach for virtual X-ray imaging. Algorithm speed is a critical issue when simulating hundreds of images, notably for tomographic acquisition or, even more demanding, X-ray radiographic video recordings. We present an algorithm for voxel-based deterministic simulation of X-ray imaging using voxel-driven forward and backward perspective projection operations and minimum bounding rectangles (MBRs). The algorithm is fast, easy to implement, and creates high-quality simulated radiographs. As a result, simulated radiographs can typically be obtained in a fraction of a second on a simple personal computer.
Program summary
Program title: X-ray
Catalogue identifier: AEAD_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAD_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 416 257
No. of bytes in distributed program, including test data, etc.: 6 018 263
Distribution format: tar.gz
Programming language: C (Visual C++)
Computer: Any PC. Tested on a DELL Precision 380 based on a Pentium D 3.20 GHz processor with 3.50 GB of RAM
Operating system: Windows XP
Classification: 14, 21.1
Nature of problem: Radiographic simulation of voxelized objects based on a ray tracing technique.
Solution method: The core of the simulation is a fast routine for the calculation of ray-box intersections and minimum bounding rectangles, together with voxel-driven forward and backward perspective projection operations.
Restrictions: Memory constraints. There are three programs in all. A. Program for test 3.1(1): Object and detector have axis-aligned orientation; B. Program for test 3.1(2): Object in arbitrary orientation; C. Program for test 3.2: Simulation of X-ray video
NASA Astrophysics Data System (ADS)
Haribara, Yoshitaka; Utsunomiya, Shoko; Yamamoto, Yoshihisa
An optical parametric oscillator network driven by a quantum measurement-feedback circuit, composed of optical homodyne detectors, analog-to-digital conversion devices, and field-programmable gate arrays (FPGAs), is proposed and analysed as a scalable coherent Ising machine. The new scheme has the advantage that a large number of optical coupling paths, which in the worst case is proportional to the square of the problem size, can be replaced by a single quantum measurement-feedback circuit. An additional advantage of the new scheme is that three-body or higher-order Ising interactions can be implemented in the FPGA digital circuit. Noise associated with the measurement-feedback process is governed by the standard quantum limit. Numerical simulations based on c-number coupled Langevin equations demonstrate satisfactory performance of the proposed Ising machine on NP-hard MAX-CUT problems.
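The c-number Langevin simulation mentioned in the abstract can be illustrated as follows for MAX-CUT. The pump ramp, coupling strength, and noise level below are illustrative assumptions rather than the paper's calibrated parameters.

```python
# Hedged sketch of c-number Langevin dynamics for a measurement-feedback Ising machine,
# applied to MAX-CUT on a random graph with toy parameters.
import numpy as np

rng = np.random.default_rng(2)
n = 50
A = (rng.random((n, n)) < 0.1).astype(float)
A = np.triu(A, 1); A = A + A.T                       # random unweighted graph
J = -A                                               # MAX-CUT as an Ising problem

x = 1e-3 * rng.normal(size=n)                        # in-phase amplitudes near threshold
dt, steps, eps = 0.01, 5000, 0.05
for t in range(steps):
    p = 2.0 * t / steps                              # linearly ramped pump
    drift = (-1 + p - x**2) * x + eps * (J @ x)      # gain, saturation, Ising feedback
    x += drift * dt + 0.01 * np.sqrt(dt) * rng.normal(size=n)  # measurement noise term

s = np.sign(x)
cut = 0.25 * np.sum(A * (1 - np.outer(s, s)))        # edges crossing the partition
print("cut value:", cut)
```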
Tejero, R.; Bassolino-Klimas, D.; Bruccoleri, R. E.; Montelione, G. T.
1996-01-01
The new functionality of the program CONGEN (Bruccoleri RE, Karplus M, 1987, Biopolymers 26:137-168; Bassolino-Klimas D et al., 1996, Protein Sci 5:593-603) has been applied for energy refinement of two previously determined solution NMR structures, murine epidermal growth factor (mEGF) and human type-alpha transforming growth factor (hTGF alpha). A summary of considerations used in converting experimental NMR data into distance constraints for CONGEN is presented. A general protocol for simulated annealing with restrained molecular dynamics is applied to generate NMR solution structures using CONGEN together with real experimental NMR data. A total of 730 NMR-derived constraints for mEGF and 424 NMR-derived constraints for hTGF alpha were used in these energy-refinement calculations. Different weighting schemes and starting conformations were studied to check and/or improve the sampling of the low-energy conformational space that is consistent with all constraints. The results demonstrate that loosened (i.e., "relaxed") sets of the EGF and hTGF alpha internuclear distance constraints allow molecules to overcome local minima in the search for a global minimum with respect to both distance restraints and conformational energy. The resulting energy-refined structures of mEGF and hTGF alpha are compared with structures determined previously and with structures of homologous proteins determined by NMR and X-ray crystallography. PMID:8845748
An assessment of coupling algorithms for nuclear reactor core physics simulations
NASA Astrophysics Data System (ADS)
Hamilton, Steven; Berrill, Mark; Clarno, Kevin; Pawlowski, Roger; Toth, Alex; Kelley, C. T.; Evans, Thomas; Philip, Bobby
2016-04-01
This paper evaluates the performance of multiphysics coupling algorithms applied to a light water nuclear reactor core simulation. The simulation couples the k-eigenvalue form of the neutron transport equation with heat conduction and subchannel flow equations. We compare Picard iteration (block Gauss-Seidel) to Anderson acceleration and multiple variants of preconditioned Jacobian-free Newton-Krylov (JFNK). The performance of the methods is evaluated over a range of energy group structures and core power levels. A novel physics-based approximation to a Jacobian-vector product has been developed to mitigate the impact of expensive on-line cross section processing steps. Numerical simulations demonstrating the efficiency of JFNK and Anderson acceleration relative to standard Picard iteration are performed on a 3D model of a nuclear fuel assembly. Both criticality (k-eigenvalue) and critical boron search problems are considered.
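Picard iteration and Anderson acceleration can be contrasted on any fixed-point problem x = g(x). The sketch below uses a toy nonlinear map in place of the coupled transport/thermal-hydraulics update; the depth-m least-squares formulation of Anderson acceleration is standard, but the map and all parameters here are assumptions.

```python
# Picard iteration vs. depth-m Anderson acceleration on a toy fixed-point problem.
import numpy as np

def g(x):                                             # toy contractive nonlinear map
    return np.cos(x) + 0.1 * np.roll(x, 1)

def picard(x, iters=200, tol=1e-10):
    for k in range(iters):
        x_new = g(x)
        if np.linalg.norm(x_new - x) < tol:
            return x_new, k
        x = x_new
    return x, iters

def anderson(x, m=5, iters=200, tol=1e-10):
    G, F = [], []                                     # histories of g(x) values and residuals
    for k in range(iters):
        f = g(x) - x
        if np.linalg.norm(f) < tol:
            return x, k
        G.append(x + f); F.append(f)
        G, F = G[-m:], F[-m:]                         # keep a window of depth m
        if len(F) > 1:
            dF = np.array(F[1:]) - np.array(F[:-1])   # residual differences
            dG = np.array(G[1:]) - np.array(G[:-1])
            gamma, *_ = np.linalg.lstsq(dF.T, f, rcond=None)
            x = G[-1] - dG.T @ gamma                  # least-squares mixing of history
        else:
            x = G[-1]                                 # first step is plain Picard
    return x, iters

x0 = np.zeros(20)
print("Picard iterations:  ", picard(x0)[1])
print("Anderson iterations:", anderson(x0)[1])
```

On contractive problems like this toy map, Anderson acceleration typically cuts the iteration count by an order of magnitude, which mirrors the speedups the paper reports over plain Picard iteration.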
An adaptive algorithm for simulation of stochastic reaction-diffusion processes
Ferm, Lars Hellander, Andreas Loetstedt, Per
2010-01-20
We propose an adaptive hybrid method suitable for stochastic simulation of diffusion-dominated reaction-diffusion processes. For such systems, simulation of the diffusion accounts for the predominant part of the computing time. In order to reduce the computational work, the diffusion in parts of the domain is treated macroscopically, in other parts with the tau-leap method, and in the remaining parts with Gillespie's stochastic simulation algorithm (SSA) as implemented in the next subvolume method (NSM). The chemical reactions are handled by the SSA everywhere in the computational domain. A trajectory of the process is advanced in time by an operator splitting technique, and the timesteps are chosen adaptively. The spatial adaptation is based on estimates of the errors in the tau-leap method and the macroscopic diffusion. The accuracy and efficiency of the method are demonstrated in examples from molecular biology where the domain is discretized by unstructured meshes.
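Gillespie's SSA, the exact microscopic solver that the hybrid method falls back on, can be stated in a few lines. The reaction network below is an illustrative A↔B isomerization, not one of the paper's examples.

```python
# Minimal direct-method SSA (Gillespie) for a reversible isomerization A <-> B.
import numpy as np

rng = np.random.default_rng(3)
x = np.array([100, 0])                    # copy numbers of A and B
k = np.array([1.0, 0.5])                  # rate constants for A->B and B->A
stoich = np.array([[-1, 1], [1, -1]])     # state change per reaction
t, t_end, times, states = 0.0, 10.0, [0.0], [x.copy()]

while t < t_end:
    a = k * x                             # propensities: k1*A, k2*B
    a0 = a.sum()
    if a0 == 0:
        break
    t += rng.exponential(1 / a0)          # exponential waiting time to next event
    r = rng.choice(2, p=a / a0)           # which reaction fires
    x = x + stoich[r]
    times.append(t); states.append(x.copy())

print("final state (A, B):", x)
```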
Parallel Simulation Algorithms for the Three Dimensional Strong-Strong Beam-Beam Interaction
Kabel, A. C. (SLAC)
2008-03-17
The strong-strong beam-beam effect is one of the most important effects limiting the luminosity of ring colliders. Little is known about it analytically, so most studies utilize numerical simulations. The two-dimensional realm is readily accessible to workstation-class computers (cf., e.g., [1, 2]), while three dimensions, which add effects such as phase averaging and the hourglass effect, require vastly more CPU time. Thus, parallelization of three-dimensional simulation techniques is imperative; in the following we discuss parallelization strategies and describe the algorithms used in our simulation code, which reaches almost linear scaling of performance with the number of CPUs for typical setups.
MODA: a new algorithm to compute optical depths in multidimensional hydrodynamic simulations
NASA Astrophysics Data System (ADS)
Perego, Albino; Gafton, Emanuel; Cabezón, Rubén; Rosswog, Stephan; Liebendörfer, Matthias
2014-08-01
Aims: We introduce the multidimensional optical depth algorithm (MODA) for the calculation of optical depths in approximate multidimensional radiative transport schemes, equally applicable to neutrinos and photons. Motivated by (but not limited to) neutrino transport in three-dimensional simulations of core-collapse supernovae and neutron star mergers, our method makes no assumptions about the geometry of the matter distribution, apart from expecting optically transparent boundaries. Methods: Based on local information about opacities, the algorithm figures out an escape route that tends to minimize the optical depth without assuming any predefined paths for radiation. Its adaptivity makes it suitable for a variety of astrophysical settings with complicated geometry (e.g., core-collapse supernovae, compact binary mergers, tidal disruptions, star formation, etc.). We implement the MODA algorithm into both an Eulerian hydrodynamics code with a fixed, uniform grid and an SPH code where we use a tree structure that is otherwise used for searching neighbors and calculating gravity. Results: In a series of numerical experiments, we compare the MODA results with analytically known solutions. We also use snapshots from actual 3D simulations and compare the results of MODA with those obtained with other methods, such as the global and local ray-by-ray method. It turns out that MODA achieves excellent accuracy at a moderate computational cost. In the appendix we also discuss implementation details and parallelization strategies.
Object-Oriented/Data-Oriented Design of a Direct Simulation Monte Carlo Algorithm
NASA Technical Reports Server (NTRS)
Liechty, Derek S.
2014-01-01
Over the past decade, there has been much progress towards improved phenomenological modeling and algorithmic updates for the direct simulation Monte Carlo (DSMC) method, which provides a probabilistic physical simulation of gas flows. These improvements have largely been based on the work of the originator of the DSMC method, Graeme Bird. Of primary importance are improved chemistry, internal energy, and physics modeling and a reduction in time to solution. These allow for an expanded range of possible solutions in altitude and velocity space. NASA's current production code, the DSMC Analysis Code (DAC), is well established, is based on Bird's 1994 algorithms written in Fortran 77, and has proven difficult to upgrade. A new DSMC code is being developed in the C++ programming language using object-oriented and data-oriented design paradigms to facilitate the inclusion of the recent improvements and future development activities. The development efforts on the new code, the Multiphysics Algorithm with Particles (MAP), are described, and performance comparisons are made with DAC.
Ferrauto, Tomassino; Parisi, Domenico; Di Stefano, Gabriele; Baldassarre, Gianluca
2013-01-01
Organisms that live in groups, from microbial symbionts to social insects and schooling fish, exhibit a number of highly efficient cooperative behaviors, often based on role taking and specialization. These behaviors are relevant not only for the biologist but also for the engineer interested in decentralized collective robotics. We address these phenomena by carrying out experiments with groups of two simulated robots controlled by neural networks whose connection weights are evolved by using genetic algorithms. These algorithms and controllers are well suited to autonomously find solutions for decentralized collective robotic tasks based on principles of self-organization. The article first presents a taxonomy of role-taking and specialization mechanisms related to evolved neural network controllers. Then it introduces two cooperation tasks, which can be accomplished by either role taking or specialization, and uses these tasks to compare four different genetic algorithms to evaluate their capacity to evolve a suitable behavioral strategy, which depends on the task demands. Interestingly, only one of the four algorithms, which appears to have more biological plausibility, is capable of evolving role taking or specialization when they are needed. The results are relevant for both collective robotics and biology, as they can provide useful hints on the different processes that can lead to the emergence of specialization in robots and organisms. PMID:23514239
Generalized SIMD algorithm for efficient EM-PIC simulations on modern CPUs
NASA Astrophysics Data System (ADS)
Fonseca, Ricardo; Decyk, Viktor; Mori, Warren; Silva, Luis
2012-10-01
There are several relevant plasma physics scenarios where highly nonlinear and kinetic processes dominate. Further understanding of these scenarios is generally explored through relativistic particle-in-cell codes such as OSIRIS [1], but this algorithm is computationally intensive, and efficient use of high-end parallel HPC systems, exploiting all levels of parallelism available, is required. In particular, most modern CPUs include a single-instruction-multiple-data (SIMD) vector unit that can significantly speed up the calculations. In this work we present a generalized PIC-SIMD algorithm that is shown to work efficiently with different CPU types (AMD, Intel, IBM) and vector unit widths (2-8 way, single/double precision). Details on the algorithm will be given, including the vectorization strategy and memory access. We will also present performance results for the various hardware variants analyzed, focusing on floating-point efficiency. Finally, we will discuss the applicability of this type of algorithm for EM-PIC simulations on GPGPU architectures [2]. [1] R. A. Fonseca et al., LNCS 2331, 342 (2002). [2] V. K. Decyk, T. V. Singh, Comput. Phys. Commun. 182, 641-648 (2011).
Design and simulation of imaging algorithm for Fresnel telescopy imaging system
NASA Astrophysics Data System (ADS)
Lv, Xiao-yu; Liu, Li-ren; Yan, Ai-min; Sun, Jian-feng; Dai, En-wen; Li, Bing
2011-06-01
Fresnel telescopy (short for Fresnel telescopy full-aperture synthesized imaging ladar) is a new high-resolution active laser imaging technique. This technique is a variant of Fourier telescopy and optical scanning holography that uses Fresnel zone plates to scan the target. Compared with synthetic aperture imaging ladar (SAIL), Fresnel telescopy avoids the problems of time and space synchronization, which decreases the technical difficulty. In the one-dimensional (1D) scanning operational mode for moving targets, after time-to-space transformation, the spatial distribution of the sampling data is non-uniform because of the relative motion between the target and the scanning beam. However, because the subsequent matched-filtering imaging algorithm uses the fast Fourier transform (FFT), the data distribution should be regular and uniform. We use resampling interpolation to transform the data into a two-dimensional (2D) uniform distribution; the accuracy of the resampling interpolation mainly determines the reconstruction quality. Imaging algorithms with different resampling interpolation schemes have been analyzed, and computer simulations are given. We obtain good reconstruction results for the target, which proves that the designed imaging algorithm for the Fresnel telescopy imaging system is effective. This work is found to have substantial practical value and offers significant benefit for high-resolution Fresnel telescopy laser imaging ladar systems.
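The resampling step can be illustrated with standard tools. The sketch below interpolates scattered (non-uniform) samples onto a uniform grid before an FFT; scipy.interpolate.griddata stands in for whatever interpolator the authors used, and the signal is a toy.

```python
# Resampling scattered samples onto a uniform 2D grid so FFT-based matched
# filtering can follow; positions and signal values are invented.
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(4)
pts = rng.uniform(0, 1, size=(2000, 2))                 # non-uniform sample positions
vals = np.sin(6 * pts[:, 0]) * np.cos(6 * pts[:, 1])    # toy sampled signal

gx, gy = np.mgrid[0:1:128j, 0:1:128j]                   # uniform 128x128 target grid
uniform = griddata(pts, vals, (gx, gy), method='cubic') # 2D resampling interpolation

spectrum = np.fft.fft2(np.nan_to_num(uniform))          # FFT now applies directly
print("resampled grid shape:", uniform.shape)
```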
NASA Technical Reports Server (NTRS)
Merrill, W. C.; Delaat, J. C.
1986-01-01
An advanced sensor failure detection, isolation, and accommodation (ADIA) algorithm has been developed for use with an aircraft turbofan engine control system. In a previous paper the authors described the ADIA algorithm and its real-time implementation. Subsequent improvements made to the algorithm and implementation are discussed, and the results of an evaluation are presented. The evaluation used a real-time, hybrid computer simulation of an F100 turbofan engine.
A novel simulation algorithm on ultrasonic image based on triangular planar transducers
NASA Astrophysics Data System (ADS)
Li, Yaqin; Wang, Xuan; Li, Shigao; Zhang, Cong; Sun, Kaiqiong
2015-12-01
Calculation of the ultrasonic field of medical transducers is often done using the Tupholme-Stepanishen method, which is based on the spatial impulse response. The spatial impulse response has been determined analytically for only a few geometries, and using apodization over the transducer surface generally makes it impossible to find the response analytically. A popular approach to finding the general field is thus to split the aperture into small rectangles and then sum the weighted responses from each of these. The problem with rectangles is their poor fit to apertures that do not have straight edges, such as circular and oval shapes. To solve this problem, a novel algorithm based on triangular elements is proposed in this paper; simulation of the ultrasonic field based on this algorithm is noticeably improved.
NASA Astrophysics Data System (ADS)
Zhao, Tao; Hwang, Feng-Nan; Cai, Xiao-Chuan
2016-07-01
We consider a quintic polynomial eigenvalue problem arising from the finite volume discretization of a quantum dot simulation problem. The problem is solved by the Jacobi-Davidson (JD) algorithm. Our focus is on how to achieve the quadratic convergence of JD in a way that is not only efficient but also scalable when the number of processor cores is large. For this purpose, we develop a projected two-level Schwarz preconditioned JD algorithm that exploits multilevel domain decomposition techniques. The pyramidal quantum dot calculation is carefully studied to illustrate the efficiency of the proposed method. Numerical experiments confirm that the proposed method has a good scalability for problems with hundreds of millions of unknowns on a parallel computer with more than 10,000 processor cores.
New Algorithms for Computing the Time-to-Collision in Freeway Traffic Simulation Models
Hou, Jia; List, George F.; Guo, Xiucheng
2014-01-01
Ways to estimate the time-to-collision are explored. In the context of traffic simulation models, classical lane-based notions of vehicle location are relaxed and new, fast, and efficient algorithms are examined. With trajectory conflicts being the main focus, computational procedures are explored which use a two-dimensional coordinate system to track the vehicle trajectories and assess conflicts. Vector-based kinematic variables are used to support the calculations. Algorithms based on boxes, circles, and ellipses are considered. Their performance is evaluated in the context of computational complexity and solution time. Results from these analyses suggest promise for effective and efficient analyses. A combined computation process is found to be very effective. PMID:25628650
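The circle-based conflict test is the simplest of the three geometries: with relative position p and relative velocity v, first contact occurs at the smaller non-negative root of |p + v t| = r1 + r2. A minimal sketch follows (the function name and interface are invented):

```python
# Time-to-collision for two circle-bounded vehicles from the quadratic in t.
import numpy as np

def ttc_circles(p1, v1, r1, p2, v2, r2):
    """Return time-to-collision, or None if the circles never touch."""
    p = np.asarray(p2, float) - np.asarray(p1, float)   # relative position
    v = np.asarray(v2, float) - np.asarray(v1, float)   # relative velocity
    R = r1 + r2
    a, b, c = v @ v, 2 * (p @ v), p @ p - R * R
    if a == 0:                                 # no relative motion
        return 0.0 if c <= 0 else None
    disc = b * b - 4 * a * c
    if disc < 0:
        return None                            # paths never bring circles into contact
    t = (-b - np.sqrt(disc)) / (2 * a)         # earlier root = first contact
    return t if t >= 0 else (0.0 if c <= 0 else None)

print(ttc_circles([0, 0], [10, 0], 1.0, [50, 0], [-10, 0], 1.0))  # head-on: 2.4 s
```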
Performance of quantum annealing in solving optimization problems: A review
NASA Astrophysics Data System (ADS)
Suzuki, S.
2015-02-01
Quantum annealing is an optimization method for generic optimization problems. It uses quantum mechanics and is ideally implemented on a quantum computer. At an early stage, several numerical experiments using conventional computers provided results showing that quantum annealing produces an answer faster than simulated annealing, its classical counterpart. Later, theoretical and numerical studies showed that there are drawbacks in quantum annealing. The power of quantum annealing is still an open problem. What makes quantum annealing a hot topic now is that a quantum computer based on quantum annealing has been manufactured and commercialized by a Canadian company named D-Wave Systems. In the present article, we review the study of quantum annealing, focusing mainly on its power.
NASA Astrophysics Data System (ADS)
Gonzalez-Mancera, Andres; Gonzalez Cardenas, Diego
2014-11-01
Flow in the microcirculation is highly dependent on the mechanical properties of the cells suspended in the plasma. Red blood cells have to deform in order to pass through the smaller sections of the microcirculation. Certain diseases change the mechanical properties of red blood cells, affecting their ability to deform and the rheological behaviour of blood. We developed a hybrid algorithm based on the Lattice-Boltzmann and Finite Element methods to simulate blood flow in small capillaries. Plasma was modeled as a Newtonian fluid and the red blood cells' membrane as a hyperelastic solid. The fluid-structure interaction was handled using the immersed boundary method. We simulated the flow of plasma with suspended red blood cells through cylindrical capillaries and measured the pressure drop as a function of the membrane's rigidity. We also simulated the flow through capillaries with a restriction and identified critical properties for which the suspended particles are unable to flow. The algorithm output was verified by reproducing certain common features of flow in the microcirculation, such as the Fahraeus-Lindqvist effect.
A parallel algorithm for transient solid dynamics simulations with contact detection
Attaway, S.; Hendrickson, B.; Plimpton, S.; Gardner, D.; Vaughan, C.; Heinstein, M.; Peery, J.
1996-06-01
Solid dynamics simulations with Lagrangian finite elements are used to model a wide variety of problems, such as the calculation of impact damage to shipping containers for nuclear waste and the analysis of vehicular crashes. Using parallel computers for these simulations has been hindered by the difficulty of searching efficiently for material surface contacts in parallel. A new parallel algorithm for calculation of arbitrary material contacts in finite element simulations has been developed and implemented in the PRONTO3D transient solid dynamics code. This paper will explore some of the issues involved in developing efficient, portable, parallel finite element models for nonlinear transient solid dynamics simulations. The contact-detection problem poses interesting challenges for efficient implementation of a solid dynamics simulation on a parallel computer. The finite element mesh is typically partitioned so that each processor owns a localized region of the finite element mesh. This mesh partitioning is optimal for the finite element portion of the calculation since each processor must communicate only with the few connected neighboring processors that share boundaries with the decomposed mesh. However, contacts can occur between surfaces that may be owned by any two arbitrary processors. Hence, a global search across all processors is required at every time step to search for these contacts. Load-imbalance can become a problem since the finite element decomposition divides the volumetric mesh evenly across processors but typically leaves the surface elements unevenly distributed. In practice, these complications have been limiting factors in the performance and scalability of transient solid dynamics on massively parallel computers. In this paper the authors present a new parallel algorithm for contact detection that overcomes many of these limitations.
Hendrickson, B.; Plimpton, S.; Attaway, S.; Swegle, J.
1996-09-01
Transient dynamics simulations are commonly used to model phenomena such as car crashes, underwater explosions, and the response of shipping containers to high-speed impacts. Physical objects in such a simulation are typically represented by Lagrangian meshes because the meshes can move and deform with the objects as they undergo stress. Fluids (gasoline, water) or fluid-like materials (earth) in the simulation can be modeled using the techniques of smoothed particle hydrodynamics. Implementing a hybrid mesh/particle model on a massively parallel computer poses several difficult challenges. One challenge is to simultaneously parallelize and load-balance both the mesh and particle portions of the computation. A second challenge is to efficiently detect the contacts that occur within the deforming mesh and between mesh elements and particles as the simulation proceeds. These contacts impart forces to the mesh elements and particles which must be computed at each timestep to accurately capture the physics of interest. In this paper we describe new parallel algorithms for smoothed particle hydrodynamics and contact detection which turn out to have several key features in common. Additionally, we describe how to join the new algorithms with traditional parallel finite element techniques to create an integrated particle/mesh transient dynamics simulation. Our approach to this problem differs from previous work in that we use three different parallel decompositions, a static one for the finite element analysis and dynamic ones for particles and for contact detection. We have implemented our ideas in a parallel version of the transient dynamics code PRONTO-3D and present results for the code running on a large Intel Paragon.
NASA Astrophysics Data System (ADS)
Bulyha, Alena; Heitzinger, Clemens
2011-04-01
In this work, a Monte-Carlo algorithm in the constant-voltage ensemble for the calculation of 3D charge concentrations at charged surfaces functionalized with biomolecules is presented. The motivation for this work is the theoretical understanding of biofunctionalized surfaces in nanowire field-effect biosensors (BioFETs). This work provides the simulation capability for the boundary layer that is crucial in the detection mechanism of these sensors; slight changes in the charge concentration in the boundary layer upon binding of analyte molecules modulate the conductance of nanowire transducers. The simulation of biofunctionalized surfaces poses special requirements on the Monte-Carlo simulations, and these are addressed by the algorithm. The constant-voltage ensemble enables us to include the right boundary conditions; the DNA strands can be rotated with respect to the surface; and several molecules can be placed in a single simulation box to achieve good statistics in the case of the low ionic concentrations relevant in experiments. Simulation results are presented for the leading example of surfaces functionalized with PNA and with single- and double-stranded DNA in a sodium-chloride electrolyte. These quantitative results make it possible to quantify the screening of the biomolecule charge due to the counter-ions around the biomolecules and the electrical double layer. The resulting concentration profiles show a three-layer structure and non-trivial interactions between the electric double layer and the counter-ions. The numerical results are also important as a reference for the development of simpler screening models.
Resilient algorithms for reconstructing and simulating gappy flow fields in CFD
NASA Astrophysics Data System (ADS)
Lee, Seungjoon; Kevrekidis, Ioannis G.; Karniadakis, George Em
2015-10-01
It is anticipated that in future generations of massively parallel computer systems a significant portion of processors may suffer from hardware or software faults rendering large-scale computations useless. In this work we address this problem from the algorithmic side, proposing resilient algorithms that can recover from such faults irrespective of their fault origin. In particular, we set the foundations of a new class of algorithms that will combine numerical approximations with machine learning methods. To this end, we consider three types of fault scenarios: (1) a gappy region but with no previous gaps and no contamination of surrounding simulation data, (2) a space-time gappy region but with full spatiotemporal information and no contamination, and (3) previous gaps with contamination of surrounding data. To recover from such faults we employ different reconstruction and simulation methods, namely projective integration, co-Kriging interpolation, and the resimulation method. In order to compare the effectiveness of these methods for the different processor faults and to quantify the error propagation in each case, we perform simulations of two benchmark flows, flow in a cavity and flow past a circular cylinder. In general, projective integration seems to be the most effective method when the time gaps are small, and the resimulation method is the best when the time gaps are large, while the co-Kriging method is insensitive to the size of the time gaps. Furthermore, the projective integration method and the co-Kriging method are found to be good estimation methods for the initial and boundary conditions of the resimulation method in scenario (3).
Fokker-Planck-DSMC algorithm for simulations of rarefied gas flows
NASA Astrophysics Data System (ADS)
Gorji, M. Hossein; Jenny, Patrick
2015-04-01
A Fokker-Planck based particle Monte Carlo algorithm was devised recently for simulations of rarefied gas flows by the authors [1-3]. The main motivation behind the Fokker-Planck (FP) model is computational efficiency, which could be gained due to the fact that the resulting stochastic processes are continuous in velocity space. This property of the model leads to simulations where the computational cost becomes independent of the Knudsen number (Kn) [3]. However, the Fokker-Planck model, which can be seen as a diffusion approximation of the Boltzmann equation, becomes less accurate as Kn increases. In this study we propose a hybrid Fokker-Planck-Direct Simulation Monte Carlo (FP-DSMC) solution method, which is applicable over the whole range of Kn. The objective of this algorithm is to retain the efficiency of the FP scheme at low Kn (Kn ≪ 1) and to employ conventional DSMC at high Kn (Kn ≫ 1). Since the computational particles employed by the FP model represent the same data as in DSMC, the coupling between the two methods is straightforward. The new ingredient is a switching criterion which would ideally result in a hybrid scheme with the efficiency of the FP method and the accuracy of DSMC over the whole Kn-range. Here, we adopt the number of collisions in a given computational cell and for a given time step size as the decision criterion for switching between the FP model and DSMC. For assessment of the hybrid algorithm, different test cases including flow impingement and flow expansion through a slit were studied. Both the accuracy and the efficiency of the model are shown to be excellent for the presented test cases.
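The switching criterion can be illustrated with a per-cell estimate of the expected collision count. In the sketch below the hard-sphere collision-rate formula is standard, but the threshold, gas parameters, and cell data are illustrative assumptions, not the paper's calibrated criterion.

```python
# Per-cell FP/DSMC switching based on the expected number of collisions per step.
import numpy as np

def expected_collisions(n_density, T, dt, d_ref=4e-10, m=6.6e-26):
    """Hard-sphere estimate of collisions per molecule during one time step."""
    kB = 1.380649e-23
    c_mean = np.sqrt(8 * kB * T / (np.pi * m))               # mean thermal speed
    nu = np.sqrt(2) * np.pi * d_ref**2 * n_density * c_mean  # collision frequency
    return nu * dt

cells = [{"n": 1e25, "T": 300.0}, {"n": 1e19, "T": 300.0}]   # dense vs. rarefied cell
for c in cells:
    # many collisions per step -> low local Kn -> cheap FP update; otherwise DSMC
    solver = "FP" if expected_collisions(c["n"], c["T"], 1e-6) > 10 else "DSMC"
    print(f"n = {c['n']:.0e} m^-3 -> {solver}")
```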
Parallel simulations of Grover's algorithm for closest match search in neutron monitor data
NASA Astrophysics Data System (ADS)
Kussainov, Arman; White, Yelena
We are studying the parallel implementations of Grover's closest match search algorithm for neutron monitor data analysis. This includes data formatting, and matching quantum parameters to a conventional structure of a chosen programming language and selected experimental data type. We have employed several workload distribution models based on acquired data and search parameters. As a result of these simulations, we have an understanding of potential problems that may arise during configuration of real quantum computational devices and the way they could run tasks in parallel. The work was supported by the Science Committee of the Ministry of Science and Education of the Republic of Kazakhstan Grant #2532/GF3.
Ultrafast vectorized multispin coding algorithm for the Monte Carlo simulation of the 3D Ising model
NASA Astrophysics Data System (ADS)
Wansleben, Stephan
1987-02-01
A new Monte Carlo algorithm for the 3D Ising model and its implementation on a CDC CYBER 205 is presented. This approach is applicable to lattices with sizes between 3·3·3 and 192·192·192 with periodic boundary conditions, and is adjustable to various kinetic models. It simulates a canonical ensemble at a given temperature, generating a new random number for each spin flip. For the Metropolis transition probability, the speed is 27 ns per update on a two-pipe CDC Cyber 205 with 2 million words of physical memory, i.e., 1.35 times the cycle time per update, or 38 million updates per second.
AVR microcontroller simulator for software implemented hardware fault tolerance algorithms research
NASA Astrophysics Data System (ADS)
Piotrowski, Adam; Tarnowski, Szymon; Napieralski, Andrzej
2008-01-01
Reliability of new, advanced electronic systems becomes a serious problem, especially in places like accelerators and synchrotrons, where sophisticated digital devices operate close to radiation sources. One possible solution for hardening a microprocessor-based system is a strict programming approach known as Software Implemented Hardware Fault Tolerance. Unfortunately, in real environments it is not possible to perform precise and accurate tests of new algorithms due to hardware limitations. This paper highlights the AVR-family microcontroller simulator project, equipped with appropriate monitoring and SEU injection systems.
Simulation of 3D MRI brain images for quantitative evaluation of image segmentation algorithms
NASA Astrophysics Data System (ADS)
Wagenknecht, Gudrun; Kaiser, Hans-Juergen; Obladen, Thorsten; Sabri, Osama; Buell, Udalrich
2000-06-01
To model the true shape of MRI brain images, automatically classified T1-weighted 3D MRI images (gray matter, white matter, cerebrospinal fluid, scalp/bone and background) are utilized for simulation of grayscale data and imaging artifacts. For each class, Gaussian distribution of grayscale values is assumed, and mean and variance are computed from grayscale images. A random generator fills up the class images with Gauss-distributed grayscale values. Since grayscale values of neighboring voxels are not correlated, a Gaussian low-pass filtering is done, preserving class region borders. To simulate anatomical variability, a Gaussian distribution in space with user-defined mean and variance can be added at any user-defined position. Several imaging artifacts can be added: (1) to simulate partial volume effects, every voxel is averaged with neighboring voxels if they have a different class label; (2) a linear or quadratic bias field can be added with user-defined strength and orientation; (3) additional background noise can be added; and (4) artifacts left over after spoiling can be simulated by adding a band with increasing/decreasing grayscale values. With this method, realistic-looking simulated MRI images can be produced to test classification and segmentation algorithms regarding accuracy and robustness even in the presence of artifacts.
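The core of the grayscale simulation (per-class Gaussian intensities, border-preserving smoothing, partial-volume averaging) can be sketched in a few lines. The label image, class statistics, filter sizes, and the 2D setting are illustrative assumptions; the paper works on classified 3D volumes.

```python
# Toy 2-class version of the simulation pipeline described above.
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

rng = np.random.default_rng(5)
labels = (rng.random((64, 64)) < 0.5).astype(int)     # toy 2-class label image
mean, std = np.array([60.0, 120.0]), np.array([5.0, 8.0])

img = rng.normal(mean[labels], std[labels])           # per-class Gaussian grayscale

for c in (0, 1):                                      # low-pass filter within each class,
    m = labels == c                                   # preserving region borders
    num = gaussian_filter(np.where(m, img, 0.0), sigma=1.0)
    den = gaussian_filter(m.astype(float), sigma=1.0)
    img[m] = (num / np.maximum(den, 1e-6))[m]         # normalized (masked) convolution

frac = uniform_filter(labels.astype(float), size=3)   # partial-volume effect: average
border = (frac > 1e-3) & (frac < 1 - 1e-3)            # voxels whose 3x3 window mixes classes
img[border] = uniform_filter(img, size=3)[border]
print("simulated image intensity range:", img.min(), img.max())
```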
Simulation System of Car Crash Test in C-NCAP Analysis Based on an Improved Apriori Algorithm
NASA Astrophysics Data System (ADS)
Xiang, LI
To analyze car crash tests in C-NCAP, this paper presents an improved algorithm based on the Apriori algorithm. The new algorithm is implemented with a vertical data layout, breadth-first searching, and intersecting. It takes advantage of the efficiency of the vertical data layout and intersecting, and prunes candidate frequent itemsets as Apriori does. Finally, the new algorithm is applied in a simulation of a car crash test analysis system. The results show that the discovered relations affect the C-NCAP test results, and they can provide a reference for automotive design.
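The vertical-layout-plus-intersection idea is easy to show concretely: each item maps to the set of transaction ids (the tidset) containing it, the support of a candidate is the size of the intersection of its parents' tidsets, and Apriori's subset rule prunes candidates. The toy transactions below are invented stand-ins for crash-test attribute records, not C-NCAP data.

```python
# Frequent-itemset mining with a vertical layout and tidset intersection,
# with Apriori-style candidate pruning; data are invented.
from itertools import combinations

transactions = [{"frontal", "airbag", "5star"}, {"frontal", "airbag"},
                {"side", "airbag", "5star"}, {"frontal", "5star"},
                {"frontal", "airbag", "5star"}]
minsup = 3

tidsets = {}                                           # vertical layout: item -> tidset
for tid, t in enumerate(transactions):
    for item in t:
        tidsets.setdefault(frozenset([item]), set()).add(tid)

level = {k: v for k, v in tidsets.items() if len(v) >= minsup}
all_frequent = dict(level)
while level:                                           # breadth-first, level by level
    nxt = {}
    for a, b in combinations(level, 2):
        cand = a | b
        if len(cand) != len(a) + 1:
            continue
        if any(cand - {i} not in all_frequent for i in cand):
            continue                                   # Apriori subset pruning
        tids = level[a] & level[b]                     # support via tidset intersection
        if len(tids) >= minsup:
            nxt[cand] = tids
    all_frequent.update(nxt)
    level = nxt

for items, tids in sorted(all_frequent.items(), key=lambda kv: len(kv[0])):
    print(set(items), "support =", len(tids))
```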
Quantum mechanical NMR simulation algorithm for protein-size spin systems
NASA Astrophysics Data System (ADS)
Edwards, Luke J.; Savostyanov, D. V.; Welderufael, Z. T.; Lee, Donghan; Kuprov, Ilya
2014-06-01
Nuclear magnetic resonance spectroscopy is one of the few remaining areas of physical chemistry for which polynomially scaling quantum mechanical simulation methods have not so far been available. In this communication we adapt the restricted state space approximation to protein NMR spectroscopy and illustrate its performance by simulating common 2D and 3D liquid state NMR experiments (including accurate description of relaxation processes using Bloch-Redfield-Wangsness theory) on isotopically enriched human ubiquitin - a protein containing over a thousand nuclear spins forming an irregular polycyclic three-dimensional coupling lattice. The algorithm uses careful tailoring of the density operator space to only include nuclear spin states that are populated to a significant extent. The reduced state space is generated by analysing spin connectivity and decoherence properties: rapidly relaxing states as well as correlations between topologically remote spins are dropped from the basis set.
NASA Technical Reports Server (NTRS)
Fleming, E. L.; Jackman, C. H.; Stolarski, R. S.; Considine, D. B.
1998-01-01
We have developed a new empirically-based transport algorithm for use in our GSFC two-dimensional transport and chemistry model. The new algorithm contains planetary wave statistics, and parameterizations to account for the effects due to gravity waves and equatorial Kelvin waves. As such, this scheme utilizes significantly more information compared to our previous algorithm which was based only on zonal mean temperatures and heating rates. The new model transport captures much of the qualitative structure and seasonal variability observed in long lived tracers, such as: isolation of the tropics and the southern hemisphere winter polar vortex; the well mixed surf-zone region of the winter sub-tropics and mid-latitudes; the latitudinal and seasonal variations of total ozone; and the seasonal variations of mesospheric H2O. The model also indicates a double peaked structure in methane associated with the semiannual oscillation in the tropical upper stratosphere. This feature is similar in phase but is significantly weaker in amplitude compared to the observations. The model simulations of carbon-14 and strontium-90 are in good agreement with observations, both in simulating the peak in mixing ratio at 20-25 km, and the decrease with altitude in mixing ratio above 25 km. We also find mostly good agreement between modeled and observed age of air determined from SF6 outside of the northern hemisphere polar vortex. However, observations inside the vortex reveal significantly older air compared to the model. This is consistent with the model deficiencies in simulating CH4 in the northern hemisphere winter high latitudes and illustrates the limitations of the current climatological zonal mean model formulation. The propagation of seasonal signals in water vapor and CO2 in the lower stratosphere showed general agreement in phase, and the model qualitatively captured the observed amplitude decrease in CO2 from the tropics to midlatitudes. However, the simulated seasonal
A Multirate Variable-timestep Algorithm for N-body Solar System Simulations with Collisions
NASA Astrophysics Data System (ADS)
Sharp, P. W.; Newman, W. I.
2016-03-01
We present and analyze the performance of a new algorithm for performing accurate simulations of the solar system when collisions between massive bodies and test particles are permitted. The orbital motion of all bodies at all times is integrated using a high-order variable-timestep explicit Runge-Kutta Nyström (ERKN) method. The variation in the timestep ensures that the orbital motion of test particles on eccentric orbits or close to the Sun is calculated accurately. The test particles are divided into groups and each group is integrated using a different sequence of timesteps, giving a multirate algorithm. The ERKN method uses a high-order continuous approximation to the position and velocity when checking for collisions across a step. We give a summary of the extensive testing of our algorithm. In our largest simulation, that of the Sun, the planets Earth to Neptune, and 100,000 test particles over 100 million years, the relative error in the energy after 100 million years was of the order of 10^-11.
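The multirate idea, each group of test particles advancing with its own timestep sequence to a common synchronization time, can be sketched with a much simpler integrator. Below, a leapfrog step on a point-mass gravity field stands in for the paper's high-order ERKN method; group sizes, timesteps, and initial orbits are invented.

```python
# Two particle groups integrated with different timesteps to a common sync time.
import numpy as np

def accel(r):                                    # point-mass gravity with GM = 1
    return -r / np.linalg.norm(r, axis=-1, keepdims=True) ** 3

rng = np.random.default_rng(6)
groups = {}
for dt in (0.01, 0.001):                         # per-group timesteps (assumed values)
    r = rng.normal(size=(100, 2))
    r /= np.linalg.norm(r, axis=1, keepdims=True)
    v = np.stack([-r[:, 1], r[:, 0]], axis=1)    # near-circular initial orbits
    groups[dt] = (r, v)

t_sync = 1.0                                     # common synchronization time
for dt, (r, v) in groups.items():
    for _ in range(int(round(t_sync / dt))):     # each group uses its own step sequence
        v += 0.5 * dt * accel(r)                 # kick-drift-kick leapfrog step
        r += dt * v
        v += 0.5 * dt * accel(r)

for dt, (r, v) in groups.items():
    print(f"dt = {dt}: mean radius = {np.linalg.norm(r, axis=1).mean():.4f}")
```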
NASA Astrophysics Data System (ADS)
Roh, Min K.; Daigle, Bernie J.; Gillespie, Dan T.; Petzold, Linda R.
2011-12-01
In recent years there has been substantial growth in the development of algorithms for characterizing rare events in stochastic biochemical systems. Two such algorithms, the state-dependent weighted stochastic simulation algorithm (swSSA) and the doubly weighted SSA (dwSSA) are extensions of the weighted SSA (wSSA) by H. Kuwahara and I. Mura [J. Chem. Phys. 129, 165101 (2008)], 10.1063/1.2987701. The swSSA substantially reduces estimator variance by implementing system state-dependent importance sampling (IS) parameters, but lacks an automatic parameter identification strategy. In contrast, the dwSSA provides for the automatic determination of state-independent IS parameters, thus it is inefficient for systems whose states vary widely in time. We present a novel modification of the dwSSA—the state-dependent doubly weighted SSA (sdwSSA)—that combines the strengths of the swSSA and the dwSSA without inheriting their weaknesses. The sdwSSA automatically computes state-dependent IS parameters via the multilevel cross-entropy method. We apply the method to three examples: a reversible isomerization process, a yeast polarization model, and a lac operon model. Our results demonstrate that the sdwSSA offers substantial improvements over previous methods in terms of both accuracy and efficiency.
Joldes, Grand Roman; Wittek, Adam; Miller, Karol
2008-01-01
Real time computation of soft tissue deformation is important for the use of augmented reality devices and for providing haptic feedback during operation or surgeon training. This requires algorithms that are fast, accurate and can handle material nonlinearities and large deformations. A set of such algorithms is presented in this paper, starting with the finite element formulation and the integration scheme used and addressing common problems such as hourglass control and locking. The computation examples presented prove that by using these algorithms, real time computations become possible without sacrificing the accuracy of the results. For a brain model having more than 7000 degrees of freedom, we computed the reaction forces due to indentation at a frequency of around 1000 Hz using a standard dual-core PC. Similarly, we conducted a simulation of brain shift using a model with more than 50 000 degrees of freedom in less than a minute. The speed benefits of our models result from combining the Total Lagrangian formulation with explicit time integration and low-order finite elements. PMID:19152791
Exploring Scheduling Algorithms and Analysis Tools for the LSST Operations Simulations
NASA Astrophysics Data System (ADS)
Petry, Catherine E.; Miller, M.; Cook, K. H.; Ridgway, S.; Chandrasekharan, S.; Jones, R. L.; Krughoff, K. S.; Ivezic, Z.; Krabbendam, V.
2012-01-01
The LSST Operations Simulator models the telescope's design-specific opto-mechanical system performance and site-specific conditions to simulate how observations may be obtained during a 10-year survey. We have found that a remarkable range of science programs are compatible with a single feasible cadence. The current version, OpSim v2.5, incorporates detailed models of the telescope and dome, the camera, weather and a more realistic model for scheduled and unscheduled downtime, as well as a scheduling strategy based on ranking requests for observations from a small number of observing modes attempting to optimize the key science objectives. Each observing mode is driven by a specific algorithm which ranks field-filter combinations of target fields to observe next. The output of the simulator is a detailed record of the activity of the telescope - such as position on the sky, slew activities, weather and various types of downtime - stored in a mySQL database. Sophisticated tools are required to mine this database in order to assess the degree of success of any simulated survey in some detail. An analysis pipeline has been created (SSTAR) which generates a standard report describing the basic characteristics of a simulated survey; a new analysis framework is being designed to allow for the inter-comparison of one or more simulated surveys and to perform more complex analyses in a pipeline fashion. Proprietary software is being used to interactively explore the database and to prototype reports for the new analysis pipeline, and we are working with the ASCOT team (http://ascot.astro.washington.edu) to determine the feasibility of creating our own interactive tools. The next phase of simulator development is being planned to include look-ahead to continue investigating the trade-offs of addressing multiple science goals within a single LSST survey.
Kim, Daniel H.; Wang-Chesebro, Alice; Weinberg, Vivian; Pouliot, Jean; Chen, Lee-May; Speight, Joycelyn; Littell, Ramey; Hsu, I.-Chow
2009-12-01
Purpose: We present clinical outcomes of image-guided brachytherapy using inverse planning simulated annealing (IPSA) planned high-dose rate (HDR) brachytherapy boost for locoregionally advanced cervical cancer. Methods and Materials: From February 2004 through December 2006, 51 patients were treated at the University of California, San Francisco with HDR brachytherapy boost as part of definitive radiation for International Federation of Gynecology and Obstetrics Stage IB1 to Stage IVA cervical cancer. Of the patients, 46 received concurrent chemotherapy, 43 with cisplatin alone and 3 with cisplatin/5-fluorouracil. All patients had IPSA-planned HDR brachytherapy boost after whole-pelvis external radiation to a total tumor dose of 85 Gy or greater (for alpha/beta = 10). Toxicities are reported according to National Cancer Institute CTCAE v3.0 (Common Terminology Criteria for Adverse Events version 3.0) guidelines. Results: At a median follow-up of 24.3 months, there were no toxicities of Grade 4 or greater and the frequencies of Grade 3 acute and late toxicities were 4% and 2%, respectively. The proportion of patients having Grade 1 or 2 gastrointestinal and genitourinary acute toxicities was 48% and 52%, respectively. Low-grade late toxicities included Grade 1 or 2 vaginal, gastrointestinal, and hormonal toxicities in 31%, 18%, and 4% of patients, respectively. During the follow-up period, local recurrence developed in 2 patients, regional recurrence developed in 2, and new distant metastases developed in 15. The rates of locoregional control of disease and overall survival at 24 months were 91% and 86%, respectively. Conclusions: Definitive radiation by use of inverse planned HDR brachytherapy boost for locoregionally advanced cervical cancer is well tolerated and achieves excellent local control of disease.
Li, Yulan; Hu, Shenyang Y.; Montgomery, Robert; Gao, Fei; Sun, Xin; Tonks, Michael; Biner, Bullent; Millet, Paul; Tikare, Veena; Radhakrishnan, Balasubramaniam; Andersson, David
2012-04-11
A study was conducted to evaluate the capabilities of different numerical methods used to represent microstructure behavior at the mesoscale for irradiated material using an idealized benchmark problem. The purpose of the mesoscale benchmark problem was to provide a common basis to assess several mesoscale methods with the objective of identifying the strengths and areas of improvement in the predictive modeling of microstructure evolution. In this work, mesoscale models (phase-field, Potts, and kinetic Monte Carlo) developed by PNNL, INL, SNL, and ORNL were used to calculate the evolution kinetics of intra-granular fission gas bubbles in UO2 fuel under post-irradiation thermal annealing conditions. The benchmark problem was constructed to include important microstructural evolution mechanisms on the kinetics of intra-granular fission gas bubble behavior such as the atomic diffusion of Xe atoms, U vacancies, and O vacancies, the effect of vacancy capture and emission from defects, and the elastic interaction of non-equilibrium gas bubbles. An idealized set of assumptions was imposed on the benchmark problem to simplify the mechanisms considered. The capability and numerical efficiency of different models are compared against selected experimental and simulation results. These comparisons find that the phase-field methods, by the nature of the free energy formulation, are able to represent a larger subset of the mechanisms influencing the intra-granular bubble growth and coarsening mechanisms in the idealized benchmark problem as compared to the Potts and kinetic Monte Carlo methods. It is recognized that the mesoscale benchmark problem as formulated does not specifically highlight the strengths of the discrete particle modeling used in the Potts and kinetic Monte Carlo methods. Future efforts are recommended to construct increasingly more complex mesoscale benchmark problems to further verify and validate the predictive capabilities of the mesoscale modeling
Becker, Kathrin; Stauber, Martin; Schwarz, Frank; Beißbarth, Tim
2015-09-01
We propose a novel 3D-2D registration approach for micro-computed tomography (μCT) and histology (HI), developed for dental implant biopsies, that finds the position and normal vector of the oblique slice from μCT that corresponds to HI. During image pre-processing, the implants and the bone tissue are segmented using a combination of thresholding, morphological filters, and component labeling. Chamfer matching is then employed to register the implant edges, and fine registration of the bone tissues is achieved using simulated annealing. The method was tested on n=10 biopsies, obtained at 20 weeks after non-submerged healing in the canine mandible. The specimens were scanned with μCT 100 and processed for hard tissue sectioning. After registration, we assessed the agreement of bone-to-implant contact (BIC) using automated and manual measurements. Statistical analysis was conducted to test the agreement of the BIC measurements in the registered samples. Registration was successful for all specimens, and agreement of the respective binary images was high (median: 0.90, 1st-3rd quartile: 0.89-0.91). Automated (median 0.82, 1st-3rd quartile: 0.75-0.85) and manual (median 0.61, 1st-3rd quartile: 0.52-0.67) BIC measures from μCT were significantly positively correlated with HI (median 0.65, 1st-3rd quartile: 0.59-0.72) (manual: R² = 0.87, automated: R² = 0.75, p < 0.001). These results are promising and suggest that μCT may become a valid alternative for assessing osseointegration in three dimensions. PMID:26026659
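For readers unfamiliar with the fine-registration step, the following is a minimal, illustrative sketch of simulated-annealing registration, here reduced to a translation-only 2D transform with a Dice-overlap objective; the function names and parameters are hypothetical and are not taken from the authors' implementation.

```python
import numpy as np

def dice(a, b):
    """Dice overlap of two binary masks."""
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def shift(mask, dx, dy):
    """Integer-pixel translation of a binary mask (toy transform model)."""
    return np.roll(np.roll(mask, int(round(dx)), axis=0), int(round(dy)), axis=1)

def anneal_register(fixed, moving, t0=1.0, t_min=1e-3, cool=0.95, steps=50, seed=None):
    """Simulated-annealing search over a 2D translation maximizing Dice overlap."""
    rng = np.random.default_rng(seed)
    params = np.zeros(2)                        # current (dx, dy)
    score = dice(fixed, shift(moving, *params))
    t = t0
    while t > t_min:
        for _ in range(steps):
            cand = params + rng.normal(scale=2.0, size=2)   # random perturbation
            s = dice(fixed, shift(moving, *cand))
            # Metropolis rule: accept improvements always, and worse
            # candidates with a probability that shrinks as t falls
            if s >= score or rng.random() < np.exp((s - score) / t):
                params, score = cand, s
        t *= cool                               # geometric cooling schedule
    return params, score
```

The Metropolis criterion lets early iterations escape poor local alignments, while the cooling schedule gradually freezes the search; a full rigid or oblique-slice transform would simply enlarge the parameter vector.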
Pronk, Sander; Pouya, Iman; Lundborg, Magnus; Rotskoff, Grant; Wesén, Björn; Kasson, Peter M; Lindahl, Erik
2015-06-01
Computational chemistry and other simulation fields are critically dependent on computing resources, but few problems scale efficiently to the hundreds of thousands of processors available in current supercomputers, particularly for molecular dynamics. This has turned into a bottleneck as new hardware generations primarily provide more processing units rather than making individual units much faster, which simulation applications are addressing by increasingly focusing on sampling with algorithms such as free-energy perturbation, Markov state modeling, metadynamics, or milestoning. All of these rely on combining results from multiple simulations into a single observation. They are potentially powerful approaches that aim to predict experimental observables directly, but this comes at the expense of added complexity in selecting sampling strategies and keeping track of dozens to thousands of simulations and their dependencies. Here, we describe how the distributed execution framework Copernicus allows the expression of such algorithms in generic workflows: dataflow programs. Because dataflow algorithms explicitly state the dependencies of each constituent part, algorithms only need to be described at a conceptual level, after which the execution is maximally parallel. The fully automated execution facilitates the optimization of these algorithms with adaptive sampling, where undersampled regions are automatically detected and targeted without user intervention. We show how several such algorithms can be formulated for computational chemistry problems, and how they are executed efficiently with many loosely coupled simulations using either distributed or parallel resources with Copernicus. PMID:26575558
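To make the dataflow idea concrete, here is a small, hypothetical scheduler sketch in Python (not Copernicus's actual API): each task declares its prerequisites explicitly, and anything whose prerequisites are satisfied runs in parallel, which is exactly why stating dependencies up front yields maximally parallel execution.

```python
from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED

def run_dataflow(tasks, deps):
    """Execute a dataflow graph: run every task whose dependencies are met.

    tasks: dict name -> zero-argument callable (e.g., one simulation each)
    deps:  dict name -> set of prerequisite task names (must be acyclic)
    """
    done, futures = set(), {}
    with ThreadPoolExecutor() as pool:
        while len(done) < len(tasks):
            # submit every task that has just become ready
            for name, fn in tasks.items():
                if name not in done and name not in futures and deps[name] <= done:
                    futures[name] = pool.submit(fn)
            finished, _ = wait(futures.values(), return_when=FIRST_COMPLETED)
            for name in [n for n, f in futures.items() if f in finished]:
                futures.pop(name).result()      # propagate any task error
                done.add(name)

# usage sketch: two independent simulations feeding an analysis step
# run_dataflow({'sim1': f1, 'sim2': f2, 'combine': g},
#              {'sim1': set(), 'sim2': set(), 'combine': {'sim1', 'sim2'}})
```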
Developing a Moving-Solid Algorithm for Simulating Tsunamis Induced by Rock Sliding
NASA Astrophysics Data System (ADS)
Chuang, M.; Wu, T.; Huang, C.; Wang, C.; Chu, C.; Chen, M.
2012-12-01
Landslide-generated tsunamis are among the most devastating natural hazards. However, the involvement of a moving obstacle and dynamic free-surface movement makes numerical simulation a difficult task. To describe the fluid motion, we use a modified two-step projection method to decouple the velocity and pressure fields, with a 3D LES turbulence model. The free-surface movement is tracked by the volume of fluid (VOF) method (Wu, 2004). To describe the effect of the moving obstacle on the fluid, a moving-solid algorithm (MSA) is developed. We combine ideas from the immersed boundary method (IBM) and the partial-cell treatment (PCT) for specifying the contacting speed on the solid face and for representing the obstacle blocking effect, respectively. By using the concept of IBM, the cell-center and cell-face velocities can be specified arbitrarily. Because we move the solid obstacle on a fixed grid, the boundary of the solid seldom coincides with the cell faces, which makes it inappropriate to assign the solid boundary velocity to the cell faces. To overcome this problem, the PCT is adopted. Using this algorithm, the solid surface conceptually coincides with the cell faces, and the cell-face velocity can be specified as the obstacle velocity. The advantage of this algorithm is a stable pressure field, which is extremely important for coupling with a force-balancing model that describes the solid motion. This model is therefore able to simulate incompressible high-speed fluid motion. To describe the solid motion, the DEM (Discrete Element Method) is adopted. The new-time solid movement can be predicted and divided into translation and rotation based on Newton's equations and Euler's equations, respectively. The detail of the moving-solid algorithm is presented in this paper. The model is then applied to studying rock-slide generated tsunamis. The results are validated with the laboratory data (Liu and Wu, 2005).
NASA Astrophysics Data System (ADS)
Weston, Joseph; Waintal, Xavier
2016-04-01
We report on a "source-sink" algorithm which allows one to calculate time-resolved physical quantities from a general nanoelectronic quantum system (described by an arbitrary time-dependent quadratic Hamiltonian) connected to infinite electrodes. Although mathematically equivalent to the nonequilibrium Green's function formalism, the approach is based on the scattering wave functions of the system. It amounts to solving a set of generalized Schrödinger equations that include an additional "source" term (coming from the time-dependent perturbation) and an absorbing "sink" term (the electrodes). The algorithm execution time scales linearly with both system size and simulation time, allowing one to simulate large systems (currently around 10⁶ degrees of freedom) and/or large times (currently around 10⁵ times the smallest time scale of the system). As an application we calculate the current-voltage characteristics of a Josephson junction for both short and long junctions, and recover the multiple Andreev reflection physics. We also discuss two intrinsically time-dependent situations: the relaxation time of a Josephson junction after a quench of the voltage bias, and the propagation of voltage pulses through a Josephson junction. In the case of a ballistic, long Josephson junction, we predict that a fast voltage pulse creates an oscillatory current whose frequency is controlled by the Thouless energy of the normal part. A similar effect is found for short junctions; a voltage pulse produces an oscillating current which, in the absence of electromagnetic environment, does not relax.
An Adaptive Multigrid Algorithm for Simulating Solid Tumor Growth Using Mixture Models
Wise, S.M.; Lowengrub, J.S.; Cristini, V.
2010-01-01
In this paper we give the details of the numerical solution of a three-dimensional multispecies diffuse interface model of tumor growth, which was derived in (Wise et al., J. Theor. Biol. 253 (2008)) and used to study the development of glioma in (Frieboes et al., NeuroImage 37 (2007)) and tumor invasion in (Bearer et al., Cancer Research, 69 (2009)) and (Frieboes et al., J. Theor. Biol. 264 (2010)). The model has a thermodynamic basis, is related to recently developed mixture models, and is capable of providing a detailed description of tumor progression. It utilizes a diffuse interface approach, whereby sharp tumor boundaries are replaced by narrow transition layers that arise due to differential adhesive forces among the cell-species. The model consists of fourth-order nonlinear advection-reaction-diffusion equations (of Cahn-Hilliard-type) for the cell-species coupled with reaction-diffusion equations for the substrate components. Numerical solution of the model is challenging because the equations are coupled, highly nonlinear, and numerically stiff. In this paper we describe a fully adaptive, nonlinear multigrid/finite difference method for efficiently solving the equations. We demonstrate the convergence of the algorithm and we present simulations of tumor growth in 2D and 3D that demonstrate the capabilities of the algorithm in accurately and efficiently simulating the progression of tumors with complex morphologies. PMID:21076663
Godfrey, Brendan B.; Vay, Jean-Luc
2013-09-01
Rapidly growing numerical instabilities routinely occur in multidimensional particle-in-cell computer simulations of plasma-based particle accelerators, astrophysical phenomena, and relativistic charged particle beams. Reducing instability growth to acceptable levels has necessitated higher-resolution grids, high-order field solvers, current filtering, and the like, except at certain ratios of the time step to the axial cell size, for which numerical growth rates and saturation levels are reduced substantially. This paper derives and solves the cold-beam dispersion relation for numerical instabilities in multidimensional, relativistic, electromagnetic particle-in-cell programs employing either the standard or the Cole–Karkkainen finite-difference field solver on a staggered mesh and the common Esirkepov current-gathering algorithm. Good overall agreement is achieved with previously reported results of the WARP code. In particular, the existence of select time steps for which instabilities are minimized is explained. Additionally, an alternative field interpolation algorithm is proposed for which instabilities are almost completely eliminated for a particular time step in ultra-relativistic simulations.
Simulation of Propellant Loading System Senior Design Implement in Computer Algorithm
NASA Technical Reports Server (NTRS)
Bandyopadhyay, Alak
2010-01-01
Propellant loading from the Storage Tank to the External Tank is one of the most important and time-consuming pre-launch ground operations for the launch vehicle. The propellant loading system is a complex integrated system involving many physical components, such as the storage tank filled with cryogenic fluid at a very low temperature, the long pipeline connecting the storage tank with the external tank, the external tank along with the flare stack, and vent systems for releasing excess fuel. Some of the parameters useful for design purposes are the predicted pre-chill time, loading time, amount of fuel lost, and maximum pressure rise. The physics involved in the mathematical modeling is quite complex: the process is unsteady, phase change occurs as some of the fuel passes from the liquid to the gas state, and there is conjugate heat transfer within the pipe walls as well as between the solid and fluid regions. The simulation is also tedious and time consuming. Overall, this is a complex system, and the objective of the work is student involvement in the parametric study and optimization of numerical modeling toward the design of such a system. The students must first become familiar with and understand the physical process, the related mathematics, and the numerical algorithm. The work involves exploring (i) improved algorithms to make the transient simulation computationally effective (reduced CPU time) and (ii) parametric studies to evaluate design parameters by changing the operational conditions.
An efficient algorithm for the stochastic simulation of the hybridization of DNA to microarrays
2009-01-01
Background Although oligonucleotide microarray technology is ubiquitous in genomic research, reproducibility and standardization of expression measurements still concern many researchers. Cross-hybridization between microarray probes and non-target ssDNA has been implicated as a primary factor in sensitivity and selectivity loss. Since hybridization is a chemical process, it may be modeled at a population level using a combination of material balance equations and thermodynamics. However, the hybridization reaction network may be exceptionally large for commercial arrays, which often possess at least one reporter per transcript. Quantification of the kinetics and equilibrium of exceptionally large chemical systems of this type is numerically infeasible with customary approaches. Results In this paper, we present a robust and computationally efficient algorithm for the simulation of hybridization processes underlying microarray assays. Our method may be utilized to identify the extent to which nucleic acid targets (e.g. cDNA) will cross-hybridize with probes, and by extension, characterize probe robustness, using the information specified by MAGE-TAB. Using this algorithm, we characterize cross-hybridization in a modified commercial microarray assay. Conclusions By integrating stochastic simulation with thermodynamic prediction tools for DNA hybridization, one may robustly and rapidly characterize the selectivity of a proposed microarray design at the probe and "system" levels. Our code is available at http://www.laurenzi.net. PMID:20003312
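As an illustration of the population-level stochastic approach, here is a minimal Gillespie direct-method sketch for a single reversible hybridization reaction T + P ⇌ TP; real microarray networks couple thousands of such reactions, and the names and rate constants here are illustrative rather than the paper's implementation.

```python
import numpy as np

def gillespie_hybridization(n_target, n_probe, k_on, k_off, t_end, seed=None):
    """Stochastic simulation of reversible hybridization T + P <-> TP
    using Gillespie's direct method on a single reversible reaction."""
    rng = np.random.default_rng(seed)
    t, duplex = 0.0, 0
    history = [(t, duplex)]
    while t < t_end:
        # propensities of the forward and reverse reactions
        a_on = k_on * (n_target - duplex) * (n_probe - duplex)
        a_off = k_off * duplex
        a_tot = a_on + a_off
        if a_tot == 0.0:
            break
        t += rng.exponential(1.0 / a_tot)       # waiting time to next event
        if rng.random() < a_on / a_tot:         # choose which reaction fires
            duplex += 1
        else:
            duplex -= 1
        history.append((t, duplex))
    return history
```

Scaling this scheme to the full cross-hybridization network is the hard part the paper addresses; the per-event structure (propensities, exponential waiting time, reaction selection) stays the same.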
NASA Technical Reports Server (NTRS)
Richardson, Albert O.
1997-01-01
This research has investigated the use of fuzzy logic, via the Matlab Fuzzy Logic Tool Box, to design optimized controller systems. The engineering system for which the controller was designed and simulated was the container crane. The fuzzy logic algorithm that was investigated was the 'predictive control' algorithm. The plant dynamics of the container crane are representative of many important systems, including robotic arm movements. The container crane that was investigated had a trolley motor and a hoist motor. Total distance to be traveled by the trolley was 15 meters. The obstruction height was 5 meters. Crane height was 17.8 meters. Trolley mass was 7500 kilograms. Load mass was 6450 kilograms. Maximum trolley and rope velocities were 1.25 meters per sec. and 0.3 meters per sec., respectively. The fuzzy logic approach allowed the inclusion, in the controller model, of performance indices that are more effectively defined in linguistic terms. These include 'safety' and 'cargo swaying'. Two fuzzy inference systems were implemented using the Matlab simulation package, namely the Mamdani system (which relates fuzzy input variables to fuzzy output variables), and the Sugeno system (which relates fuzzy input variables to a crisp output variable). It is found that the Sugeno FIS is better suited to including aspects of those plant dynamics whose mathematical relationships can be determined.
Physics and Algorithm Enhancements for a Validated MCNP/X Monte Carlo Simulation Tool, Phase VII
McKinney, Gregg W
2012-07-17
Currently the US lacks an end-to-end (i.e., source-to-detector) radiation transport simulation code with predictive capability for the broad range of DHS nuclear material detection applications. For example, gaps in the physics, along with inadequate analysis algorithms, make it difficult for Monte Carlo simulations to provide a comprehensive evaluation, design, and optimization of proposed interrogation systems. With the development and implementation of several key physics and algorithm enhancements, along with needed improvements in evaluated data and benchmark measurements, the MCNP/X Monte Carlo codes will provide designers, operators, and systems analysts with a validated tool for developing state-of-the-art active and passive detection systems. This project is currently in its seventh year (Phase VII). This presentation will review thirty enhancements that have been implemented in MCNPX over the last 3 years and were included in the 2011 release of version 2.7.0. These improvements include 12 physics enhancements, 4 source enhancements, 8 tally enhancements, and 6 other enhancements. Examples and results will be provided for each of these features. The presentation will also discuss the eight enhancements that will be migrated into MCNP6 over the upcoming year.
A novel parallel-rotation algorithm for atomistic Monte Carlo simulation of dense polymer systems
NASA Astrophysics Data System (ADS)
Santos, S.; Suter, U. W.; Müller, M.; Nievergelt, J.
2001-06-01
We develop and test a new elementary Monte Carlo move for use in the off-lattice simulation of polymer systems. This novel parallel-rotation algorithm (ParRot) permits very efficient moves of torsion angles deep inside long chains in melts. The parallel-rotation move is extremely simple and is demonstrated to be computationally efficient and appropriate for Monte Carlo simulation. The ParRot move does not affect the orientation of those parts of the chain outside the moving unit. The move consists of a concerted rotation around four adjacent skeletal bonds. No assumption is made concerning the backbone geometry other than that bond lengths and bond angles are held constant during the elementary move. Properly weighted sampling techniques are needed to ensure detailed balance, because the new move involves a correlated change in four degrees of freedom along the chain backbone. The ParRot move is supplemented with the classical Metropolis Monte Carlo, Continuum-Configurational-Bias, and Reptation techniques in an isothermal-isobaric Monte Carlo simulation of melts of short and long chains. Comparisons are made with the capabilities of other Monte Carlo techniques to move the torsion angles in the middle of the chains. We demonstrate that ParRot constitutes a highly promising Monte Carlo move for the treatment of long polymer chains in the off-lattice simulation of realistic models of dense polymer systems.
Shimojo, Fuyuki; Hattori, Shinnosuke; Kalia, Rajiv K; Kunaseth, Manaschai; Mou, Weiwei; Nakano, Aiichiro; Nomura, Ken-ichi; Ohmura, Satoshi; Rajak, Pankaj; Shimamura, Kohei; Vashishta, Priya
2014-05-14
We introduce an extension of the divide-and-conquer (DC) algorithmic paradigm called divide-conquer-recombine (DCR) to perform large quantum molecular dynamics (QMD) simulations on massively parallel supercomputers, in which interatomic forces are computed quantum mechanically in the framework of density functional theory (DFT). In DCR, the DC phase constructs globally informed, overlapping local-domain solutions, which in the recombine phase are synthesized into a global solution encompassing large spatiotemporal scales. For the DC phase, we design a lean divide-and-conquer (LDC) DFT algorithm, which significantly reduces the prefactor of the O(N) computational cost for N electrons by applying a density-adaptive boundary condition at the peripheries of the DC domains. Our globally scalable and locally efficient solver is based on a hybrid real-reciprocal space approach that combines: (1) a highly scalable real-space multigrid to represent the global charge density; and (2) a numerically efficient plane-wave basis for local electronic wave functions and charge density within each domain. Hybrid space-band decomposition is used to implement the LDC-DFT algorithm on parallel computers. A benchmark test on an IBM Blue Gene/Q computer exhibits an isogranular parallel efficiency of 0.984 on 786 432 cores for a 50.3 × 10⁶-atom SiC system. As a test of production runs, LDC-DFT-based QMD simulation involving 16 661 atoms is performed on the Blue Gene/Q to study on-demand production of hydrogen gas from water using LiAl alloy particles. As an example of the recombine phase, LDC-DFT electronic structures are used as a basis set to describe global photoexcitation dynamics with nonadiabatic QMD (NAQMD) and kinetic Monte Carlo (KMC) methods. The NAQMD simulations are based on the linear response time-dependent density functional theory to describe electronic excited states and a surface-hopping approach to describe transitions between the excited states. A series of
Building a LiDAR point cloud simulator: Testing algorithms for high resolution topographic change
NASA Astrophysics Data System (ADS)
Carrea, Dario; Abellán, Antonio; Derron, Marc-Henri; Jaboyedoff, Michel
2014-05-01
Terrestrial laser scanning (TLS) is becoming a common tool in the geosciences, with applications ranging from the generation of high-resolution 3D models to the monitoring of unstable slopes and the quantification of morphological changes. Nevertheless, like every measurement technique, TLS has limitations that are not clearly understood and that affect the accuracy of the dataset (point cloud). A challenge in LiDAR research is to understand the influence of instrumental parameters on measurement errors during LiDAR acquisition. Indeed, several critical parameters interact with scan quality at different ranges: the existence of shadow areas, the spatial resolution (point density), the diameter of the laser beam, the incidence angle, and the single-point accuracy. The objective of this study is to test the main limitations of different algorithms usually applied in point cloud data treatment, from alignment to monitoring. To this end, we built, in the MATLAB environment, a LiDAR point cloud simulator able to recreate the multiple sources of error related to instrumental settings that we normally observe in real datasets. In a first step, we characterized the error from a single laser pulse by modelling the influence of range and incidence angle on single-point accuracy. In a second step, we simulated the scanning part of the system in order to analyze the effects of shifting and angular errors. Other parameters have been added to the point cloud simulator, such as point spacing and acquisition window, in order to create point clouds of simple and/or complex geometries. We tested the influence of point density and viewpoint on Iterative Closest Point (ICP) alignment, and also on a deformation-tracking algorithm with the same point cloud geometry, in order to determine alignment and deformation detection thresholds. We also generated a series of high-resolution point clouds in order to model small changes in different environments.
NASA Astrophysics Data System (ADS)
Jiang, Tianzi; Cui, Qinghua; Shi, Guihua; Ma, Songde
2003-08-01
In this paper, a novel hybrid algorithm combining genetic algorithms and tabu search is presented. In the proposed hybrid algorithm, the idea of tabu search is applied to the crossover operator. We demonstrate that the hybrid algorithm can be applied successfully to the protein folding problem based on a hydrophobic-hydrophilic lattice model. The results show that in all cases the hybrid algorithm works better than a genetic algorithm alone. A comparison with other methods is also made.
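The abstract does not give implementation details, but the core idea, a tabu memory consulted inside the crossover operator, can be sketched generically. The bitstring encoding, selection scheme, and all parameters below are hypothetical illustrations, not the HP-lattice encoding of the paper.

```python
import random

def ga_with_tabu_crossover(fitness, n_bits, pop_size=40, gens=100,
                           tabu_size=200, p_mut=0.01):
    """Genetic algorithm whose crossover consults a tabu list: offspring
    identical to recently visited solutions are rejected and re-generated,
    pushing the search toward unexplored regions. Assumes fitness is
    maximized over bitstrings (a generic stand-in problem encoding)."""
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    tabu = []
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        next_pop = [list(p) for p in pop[:2]]            # elitism
        while len(next_pop) < pop_size:
            p1, p2 = random.sample(pop[:pop_size // 2], 2)  # truncation selection
            for _attempt in range(5):                    # retry tabu offspring
                cut = random.randrange(1, n_bits)
                child = p1[:cut] + p2[cut:]              # one-point crossover
                if child not in tabu:
                    break                                # keep last child if all tabu
            tabu.append(list(child))
            tabu[:] = tabu[-tabu_size:]                  # bounded tabu memory
            child = [b ^ 1 if random.random() < p_mut else b for b in child]
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=fitness)
```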
NASA Astrophysics Data System (ADS)
Tang, Yu-Hang; Karniadakis, George; Crunch Team
2014-03-01
We present a scalable dissipative particle dynamics simulation code, fully implemented on Graphics Processing Units (GPUs) using a hybrid CUDA/MPI programming model, which achieves a 10-30 times speedup on a single GPU over 16 CPU cores and almost linear weak scaling across a thousand nodes. A unified framework is developed within which the efficient generation of the neighbor list and the maintenance of particle data locality are addressed. Our algorithm generates strictly ordered neighbor lists in parallel, while the construction is deterministic and makes no use of atomic operations or sorting. Such a neighbor list leads to optimal data-loading efficiency when combined with a two-level particle reordering scheme. A faster in situ generation scheme for Gaussian random numbers is proposed using precomputed binary signatures. We designed custom transcendental functions that are fast and accurate for evaluating the pairwise interaction. Computer benchmarks demonstrate the speedup of our implementation over the CPU implementation as well as strong and weak scalability. A large-scale simulation of spontaneous vesicle formation consisting of 128 million particles was conducted to illustrate the practicality of our code in real-world applications. This work was supported by the new Department of Energy Collaboratory on Mathematics for Mesoscopic Modeling of Materials (CM4). Simulations were carried out at the Oak Ridge Leadership Computing Facility through the INCITE program under project BIP017.
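For context, neighbor-list construction in particle codes is typically done with cell binning. The following is a plain, serial Python sketch of a deterministic, sorted neighbor list under periodic boundaries; it illustrates the data structure only, not the paper's GPU-parallel, atomics-free construction, and it assumes a cubic box with side at least three times the cutoff and coordinates in [0, box).

```python
import numpy as np

def neighbor_list(pos, box, rcut):
    """Deterministic, sorted Verlet-style neighbor list via cell binning.

    pos: (N, 3) coordinates in a periodic cubic box of side `box`.
    Returns dict i -> sorted array of indices j (j != i) within rcut.
    """
    n = len(pos)
    ncell = max(3, int(box // rcut))          # cells at least rcut wide
    cell_of = (pos / (box / ncell)).astype(int) % ncell
    cells = {}                                # bin particles into cells
    for i, c in enumerate(map(tuple, cell_of)):
        cells.setdefault(c, []).append(i)
    offsets = [(dx, dy, dz) for dx in (-1, 0, 1)
               for dy in (-1, 0, 1) for dz in (-1, 0, 1)]
    neigh = {i: [] for i in range(n)}
    for i in range(n):
        cx, cy, cz = cell_of[i]
        for dx, dy, dz in offsets:            # scan the 27 surrounding cells
            for j in cells.get(((cx + dx) % ncell, (cy + dy) % ncell,
                                (cz + dz) % ncell), ()):
                if j == i:
                    continue
                d = pos[j] - pos[i]
                d -= box * np.round(d / box)  # minimum-image convention
                if (d @ d) < rcut * rcut:
                    neigh[i].append(j)
    return {i: np.array(sorted(js)) for i, js in neigh.items()}
```

Sorting each list makes the output independent of binning order, which is the serial analogue of the strict ordering the paper achieves in parallel.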
Carvajal, M A; García-Pareja, S; Guirado, D; Vilches, M; Anguiano, M; Palma, A J; Lallena, A M
2009-10-21
In this work we have developed a simulation tool, based on the PENELOPE code, to study the response of MOSFET devices to irradiation with high-energy photons. The energy deposited in the extremely thin silicon dioxide layer has been calculated. To reduce the statistical uncertainties, an ant colony algorithm has been implemented to drive the application of splitting and Russian roulette as variance-reduction techniques. In this way, the uncertainty has been reduced by a factor of approximately 5, while the efficiency is increased by a factor of more than 20. As an application, we have studied the dependence of the response of the pMOS transistor 3N163, used as a dosimeter, on the incidence angle of the radiation for three common photon sources used in radiotherapy: a ⁶⁰Co Theratron-780 and the 6 and 18 MV beams produced by a Mevatron KDS LINAC. Experimental and simulated results have been obtained for gantry angles of 0, 15, 30, 45, 60, and 75 degrees. The agreement obtained has permitted validation of the simulation tool. We have studied how to reduce the angular dependence of the MOSFET response by using an additional encapsulation made of brass in the case of the two LINAC qualities considered. PMID:19794247
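Splitting and Russian roulette are standard variance-reduction moves; the toy sketch below shows the weight bookkeeping that keeps them unbiased. The ant-colony logic that decides where to apply them in the paper is not reproduced, and `importance` is a hypothetical user-supplied map from particle state to a region importance.

```python
import random

def adjust_population(particles, importance, target=1.0):
    """Weight-window style splitting / Russian roulette on a particle list.

    particles: list of (weight, state) pairs.
    importance(state) -> float multiplier for the region the particle is in.
    Expected total weight is conserved by both branches.
    """
    out = []
    for w, state in particles:
        ratio = w * importance(state) / target
        if ratio >= 1.0:
            n = int(ratio)                    # split into n lighter copies
            for _ in range(n):
                out.append((w / n, state))
        elif random.random() < ratio:         # Russian roulette: survive ...
            out.append((target / importance(state), state))
        # ... or be killed; survival probability * boosted weight == w
    return out
```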
Karafasoulis, K.; Zachariadou, K.; Seferlis, S.; Kaissas, I.; Potiriadis, C.; Lambropoulos, C.; Loukas, D.
2011-12-13
Simulation studies are presented regarding the performance of algorithms that localize point-like radioactive sources detected by a position sensitive portable radiation instrument (COCAE). The source direction is estimated by using the List Mode Maximum Likelihood Expectation Maximization (LM-ML-EM) imaging algorithm. Furthermore, the source-to-detector distance is evaluated by three different algorithms based on the photo-peak count information of each detecting layer, the quality of the reconstructed source image, and the triangulation method. These algorithms have been tested on a large number of simulated photons over a wide energy range (from 200 keV to 2 MeV) emitted by point-like radioactive sources located at different orientations and source-to-detector distances.
Jackson, Jennifer N; Hass, Chris J; Fregly, Benjamin J
2015-11-01
Patient-specific gait optimizations capable of predicting post-treatment changes in joint motions and loads could improve treatment design for gait-related disorders. To maximize potential clinical utility, such optimizations should utilize full-body three-dimensional patient-specific musculoskeletal models, generate dynamically consistent gait motions that reproduce pretreatment marker measurements closely, and achieve accurate foot motion tracking to permit deformable foot-ground contact modeling. This study enhances an existing residual elimination algorithm (REA; Remy, C. D., and Thelen, D. G., 2009, "Optimal Estimation of Dynamically Consistent Kinematics and Kinetics for Forward Dynamic Simulation of Gait," ASME J. Biomech. Eng., 131(3), p. 031005) to achieve all three requirements within a single gait optimization framework. We investigated four primary enhancements to the original REA: (1) manual modification of tracked marker weights, (2) automatic modification of tracked joint acceleration curves, (3) automatic modification of algorithm feedback gains, and (4) automatic calibration of model joint and inertial parameter values. We evaluated the enhanced REA using a full-body three-dimensional dynamic skeletal model and movement data collected from a subject who performed four distinct gait patterns: walking, marching, running, and bounding. When all four enhancements were implemented together, the enhanced REA achieved dynamic consistency with lower marker tracking errors for all segments, especially the feet (mean root-mean-square (RMS) errors of 3.1 versus 18.4 mm), compared to the original REA. When the enhancements were implemented separately and in combinations, the most important one was automatic modification of tracked joint acceleration curves, while the least important enhancement was automatic modification of algorithm feedback gains. The enhanced REA provides a framework for future gait optimization studies that seek to predict subject
NASA Astrophysics Data System (ADS)
Schafer, Sebastian; Singh, Vikas; Hoffmann, Kenneth R.; Noël, Peter B.; Xu, Jinhui
2007-03-01
Endovascular interventional procedures are being used more frequently in cardiovascular surgery. Unfortunately, procedural failure, e.g., vessel dissection, may occur and is often related to improper guidewire and/or device selection. To support the surgeon's decision process, and because of the importance of the guidewire in positioning devices, we propose a method to determine the guidewire path prior to insertion using a model of its elastic potential energy coupled with a representative graph construction. The 3D vessel centerline and sizes are determined for a specified vessel. Points in planes perpendicular to the vessel centerline are generated. For each pair of consecutive planes, a vector set is generated which joins all points in these planes. We construct a graph representing these vector sets as nodes. The nodes representing adjacent vector sets are joined by edges with weights calculated as a function of the angle between the corresponding vectors (nodes). The optimal path through this weighted directed graph is then determined using shortest-path algorithms, such as a topological-sort-based shortest-path algorithm or Dijkstra's algorithm. Volumetric data of an internal carotid artery phantom (Ø 3.5 mm) were acquired. Several independent guidewire (Ø 0.4 mm) placements were performed, and the 3D paths were determined using rotational angiography. The average RMS distance between the actual and the average simulated guidewire path was 0.7 mm; the computation time to determine the path was 3 seconds. The ability to predict the guidewire path inside vessels may facilitate calculation of vessel-branch access and estimation of forces on devices and the vessel wall.
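A schematic version of the graph construction: vectors joining points in consecutive planes become nodes, adjacent vectors sharing a point are linked, and the edge weight grows with the bending angle between them. Because the graph is a layered DAG, a dynamic-programming pass in plane order finds the minimum-energy path, which is equivalent to Dijkstra's algorithm here. The squared-angle weight below is an illustrative stand-in for the paper's elastic-energy function.

```python
import math

def guidewire_path(planes):
    """Minimum-bending path through point sets on consecutive planes.

    planes: list of lists of 3D points (tuples), one list per plane.
    Returns (best cost, back-pointer dict for path reconstruction).
    """
    def vec(a, b):
        return tuple(q - p for p, q in zip(a, b))

    def angle(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        norm = math.dist((0, 0, 0), u) * math.dist((0, 0, 0), v)
        return math.acos(max(-1.0, min(1.0, dot / norm)))

    # cost[(i, j)]: best cost of a path whose last vector runs i -> j
    cost = {(i, j): 0.0
            for i in range(len(planes[0])) for j in range(len(planes[1]))}
    back = {}
    for k in range(1, len(planes) - 1):
        new = {}
        for (i, j), c in cost.items():
            u = vec(planes[k - 1][i], planes[k][j])
            for m in range(len(planes[k + 1])):
                w = c + angle(u, vec(planes[k][j], planes[k + 1][m])) ** 2
                if (j, m) not in new or w < new[(j, m)]:
                    new[(j, m)], back[(k, j, m)] = w, i
        cost = new
    end = min(cost, key=cost.get)
    return cost[end], back
```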
Optimization through quantum annealing: theory and some applications
NASA Astrophysics Data System (ADS)
Battaglia, D. A.; Stella, L.
2006-08-01
Quantum annealing is a promising tool for solving optimization problems, similar in some ways to the traditional (classical) simulated annealing of Kirkpatrick et al. Simulated annealing takes advantage of thermal fluctuations in order to explore the optimization landscape of the problem at hand, whereas quantum annealing employs quantum fluctuations. Intriguingly, quantum annealing has been proved to be more effective than its classical counterpart in many applications. We illustrate the theory and the practical implementation of both classical and quantum annealing, highlighting the crucial differences between these two methods by means of results recently obtained in experiments, in simple toy models, and in more challenging combinatorial optimization problems (namely, the random Ising model and the travelling salesman problem). The techniques used to implement quantum and classical annealing are either deterministic evolutions, for the simplest models, or Monte Carlo approaches, for harder optimization tasks. We discuss the pros and cons of these approaches and their possible connections to the landscape of the problem addressed.
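Quantum annealing on classical hardware is usually implemented with path-integral Monte Carlo: the Suzuki-Trotter mapping turns one quantum spin system at temperature T into P coupled classical replicas, with an inter-replica coupling J⊥ = -(T/2) ln tanh(Γ/(PT)), and the transverse field Γ is lowered during the run. The sketch below applies this to a random Ising chain; all schedule parameters are arbitrary illustrative choices.

```python
import numpy as np

def quantum_anneal_ising(J, P=20, T=0.05, gamma0=3.0, gamma_min=1e-2,
                         sweeps=200, seed=None):
    """Path-integral Monte Carlo quantum annealing of a 1D random Ising
    chain H = -sum_i J[i] s_i s_{i+1} - Gamma sum_i sigma^x_i, mapped by
    Suzuki-Trotter onto P classical replicas simulated at temperature T."""
    rng = np.random.default_rng(seed)
    J = np.asarray(J, float)
    n = J.size + 1
    s = rng.choice([-1, 1], size=(P, n))               # replicas x spins
    gamma = gamma0
    while gamma > gamma_min:
        jperp = -0.5 * T * np.log(np.tanh(gamma / (P * T)))  # replica coupling
        for _ in range(sweeps):
            k, i = int(rng.integers(P)), int(rng.integers(n))
            # local field from intra-replica bonds and neighboring replicas
            h = 0.0
            if i > 0:
                h += (J[i - 1] / P) * s[k, i - 1]
            if i < n - 1:
                h += (J[i] / P) * s[k, i + 1]
            h += jperp * (s[(k - 1) % P, i] + s[(k + 1) % P, i])
            dE = 2.0 * s[k, i] * h                     # cost of flipping spin
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                s[k, i] *= -1                          # Metropolis acceptance
        gamma *= 0.9                                   # anneal transverse field
    # classical energy of each replica; report the best one found
    e = [-(J * rep[:-1] * rep[1:]).sum() for rep in s]
    return s[int(np.argmin(e))], float(min(e))
```

As Γ shrinks, J⊥ grows and the replicas lock together, recovering a purely classical configuration; a classical SA run differs only in replacing the Γ schedule with a temperature schedule.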
Clustering of tethered satellite system simulation data by an adaptive neuro-fuzzy algorithm
NASA Technical Reports Server (NTRS)
Mitra, Sunanda; Pemmaraju, Surya
1992-01-01
Recent developments in neuro-fuzzy systems indicate that the concepts of adaptive pattern recognition, when used to identify appropriate control actions corresponding to clusters of patterns representing system states in dynamic nonlinear control systems, may result in innovative designs. A modular, unsupervised neural network architecture, in which fuzzy learning rules have been embedded, is used for on-line identification of similar states. The architecture and control rules involved in Adaptive Fuzzy Leader Clustering (AFLC) allow this system to be incorporated in control systems for identification of system states corresponding to specific control actions. We have used this algorithm to cluster the simulation data of the Tethered Satellite System (TSS) to estimate the range of delta voltages necessary to maintain the desired length rate of the tether. The AFLC algorithm is capable of on-line estimation of the appropriate control voltages from the corresponding length error and length-rate error without a priori knowledge of their membership functions and familiarity with the behavior of the Tethered Satellite System.
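The leader-clustering idea underlying AFLC can be sketched in a few lines: samples arrive online, join the nearest existing cluster if it lies within a vigilance radius, and otherwise seed a new cluster whose leader then adapts. This simplified sketch omits the fuzzy membership updates of the full AFLC architecture; the vigilance and learning-rate parameters are illustrative.

```python
import numpy as np

def leader_cluster(samples, vigilance, lr=0.1):
    """Online leader-style clustering: nearest leader within `vigilance`
    absorbs the sample (and moves toward it); otherwise a cluster is born."""
    centroids, labels = [], []
    for x in samples:
        x = np.asarray(x, float)
        if centroids:
            d = [np.linalg.norm(x - c) for c in centroids]
            k = int(np.argmin(d))
            if d[k] <= vigilance:
                centroids[k] += lr * (x - centroids[k])  # adapt the leader
                labels.append(k)
                continue
        centroids.append(x.copy())                       # new cluster leader
        labels.append(len(centroids) - 1)
    return centroids, labels
```

The number of clusters is not fixed in advance; it is set implicitly by the vigilance radius, which is what makes the scheme suitable for identifying previously unseen system states online.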
A JFNK-based implicit moment algorithm for self-consistent, multi-scale, plasma simulation
NASA Astrophysics Data System (ADS)
Knoll, Dana; Taitano, William; Chacon, Luis
2010-11-01
The Jacobian-free Newton-Krylov (JFNK) method is an advanced nonlinear algorithm that allows the solution of coupled systems of nonlinear equations [1]. In [2] we put forward a JFNK-based implicit, consistent time integration algorithm and demonstrated its ability to efficiently step over electron time scales while retaining electron kinetic effects on the ion time scale. Here we extend this work by investigating a JFNK-based implicit-moments approach for the purpose of consistent scale-bridging between the fluid and kinetic descriptions in order to resolve the transition region. Our preliminary results, based on a reformulated Poisson equation (RPE) [3], allow solution of the Vlasov-Poisson system for varying grid resolutions. In the limit of a locally coarse grid (grid spacing large compared to the Debye length), the RPE represents an electric field based on the moment system, while in the limit of local grid spacing resolving the Debye length, the RPE represents an electric field based on the standard Poisson equation. The technique allows a smooth transition between the two regimes, consistently, in one simulation. [1] D.A. Knoll and D.E. Keyes, J. Comput. Phys., vol. 193 (2004) [2] W.T. Taitano, Masters Thesis, Nuclear Engineering, University of Idaho (2010) [3] R. Belaouar, N. Crouseilles and P. Degond, J. Sci. Comput., vol. 41 (2009)
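The defining trick of JFNK is that each Newton linear system is solved with a Krylov method needing only Jacobian-vector products, which can be approximated by finite differences of the residual, so the Jacobian is never assembled. A bare-bones sketch using SciPy's GMRES follows; the fixed ε and absent preconditioner are simplifications (production codes scale ε and precondition the Krylov solve).

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def jfnk_solve(F, u0, tol=1e-8, max_newton=50, eps=1e-7):
    """Jacobian-free Newton-Krylov: Newton iteration where each linear
    solve uses GMRES with J(u) v ~ (F(u + eps*v) - F(u)) / eps."""
    u = np.asarray(u0, dtype=float)
    for _ in range(max_newton):
        r = F(u)
        if np.linalg.norm(r) < tol:
            break
        def jv(v, u=u, r=r):                     # matrix-free J*v product
            return (F(u + eps * v) - r) / eps
        J = LinearOperator((u.size, u.size), matvec=jv, dtype=float)
        du, _info = gmres(J, -r)                 # inexact Newton step
        u = u + du
    return u
```

Here F would be, for instance, the discretized residual of the reformulated Poisson system coupled to the moment equations; only residual evaluations are ever required.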
Improving Efficiency in SMD Simulations Through a Hybrid Differential Relaxation Algorithm.
Ramírez, Claudia L; Zeida, Ari; Jara, Gabriel E; Roitberg, Adrián E; Martí, Marcelo A
2014-10-14
The fundamental object for studying a (bio)chemical reaction obtained from simulations is the free energy profile, which can be directly related to experimentally determined properties. Although quite accurate hybrid quantum (DFT-based)-classical methods are available, achieving statistically accurate and well-converged results at a moderate computational cost is still an open challenge. Here, we present and thoroughly test a hybrid differential relaxation algorithm (HyDRA), which allows faster equilibration of the classical environment during the nonequilibrium steering of a (bio)chemical reaction. We show and discuss why (in the context of Jarzynski's relationship) this method allows obtaining accurate free energy profiles with a smaller number of independent trajectories and/or faster pulling speeds, thus reducing the overall computational cost. Moreover, given the availability and straightforward implementation of the method, we expect that it will foster theoretical studies of key enzymatic processes. PMID:26588154
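In steered-MD settings like this, the free energy is recovered from nonequilibrium work values via Jarzynski's equality, ΔF = -kT ln⟨exp(-W/kT)⟩. The sketch below shows just the estimator; the exponential average is dominated by rare low-work trajectories, which is why cheaper sampling of many pulls (the point of HyDRA) matters. The example values are invented.

```python
import numpy as np

def jarzynski_free_energy(work, kT):
    """Free-energy difference from nonequilibrium work samples via
    Jarzynski's equality, dF = -kT * ln(mean(exp(-W/kT))).
    Uses a log-sum-exp reduction for numerical stability."""
    w = np.asarray(work, float) / kT
    return -kT * (np.logaddexp.reduce(-w) - np.log(w.size))

# e.g., pulling work values (kcal/mol) from independent SMD trajectories:
# dF = jarzynski_free_energy([12.1, 9.8, 15.3, 10.4], kT=0.593)
```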
NASA Technical Reports Server (NTRS)
Kaushik, Dinesh K.; Baysal, Oktay
1997-01-01
Accurate computation of acoustic wave propagation may be more efficiently performed when their dispersion relations are considered. Consequently, computational algorithms which attempt to preserve these relations have been gaining popularity in recent years. In the present paper, the extensions to one such scheme are discussed. By solving the linearized, 2-D Euler and Navier-Stokes equations with such a method for the acoustic wave propagation, several issues were investigated. Among them were higher-order accuracy, choice of boundary conditions and differencing stencils, effects of viscosity, low-storage time integration, generalized curvilinear coordinates, periodic series, their reflections and interference patterns from a flat wall and scattering from a circular cylinder. The results were found to be promising en route to the aeroacoustic simulations of realistic engineering problems.
A TR-induced algorithm for hot spots elimination through CT-scan HIFU simulations
NASA Astrophysics Data System (ADS)
Leduc, Nicolas; Okita, Kohei; Sugiyama, Kazuyasu; Takagi, Shu; Matsumoto, Yoichiro
2011-09-01
Although nowadays widespread for imaging and treatment uses, HIFU techniques are still limited by the distortion of the wavefront due to refraction and reflection in the inhomogeneous media inside the human body. The CT-scan time reversal (TR) procedure has emerged as a promising candidate for focus control. A parallelized finite-difference time-domain code is used to simulate TR-enhanced propagation through elements of the human body and to implement a simple algorithm addressing the issue of grating lobes, i.e., secondary pressure peaks caused by the natural diffraction of phased arrays and enhanced by medium heterogeneity. Using an iterative, progressive process combining secondary sound sources and independent signal summation, the primary peak is strengthened while secondary peaks are increasingly obliterated. This method supports the feasibility of precise modification and enhancement of the pressure profile in the targeted area through time-reversal-based solutions.
Hybrid-PIC Algorithms for Simulation of Large-Scale Plasma Jet Accelerators
NASA Astrophysics Data System (ADS)
Thoma, Carsten; Welch, Dale
2009-11-01
Merging coaxial plasma jets are envisioned for use in magneto-inertial fusion schemes as the source of an imploding plasma liner. An experimental program at HyperV is considering the generation of large plasma jets (length scales on the order of centimeters) at high densities (10¹⁶-10¹⁷ cm⁻³) in long coaxial accelerators. We describe the hybrid particle-in-cell (PIC) methods implemented in the code LSP for this parameter regime and present simulation results of the HyperV accelerator. A radiation transport algorithm has also been implemented in LSP so that the effect of radiation cooling on the jet Mach number can be included self-consistently in the hybrid PIC formalism.
Shin, Hyun-Ho; Yoon, Woong-Sup
2008-07-01
An Adaptive-Spatial Decomposition parallel algorithm was developed to increase computational efficiency for molecular dynamics simulations of nano-fluids. Injection of a liquid argon jet with a scale of 17.6 molecular diameters was investigated. A solid annular platinum injector was also solved simultaneously with the liquid injectant by adopting a solid modeling technique which incorporates phantom atoms. The viscous heat was naturally discharged through the solid, so the liquid-boiling problem was avoided without separate temperature-control methods. Parametric investigations of injection speed, wall temperature, and injector length were made. A sudden pressure drop at the orifice exit causes flash boiling of the liquid departing the nozzle exit, with strong evaporation at the surface of the liquid, while rendering a slender jet. Raising the injection speed and the wall temperature intensifies surface evaporation, concurrent with a reduction in the jet breakup length and the drop size. PMID:19051924
NASA Technical Reports Server (NTRS)
Longendorfer, B. A.
1976-01-01
The construction of an autonomous roving vehicle requires the development of complex data-acquisition and processing systems, which determine the path along which the vehicle travels. Thus, a vehicle must possess algorithms which can (1) reliably detect obstacles by processing sensor data, (2) maintain a constantly updated model of its surroundings, and (3) direct its immediate actions to further a long range plan. The first function consisted of obstacle recognition. Obstacles may be identified by the use of edge detection techniques. Therefore, the Kalman Filter was implemented as part of a large scale computer simulation of the Mars Rover. The second function consisted of modeling the environment. The obstacle must be reconstructed from its edges, and the vast amount of data must be organized in a readily retrievable form. Therefore, a Terrain Modeller was developed which assembled and maintained a rectangular grid map of the planet. The third function consisted of directing the vehicle's actions.
NASA Astrophysics Data System (ADS)
Sharma, Shashi Prakash
2012-05-01
Employing the very fast simulated annealing (VFSA) global optimization technique, a FORTRAN program is developed for the interpretation of one-dimensional direct current resistivity sounding data from various electrode arrays. The VFSA optimization yields various well-fitting solutions (models) after analyzing a large number of models within a predefined model space. The models whose responses fit the observed response reasonably well lie along a narrow, elongated region of the model space. Therefore, instead of selecting the global model on the basis of the lowest misfit error, it is better to analyze histograms and probability density functions (PDFs) of such models. In a multidimensional model space, the most appropriate region from which to select models for computing the mean model is the one in which the PDF is larger than in other regions of the model space. Initially, accepted models with misfit errors less than a predefined threshold are selected, and lognormal PDFs for each model parameter are computed. Subsequently, the mean model and uncertainties are computed using the models in which each model parameter has a PDF above the defined threshold value (>68.2%). The mean model computed from such models is very close to the actual subsurface structure (global model). It is observed that the mean model computed using models with a PDF above 95% for each model parameter yields the actual model. Moreover, the uncertainty computed using models with such a high PDF, lying in a small model space, will be small and should not be considered the actual global uncertainty. Resistivity sounding (synthetic and field) data over different subsurface structures are optimized using the VFSA program developed in the present study. Optimization results reveal that the actual model always lies within the estimated uncertainty of the mean model. Since the approach requires much less computing time (a few
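For reference, VFSA differs from standard simulated annealing mainly in its parameter-wise, temperature-dependent Cauchy-like move generator and its very fast cooling schedule T(k) = T0 exp(-c k^(1/D)) for D model parameters. A minimal Python sketch of this recipe follows (the paper's program is in FORTRAN); the misfit function, bounds, and schedule constants are placeholders.

```python
import numpy as np

def vfsa(misfit, lo, hi, t0=1.0, c=1.0, iters=2000, seed=None):
    """Very fast simulated annealing over box-bounded model parameters.
    `misfit` maps a model vector to a scalar error to be minimized."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    dim = lo.size
    m = lo + rng.random(dim) * (hi - lo)          # random starting model
    e = misfit(m)
    best_m, best_e = m.copy(), e
    for k in range(1, iters + 1):
        # VFSA cooling schedule (floored to avoid numerical underflow)
        t = max(t0 * np.exp(-c * k ** (1.0 / dim)), 1e-12)
        u = rng.random(dim)
        # Ingber-style temperature-dependent, Cauchy-like step generator
        y = np.sign(u - 0.5) * t * ((1.0 + 1.0 / t) ** np.abs(2 * u - 1) - 1.0)
        cand = np.clip(m + y * (hi - lo), lo, hi)
        ec = misfit(cand)
        if ec < e or rng.random() < np.exp(-(ec - e) / t):   # Metropolis rule
            m, e = cand, ec
            if e < best_e:
                best_m, best_e = m.copy(), e
    return best_m, best_e
```

Collecting all accepted models with misfit below a threshold, rather than only `best_m`, is what enables the PDF-based mean-model analysis the abstract describes.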
NASA Technical Reports Server (NTRS)
Koshak, William; Solakiewicz, Richard
2012-01-01
The ability to estimate the fraction of ground flashes in a set of flashes observed by a satellite lightning imager, such as the future GOES-R Geostationary Lightning Mapper (GLM), would likely improve operational and scientific applications (e.g., severe weather warnings, lightning nitrogen oxides studies, and global electric circuit analyses). A Bayesian inversion method, called the Ground Flash Fraction Retrieval Algorithm (GoFFRA), was recently developed for estimating the ground flash fraction. The method uses a constrained mixed exponential distribution model to describe a particular lightning optical measurement called the Maximum Group Area (MGA). To obtain the optimum model parameters (one of which is the desired ground flash fraction), a scalar function must be minimized. This minimization is difficult because of two problems: (1) Label Switching (LS), and (2) Parameter Identity Theft (PIT). The LS problem is well known in the literature on mixed exponential distributions, and the PIT problem was discovered in this study. Each problem occurs when one allows the numerical minimizer to freely roam through the parameter search space; this allows certain solution parameters to interchange roles, which leads to fundamental ambiguities and solution error. A major accomplishment of this study is that we have employed a state-of-the-art genetic-based global optimization algorithm called Differential Evolution (DE) that constrains the parameter search in such a way as to remove both the LS and PIT problems. To test the performance of the GoFFRA when DE is employed, we applied it to analyze simulated MGA datasets that we generated from known mixed exponential distributions. Moreover, we evaluated the GoFFRA/DE method by applying it to analyze actual MGAs derived from low-Earth-orbiting lightning imaging sensor data; the actual MGA data were classified as either ground or cloud flash MGAs using National Lightning Detection Network™ (NLDN) data. Solution error
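Differential evolution itself is compact; the sketch below is the classic DE/rand/1/bin scheme for box-bounded minimization. The GoFFRA-specific search constraints that remove label switching and parameter identity theft are not reproduced here, and all parameter settings are generic defaults.

```python
import numpy as np

def differential_evolution(f, lo, hi, pop_size=30, F=0.8, CR=0.9,
                           gens=200, seed=None):
    """Classic DE/rand/1/bin minimizer of f over the box [lo, hi]."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    dim = lo.size
    pop = lo + rng.random((pop_size, dim)) * (hi - lo)
    cost = np.array([f(x) for x in pop])
    for _ in range(gens):
        for i in range(pop_size):
            others = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(others, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)   # differential mutation
            cross = rng.random(dim) < CR                # binomial crossover mask
            cross[rng.integers(dim)] = True             # force >= 1 mutant gene
            trial = np.where(cross, mutant, pop[i])
            fc = f(trial)
            if fc <= cost[i]:                           # greedy selection
                pop[i], cost[i] = trial, fc
    k = int(np.argmin(cost))
    return pop[k], cost[k]
```

Because every trial vector competes only against its own parent, DE needs no cooling schedule; constraining where mutants may land (as GoFFRA does) is what prevents parameters from exchanging roles.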
Dong, S.
2015-02-15
We present a family of physical formulations, and a numerical algorithm, based on a class of general order parameters for simulating the motion of a mixture of N (N⩾2) immiscible incompressible fluids with given densities, dynamic viscosities, and pairwise surface tensions. The N-phase formulations stem from a phase field model we developed in a recent work based on the conservations of mass/momentum, and the second law of thermodynamics. The introduction of general order parameters leads to an extremely strongly-coupled system of (N−1) phase field equations. On the other hand, the general form enables one to compute the N-phase mixing energy density coefficients in an explicit fashion in terms of the pairwise surface tensions. We show that the increased complexity in the form of the phase field equations associated with general order parameters in actuality does not cause essential computational difficulties. Our numerical algorithm reformulates the (N−1) strongly-coupled phase field equations for general order parameters into 2(N−1) Helmholtz-type equations that are completely de-coupled from one another. This leads to a computational complexity comparable to that for the simplified phase field equations associated with certain special choice of the order parameters. We demonstrate the capabilities of the method developed herein using several test problems involving multiple fluid phases and large contrasts in densities and viscosities among the multitude of fluids. In particular, by comparing simulation results with the Langmuir–de Gennes theory of floating liquid lenses we show that the method using general order parameters produces physically accurate results for multiple fluid phases.
NASA Astrophysics Data System (ADS)
Gumerov, Nail A.; Karavaev, Alexey V.; Surjalal Sharma, A.; Shao, Xi; Papadopoulos, Konstantinos D.
2011-04-01
Efficient spectral and pseudospectral algorithms for the simulation of linear and nonlinear 3D whistler waves in a cold electron plasma are developed. These algorithms are applied to the simulation of whistler waves generated by loop antennas and spheromak-like stationary waves of considerable amplitude. The algorithms are linearly stable and show good stability properties in computations of nonlinear waves over tens of thousands of time steps. Additional speedups by factors of 10-20 (comparing a single CPU core with one GPU) are achieved by using graphics processors (GPUs), which enable efficient numerical simulation of wave propagation on relatively high-resolution meshes (tens of millions of nodes) in a personal computing environment. Comparisons of the numerical results with analytical solutions and experiments show good agreement. The limitations of the codes and the performance of GPU computing are discussed.
Ando, Tadashi; Chow, Edmond; Skolnick, Jeffrey
2013-01-01
Hydrodynamic interactions exert a critical effect on the dynamics of macromolecules. As the concentration of macromolecules increases, by analogy to the behavior of semidilute polymer solutions or the flow in porous media, one might expect hydrodynamic screening to occur. Hydrodynamic screening would have implications both for the understanding of macromolecular dynamics as well as practical implications for the simulation of concentrated macromolecular solutions, e.g., in cells. Stokesian dynamics (SD) is one of the most accurate methods for simulating the motions of N particles suspended in a viscous fluid at low Reynolds number, in that it considers both far-field and near-field hydrodynamic interactions. This algorithm traditionally involves an O(N³) operation to compute Brownian forces at each time step, although asymptotically faster but more complex SD methods are now available. Motivated by the idea of hydrodynamic screening, the far-field part of the hydrodynamic matrix in SD may be approximated by a diagonal matrix, which is equivalent to assuming that long-range hydrodynamic interactions are completely screened. This approximation allows sparse matrix methods to be used, which can reduce the apparent computational scaling to O(N). Previously there were several simulation studies using this approximation for monodisperse suspensions. Here, we employ newly designed preconditioned iterative methods for both the computation of Brownian forces and the solution of linear systems, and consider the validity of this approximation in polydisperse suspensions. We evaluate the accuracy of the diagonal approximation method using an intracellular-like suspension. The diffusivities of particles obtained with this approximation are close to those with the original method. However, this approximation underestimates intermolecular correlated motions, which is a trade-off between accuracy and computing efficiency. The new method makes it possible to perform large-scale and
Golfing with protons: using research grade simulation algorithms for online games
NASA Astrophysics Data System (ADS)
Harold, J.
2004-12-01
Scientists have long known the power of simulations. By modeling a system in a computer, researchers can experiment at will, developing an intuitive sense of how a system behaves. The rapid increase in the power of personal computers, combined with technologies such as Flash, Shockwave and Java, allow us to bring research simulations into the education world by creating exploratory environments for the public. This approach is illustrated by a project funded by a small grant from NSF's Informal Science Education program, through an opportunity that provides education supplements to existing research awards. Using techniques adapted from a magnetospheric research program, several Flash based interactives have been developed that allow web site visitors to explore the motion of particles in the Earth's magnetosphere. These pieces were folded into a larger Space Weather Center web project at the Space Science Institute (www.spaceweathercenter.org). Rather than presenting these interactives as plasma simulations per se, the research algorithms were used to create games such as "Magneto Mini Golf", where the balls are protons moving in combined electric and magnetic fields. The "holes" increase in complexity, beginning with no fields and progressing towards a simple model of Earth's magnetosphere. The emphasis of the activity is gameplay, but because it is at its core a plasma simulation, the user develops an intuitive sense of charged particle motion as they progress. Meanwhile, the pieces contain embedded assessments that are measurable through a database driven tracking system. Mining that database not only provides helpful usability information, but allows us to examine whether users are meeting the learning goals of the activities. We will discuss the development and evaluation results of the project, as well as the potential for these types of activities to shift the expectations of what a web site can and should provide educationally.
SEREN - a new SPH code for star and planet formation simulations. Algorithms and tests
NASA Astrophysics Data System (ADS)
Hubber, D. A.; Batty, C. P.; McLeod, A.; Whitworth, A. P.
2011-05-01
We present SEREN, a new hybrid Smoothed Particle Hydrodynamics and N-body code designed to simulate astrophysical processes such as star and planet formation. It is written in Fortran 95/2003 and has been parallelised using OpenMP. SEREN is designed in a flexible, modular style, thereby allowing a large number of options to be selected or disabled easily and without compromising performance. SEREN uses the conservative "grad-h" formulation of SPH, but can easily be configured to use traditional SPH or Godunov SPH. Thermal physics is treated either with a barotropic equation of state, or by solving the energy equation and modelling the transport of cooling radiation. A Barnes-Hut tree is used to obtain neighbour lists and compute gravitational accelerations efficiently, and a hierarchical time-stepping scheme is used to reduce the number of computations per timestep. Dense gravitationally bound objects are replaced by sink particles, to allow the simulation to be evolved longer, and to facilitate the identification of protostars and the compilation of stellar and binary properties. At the termination of a hydrodynamical simulation, SEREN has the option of switching to a pure N-body simulation, using a 4th-order Hermite integrator, and following the ballistic evolution of the sink particles (e.g. to determine the final binary statistics once a star cluster has relaxed). We describe in detail all the algorithms implemented in SEREN and we present the results of a suite of tests designed to demonstrate the fidelity of SEREN and its performance and scalability. Further information and additional tests of SEREN can be found at the web-page http://www.astro.group.shef.ac.uk/seren.
NASA Astrophysics Data System (ADS)
Ando, Tadashi; Chow, Edmond; Skolnick, Jeffrey
2013-09-01
Hydrodynamic interactions exert a critical effect on the dynamics of macromolecules. As the concentration of macromolecules increases, by analogy to the behavior of semidilute polymer solutions or the flow in porous media, one might expect hydrodynamic screening to occur. Hydrodynamic screening would have implications both for the understanding of macromolecular dynamics and for the practical simulation of concentrated macromolecular solutions, e.g., in cells. Stokesian dynamics (SD) is one of the most accurate methods for simulating the motions of N particles suspended in a viscous fluid at low Reynolds number, in that it considers both far-field and near-field hydrodynamic interactions. This algorithm traditionally involves an O(N^3) operation to compute Brownian forces at each time step, although asymptotically faster but more complex SD methods are now available. Motivated by the idea of hydrodynamic screening, the far-field part of the hydrodynamic matrix in SD may be approximated by a diagonal matrix, which is equivalent to assuming that long-range hydrodynamic interactions are completely screened. This approximation allows sparse matrix methods to be used, which can reduce the apparent computational scaling to O(N). Several previous simulation studies have used this approximation for monodisperse suspensions. Here, we employ newly designed preconditioned iterative methods for both the computation of Brownian forces and the solution of linear systems, and consider the validity of this approximation for polydisperse suspensions. We evaluate the accuracy of the diagonal approximation method using an intracellular-like suspension. The diffusivities of particles obtained with this approximation are close to those with the original method. However, the approximation underestimates intermolecular correlated motions, a trade-off between accuracy and computational efficiency. The new method makes it possible to perform large-scale and
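To make the trade-off concrete, the toy sketch below contrasts the exact O(N^3) Brownian-displacement computation (Cholesky factorization of the mobility matrix) with the diagonal far-field approximation described above. The dense matrix here is a random symmetric positive-definite stand-in, not an actual Rotne-Prager-Yamakawa tensor.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50                                  # toy particle count
# Random SPD stand-in for the far-field mobility matrix.
A = rng.standard_normal((3 * N, 3 * N))
M_far = A @ A.T / (3 * N) + np.eye(3 * N)

z = rng.standard_normal(3 * N)          # unit-variance random vector

# Exact Brownian displacement: dx = sqrt(2) L z with M = L L^T (O(N^3)).
L = np.linalg.cholesky(M_far)
dx_exact = np.sqrt(2.0) * (L @ z)

# Diagonal ("fully screened") approximation: keep only self-mobilities,
# so the matrix square root is elementwise and the cost drops to O(N).
dx_diag = np.sqrt(2.0 * np.diag(M_far)) * z

print(dx_exact[:3], dx_diag[:3])
```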
A novel coupling of noise reduction algorithms for particle flow simulations
NASA Astrophysics Data System (ADS)
Zimoń, M. J.; Reese, J. M.; Emerson, D. R.
2016-09-01
Proper orthogonal decomposition (POD) and its time-windowed extension have been shown to greatly improve the effectiveness of recovering smooth ensemble solutions from noisy particle data. However, to successfully de-noise any molecular system, a large number of measurements still needs to be provided. In order to achieve better efficiency in processing time-dependent fields, we have combined POD with a well-established signal-processing technique, wavelet-based thresholding. In this novel hybrid procedure, referred to as WAVinPOD, the wavelet filtering is applied within the POD domain. The algorithm exhibits promising results when applied to both synthetically generated signals and particle data. In this work, we compare the performance of the new approach with standard POD or wavelet analysis in extracting smooth profiles from noisy velocity and density fields. Numerical examples include molecular dynamics and dissipative particle dynamics simulations of unsteady force- and shear-driven liquid flows, as well as phase-separation phenomena. Simulation results confirm that WAVinPOD preserves the dimensionality reduction obtained using POD, while improving its filtering properties through the sparse representation of the data in a wavelet basis. This paper shows that WAVinPOD outperforms the other estimators for both synthetically generated signals and particle-based measurements, achieving a higher signal-to-noise ratio from a smaller number of samples. The new filtering methodology offers significant computational savings, particularly for multi-scale applications seeking to couple continuum information with atomistic models. This is the first time that a rigorous analysis has compared de-noising techniques for particle-based fluid simulations.
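A minimal sketch of the WAVinPOD idea, assuming the PyWavelets package is available: truncate the snapshot matrix by POD (via SVD), then soft-threshold the wavelet coefficients of each temporal mode. This illustrates the combination described above, not the authors' code.

```python
import numpy as np
import pywt  # PyWavelets

def wavinpod(snapshots, rank, wavelet="db4", level=3):
    """POD truncation followed by wavelet soft-thresholding of the
    temporal coefficients. snapshots: (n_points, n_times) noisy fields."""
    U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
    U, s, Vt = U[:, :rank], s[:rank], Vt[:rank]
    out = np.empty_like(Vt)
    for k in range(rank):
        coeffs = pywt.wavedec(Vt[k], wavelet, level=level)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745    # noise estimate
        thr = sigma * np.sqrt(2.0 * np.log(Vt.shape[1]))  # universal threshold
        coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft")
                                for c in coeffs[1:]]
        out[k] = pywt.waverec(coeffs, wavelet)[: Vt.shape[1]]
    return (U * s) @ out

rng = np.random.default_rng(0)
t = np.linspace(0.0, 4.0 * np.pi, 256)
clean = np.outer(np.sin(np.linspace(0.0, np.pi, 64)), np.sin(t))
noisy = clean + 0.3 * rng.standard_normal(clean.shape)
print(np.linalg.norm(wavinpod(noisy, rank=2) - clean)
      < np.linalg.norm(noisy - clean))   # de-noised field is closer to clean
```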
Distributed genetic algorithms for the floorplan design problem
NASA Technical Reports Server (NTRS)
Cohoon, James P.; Hegde, Shailesh U.; Martin, Worthy N.; Richards, Dana S.
1991-01-01
Designing a VLSI floorplan calls for arranging a given set of modules in the plane to minimize the weighted sum of area and wire-length measures. A method of solving the floorplan design problem using distributed genetic algorithms is presented. Distributed genetic algorithms, based on the paleontological theory of punctuated equilibria, offer a conceptual modification to traditional genetic algorithms. Experimental results on several problem instances demonstrate the efficacy of this method and indicate its advantages over other methods, such as simulated annealing. The method has performed better than the simulated annealing approach, both in terms of the average cost of the solutions found and the best-found solution, in almost all the problem instances tried.
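The island-model flavor of distributed GA described above can be sketched as follows: isolated subpopulations evolve independently (mimicking punctuated equilibria) and occasionally exchange their best individuals around a ring. This is a toy bitstring maximizer; the paper's floorplan representation and operators are, of course, far richer.

```python
import random

def island_ga(fitness, n_islands=4, pop=30, gens=200, migrate_every=25):
    """Toy island-model GA on bitstrings with ring migration."""
    L = 32
    islands = [[[random.randint(0, 1) for _ in range(L)] for _ in range(pop)]
               for _ in range(n_islands)]
    for g in range(gens):
        for isl in islands:
            isl.sort(key=fitness, reverse=True)
            elite = isl[: pop // 2]                  # truncation selection
            children = []
            while len(elite) + len(children) < pop:
                a, b = random.sample(elite, 2)
                cut = random.randrange(L)            # one-point crossover
                child = a[:cut] + b[cut:]
                child[random.randrange(L)] ^= 1      # point mutation
                children.append(child)
            isl[:] = elite + children
        if g % migrate_every == 0:                   # ring migration of best
            for i, isl in enumerate(islands):
                islands[(i + 1) % n_islands][-1] = max(isl, key=fitness)[:]
    return max((ind for isl in islands for ind in isl), key=fitness)

best = island_ga(sum)   # maximize the number of ones as a stand-in objective
print(sum(best))
```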
NASA Technical Reports Server (NTRS)
Clarke, R.; Lintereur, L.; Bahm, C.
2016-01-01
A desire for more complete documentation of the National Aeronautics and Space Administration (NASA) Armstrong Flight Research Center (AFRC), Edwards, California, legacy code used in the core simulation has led to this effort to fully document the oblate-Earth six-degree-of-freedom equations of motion and integration algorithm. The authors of this report have taken much of the earlier work of the simulation engineering group and used it as a jumping-off point for this report. The largest addition this report makes is that each element of the equations of motion is traced back to first principles, and at no point is the reader forced to take an equation on faith alone. There are no discoveries of previously unknown principles contained in this report; this report is a collection and presentation of textbook principles. The value of this report is that those textbook principles are herein documented in standard nomenclature that matches the form of the computer code DERIVC. Previous handwritten notes form much of the backbone of this work; however, in almost every area, derivations are explicitly shown to assure the reader that the equations which make up the oblate-Earth version of the computer routine, DERIVC, are correct.
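For a flavor of what such first-principles documentation covers, the sketch below evaluates point-mass gravity plus the standard J2 oblateness correction in its textbook form (e.g. Vallado). It is illustrative only and is not taken from DERIVC.

```python
import numpy as np

MU = 3.986004418e14   # Earth GM, m^3/s^2
RE = 6378137.0        # Earth equatorial radius, m
J2 = 1.08263e-3       # oblateness coefficient

def gravity_j2(r_vec):
    """Gravitational acceleration: point mass plus the J2 term."""
    x, y, z = r_vec
    r = np.linalg.norm(r_vec)
    a = -MU / r**3 * np.asarray(r_vec, dtype=float)   # point-mass part
    k = 1.5 * J2 * MU * RE**2 / r**5                  # J2 prefactor
    a[0] -= k * x * (1.0 - 5.0 * z**2 / r**2)
    a[1] -= k * y * (1.0 - 5.0 * z**2 / r**2)
    a[2] -= k * z * (3.0 - 5.0 * z**2 / r**2)
    return a

print(gravity_j2([RE + 400e3, 0.0, 0.0]))   # LEO test point on the equator
```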
NASA Astrophysics Data System (ADS)
Santos, Eugene, Jr.; Zhao, Qunhua; Pratto, Felicia; Pearson, Adam R.; McQueary, Bruce; Breeden, Andy; Krause, Lee
2007-04-01
Nowadays, there is an increasing demand for the military to conduct operations that are beyond traditional warfare. In these operations, analyzing and understanding those who are involved in the situation, how they are going to behave, and why they behave in certain ways is critical for success. The challenge lies in that behavior does not simply follow universal/fixed doctrines; it is significantly influenced by soft factors (i.e. cultural factors, societal norms, etc.). In addition, there is rarely just one isolated enemy; the behaviors and responses of all groups in the region, and the dynamics of the interaction among them composes an important part of the whole picture. The Dynamic Adversarial Gaming Algorithm (DAGA) project aims to provide a wargaming environment for automation of simulating dynamics of geopolitical crisis and eventually be applied to military simulation and training domain, and/or commercial gaming arena. The focus of DAGA is on modeling communities of interest (COIs), where various individuals, groups, and organizations as well as their interactions are captured. The framework should provide a context for COIs to interact with each other and influence others' behaviors. These behaviors must incorporate soft factors by modeling cultural knowledge. We do so by representing cultural variables and their influence on behavior using probabilistic networks. In this paper, we describe our COI modeling, the development of cultural networks, the interaction architecture, and a prototype of DAGA.
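A minimal illustration, with entirely hypothetical variables and probabilities, of the basic ingredient of such probabilistic networks: a cultural variable modulating a behavioral outcome through a conditional probability table. DAGA's actual networks are far richer than this single-node sketch.

```python
import random

# Hypothetical conditional probability table: P(escalate | culture, provoked).
P_ESCALATE = {
    (True, True): 0.80,
    (True, False): 0.20,
    (False, True): 0.40,
    (False, False): 0.05,
}

def sample_behavior(honor_culture, provoked, rng):
    """Draw one behavioral outcome from the conditional table."""
    return rng.random() < P_ESCALATE[(honor_culture, provoked)]

rng = random.Random(0)
outcomes = [sample_behavior(True, True, rng) for _ in range(1000)]
print(sum(outcomes) / 1000)   # empirical rate, ~0.8
```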
Real-Time Simulation for Verification and Validation of Diagnostic and Prognostic Algorithms
NASA Technical Reports Server (NTRS)
Aguilar, Robert; Luu, Chuong; Santi, Louis M.; Sowers, T. Shane
2005-01-01
To verify that a health management system (HMS) performs as expected, a virtual system simulation capability, including interaction with the associated platform or vehicle, very likely will need to be developed. The rationale for developing this capability is discussed and includes the limited ability to seed faults into the actual target system due to the risk of potential damage to high-value hardware. The capability envisioned would accurately reproduce the propagation of a fault or failure as observed by sensors located at strategic locations on and around the target system, and would also accurately reproduce the control system and vehicle response. In this way, HMS operation can be exercised over a broad range of conditions to verify that it meets requirements for accurate, timely response to actual faults with adequate margin against false and missed detections. An overview is also presented of a real-time rocket propulsion health management system laboratory which is available for future rocket engine programs. The health management elements and approaches of this laboratory are directly applicable to future space systems. In this paper the various components are discussed and the general fault detection, diagnosis, isolation, and response (FDIR) concept is presented. Additionally, the complexities of verification and validation (V&V) for advanced algorithms and the simulation capabilities required to meet the changing state of the art in HMS are discussed.
Circuit model of the ITER-like antenna for JET and simulation of its control algorithms
NASA Astrophysics Data System (ADS)
Durodié, Frédéric; Dumortier, Pierre; Helou, Walid; Křivská, Alena; Lerche, Ernesto
2015-12-01
The ITER-like Antenna (ILA) for JET [1] is a 2-toroidal by 2-poloidal array of Resonant Double Loops (RDLs) featuring in-vessel matching capacitors feeding RF current straps in a conjugate-T manner, a low-impedance quarter-wave impedance transformer, a service stub allowing hydraulic actuator and water cooling services to reach the aforementioned capacitors, and a 2nd-stage phase-shifter-stub matching circuit allowing the conjugate-T working impedance to be corrected or chosen. Toroidally adjacent RDLs are fed from a 3dB hybrid splitter. The ILA was operated at 33, 42, and 47 MHz on plasma (2008-2009), while its presently estimated frequency range is 29 to 49 MHz. At the time of the design (2001-2004), as well as of the experiments, the circuit models of the ILA were quite basic. The Topica model of the ILA front face and strap array was relatively crude and failed to correctly represent the poloidal central septum, the Faraday screen attachment, and the segmented antenna central septum limiter. The ILA matching capacitors, T-junction, Vacuum Transmission Line (VTL), and service stubs were represented by lumped circuit elements and simple transmission-line models. The assessment of the ILA results, carried out to decide on the repair of the ILA, identified that achieving routine full-array operation requires a better understanding of the RF circuit, a feedback control algorithm for the 2nd-stage matching, and tighter calibrations of the RF measurements. The paper presents the progress in modelling of the ILA, comprising a more detailed Topica model of the front face for various plasma scrape-off-layer profiles, a comprehensive HFSS model of the matching capacitors including internal bellows and electrode cylinders, 3D-EM models of the VTL including the vacuum ceramic window and service stub, and a transmission-line model of the 2nd-stage matching circuit and main transmission lines including the 3dB hybrid splitters. A time-evolving simulation using the improved circuit model allowed to design and
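As an illustration of the transmission-line modelling mentioned above, the sketch below cascades ABCD matrices of lossless line sections and transforms a load impedance to the input. The impedance values are hypothetical, not ILA parameters.

```python
import numpy as np

def line_abcd(z0, beta_l):
    """ABCD matrix of a lossless transmission-line section of electrical
    length beta_l radians and characteristic impedance z0 ohms."""
    return np.array([[np.cos(beta_l), 1j * z0 * np.sin(beta_l)],
                     [1j * np.sin(beta_l) / z0, np.cos(beta_l)]])

def input_impedance(sections, z_load):
    """Cascade ABCD sections (source side first) and transform the load."""
    m = np.eye(2, dtype=complex)
    for z0, beta_l in sections:
        m = m @ line_abcd(z0, beta_l)
    a, b, c, d = m.ravel()
    return (a * z_load + b) / (c * z_load + d)

# Toy chain: a quarter-wave transformer into a hypothetical strap impedance.
zin = input_impedance([(15.0, np.pi / 2)], z_load=3.0 + 4.0j)
print(zin)   # quarter-wave section gives Zin = Z0^2 / ZL
```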
Community detection based on modularity and an improved genetic algorithm
NASA Astrophysics Data System (ADS)
Shang, Ronghua; Bai, Jing; Jiao, Licheng; Jin, Chao
2013-03-01
Complex networks are widely applied in every aspect of human society, and community detection is a research hotspot in complex networks. Many algorithms use modularity as the objective function, which simplifies the algorithm. In this paper, a community detection method based on modularity and an improved genetic algorithm (MIGA) is put forward. MIGA takes the modularity Q as the objective function and uses prior information (the number of community structures), which makes the algorithm more targeted and improves the stability and accuracy of community detection. Meanwhile, MIGA uses simulated annealing as its local search method, which improves the local search ability through parameter adjustment. Compared with state-of-the-art algorithms, simulation results on computer-generated and four real-world networks reflect the effectiveness of MIGA.
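For reference, the modularity objective used here is Newman's Q = (1/2m) * sum_ij [A_ij - k_i k_j / (2m)] * delta(c_i, c_j), which can be computed directly from the adjacency matrix; a small sketch follows.

```python
import numpy as np

def modularity(adj, communities):
    """Newman modularity Q of a partition of an undirected graph."""
    adj = np.asarray(adj, dtype=float)
    k = adj.sum(axis=1)                            # node degrees
    two_m = adj.sum()                              # 2m = sum of all entries
    delta = np.equal.outer(communities, communities)
    return ((adj - np.outer(k, k) / two_m) * delta).sum() / two_m

# Two triangles joined by a single edge, split into the obvious communities.
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1
print(modularity(A, np.array([0, 0, 0, 1, 1, 1])))   # about 0.357
```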
Tsiros, I X; Dimopoulos, I F
2007-04-01
Soil temperature simulation is an important component of environmental modeling, since it is involved in several aspects of pollutant transport and fate. This paper deals with the performance of the soil temperature simulation algorithms of the well-known environmental model PRZM. Model results are compared and evaluated on the basis of the model's ability to predict in situ measured soil temperature profiles in an experimental plot during a 3-year monitoring study. The evaluation of performance is based on linear regression statistics and typical statistical error measures such as the root mean square error (RMSE) and the normalized objective function (NOF). Results show that the model required minimal calibration to match the observed response of the system. Values of the determination coefficient R(2) were found to be in all cases around 0.98, indicating very good agreement between measured and simulated data. Values of the RMSE were found to be in the ranges of 1.2 to 1.4 degrees C, 1.1 to 1.4 degrees C, 0.9 to 1.1 degrees C, and 0.8 to 1.1 degrees C for the examined 2, 5, 10, and 20 cm soil depths, respectively. Sensitivity analyses were also performed to investigate the influence of various factors involved in the energy balance equation at the ground surface on the soil temperature profiles. The results showed that the model was able to represent important processes affecting the soil temperature regime, such as the combined effect of heat transfer by convection between the ground surface and the atmosphere and the latent heat flux due to soil water evaporation. PMID:17454373
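A quick sketch of the two error measures used here. The NOF is taken as RMSE divided by the mean of the observations, one common definition; the temperature values are hypothetical.

```python
import numpy as np

def rmse(obs, sim):
    """Root mean square error between observed and simulated series."""
    return float(np.sqrt(np.mean((np.asarray(sim) - np.asarray(obs)) ** 2)))

def nof(obs, sim):
    """Normalized objective function: RMSE over the observed mean."""
    return rmse(obs, sim) / float(np.mean(obs))

obs = [12.1, 14.3, 16.0, 15.2]   # hypothetical soil temperatures, deg C
sim = [11.8, 14.9, 15.6, 15.5]
print(rmse(obs, sim), nof(obs, sim))
```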
MixSim: An R Package for Simulating Data to Study Performance of Clustering Algorithms
Melnykov, Volodymyr; Chen, Wei-Chen; Maitra, Ranjan
2012-01-01
The R package MixSim is a new tool that allows simulating mixtures of Gaussian distributions with different levels of overlap between mixture components. Pairwise overlap, defined as the sum of two misclassification probabilities, measures the degree of interaction between components and can be readily employed to control the clustering complexity of datasets simulated from mixtures. These datasets can then be used for systematic performance investigation of clustering and finite mixture modeling algorithms. Among the other capabilities of MixSim are computing the exact overlap for Gaussian mixtures, simulating Gaussian and non-Gaussian data, simulating outliers and noise variables, calculating various measures of agreement between two partitionings, and constructing parallel distribution plots for the graphical display of finite mixture models. All features of the package are illustrated in great detail. The utility of the package is highlighted through a small comparison study of several popular clustering algorithms.
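A rough sketch of the pairwise-overlap idea for two univariate Gaussian components, estimated by Monte Carlo as the sum of the two Bayes misclassification probabilities. MixSim itself computes the multivariate overlap exactly, and in R; this Python toy only illustrates the definition.

```python
import numpy as np

def pairwise_overlap(mu1, s1, mu2, s2, p1=0.5, p2=0.5, n=200_000, seed=0):
    """Monte Carlo estimate of the overlap of two Gaussian components:
    the sum of both misclassification probabilities under the Bayes rule."""
    rng = np.random.default_rng(seed)
    def dens(x, mu, s):   # univariate normal density
        return np.exp(-0.5 * ((x - mu) / s) ** 2) / (s * np.sqrt(2 * np.pi))
    x1 = rng.normal(mu1, s1, n)   # draws from component 1
    x2 = rng.normal(mu2, s2, n)   # draws from component 2
    w12 = np.mean(p2 * dens(x1, mu2, s2) > p1 * dens(x1, mu1, s1))
    w21 = np.mean(p1 * dens(x2, mu1, s1) > p2 * dens(x2, mu2, s2))
    return w12 + w21

print(pairwise_overlap(0.0, 1.0, 2.0, 1.0))   # closer means => larger overlap
```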
Evaluation and optimization of lidar temperature analysis algorithms using simulated data
NASA Technical Reports Server (NTRS)
Leblanc, Thierry; McDermid, I. Stuart; Hauchecorne, Alain; Keckhut, Philippe
1998-01-01
The middle atmosphere (20 to 90 km altitude) has received increasing interest from the scientific community during the last decades, especially since such problems as polar ozone depletion and climatic change have become so important. Temperature profiles have been obtained in this region using a variety of satellite-, rocket-, and balloon-borne instruments as well as some ground-based systems. One of the more promising of these instruments, especially for long-term high-resolution measurements, is the lidar. Measurements of laser radiation Rayleigh-backscattered, or Raman-scattered, by atmospheric air molecules can be used to determine the relative air density profile, and subsequently the temperature profile, if it is assumed that the atmosphere is in hydrostatic equilibrium and follows the ideal gas law. The high vertical and spatial resolution makes the lidar a well-adapted instrument for the study of many middle-atmospheric processes and phenomena, as well as for the evaluation and validation of temperature measurements from satellites such as the Upper Atmosphere Research Satellite (UARS). In the Network for Detection of Stratospheric Change (NDSC), lidar is the core instrument for measuring middle-atmosphere temperature profiles. Using the best lidar analysis algorithm possible is therefore of crucial importance. In this work, the JPL and CNRS/SA lidar analysis software were evaluated. The results of this evaluation allowed the programs to be corrected and optimized, and new production software versions were produced. First, a brief description of the lidar technique and the method used to simulate lidar raw-data profiles from a given temperature profile is presented. Evaluation and optimization of the JPL and CNRS/SA algorithms are then discussed.
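The retrieval principle described here (hydrostatic equilibrium plus the ideal gas law applied to a relative density profile) can be sketched in a few lines. This is the textbook top-down integration, not the JPL or CNRS/SA production code; any constant scaling of the relative density cancels.

```python
import numpy as np

G = 9.80665       # m/s^2, constant gravity for brevity
R_SPEC = 287.05   # J/(kg K), specific gas constant of dry air

def temperature_from_density(z, rho_rel, t_seed):
    """Seed a temperature at the profile top, build pressure downward by
    hydrostatic integration, then apply the ideal gas law."""
    p = np.empty_like(rho_rel)
    p[-1] = rho_rel[-1] * R_SPEC * t_seed          # seed at profile top
    for i in range(len(z) - 2, -1, -1):            # integrate downward
        dz = z[i + 1] - z[i]
        p[i] = p[i + 1] + 0.5 * (rho_rel[i] + rho_rel[i + 1]) * G * dz
    return p / (rho_rel * R_SPEC)

z = np.arange(30e3, 80e3, 1e3)     # altitude grid, m
rho = np.exp(-z / 7e3)             # toy isothermal atmosphere, 7 km scale height
print(temperature_from_density(z, rho, t_seed=239.0)[:3])   # ~239 K throughout
```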
NASA Astrophysics Data System (ADS)
Yue, Yingchao; Fan, Wenhui; Xiao, Tianyuan; Ma, Cheng
2013-07-01
High Level Architecture (HLA) is the open standard in the collaborative simulation field. Scholars have paid close attention to theoretical research on, and engineering applications of, collaborative simulation based on HLA/RTI, which extends HLA in various aspects such as functionality and efficiency. However, related study of the load balancing problem in HLA collaborative simulation is insufficient. Without load balancing, collaborative simulation under HLA/RTI may suffer performance reduction or even fatal errors. In this paper, load balancing is further divided into static and dynamic problems. For static load balancing, a multi-objective model is established and the randomness of model parameters is taken into consideration, which makes the model more credible. A Monte Carlo based optimization algorithm (MCOA) is devised to achieve static load balance. For dynamic load balancing, a new type of dynamic load balancing problem is put forward with regard to variable-structured collaborative simulation under HLA/RTI. In order to minimize the impact on the running collaborative simulation, an ordinal optimization based algorithm (OOA) is devised to shorten the optimization time. Furthermore, the two algorithms are applied in simulation experiments in different scenarios, which demonstrate their effectiveness and efficiency. An engineering experiment on collaborative simulation under HLA/RTI of high-speed electric multiple units (EMU) is also conducted to verify the credibility of the proposed models and the supportive utility of MCOA and OOA for practical engineering systems. The proposed research ensures compatibility with traditional HLA, enhances the ability to assign simulation loads onto computing units both statically and dynamically, improves the performance of the collaborative simulation system, and makes full use of the hardware resources.
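The static-balancing idea can be illustrated with a bare-bones Monte Carlo search that assigns hypothetical federate loads to hosts so as to minimize the busiest host. The paper's MCOA is multi-objective and handles parameter randomness; this single-objective toy only shows the random-sampling skeleton.

```python
import random

def mc_balance(loads, n_hosts, n_trials=20_000, seed=1):
    """Monte Carlo search for a static load assignment minimizing the
    maximum per-host load (the makespan)."""
    rng = random.Random(seed)
    best, best_cost = None, float("inf")
    for _ in range(n_trials):
        assign = [rng.randrange(n_hosts) for _ in loads]
        busy = [0.0] * n_hosts
        for host, load in zip(assign, loads):
            busy[host] += load
        cost = max(busy)
        if cost < best_cost:
            best, best_cost = assign, cost
    return best, best_cost

loads = [4, 7, 2, 9, 3, 6, 5, 8]   # hypothetical federate loads
print(mc_balance(loads, n_hosts=3))
```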
NASA Astrophysics Data System (ADS)
Chen, Da-Ching; Yu, Tommy; Yao, Kung; Pottie, Gregory J.
1999-11-01
For single-input multiple-output (SIMO) systems, blind deconvolution based on second-order statistics has been shown to be promising, given that the sources and channels meet certain assumptions. In our previous paper we extended the work to multiple-input multiple-output (MIMO) systems by introducing a blind deconvolution algorithm to remove all channel dispersion, followed by a blind decorrelation algorithm to separate different sources from their instantaneous mixture. In this paper we first explore further details of our algorithm. Then we present simulation results showing that our algorithm is applicable to MIMO systems excited by a broad class of signals such as speech, music, and digitally modulated symbols.
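A small sketch of the second-order-statistics ingredient: whitening an instantaneous mixture via the eigendecomposition of the sample covariance. This only decorrelates the outputs; it is not the authors' full deconvolution and decorrelation algorithm.

```python
import numpy as np

def whiten(X):
    """Decorrelate multichannel data: W = D^{-1/2} E^T with C = E D E^T,
    where C is the sample covariance of X (n_channels, n_samples)."""
    C = np.cov(X)
    evals, evecs = np.linalg.eigh(C)
    W = np.diag(1.0 / np.sqrt(evals)) @ evecs.T
    return W @ X

rng = np.random.default_rng(0)
S = rng.standard_normal((2, 10_000))          # independent toy sources
A = np.array([[1.0, 0.6], [0.3, 1.0]])        # instantaneous mixing matrix
print(np.round(np.cov(whiten(A @ S)), 2))     # ~ identity covariance
```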
NASA Astrophysics Data System (ADS)
Densmore, J. D.; Park, H.; Wollaber, A. B.; Rauenzahn, R. M.; Knoll, D. A.
2015-03-01
We present a moment-based acceleration algorithm applied to Monte Carlo simulation of thermal radiative-transfer problems. Our acceleration algorithm employs a continuum system of moments to accelerate convergence of stiff absorption-emission physics. The combination of energy-conserving tallies and the use of an asymptotic approximation in optically thick regions remedy the difficulties of local energy conservation and mitigation of statistical noise in such regions. We demonstrate the efficiency and accuracy of the developed method. We also compare directly to the standard linearization-based method of Fleck and Cummings [1]. A factor of 40 reduction in total computational time is achieved with the new algorithm for an equivalent (or more accurate) solution as compared with the Fleck-Cummings algorithm.
Automated Algorithms for Quantum-Level Accuracy in Atomistic Simulations: LDRD Final Report.
Thompson, Aidan P.; Schultz, Peter A.; Crozier, Paul; Moore, Stan Gerald; Swiler, Laura Painton; Stephens, John Adam; Trott, Christian Robert; Foiles, Stephen M.; Tucker, Garritt J.
2014-09-01
This report summarizes the results of LDRD project 12-0395, titled "Automated Algorithms for Quantum-level Accuracy in Atomistic Simulations." During the course of this LDRD, we have developed an interatomic potential for solids and liquids called the Spectral Neighbor Analysis Potential (SNAP). The SNAP potential has a very general form and uses machine-learning techniques to reproduce the energies, forces, and stress tensors of a large set of small configurations of atoms, which are obtained using high-accuracy quantum electronic structure (QM) calculations. The local environment of each atom is characterized by a set of bispectrum components of the local neighbor density projected onto a basis of hyperspherical harmonics in four dimensions. The SNAP coefficients are determined using weighted least-squares linear regression against the full QM training set. This allows the SNAP potential to be fit in a robust, automated manner to large QM data sets using many bispectrum components. The calculation of the bispectrum components and the SNAP potential are implemented in the LAMMPS parallel molecular dynamics code. Global optimization methods in the DAKOTA software package are used to seek out good choices of hyperparameters that define the overall structure of the SNAP potential. FitSnap.py, a Python-based software package interfacing to both LAMMPS and DAKOTA, is used to formulate the linear regression problem, solve it, and analyze the accuracy of the resultant SNAP potential. We describe a SNAP potential for tantalum that accurately reproduces a variety of solid and liquid properties. Most significantly, in contrast to existing tantalum potentials, SNAP correctly predicts the Peierls barrier for screw dislocation motion. We also present results from SNAP potentials generated for indium phosphide (InP) and silica (SiO2). We describe efficient algorithms for calculating SNAP forces and energies in molecular dynamics simulations using massively parallel
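The regression step has a compact generic form: the sketch below fits a linear model of configuration energies in descriptor space by weighted least squares, on synthetic data. Real SNAP fits also include force and stress rows and run through FitSnap.py; this is only the mathematical skeleton.

```python
import numpy as np

def fit_linear_potential(descriptors, energies, weights):
    """Weighted least-squares fit of energies to a linear model in
    descriptor (bispectrum-component) space."""
    A = np.asarray(descriptors, dtype=float)   # (n_configs, n_descriptors)
    y = np.asarray(energies, dtype=float)
    w = np.sqrt(np.asarray(weights, dtype=float))
    coeffs, *_ = np.linalg.lstsq(A * w[:, None], y * w, rcond=None)
    return coeffs

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 10))             # toy descriptor matrix
beta_true = rng.standard_normal(10)
E = A @ beta_true + 0.01 * rng.standard_normal(200)   # noisy "QM" energies
beta_fit = fit_linear_potential(A, E, np.ones(200))
print(np.allclose(beta_fit, beta_true, atol=0.05))    # recovers coefficients
```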
A comparison of heuristic search algorithms for molecular docking.
Westhead, D R; Clark, D E; Murray, C W
1997-05-01
This paper describes the implementation and comparison of four heuristic search algorithms (genetic algorithm, evolutionary programming, simulated annealing and tabu search) and a random search procedure for flexible molecular docking. To our knowledge, this is the first application of the tabu search algorithm in this area. The algorithms are compared using a recently described fast molecular recognition potential function and a diverse set of five protein-ligand systems. Statistical analysis of the results indicates that overall the genetic algorithm performs best in terms of the median energy of the solutions located. However, tabu search shows a better performance in terms of locating solutions close to the crystallographic ligand conformation. These results suggest that a hybrid search algorithm may give superior results to any of the algorithms alone. PMID:9263849
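Of the four heuristics compared, tabu search is perhaps the least familiar; a generic skeleton follows, applied to a toy one-dimensional energy rather than an actual docking potential. The defining feature is the short-term memory that forbids revisiting recent states.

```python
import math
import random

def tabu_search(energy, neighbours, start, iters=500, tenure=20):
    """Generic tabu-search skeleton: always move to the best non-tabu
    neighbour; a bounded memory of recent states discourages cycling."""
    current = best = start
    tabu = [start]
    for _ in range(iters):
        cands = [s for s in neighbours(current) if s not in tabu]
        if not cands:
            break
        current = min(cands, key=energy)     # best admissible move, even uphill
        tabu.append(current)
        if len(tabu) > tenure:
            tabu.pop(0)                      # expire the oldest tabu entry
        if energy(current) < energy(best):
            best = current
    return best

random.seed(1)
energy = lambda x: (x / 10.0) ** 2 + math.sin(x)   # toy multimodal landscape
neighbours = lambda x: [x - 1, x + 1, x + random.randint(-5, 5)]
print(tabu_search(energy, neighbours, start=30))
```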