NASA Technical Reports Server (NTRS)
Knox, C. E.; Vicroy, D. D.; Scanlon, C.
1984-01-01
Simulation and flight tests were conducted to compare the accuracy of two algorithms designed to compute a position estimate with an airborne navigation computer. Both algorithms used ILS localizer and DME radio signals to compute a position difference vector to be used as an input to the navigation computer position estimate filter. The results of these tests show that the position estimate accuracy and response to artificially induced errors are improved when the position estimate is computed by an algorithm that geometrically combines DME and ILS localizer information to form a single component of error rather than by an algorithm that produces two independent components of error, one from a DME input and the other from the ILS localizer input.
NASA Astrophysics Data System (ADS)
Noël, C.; Busegnies, Y.; Papalexandris, M. V.; Deledicque, V.; El Messoudi, A.
2007-08-01
Aims: This work presents a new hydrodynamical algorithm to study astrophysical detonations. A prime motivation of this development is the description of a carbon detonation in conditions relevant to superbursts, which are thought to result from the propagation of a detonation front around the surface of a neutron star in the carbon layer underlying the atmosphere. Methods: The algorithm we have developed is a finite-volume method inspired by the original MUSCL scheme of van Leer (1979). The algorithm is second-order accurate in the smooth part of the flow and avoids dimensional splitting. It is applied to some test cases, and the time-dependent results are compared to the corresponding steady-state solutions. Results: Our algorithm proves robust on these test cases and is considered reliably applicable to astrophysical detonations. The preliminary one-dimensional calculations we have performed demonstrate that the carbon detonation at the surface of a neutron star is a multiscale phenomenon: the length scale of energy liberation is 10^6 times smaller than the total reaction length. We show that a multi-resolution approach can be used to resolve all the reaction length scales. This result will be very useful in future multi-dimensional simulations. We also present thermodynamic and composition profiles after the passage of a detonation in a pure carbon or mixed carbon-iron layer, under thermodynamic conditions relevant to superbursts in pure helium accretor systems.
Simulation Results of the Huygens Probe Entry and Descent Trajectory Reconstruction Algorithm
NASA Technical Reports Server (NTRS)
Kazeminejad, B.; Atkinson, D. H.; Perez-Ayucar, M.
2005-01-01
Cassini/Huygens is a joint NASA/ESA mission to explore the Saturnian system. The ESA Huygens probe is scheduled to be released from the Cassini spacecraft on December 25, 2004, enter the atmosphere of Titan in January, 2005, and descend to Titan's surface using a sequence of different parachutes. To correctly interpret and correlate results from the probe science experiments and to provide a reference set of data for "ground-truthing" Orbiter remote sensing measurements, it is essential that the probe entry and descent trajectory reconstruction be performed as early as possible in the postflight data analysis phase. The Huygens Descent Trajectory Working Group (DTWG), a subgroup of the Huygens Science Working Team (HSWT), is responsible for developing a methodology and performing the entry and descent trajectory reconstruction. This paper provides an outline of the trajectory reconstruction methodology, preliminary probe trajectory retrieval test results using a simulated synthetic Huygens dataset developed by the Huygens Project Scientist Team at ESA/ESTEC, and a discussion of strategies for recovery from possible instrument failure.
NASA Astrophysics Data System (ADS)
Roggemann, Michael C.; Welsh, Byron M.; Stone, Bradley R.; Su, Ting Ei
2002-02-01
Active laser-based electro-optical (EO) sensors on future aircraft and spacecraft will be used for a variety of missions and will be required to have a number of demanding technical characteristics. A key challenge to achieving these characteristics is the development of inexpensive, high degree of freedom optical wave front control devices, and the development of effective algorithms for controlling these devices. In this paper we present our research in the development of phase retrieval-based wave front control algorithms that can be implemented with segmented liquid crystal-based wave front control devices. We have developed a wave front control algorithm that allows dynamic small-angle beam steering and shaping in the presence of an aberrating output window. Our approach is based on a phase retrieval algorithm to determine the optimal figure of a segmented wave front control device. Simulation and experimental results presented here show that this approach allows shaped far field patterns to be created and steered over small angles.
A retrodictive stochastic simulation algorithm
Vaughan, T. G.; Drummond, P. D.; Drummond, A. J.
2010-05-20
In this paper we describe a simple method for inferring the initial states of systems evolving stochastically according to master equations, given knowledge of the final states. This is achieved through the use of a retrodictive stochastic simulation algorithm which complements the usual predictive stochastic simulation approach. We demonstrate the utility of this new algorithm by applying it to example problems, including the derivation of likely ancestral states of a gene sequence given a Markovian model of genetic mutation.
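The abstract includes no code; as a toy illustration of the retrodictive idea in the Markovian mutation setting it mentions, the sketch below computes the exact posterior over initial states of a small discrete-time Markov chain given an observed final state. The four-state model, its transition matrix, and the step count are illustrative assumptions, not the authors' algorithm (which operates on master equations via stochastic simulation).

```python
# Exact Bayesian retrodiction for a small discrete-time Markov chain:
# a simplified stand-in for the paper's retrodictive SSA. The 4-state
# mutation model and all parameters below are illustrative assumptions.
import numpy as np

# Transition matrix (rows: from-state, cols: to-state) for states A, C, G, T.
M = np.array([[0.94, 0.02, 0.02, 0.02],
              [0.02, 0.94, 0.02, 0.02],
              [0.02, 0.02, 0.94, 0.02],
              [0.02, 0.02, 0.02, 0.94]])

prior = np.full(4, 0.25)      # prior over ancestral (initial) states
steps = 50                    # number of generations between start and end
final_state = 2               # observed final state: G

# Likelihood of observing the final state from each possible initial state.
likelihood = np.linalg.matrix_power(M, steps)[:, final_state]

# Bayes' rule: posterior over initial states given the final observation.
posterior = prior * likelihood
posterior /= posterior.sum()
print(dict(zip("ACGT", np.round(posterior, 4))))
```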
NASA Technical Reports Server (NTRS)
Guo, Li-Wen; Cardullo, Frank M.; Telban, Robert J.; Houck, Jacob A.; Kelly, Lon C.
2003-01-01
A study was conducted employing the Visual Motion Simulator (VMS) at the NASA Langley Research Center, Hampton, Virginia. This study compared two motion cueing algorithms, the NASA adaptive algorithm and a new optimal-control-based algorithm, and also included the effects of transport delays and their compensation. The delay compensation algorithm employed is one developed by Richard McFarland at NASA Ames Research Center. This paper reports the analysis of experimental data collected from preliminary simulation tests. This series of tests was conducted to evaluate the protocols and the data-analysis methodology in preparation for more comprehensive tests to be conducted during the spring of 2003; therefore, only three pilots were used. Nevertheless, some useful results were obtained. The experimental conditions involved three maneuvers: a straight-in approach with a rotating wind vector, an offset approach with turbulence and gusts, and a takeoff with and without an engine failure shortly after liftoff. For each maneuver, the two motion conditions were combined with four delay conditions (0, 50, 100, and 200 ms), with and without compensation.
Fractal Landscape Algorithms for Environmental Simulations
NASA Astrophysics Data System (ADS)
Mao, H.; Moran, S.
2014-12-01
Natural science and geographical research are now able to take advantage of environmental simulations that more accurately test experimental hypotheses, resulting in deeper understanding. Experiments affected by the natural environment can benefit from 3D landscape simulations capable of simulating a variety of terrains and environmental phenomena. Such simulations can employ random terrain generation algorithms that dynamically simulate environments to test specific models against a variety of factors. Through the use of noise functions such as Perlin noise and Simplex noise, and the diamond-square algorithm, computers can generate simulations that model a variety of landscapes and ecosystems. This study shows how these algorithms work together to create realistic landscapes. By seeding values into the diamond-square algorithm, one can control the shape of the landscape. Perlin noise and Simplex noise are also used to simulate moisture and temperature. The smooth gradient created by coherent noise allows more realistic landscapes to be simulated. Terrain generation algorithms can be used in environmental studies and physics simulations. Potential studies that would benefit from such simulations include the geophysical impact of flash floods or drought on a particular region and the regional impact on low-lying areas due to global warming and rising sea levels. Furthermore, terrain generation algorithms also serve as aesthetic tools to display landscapes (e.g., Google Earth) and to simulate planetary landscapes; hence, they can assist science education. Algorithms used to generate these natural phenomena provide scientists with a different approach to analyzing our world. The random algorithms used in terrain generation not only contribute to generating the terrains themselves but are also capable of simulating weather patterns.
Formation Algorithms and Simulation Testbed
NASA Technical Reports Server (NTRS)
Wette, Matthew; Sohl, Garett; Scharf, Daniel; Benowitz, Edward
2004-01-01
Formation flying for spacecraft is a rapidly developing field that will enable a new era of space science. For one of its missions, the Terrestrial Planet Finder (TPF) project has selected a formation flying interferometer design to detect earth-like planets orbiting distant stars. In order to advance technology needed for the TPF formation flying interferometer, the TPF project has been developing a distributed real-time testbed to demonstrate end-to-end operation of formation flying with TPF-like functionality and precision. This is the Formation Algorithms and Simulation Testbed (FAST). The FAST was conceived to bring out issues in timing, data fusion, inter-spacecraft communication, inter-spacecraft sensing, and system-wide formation robustness. In this paper we describe the FAST and show results from a two-spacecraft formation scenario. The two-spacecraft simulation marks the first time that precision end-to-end formation flying operation has been demonstrated in a distributed real-time simulation environment.
Recursive Branching Simulated Annealing Algorithm
NASA Technical Reports Server (NTRS)
Bolcar, Matthew; Smith, J. Scott; Aronstein, David
2012-01-01
This innovation is a variation of a simulated-annealing optimization algorithm that uses a recursive-branching structure to parallelize the search of a parameter space for the globally optimal solution to an objective. The algorithm has been demonstrated to be more effective at searching a parameter space than traditional simulated-annealing methods for a particular problem of interest, and it can readily be applied to a wide variety of optimization problems, including those with a parameter space having both discrete-value parameters (combinatorial) and continuous-variable parameters. It can take the place of a conventional simulated-annealing, Monte-Carlo, or random-walk algorithm. In a conventional simulated-annealing (SA) algorithm, a starting configuration is randomly selected within the parameter space. The algorithm randomly selects another configuration from the parameter space and evaluates the objective function for that configuration. If the objective function value is better than the previous value, the new configuration is adopted as the new point of interest in the parameter space. If the objective function value is worse than the previous value, the new configuration may be adopted, with a probability determined by a temperature parameter, used in analogy to annealing in metals. As the optimization continues, the region of the parameter space from which new configurations can be selected shrinks, and in conjunction with lowering the annealing temperature (and thus lowering the probability for adopting configurations in parameter space with worse objective functions), the algorithm can converge on the globally optimal configuration. The Recursive Branching Simulated Annealing (RBSA) algorithm shares some features with the SA algorithm, notably including the basic principles that a starting configuration is randomly selected from within the parameter space, the algorithm tests other configurations with the goal of finding the globally optimal
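Since the abstract walks through the conventional SA loop step by step, a minimal sketch may help fix the ideas. The objective function, cooling rate, and region-shrinking schedule below are illustrative assumptions, and this is the conventional SA baseline the text describes, not the RBSA variant itself.

```python
# A minimal conventional simulated-annealing loop of the kind described
# above (random start, probabilistic acceptance, shrinking search region,
# falling temperature). Objective and schedules are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def objective(x):
    # Rastrigin function: many local minima, global minimum 0 at the origin.
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

x = rng.uniform(-5.12, 5.12, size=2)     # random starting configuration
fx = objective(x)
best_x, best_f = x.copy(), fx
T, radius = 1.0, 5.0                     # initial temperature and move radius

for _ in range(20000):
    x_new = x + rng.uniform(-radius, radius, size=2)
    f_new = objective(x_new)
    # Always accept improvements; accept worse moves with Boltzmann probability.
    if f_new < fx or rng.random() < np.exp((fx - f_new) / T):
        x, fx = x_new, f_new
        if fx < best_f:
            best_x, best_f = x.copy(), fx
    T *= 0.9995          # lower the annealing temperature
    radius *= 0.9997     # shrink the region new configurations come from

print(f"best objective {best_f:.4f} at {best_x}")
```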
APL simulation of Grover's algorithm
NASA Astrophysics Data System (ADS)
Lipovaca, Samir
2012-02-01
Grover's algorithm is a fast quantum search algorithm. Classically, to solve the search problem for a search space of size N we need approximately N operations; Grover's algorithm offers a quadratic speedup. Since present quantum computers are not robust enough for code writing and execution, we experiment with Grover's algorithm by simulating it in the APL programming language. The APL programming language is especially suited to this task: for example, to compute the Walsh-Hadamard transformation matrix for N quantum states via a tensor product of N Hadamard matrices, we need only iterate one line of code N-1 times. An initial study indicates that the quantum mechanical amplitude of the solution is almost independent of the search space size and rapidly reaches values of 0.999, with slight variations at higher decimal places.
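The paper works in APL; the following NumPy transcription of the same simulation idea may be useful for comparison. The kron loop mirrors iterating one line of code n-1 times to build the Walsh-Hadamard matrix; the qubit count and the marked item are assumed values.

```python
# A NumPy transcription of the simulation described above (the paper
# uses APL). The kron loop builds the Walsh-Hadamard matrix from a
# tensor product of Hadamard matrices; n and marked are assumptions.
import numpy as np

n = 10                      # qubits, so the search space has N = 2**n items
N = 2 ** n
marked = 123                # index of the sought item (illustrative)

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
W = H
for _ in range(n - 1):      # tensor product of n Hadamard matrices
    W = np.kron(W, H)

state = W[:, 0].copy()      # uniform superposition from |00...0>

for _ in range(int(np.pi / 4 * np.sqrt(N))):   # ~(pi/4)*sqrt(N) iterations
    state[marked] *= -1                        # oracle flips the marked amplitude
    state = 2 * state.mean() - state           # inversion about the mean

print(f"amplitude of the marked state: {state[marked]:.4f}")
```

Run as written, this prints an amplitude of about 0.9997, consistent with the 0.999 figure quoted in the abstract.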
Preliminary results from MERIS Land Algorithm
NASA Astrophysics Data System (ADS)
Gobron, N.; Pinty, B.; Taberner, M.; Melin, F.; Verstraete, M. M.; Widlowski, J.-L.
2003-04-01
This paper presents a first and preliminary evaluation of the performance of the algorithm implemented in the Medium Resolution Imaging Spectrometer (MERIS) ground segment for assessing the status of land surfaces. First, we propose an updated version of the MERIS algorithm itself, which improves the accuracy of the product. Second, we analyze the first results by inter-comparing the MERIS Global Vegetation Index (MGVI) to similar products derived from the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) that are generated at the European Commission Joint Research Center (EC-JRC). The first evaluation between MERIS and SeaWiFS derived products is made using data acquired on the same day by both instruments. The results show acceptable agreement and the differences are well understood by radiation transfer model simulations.
ICAAS piloted simulation results
NASA Astrophysics Data System (ADS)
Landy, R. J.; Halski, P. J.; Meyer, R. P.
1994-05-01
This paper reports piloted simulation results from the Integrated Control and Avionics for Air Superiority (ICAAS) piloted simulation evaluations. The program's objective was to develop, integrate, and demonstrate critical technologies that would enable United States Air Force tactical fighter 'blue' aircraft to achieve superiority and survive when outnumbered by as much as four to one by enemy aircraft during air combat engagements. Primary emphasis was placed on beyond-visual-range (BVR) combat with provisions for effective transition to close-in combat. The ICAAS system was developed and tested in two stages. The first stage, called low-risk ICAAS, was defined as employing aircraft and avionics technology with an initial operational date no later than 1995. The second stage, called medium-risk ICAAS, was defined as employing aircraft and avionics technology with an initial operational date no later than 1998. Descriptions of the low-risk and medium-risk simulation configurations are given. Normalized (unclassified) results from both the low-risk and medium-risk ICAAS simulations are discussed. The results show that the ICAAS system provided a significant improvement in air combat performance when compared to a current weapon system. Data are presented for both current-generation and advanced fighter aircraft. The ICAAS technologies which are ready for flight testing in order to transition to the fighter fleet are described, along with technologies needing additional development.
The systems biology simulation core algorithm
2013-01-01
Background With the increasing availability of high dimensional time course data for metabolites, genes, and fluxes, the mathematical description of dynamical systems has become an essential aspect of research in systems biology. Models are often encoded in formats such as SBML, whose structure is very complex and difficult to evaluate due to many special cases. Results This article describes an efficient algorithm to solve SBML models that are interpreted in terms of ordinary differential equations. We begin our consideration with a formal representation of the mathematical form of the models and explain all parts of the algorithm in detail, including several preprocessing steps. We provide a flexible reference implementation as part of the Systems Biology Simulation Core Library, a community-driven project providing a large collection of numerical solvers and a sophisticated interface hierarchy for the definition of custom differential equation systems. To demonstrate the capabilities of the new algorithm, it has been tested with the entire SBML Test Suite and all models of BioModels Database. Conclusions The formal description of the mathematics behind the SBML format facilitates the implementation of the algorithm within specifically tailored programs. The reference implementation can be used as a simulation backend for Java™-based programs. Source code, binaries, and documentation can be freely obtained under the terms of the LGPL version 3 from http://simulation-core.sourceforge.net. Feature requests, bug reports, contributions, or any further discussion can be directed to the mailing list simulation-core-development@lists.sourceforge.net. PMID:23826941
Clutter discrimination algorithm simulation in pulse laser radar imaging
NASA Astrophysics Data System (ADS)
Zhang, Yan-mei; Li, Huan; Guo, Hai-chao; Su, Xuan; Zhu, Fule
2015-10-01
Pulse laser radar imaging performance is greatly influenced by different kinds of clutter. Various algorithms have been developed to mitigate clutter, but estimating the performance of a new algorithm is difficult. Here, a simulation model for evaluating clutter discrimination algorithms is presented. This model consists of laser pulse emission, clutter jamming, laser pulse reception, and target image production. Additionally, a hardware platform was set up to gather clutter data reflected by ground and trees; the logged data serve as the clutter-jamming input to the simulation model. The hardware platform includes a laser diode, a laser detector, and a high-sample-rate data logging circuit. The laser diode transmits short laser pulses (40 ns FWHM) at a 12.5 kHz pulse rate and a 905 nm wavelength. An analog-to-digital converter chip integrated in the sampling circuit works at 250 megasamples per second. The simulation model and the hardware platform together constitute a clutter discrimination algorithm simulation system. Using this system, after analyzing the logged clutter data, a new compound pulse detection algorithm was developed. This new algorithm combines a matched filter algorithm with constant fraction discrimination (CFD). First, the laser echo pulse signal is processed by the matched filter; CFD is then applied. Finally, clutter jamming from ground and trees is discriminated and the target image is produced. Laser radar images are simulated using the CFD algorithm, the matched filter algorithm, and the new algorithm, respectively. The simulation results demonstrate that the new algorithm achieves the best target imaging effect in mitigating clutter reflected by ground and trees.
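A compact sketch of the compound detection chain described above: matched filtering of the echo, then constant-fraction discrimination for timing. The pulse shape, noise level, CFD fraction, and delay are assumed values, not parameters from the paper.

```python
# Matched filter followed by constant-fraction discrimination (CFD):
# a sketch of the compound detection idea. Pulse shape, noise level,
# and the CFD fraction/delay are assumptions, not the paper's values.
import numpy as np

fs = 250e6                             # 250 MS/s, as in the hardware platform
t = np.arange(0, 400e-9, 1 / fs)
sigma = 40e-9 / 2.355                  # 40 ns FWHM Gaussian pulse
template = np.exp(-0.5 * ((t - 100e-9) / sigma) ** 2)

rng = np.random.default_rng(1)
echo = 0.3 * np.roll(template, 25) + 0.05 * rng.standard_normal(t.size)

# Step 1: matched filter = correlation with the time-reversed template.
mf = np.convolve(echo, template[::-1], mode="same")

# Step 2: CFD: subtract a delayed copy from an attenuated copy; the
# zero crossing gives an amplitude-independent timing estimate.
fraction, delay = 0.4, 8               # attenuation factor and delay in samples
cfd = fraction * mf - np.roll(mf, delay)
gate = mf[:-1] > 0.5 * mf.max()        # only consider strong filtered peaks
idx = np.where((cfd[:-1] > 0) & (cfd[1:] <= 0) & gate)[0]
print("detected timing sample(s):", idx)
```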
A theoretical comparison of evolutionary algorithms and simulated annealing
Hart, W.E.
1995-08-28
This paper theoretically compares the performance of simulated annealing and evolutionary algorithms. Our main result is that under mild conditions a wide variety of evolutionary algorithms can be shown to have greater performance than simulated annealing after a sufficiently large number of function evaluations. This class of EAs includes variants of evolution strategies and evolutionary programming, the canonical genetic algorithm, as well as a variety of genetic algorithms that have been applied to combinatorial optimization problems. The proof of this result is based on a performance analysis of a very general class of stochastic optimization algorithms, which has implications for the performance of a variety of other optimization algorithms.
General simulation algorithm for autocorrelated binary processes
NASA Astrophysics Data System (ADS)
Serinaldi, Francesco; Lombardo, Federico
2017-02-01
The apparent ubiquity of binary random processes in physics and many other fields has attracted considerable attention from the modeling community. However, generation of binary sequences with prescribed autocorrelation is a challenging task owing to the discrete nature of the marginal distributions, which makes the application of classical spectral techniques problematic. We show that such methods can effectively be used if we focus on the parent continuous process of beta distributed transition probabilities rather than on the target binary process. This change of paradigm results in a simulation procedure effectively embedding a spectrum-based iterative amplitude-adjusted Fourier transform method devised for continuous processes. The proposed algorithm is fully general, requires minimal assumptions, and can easily simulate binary signals with power-law and exponentially decaying autocorrelation functions corresponding, for instance, to Hurst-Kolmogorov and Markov processes. An application to rainfall intermittency shows that the proposed algorithm can also simulate surrogate data preserving the empirical autocorrelation.
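As a much-simplified illustration of the parent-process idea (the actual algorithm works with beta-distributed transition probabilities and a spectrum-based iterative amplitude-adjusted Fourier transform), the sketch below clips a correlated Gaussian parent process to obtain a binary sequence with a decaying autocorrelation; all parameters are assumed.

```python
# Simplified illustration of generating an autocorrelated binary signal
# from a parent continuous process: an AR(1) Gaussian is thresholded.
# (The paper's algorithm instead uses beta-distributed transition
# probabilities and an iterative spectrum-based FFT method.)
import numpy as np

rng = np.random.default_rng(2)
n, rho = 100_000, 0.95          # sequence length and lag-1 parent correlation

g = np.empty(n)                 # AR(1) parent: exponentially decaying ACF
g[0] = rng.standard_normal()
for i in range(1, n):
    g[i] = rho * g[i - 1] + np.sqrt(1 - rho**2) * rng.standard_normal()

x = (g > 0).astype(int)         # clipped binary process with P(x = 1) = 0.5

def acf(s, lag):
    s = s - s.mean()
    return float(np.dot(s[:-lag], s[lag:]) / np.dot(s, s))

print({k: round(acf(x, k), 3) for k in (1, 2, 5, 10, 20)})
```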
New Results in Astrodynamics Using Genetic Algorithms
NASA Technical Reports Server (NTRS)
Coverstone-Carroll, V.; Hartmann, J. W.; Williams, S. N.; Mason, W. J.
1998-01-01
Genetic algorithms have gained popularity as an effective procedure for obtaining solutions to traditionally difficult space mission optimization problems. In this paper, a brief survey of the use of genetic algorithms to solve astrodynamics problems is presented and is followed by new results obtained from applying a Pareto genetic algorithm to the optimization of low-thrust interplanetary spacecraft missions.
Wake Vortex Algorithm Scoring Results
NASA Technical Reports Server (NTRS)
Robins, R. E.; Delisi, D. P.; Hinton, David (Technical Monitor)
2002-01-01
This report compares the performance of two models of trailing vortex evolution for which interaction with the ground is not a significant factor. One model uses eddy dissipation rate (EDR) and the other uses the kinetic energy of turbulence fluctuations (TKE) to represent the effect of turbulence. In other respects, the models are nearly identical. The models are evaluated by comparing their predictions of circulation decay, vertical descent, and lateral transport to observations for over four hundred cases from Memphis and Dallas/Fort Worth International Airports. These observations were obtained during deployments in support of NASA's Aircraft Vortex Spacing System (AVOSS). The results of the comparisons show that the EDR model usually performs slightly better than the TKE model.
Efficient algorithm for simulation of isoelectric focusing.
Yoo, Kisoo; Shim, Jaesool; Liu, Jin; Dutta, Prashanta
2014-03-01
IEF simulation is an effective tool to investigate transport phenomena and separation performance as well as to design IEF microchips. However, multidimensional IEF simulations are computationally intensive, as one has to solve a large number of mass conservation equations for ampholytes to simulate a realistic case. In this study, a parallel scheme for a 2D IEF simulation is developed to reduce the computational time. The calculation time for each equation is analyzed to identify which procedure is suitable for parallelization. As expected, the simultaneous solution of the mass conservation equations for the ampholytes is identified as the computational hot spot, and the computational time can be significantly reduced by parallelizing that solution procedure. Moreover, to optimize the computing time, the behavior of the electric potential during the transient state is investigated. It is found that, for a straight channel, the transient variation of electric potential along the channel is negligible in a narrow pH range (5-8) IEF. Thus the charge conservation equation is solved for the first time step only, and the electric potential obtained from it is used for subsequent calculations. IEF simulations are carried out using this algorithm for the separation of cardiac troponin I from serum albumin in a pH range of 5-8 using 192 biprotic ampholytes. A significant reduction in simulation time is achieved using the parallel algorithm. We also study the effect of the number of ampholytes used to form the pH gradient on the focusing and separation behavior of cardiac troponin I and albumin. Our results show that, at the completion of the separation phase, the pH profile is stepwise for a lower number of ampholytes but becomes smooth as the number of ampholytes increases. Numerical results also show that a higher protein concentration can be obtained using a higher number of ampholytes.
An exact accelerated stochastic simulation algorithm.
Mjolsness, Eric; Orendorff, David; Chatelain, Philippe; Koumoutsakos, Petros
2009-04-14
An exact method for stochastic simulation of chemical reaction networks, which accelerates the stochastic simulation algorithm (SSA), is proposed. The present "ER-leap" algorithm is derived from analytic upper and lower bounds on the multireaction probabilities sampled by SSA, together with rejection sampling and an adaptive multiplicity for reactions. The algorithm is tested on a number of well-quantified reaction networks and is found experimentally to be very accurate on test problems including a chaotic reaction network. At the same time ER-leap offers a substantial speedup over SSA with a simulation time proportional to the 2/3 power of the number of reaction events in a Galton-Watson process.
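For context, the sketch below is the direct-method SSA that ER-leap accelerates, applied to a toy reversible reaction; ER-leap's bounding and rejection machinery is not reproduced here, and the network and rate constants are illustrative.

```python
# The direct-method SSA (Gillespie 1976) that ER-leap accelerates,
# shown on a toy network A + B <-> C. Counts and rates are illustrative;
# ER-leap's multireaction bounds and rejection steps are not shown.
import numpy as np

rng = np.random.default_rng(3)
x = np.array([100, 80, 0])              # molecule counts of A, B, C
stoich = np.array([[-1, -1, +1],        # reaction 1: A + B -> C
                   [+1, +1, -1]])       # reaction 2: C -> A + B
k = np.array([0.005, 0.1])              # rate constants

t, t_end = 0.0, 10.0
while t < t_end:
    a = np.array([k[0] * x[0] * x[1], k[1] * x[2]])   # propensities
    a0 = a.sum()
    if a0 == 0.0:
        break
    t += rng.exponential(1.0 / a0)      # exponential waiting time
    j = rng.choice(2, p=a / a0)         # which reaction fires
    x += stoich[j]

print("final counts (A, B, C):", x)
```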
A simulation algorithm for ultrasound liver backscattered signals.
Zatari, D; Botros, N; Dunn, F
1995-11-01
In this study, we present a simulation algorithm for the backscattered ultrasound signal from liver tissue. The algorithm simulates backscattered signals from normal liver and three different liver abnormalities. The performance of the algorithm has been tested by statistically comparing the simulated signals with corresponding signals obtained from a previous in vivo study. To verify that the simulated signals can be classified correctly we have applied a classification technique based on an artificial neural network. The acoustic features extracted from the spectrum over a 2.5 MHz bandwidth are the attenuation coefficient and the change of speed of sound with frequency (dispersion). Our results show that the algorithm performs satisfactorily. Further testing of the algorithm is conducted by the use of a data acquisition and analysis system designed by the authors, where several simulated signals are stored in memory chips and classified according to their abnormalities.
Algorithmic quantum simulation of memory effects
NASA Astrophysics Data System (ADS)
Alvarez-Rodriguez, U.; Di Candia, R.; Casanova, J.; Sanz, M.; Solano, E.
2017-02-01
We propose a method for the algorithmic quantum simulation of memory effects described by integrodifferential evolution equations. It consists of the systematic use of perturbation theory techniques and a Markovian quantum simulator. Our method aims to efficiently simulate both completely positive and nonpositive dynamics without the requirement of engineering non-Markovian environments. Finally, we find that small error bounds can be reached with polynomially scaling resources, evaluated as the time required for the simulation.
Selection of views to materialize using simulated annealing algorithms
NASA Astrophysics Data System (ADS)
Zhou, Lijuan; Liu, Chi; Wang, Hongfeng; Liu, Daixin
2002-03-01
A data warehouse contains many materialized views over the data provided by distributed heterogeneous databases, for the purpose of efficiently implementing decision-support or OLAP queries. It is important to select the right views to materialize so that a given set of queries can be answered efficiently. The goal is to minimize the combined cost of query evaluation and view maintenance. In this paper, we design algorithms for selecting a set of views to be materialized so that the sum of the cost of processing a set of queries and of maintaining the materialized views is minimized. We develop an approach using simulated annealing algorithms to solve this problem. First, we explore simulated annealing to optimize the selection of materialized views; we then demonstrate the approach through experiments. A performance study shows that the proposed algorithm gives an optimal solution.
NASA Technical Reports Server (NTRS)
Entekhabi, Dara; Njoku, Eni E.; O'Neill, Peggy E.; Kellogg, Kent H.; Entin, Jared K.
2010-01-01
Talk outline: (1) derivation of SMAP basic and applied science requirements from the NRC Earth Science Decadal Survey applications; (2) data products and latencies; (3) algorithm highlights; (4) the SMAP Algorithm Testbed; (5) SMAP Working Groups and community engagement.
A splitting algorithm for Vlasov simulation with filamentation filtration
NASA Technical Reports Server (NTRS)
Klimas, A. J.; Farrell, W. M.
1994-01-01
A Fourier-Fourier transformed version of the splitting algorithm for simulating solutions of the Vlasov-Poisson system of equations is introduced. It is shown that with the inclusion of filamentation filtration in this transformed algorithm it is both faster and more stable than the standard splitting algorithm. It is further shown that in a scalar computer environment this new algorithm is approximately equal in speed and far less noisy than its particle-in-cell counterpart. It is conjectured that in a multiprocessor environment the filtered splitting algorithm would be faster while producing more precise results.
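A minimal split-step sketch in the spirit of the method: advection in x and acceleration in v become multiplications by phase factors after Fourier transforming, with a spectral Poisson solve in between. This is a plain Strang-split scheme on an illustrative Landau-damping setup; the paper's Fourier-Fourier formulation and its filamentation filtering are not reproduced.

```python
# Split-step Vlasov-Poisson sketch: streaming and acceleration are FFT
# phase shifts, with a spectral Poisson solve in between. Grid, time
# step, and the Landau-damping initial condition are illustrative; the
# paper's filamentation filtering is omitted.
import numpy as np

nx, nv = 64, 128
L, vmax = 4 * np.pi, 6.0
x = np.linspace(0, L, nx, endpoint=False)
v = np.linspace(-vmax, vmax, nv, endpoint=False)
kx = 2 * np.pi * np.fft.fftfreq(nx, d=L / nx)
kv = 2 * np.pi * np.fft.fftfreq(nv, d=2 * vmax / nv)
dt, dv = 0.05, 2 * vmax / nv

X, V = np.meshgrid(x, v, indexing="ij")
f = (1 + 0.05 * np.cos(0.5 * X)) * np.exp(-V**2 / 2) / np.sqrt(2 * np.pi)
phase_x = np.exp(-1j * np.outer(kx, v) * dt / 2)      # half-step streaming

for _ in range(200):
    f = np.fft.ifft(np.fft.fft(f, axis=0) * phase_x, axis=0).real
    rho = 1.0 - f.sum(axis=1) * dv                    # net charge density
    E_hat = np.zeros(nx, dtype=complex)
    E_hat[1:] = np.fft.fft(rho)[1:] / (1j * kx[1:])   # spectral Poisson solve
    E = np.fft.ifft(E_hat).real
    phase_v = np.exp(1j * np.outer(E, kv) * dt)       # full-step acceleration
    f = np.fft.ifft(np.fft.fft(f, axis=1) * phase_v, axis=1).real
    f = np.fft.ifft(np.fft.fft(f, axis=0) * phase_x, axis=0).real

print("final electric field energy:", 0.5 * np.sum(E**2) * L / nx)
```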
Improved ant colony algorithm and its simulation study
NASA Astrophysics Data System (ADS)
Wang, Zongjiang
2013-03-01
The ant colony algorithm is a heuristic algorithm developed by simulating the foraging behavior of ants. To address its slow convergence rate and its tendency to fall into locally optimal solutions, we propose adjustments to key parameters and an improved pheromone-update scheme. Experiments on the TSP show that the improved algorithm has better overall search capability, demonstrating the feasibility and effectiveness of the method.
Variable neighbourhood simulated annealing algorithm for capacitated vehicle routing problems
NASA Astrophysics Data System (ADS)
Xiao, Yiyong; Zhao, Qiuhong; Kaku, Ikou; Mladenovic, Nenad
2014-04-01
This article presents the variable neighbourhood simulated annealing (VNSA) algorithm, a variant of variable neighbourhood search (VNS) combined with simulated annealing (SA), for efficiently solving capacitated vehicle routing problems (CVRPs). In the new algorithm, the deterministic 'Move or not' criterion of the original VNS algorithm regarding incumbent replacement is replaced by an SA probability, and the neighbourhood shifting of the original VNS (from near to far, by k ← k+1) is replaced by a neighbourhood shaking procedure following a specified rule. A geographical neighbourhood structure is introduced to construct the neighbourhood structures for the string-model CVRP. The proposed algorithm is tested against 39 well-known benchmark CVRP instances of different scales (small/medium, large, very large). The results show that the VNSA algorithm outperforms most existing algorithms in terms of computational effectiveness and efficiency, showing good performance in solving large and very large CVRPs.
Quantitative tomography simulations and reconstruction algorithms
Martz, H E; Aufderheide, M B; Goodman, D; Schach von Wittenau, A; Logan, C; Hall, J; Jackson, J; Slone, D
2000-11-01
X-ray, neutron and proton transmission radiography and computed tomography (CT) are important diagnostic tools that are at the heart of LLNL's effort to meet the goals of the DOE's Advanced Radiography Campaign. This campaign seeks to improve radiographic simulation and analysis so that radiography can be a useful quantitative diagnostic tool for stockpile stewardship. Current radiographic accuracy does not allow satisfactory separation of experimental effects from the true features of an object's tomographically reconstructed image. This can lead to difficult and sometimes incorrect interpretation of the results. By improving our ability to simulate the whole radiographic and CT system, it will be possible to examine the contribution of system components to various experimental effects, with the goal of removing or reducing them. In this project, we are merging this simulation capability with a maximum-likelihood (constrained-conjugate-gradient-CCG) reconstruction technique yielding a physics-based, forward-model image-reconstruction code. In addition, we seek to improve the accuracy of computed tomography from transmission radiographs by studying what physics is needed in the forward model. During FY 2000, an improved version of the LLNL ray-tracing code called HADES has been coupled with a recently developed LLNL CT algorithm known as CCG. The problem of image reconstruction is expressed as a large matrix equation relating a model for the object being reconstructed to its projections (radiographs). Using a constrained-conjugate-gradient search algorithm, a maximum likelihood solution is sought. This search continues until the difference between the input measured radiographs or projections and the simulated or calculated projections is satisfactorily small. We developed a 2D HADES-CCG CT code that uses full ray-tracing simulations from HADES as the projector. Often an object has axial symmetry and it is desirable to reconstruct into a 2D r-z mesh with a limited
An exact accelerated stochastic simulation algorithm
NASA Astrophysics Data System (ADS)
Mjolsness, Eric; Orendorff, David; Chatelain, Philippe; Koumoutsakos, Petros
2009-04-01
An exact method for stochastic simulation of chemical reaction networks, which accelerates the stochastic simulation algorithm (SSA), is proposed. The present "ER-leap" algorithm is derived from analytic upper and lower bounds on the multireaction probabilities sampled by SSA, together with rejection sampling and an adaptive multiplicity for reactions. The algorithm is tested on a number of well-quantified reaction networks and is found experimentally to be very accurate on test problems including a chaotic reaction network. At the same time ER-leap offers a substantial speedup over SSA with a simulation time proportional to the 2/3 power of the number of reaction events in a Galton-Watson process.
An exact accelerated stochastic simulation algorithm
Mjolsness, Eric; Orendorff, David; Chatelain, Philippe; Koumoutsakos, Petros
2009-01-01
An exact method for stochastic simulation of chemical reaction networks, which accelerates the stochastic simulation algorithm (SSA), is proposed. The present “ER-leap” algorithm is derived from analytic upper and lower bounds on the multireaction probabilities sampled by SSA, together with rejection sampling and an adaptive multiplicity for reactions. The algorithm is tested on a number of well-quantified reaction networks and is found experimentally to be very accurate on test problems including a chaotic reaction network. At the same time ER-leap offers a substantial speedup over SSA with a simulation time proportional to the 2∕3 power of the number of reaction events in a Galton–Watson process. PMID:19368432
Open cherry picker simulation results
NASA Technical Reports Server (NTRS)
Nathan, C. A.
1982-01-01
The simulation program associated with a key piece of support equipment to be used to service satellites directly from the Shuttle is assessed. The Open Cherry Picker (OCP) is a manned platform mounted at the end of the remote manipulator system (RMS) and is used to enhance extra vehicular activities (EVA). The results of simulations performed on the Grumman Large Amplitude Space Simulator (LASS) and at the JSC Water Immersion Facility are summarized.
Genetic Algorithms for Digital Quantum Simulations.
Las Heras, U; Alvarez-Rodriguez, U; Solano, E; Sanz, M
2016-06-10
We propose genetic algorithms, which are robust optimization techniques inspired by natural selection, to enhance the versatility of digital quantum simulations. In this sense, we show that genetic algorithms can be employed to increase the fidelity and optimize the resource requirements of digital quantum simulation protocols while adapting naturally to the experimental constraints. Furthermore, this method allows us to reduce not only digital errors but also experimental errors in quantum gates. Indeed, by adding ancillary qubits, we design a modular gate made out of imperfect gates, whose fidelity is larger than the fidelity of any of the constituent gates. Finally, we prove that the proposed modular gates are resilient against different gate errors.
A hierarchical exact accelerated stochastic simulation algorithm
NASA Astrophysics Data System (ADS)
Orendorff, David; Mjolsness, Eric
2012-12-01
A new algorithm, "HiER-leap" (hierarchical exact reaction-leaping), is derived which improves on the computational properties of the ER-leap algorithm for exact accelerated simulation of stochastic chemical kinetics. Unlike ER-leap, HiER-leap utilizes a hierarchical or divide-and-conquer organization of reaction channels into tightly coupled "blocks" and is thereby able to speed up systems with many reaction channels. Like ER-leap, HiER-leap is based on the use of upper and lower bounds on the reaction propensities to define a rejection sampling algorithm with inexpensive early rejection and acceptance steps. But in HiER-leap, large portions of intra-block sampling may be done in parallel. An accept/reject step is used to synchronize across blocks. This method scales well when many reaction channels are present and has desirable asymptotic properties. The algorithm is exact, parallelizable and achieves a significant speedup over the stochastic simulation algorithm and ER-leap on certain problems. This algorithm offers a potentially important step towards efficient in silico modeling of entire organisms.
Extrapolated gradientlike algorithms for molecular dynamics and celestial mechanics simulations.
Omelyan, I P
2006-09-01
A class of symplectic algorithms is introduced to integrate the equations of motion in many-body systems. The algorithms are derived on the basis of an advanced gradientlike decomposition approach. Its main advantage over the standard gradient scheme is the avoidance of time-consuming evaluations of force gradients by force extrapolation without any loss of precision. As a result, the efficiency of the integration improves significantly. The algorithms obtained are analyzed and optimized using an error-function theory. The best among them are tested in actual molecular dynamics and celestial mechanics simulations for comparison with well-known nongradient and gradient algorithms such as the Störmer-Verlet, Runge-Kutta, Cowell-Numerov, Forest-Ruth, Suzuki-Chin, and others. It is demonstrated that for moderate and high accuracy, the extrapolated algorithms should be considered as the most efficient for the integration of motion in molecular dynamics simulations.
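For reference, the Störmer-Verlet scheme named above, in its velocity form, applied to a unit harmonic oscillator; the step size and duration are arbitrary choices. Its bounded energy error illustrates the symplectic property the extrapolated gradient-like integrators are designed to preserve at higher order.

```python
# Velocity-form Stormer-Verlet on a unit harmonic oscillator: a
# reference point for the higher-order integrators discussed above.
# Step size and integration length are arbitrary choices.
def force(q):
    return -q                       # F = -dU/dq for U(q) = q^2 / 2

dt, steps = 0.05, 4000
q, p = 1.0, 0.0                     # initial position and momentum
for _ in range(steps):
    p += 0.5 * dt * force(q)        # half kick
    q += dt * p                     # full drift
    p += 0.5 * dt * force(q)        # half kick

print(f"energy after {steps} steps: {0.5 * p * p + 0.5 * q * q:.6f} (exact 0.5)")
```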
Thermodynamics of supersaturated steam: Molecular simulation results
NASA Astrophysics Data System (ADS)
Moučka, Filip; Nezbeda, Ivo
2016-12-01
Supersaturated steam modeled by the Gaussian charge polarizable model [P. Paricaud, M. Předota, and A. A. Chialvo, J. Chem. Phys. 122, 244511 (2005)] and the BK3 model [P. Kiss and A. Baranyai, J. Chem. Phys. 138, 204507 (2013)] has been simulated at conditions occurring in steam turbines using multiple-particle-move Monte Carlo, both for the homogeneous phase and as implemented within the Gibbs ensemble Monte Carlo molecular simulation method. Because of these thermodynamic conditions, a specific simulation algorithm has been developed to bypass common simulation problems resulting from the very low densities of steam and cluster formation therein. In addition to pressure-temperature-density and orthobaric data, the distribution of clusters has also been evaluated. The obtained extensive data of high precision should serve as a basis for the development of reliable molecular-based equations for the properties of metastable steam.
NASA Astrophysics Data System (ADS)
Lampoudi, Sotiria; Gillespie, Dan T.; Petzold, Linda R.
2009-03-01
The Inhomogeneous Stochastic Simulation Algorithm (ISSA) is a variant of the stochastic simulation algorithm in which the spatially inhomogeneous volume of the system is divided into homogeneous subvolumes, and the chemical reactions in those subvolumes are augmented by diffusive transfers of molecules between adjacent subvolumes. The ISSA can be prohibitively slow when the system is such that diffusive transfers occur much more frequently than chemical reactions. In this paper we present the Multinomial Simulation Algorithm (MSA), which is designed to, on the one hand, outperform the ISSA when diffusive transfer events outnumber reaction events, and on the other, to handle small reactant populations with greater accuracy than deterministic-stochastic hybrid algorithms. The MSA treats reactions in the usual ISSA fashion, but uses appropriately conditioned binomial random variables for representing the net numbers of molecules diffusing from any given subvolume to a neighbor within a prescribed distance. Simulation results illustrate the benefits of the algorithm.
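A sketch of the diffusive half of such a scheme: the numbers of molecules a subvolume sends to its neighbours in a step are drawn from a multinomial whose jump probabilities come from the diffusion rate. The 1-D reflecting geometry and all parameters are assumptions, and the MSA's conditioning and reaction handling are not reproduced.

```python
# Multinomial sampling of diffusive transfers between subvolumes, the
# core idea behind the MSA's diffusion step. 1-D reflecting geometry,
# rates, and time step are illustrative; reactions are omitted.
import numpy as np

rng = np.random.default_rng(4)
n_sub, d, dt = 10, 1.0, 0.01            # subvolumes, hop rate, time step
x = rng.integers(0, 50, size=n_sub)     # initial molecule counts

p = d * dt                              # per-step probability of one hop
for _ in range(1000):
    new = x.copy()
    for i in range(n_sub):
        left = p if i > 0 else 0.0      # reflecting boundaries
        right = p if i < n_sub - 1 else 0.0
        # Each molecule independently stays, hops left, or hops right.
        stay, go_l, go_r = rng.multinomial(x[i], [1.0 - left - right, left, right])
        new[i] -= go_l + go_r
        if go_l:
            new[i - 1] += go_l
        if go_r:
            new[i + 1] += go_r
    x = new

print("counts after diffusion:", x, "total:", x.sum())
```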
Computational plasticity algorithm for particle dynamics simulations
NASA Astrophysics Data System (ADS)
Krabbenhoft, K.; Lyamin, A. V.; Vignes, C.
2017-03-01
The problem of particle dynamics simulation is interpreted in the framework of computational plasticity leading to an algorithm which is mathematically indistinguishable from the common implicit scheme widely used in the finite element analysis of elastoplastic boundary value problems. This algorithm provides somewhat of a unification of two particle methods, the discrete element method and the contact dynamics method, which usually are thought of as being quite disparate. In particular, it is shown that the former appears as the special case where the time stepping is explicit while the use of implicit time stepping leads to the kind of schemes usually labelled contact dynamics methods. The framing of particle dynamics simulation within computational plasticity paves the way for new approaches similar (or identical) to those frequently employed in nonlinear finite element analysis. These include mixed implicit-explicit time stepping, dynamic relaxation and domain decomposition schemes.
Computational algorithms for simulations in atmospheric optics.
Konyaev, P A; Lukin, V P
2016-04-20
A computer simulation technique for atmospheric and adaptive optics based on parallel programming is discussed. A parallel propagation algorithm is designed and a modified spectral-phase method for computer generation of 2D time-variant random fields is developed. Temporal power spectra of Laguerre-Gaussian beam fluctuations are considered as an example to illustrate the applications discussed. Implementation of the proposed algorithms using Intel MKL and IPP libraries and NVIDIA CUDA technology is shown to be very fast and accurate. The hardware system for the computer simulation is an off-the-shelf desktop with an Intel Core i7-4790K CPU operating at a turbo-speed frequency up to 5 GHz and an NVIDIA GeForce GTX-960 graphics accelerator with 1024 1.5 GHz processors.
Comparative testing of DNA segmentation algorithms using benchmark simulations.
Elhaik, Eran; Graur, Dan; Josic, Kresimir
2010-05-01
Numerous segmentation methods for the detection of compositionally homogeneous domains within genomic sequences have been proposed. Unfortunately, these methods yield inconsistent results. Here, we present a benchmark consisting of two sets of simulated genomic sequences for testing the performances of segmentation algorithms. Sequences in the first set are composed of fixed-sized homogeneous domains, distinct in their between-domain guanine and cytosine (GC) content variability. The sequences in the second set are composed of a mosaic of many short domains and a few long ones, distinguished by sharp GC content boundaries between neighboring domains. We use these sets to test the performance of seven segmentation algorithms in the literature. Our results show that recursive segmentation algorithms based on the Jensen-Shannon divergence outperform all other algorithms. However, even these algorithms perform poorly in certain instances because of the arbitrary choice of a segmentation-stopping criterion.
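A toy version of the best-performing approach: scan every split point of a binary (GC vs. AT) sequence, split where the Jensen-Shannon divergence between the two halves is largest, and recurse while the divergence exceeds a stopping threshold. The threshold and minimum segment length below are exactly the kind of arbitrary stopping criteria the abstract refers to; none of this reproduces the benchmarked implementations.

```python
# Toy recursive Jensen-Shannon segmentation of a 0/1 (AT/GC) sequence.
# Threshold and minimum length are arbitrary stopping criteria of the
# kind the text mentions; the benchmarked algorithms are not reproduced.
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def best_split(seq):
    """Return the split index maximizing the JS divergence of the halves."""
    n = len(seq)
    total = entropy(np.bincount(seq, minlength=2) / n)
    best_i, best_d = None, -1.0
    for i in range(1, n):
        left = np.bincount(seq[:i], minlength=2) / i
        right = np.bincount(seq[i:], minlength=2) / (n - i)
        d = total - (i / n) * entropy(left) - ((n - i) / n) * entropy(right)
        if d > best_d:
            best_i, best_d = i, d
    return best_i, best_d

def segment(seq, offset=0, threshold=0.05, min_len=20, out=None):
    out = [] if out is None else out
    if len(seq) >= 2 * min_len:
        i, d = best_split(seq)
        if d >= threshold:
            segment(seq[:i], offset, threshold, min_len, out)
            segment(seq[i:], offset + i, threshold, min_len, out)
            return out
    out.append((offset, offset + len(seq)))
    return out

rng = np.random.default_rng(5)
seq = np.concatenate([rng.random(300) < 0.3,        # GC-poor domain
                      rng.random(300) < 0.7]).astype(int)
print(segment(seq))                                  # expect a split near 300
```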
Simulation of the Galileo spacecraft axial delta-V algorithm
NASA Technical Reports Server (NTRS)
Longuski, J. M.
1983-01-01
Preliminary results are presented from the analysis of the Galileo spacecraft axial delta-V algorithm. The Galileo spacecraft is a dual-spin interplanetary spacecraft which will study the four Galilean moons of Jupiter as well as the Jovian environment and atmosphere. In order to achieve orbit about Jupiter and accurately deliver the probe to the planet's upper atmosphere, the Galileo spacecraft must be capable of performing many trajectory corrections, or delta-V maneuvers. Twelve 10-Newton thrusters and one 400-Newton engine are utilized for this purpose. There are many maneuver modes and control algorithms available to the spacecraft; in this paper, only the analysis of the axial delta-V algorithm is discussed. The analysis consists of two parts: an analytic study and a simulation study. The analytic results are based on rigid-body dynamics, while the simulation includes the first-order effect of the flexible magnetometer boom and nutation damper. The simulation utilizes a program developed at JPL which allows flexible-body effects to be simulated by modeling a collection of rigid bodies attached together by hinges, springs, and dampers. In this preliminary study only two rigid bodies were used in the simulation, but many more can and will be used in the final tests. In this analysis, the algorithm appears to work correctly, and the analytic and simulation results agree very well.
Simulations of optical autofocus algorithms based on PGA in SAIL
NASA Astrophysics Data System (ADS)
Xu, Nan; Liu, Liren; Xu, Qian; Zhou, Yu; Sun, Jianfeng
2011-09-01
Phase perturbations due to propagation effects can destroy the high-resolution imagery of Synthetic Aperture Imaging Ladar (SAIL). Several autofocus algorithms have been developed and implemented for Synthetic Aperture Radar (SAR). The Phase Gradient Algorithm (PGA) is well known for its robustness and wide application, and the Phase Curvature Algorithm (PCA), a similar algorithm, extends the applicable field to strip-map mode. In this paper, autofocus algorithms operating in the optical frequency domain are proposed: optical PGA and optical PCA, implemented in spotlight and strip-map mode, respectively. First, the mathematical flows of optical PGA and PCA in SAIL are derived. A simulation model of the airborne SAIL is established, and compensation simulations of synthetic aperture laser images corrupted by random errors, linear phase errors, and quadratic phase errors are carried out. The compensation effect and the number of iterations are discussed. The simulation results show that both optical autofocus algorithms are effective, and that the optical PGA outperforms the optical PCA, consistent with theory.
Combined simulated annealing algorithm for the discrete facility location problem.
Qin, Jin; Ni, Ling-Lin; Shi, Feng
2012-01-01
A combined simulated annealing (CSA) algorithm is developed for the discrete facility location problem (DFLP). The method is a two-layer algorithm, in which the external subalgorithm optimizes the facility location decision while the internal subalgorithm optimizes the allocation of customer demand under the given location decision. The performance of the CSA is tested on 30 instances of different sizes. The computational results show that the CSA works much better than the previous algorithm for the DFLP and offers a reasonable new alternative solution method for it.
Concluding Report: Quantitative Tomography Simulations and Reconstruction Algorithms
Aufderheide, M B; Martz, H E; Slone, D M; Jackson, J A; Schach von Wittenau, A E; Goodman, D M; Logan, C M; Hall, J M
2002-02-01
In this report we describe the original goals and final achievements of this Laboratory Directed Research and Development project. The Quantitative Tomography Simulations and Reconstruction Algorithms project (99-ERD-015) was funded as a multi-directorate, three-year effort to advance the state of the art in radiographic simulation and tomographic reconstruction by improving simulation and including this simulation in the tomographic reconstruction process. The goals were to improve the accuracy of radiographic simulation and to couple advanced radiographic simulation tools with a robust, many-variable optimization algorithm. In this project, we were able to demonstrate accuracy in X-ray simulation at the 2% level, which is an improvement of roughly a factor of 5 in accuracy, and we successfully coupled our simulation tools with the CCG (Constrained Conjugate Gradient) optimization algorithm, allowing reconstructions that include spectral effects and blurring. Another result of the project was the assembly of a low-scatter X-ray imaging facility for use in nondestructive evaluation applications. We conclude with a discussion of future work.
Fast computation algorithms for speckle pattern simulation
Nascov, Victor; Samoilă, Cornel; Ursuţiu, Doru
2013-11-13
We present our development of a series of efficient computation algorithms, generally usable to calculate light diffraction and particularly suited to speckle pattern simulation. We use mainly the scalar diffraction theory in the form of the Rayleigh-Sommerfeld diffraction formula and its Fresnel approximation. Our algorithms are based on a special form of the convolution theorem and the fast Fourier transform. They evaluate the diffraction formula much faster than direct computation, and we have circumvented the restrictions regarding the relative sizes of the input and output domains encountered in commonly used procedures. Moreover, the input and output planes can be tilted with respect to each other, and the output domain can be shifted off-axis.
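A one-dimensional sketch of the convolution-theorem approach: Fresnel diffraction computed as an FFT-based convolution of the aperture field with the Fresnel kernel, in O(N log N). Wavelength, aperture, and distance are assumed values; the constant Fresnel prefactor is dropped since only relative intensity is shown, and the paper's tilted-plane and off-axis extensions are not reproduced.

```python
# 1-D Fresnel diffraction via the convolution theorem: FFT the aperture
# field and the Fresnel kernel, multiply, inverse FFT. Wavelength,
# aperture width, and distance are assumed; the constant prefactor is
# dropped (only relative intensity is shown).
import numpy as np

wavelength = 633e-9                 # He-Ne wavelength, illustrative
z = 0.5                             # propagation distance (m)
n, dx = 1024, 10e-6                 # sample count and pitch
x = (np.arange(n) - n // 2) * dx

aperture = (np.abs(x) < 0.25e-3).astype(complex)     # 0.5 mm slit
k = 2 * np.pi / wavelength
kernel = np.exp(1j * k * x**2 / (2 * z))             # Fresnel kernel

spectrum = (np.fft.fft(np.fft.ifftshift(aperture))
            * np.fft.fft(np.fft.ifftshift(kernel)))  # convolution theorem
field = np.fft.fftshift(np.fft.ifft(spectrum))
intensity = np.abs(field) ** 2
print("on-axis relative intensity:", intensity[n // 2] / intensity.max())
```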
Cluster hybrid Monte Carlo simulation algorithms.
Plascak, J A; Ferrenberg, Alan M; Landau, D P
2002-06-01
We show that addition of Metropolis single spin flips to the Wolff cluster-flipping Monte Carlo procedure leads to a dramatic increase in performance for the spin-1/2 Ising model. We also show that adding Wolff cluster flipping to the Metropolis or heat bath algorithms in systems where just cluster flipping is not immediately obvious (such as the spin-3/2 Ising model) can substantially reduce the statistical errors of the simulations. A further advantage of these methods is that systematic errors introduced by the use of imperfect random-number generation may be largely healed by hybridizing single spin flips with cluster flipping.
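A compact sketch of the hybrid scheme for the 2-D spin-1/2 Ising model: each sweep of Metropolis single spin flips is followed by one Wolff cluster flip. Lattice size, temperature, and the one-to-one mixing ratio are illustrative choices, not the paper's.

```python
# Hybrid Monte Carlo sweep for the 2-D spin-1/2 Ising model: Metropolis
# single spin flips interleaved with Wolff cluster flips. Lattice size,
# temperature, and the 1:1 mix are illustrative choices.
import numpy as np

rng = np.random.default_rng(6)
L, T = 32, 2.269                     # lattice size; near-critical temperature
s = rng.choice([-1, 1], size=(L, L))
p_add = 1 - np.exp(-2.0 / T)         # Wolff bond-activation probability

def neighbours(i, j):
    return ((i + 1) % L, j), ((i - 1) % L, j), (i, (j + 1) % L), (i, (j - 1) % L)

def metropolis_sweep():
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        dE = 2 * s[i, j] * sum(s[a, b] for a, b in neighbours(i, j))
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            s[i, j] = -s[i, j]       # single spin flip

def wolff_flip():
    i, j = rng.integers(L, size=2)
    seed = s[i, j]
    cluster, stack = {(i, j)}, [(i, j)]
    while stack:                     # grow the cluster with probability p_add
        a, b = stack.pop()
        for nb in neighbours(a, b):
            if nb not in cluster and s[nb] == seed and rng.random() < p_add:
                cluster.add(nb)
                stack.append(nb)
    for a, b in cluster:             # flip the whole cluster at once
        s[a, b] = -s[a, b]

for _ in range(200):                 # hybrid: one cluster flip per sweep
    metropolis_sweep()
    wolff_flip()

print("magnetization per spin:", abs(s.sum()) / L**2)
```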
Cluster hybrid Monte Carlo simulation algorithms
NASA Astrophysics Data System (ADS)
Plascak, J. A.; Ferrenberg, Alan M.; Landau, D. P.
2002-06-01
We show that addition of Metropolis single spin flips to the Wolff cluster-flipping Monte Carlo procedure leads to a dramatic increase in performance for the spin-1/2 Ising model. We also show that adding Wolff cluster flipping to the Metropolis or heat bath algorithms in systems where just cluster flipping is not immediately obvious (such as the spin-3/2 Ising model) can substantially reduce the statistical errors of the simulations. A further advantage of these methods is that systematic errors introduced by the use of imperfect random-number generation may be largely healed by hybridizing single spin flips with cluster flipping.
Parallel algorithm strategies for circuit simulation.
Thornquist, Heidi K.; Schiek, Richard Louis; Keiter, Eric Richard
2010-01-01
Circuit simulation tools (e.g., SPICE) have become invaluable in the development and design of electronic circuits. However, they have been pushed to their performance limits in addressing circuit design challenges that come from the technology drivers of smaller feature scales and higher integration. Improving the performance of circuit simulation tools by exploiting new opportunities in widely available multi-processor architectures is a logical next step. Unfortunately, not all traditional simulation applications are inherently parallel, and quickly adapting mature application codes (even codes designed as parallel applications) to new parallel paradigms can be prohibitively difficult. In general, performance is influenced by many choices: hardware platform, runtime environment, languages and compilers used, algorithm choice and implementation, and more. In this complicated environment, the use of mini-applications, small self-contained proxies for real applications, is an excellent approach for rapidly exploring the parameter space of all these choices. In this report we present a multi-core performance study of Xyce, a transistor-level circuit simulation tool, and describe the future development of a mini-application for circuit simulation.
Central line simulation: a new training algorithm.
Britt, Rebecca C; Reed, Scott F; Britt, L D
2007-07-01
Recent development of a partial task simulator for central line placement has altered the training algorithm from one of supervised learning on patients to mannequin-based practice to proficiency before patient interaction. Little data have been published on the efficacy of this type of simulator. We reviewed our initial resident experience with central line simulation. Education to proficiency using the CentralLine Man simulator is completed by all interns during orientation. At the completion of training, the residents were asked to complete a voluntary, anonymous questionnaire with a 5-point Likert scale as well as open-ended questions. Additionally, the residents were asked to maintain a log of the initial 10 central lines placed. A retrospective review of the questionnaires and logs was done, with analysis of the simulator experience as well as the initial line experience. Seventeen trainees completed the central line simulation course and returned the initial survey. Before the course, the trainees had placed an average of 0.4 internal jugular (IJ) and 1 subclavian (SC) line. On the simulator, an average of 3 SC attempts and 2.5 IJ attempts led to resident comfort with the procedure. On the first attempt, the vessel was accessed after an average of 1.5 SC and 1.9 IJ needlesticks, which improved to 1 SC and 1.3 IJ by the fifth simulated attempt. A total of 4 pneumothoraces and 5 carotid sticks occurred. Overall, the residents were highly satisfied with the course, with an average score of 4.8 for didactics, 4.8 for equipment, 4.5 for the mannequin, and 4.8 for practice opportunity. Nine of the 11 residents who completed logs felt the simulation improved performance on the patient. On the first patient attempt, an average of 1.8 needlesticks was done, improving to an average of 1.3 by the tenth line. For the first patient line documented in the logs, comfort with the anatomy was rated 3.8 and comfort with the procedure 2.8. Central line simulation before actual performance on
Efficient algorithms for wildland fire simulation
NASA Astrophysics Data System (ADS)
Kondratenko, Volodymyr Y.
In this dissertation, we develop multiple-source shortest path algorithms and examine their importance in real-world applications such as wildfire modeling. The theoretical basis and its implementation in the Weather Research and Forecasting (WRF) model coupled with the fire spread code SFIRE (the WRF-SFIRE model) are described. We present a data assimilation method that gives the fire spread model the ability to start the fire simulation from an observed fire perimeter instead of an ignition point. While the model is running, the fire state in the model changes in accordance with newly arriving data by data assimilation. As the fire state changes, the atmospheric state (which is strongly affected by the heat flux) does not stay consistent with the fire state. The main difficulty of this methodology occurs in coupled fire-atmosphere models, because once the fire state is modified to match a given starting perimeter, the atmospheric circulation is no longer in sync with it. One possible solution to this problem is the formation of an artificial ignition-time history from an earlier fire state, which is later used to replay the fire progression to the new perimeter with the proper heat fluxes fed into the atmosphere, so that the fire-induced circulation is established. In this work, we develop efficient algorithms that start from the fire arrival times given at a set of points (called a perimeter) and create an artificial ignition-time and fire-spread-rate history. Different algorithms were developed to suit the possible demands of the user, such as implementation in parallel programming, minimization of the required number of iterations and memory use, and use of the rate of spread as a time-dependent variable. For the algorithms that deal with a homogeneous rate of spread, it was proven that the fire arrival times they produce are optimal. It was also shown that starting from an arbitrary initial state the algorithms have
Rayleigh wave inversion using heat-bath simulated annealing algorithm
NASA Astrophysics Data System (ADS)
Lu, Yongxu; Peng, Suping; Du, Wenfeng; Zhang, Xiaoyang; Ma, Zhenyuan; Lin, Peng
2016-11-01
The dispersion of Rayleigh waves can be used to obtain near-surface shear (S)-wave velocity profiles. This is performed mainly by inversion of the phase velocity dispersion curves, which has been proven to be a highly nonlinear and multimodal problem, making local search methods (LSMs) unsuitable as the inversion algorithm. In this study, a new strategy is proposed based on a variant of the simulated annealing (SA) algorithm. SA, which simulates the annealing procedure of crystalline solids in nature, is one of the global search methods (GSMs). There are many variants of SA, most of which contain two steps: perturbation of the model and Metropolis-criterion-based acceptance of the new model. In this paper we propose a one-step SA variant known as heat-bath SA. To test the performance of heat-bath SA, two models are created, and both noise-free and noisy synthetic data are generated. The Levenberg-Marquardt (LM) algorithm and a variant of SA known as the fast simulated annealing (FSA) algorithm are adopted for comparison. The inverted results for the synthetic data show that the heat-bath SA algorithm is a reasonable choice for Rayleigh wave dispersion curve inversion. Finally, a real-world inversion example from a coal mine in northwestern China is shown, which demonstrates that the proposed scheme is applicable in practice.
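The one-step character of heat-bath SA can be made concrete: instead of proposing a perturbation and then applying the Metropolis test, each model parameter is redrawn directly from the Boltzmann distribution over a grid of candidate values at the current temperature. The following Python sketch illustrates this with a toy misfit standing in for the dispersion-curve residual; the parameter grids, cooling schedule, and cost function are illustrative assumptions, not the authors' settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def heat_bath_sa(cost, grids, n_sweeps=200, t0=1.0, alpha=0.98):
    # One-step update: each parameter is redrawn from the Boltzmann
    # distribution over its candidate grid; no accept/reject test.
    model = np.array([g[0] for g in grids], dtype=float)
    temp = t0
    for _ in range(n_sweeps):
        for k, grid in enumerate(grids):
            costs = np.empty(len(grid))
            for idx, v in enumerate(grid):
                trial = model.copy()
                trial[k] = v
                costs[idx] = cost(trial)
            w = np.exp(-(costs - costs.min()) / temp)  # Boltzmann weights
            model[k] = rng.choice(grid, p=w / w.sum())
        temp *= alpha  # geometric cooling
    return model

# Toy misfit with minimum at (2, -1), standing in for a dispersion residual
f = lambda m: (m[0] - 2.0) ** 2 + (m[1] + 1.0) ** 2
print(heat_bath_sa(f, [np.linspace(-5, 5, 41)] * 2))
```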
Final Technical Report "Multiscale Simulation Algorithms for Biochemical Systems"
Petzold, Linda R.
2012-10-25
Biochemical systems are inherently multiscale and stochastic. In microscopic systems formed by living cells, the small numbers of reactant molecules can result in dynamical behavior that is discrete and stochastic rather than continuous and deterministic. An analysis tool that respects these dynamical characteristics is the stochastic simulation algorithm (SSA, Gillespie, 1976), a numerical simulation procedure that is essentially exact for chemical systems that are spatially homogeneous or well stirred. Despite recent improvements, as a procedure that simulates every reaction event, the SSA is necessarily inefficient for most realistic problems. There are two main reasons for this, both arising from the multiscale nature of the underlying problem: (1) stiffness, i.e. the presence of multiple timescales, the fastest of which are stable; and (2) the need to include in the simulation both species that are present in relatively small quantities and should be modeled by a discrete stochastic process, and species that are present in larger quantities and are more efficiently modeled by a deterministic differential equation (or at some scale in between). This project has focused on the development of fast and adaptive algorithms, and the fundamental theory upon which they must be based, for the multiscale simulation of biochemical systems. Areas addressed by this project include: (1) Theoretical and practical foundations for accelerated discrete stochastic simulation (tau-leaping); (2) Dealing with stiffness (fast reactions) in an efficient and well-justified manner in discrete stochastic simulation; (3) Development of adaptive multiscale algorithms for spatially homogeneous discrete stochastic simulation; (4) Development of high-performance SSA algorithms.
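For context, the direct-method SSA on which this work builds fits in a few lines: draw the waiting time to the next reaction from an exponential with rate equal to the total propensity, then pick which reaction fires in proportion to its propensity. A minimal Python sketch with a toy reversible isomerization as the example system (the rates and species are illustrative only):

```python
import numpy as np

def ssa(x0, stoich, propensities, t_end, seed=0):
    rng = np.random.default_rng(seed)
    x, t = np.array(x0, dtype=float), 0.0
    history = [(t, x.copy())]
    while t < t_end:
        a = np.array([p(x) for p in propensities])
        a0 = a.sum()
        if a0 <= 0:
            break  # no reaction can fire
        t += rng.exponential(1.0 / a0)       # time to next reaction
        j = rng.choice(len(a), p=a / a0)     # which reaction fires
        x += stoich[j]
        history.append((t, x.copy()))
    return history

# Toy reversible isomerization A <-> B (illustrative rates)
stoich = [np.array([-1.0, 1.0]), np.array([1.0, -1.0])]
props = [lambda x: 1.0 * x[0], lambda x: 0.5 * x[1]]
print(ssa([100, 0], stoich, props, t_end=5.0)[-1])
```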
A fast MPP algorithm for Ising spin exchange simulations
NASA Technical Reports Server (NTRS)
Sullivan, Francis; Mountain, Raymond D.
1987-01-01
A very efficient massively parallel processor (MPP) algorithm is described for performing one important class of Ising spin simulations. The results and physical significance of MPP calculations using the described method are discussed elsewhere. A few comments, however, are made on the problem under study, and results so far are reported. Ted Einstein provided guidance in interpreting the initial results and in suggesting calculations to perform.
Simulating and Synthesizing Substructures Using Neural Network and Genetic Algorithms
NASA Technical Reports Server (NTRS)
Liu, Youhua; Kapania, Rakesh K.; VanLandingham, Hugh F.
1997-01-01
The feasibility of simulating and synthesizing substructures with computational neural network models is illustrated by investigating a statically indeterminate beam, using both 1-D and 2-D plane-stress models. The beam can be decomposed into two cantilevers with free-end loads. By training neural networks to simulate the cantilever responses to different loads, the original beam problem can be solved as a match-up between two subsystems under compatible interface conditions. Genetic algorithms are successfully used to solve the match-up problem. The simulated results are found to be in good agreement with the analytical and FEM solutions.
Sheng, Zheng; Wang, Jun; Zhou, Shudao; Zhou, Bihua
2014-03-01
This paper introduces a novel hybrid optimization algorithm for estimating the parameters of chaotic systems. To address the weaknesses of the traditional cuckoo search algorithm, an adaptive cuckoo search with simulated annealing algorithm is presented, which incorporates an adaptive parameter-adjusting operation and a simulated annealing operation into the cuckoo search algorithm. Normally, the parameters of the cuckoo search algorithm are kept constant, which may reduce the algorithm's efficiency. To balance and enhance the accuracy and convergence rate of the cuckoo search algorithm, the adaptive operation is introduced to tune the parameters properly. In addition, the local search capability of the cuckoo search algorithm is relatively weak, which may degrade the quality of the optimization, so the simulated annealing operation is merged into the cuckoo search algorithm to enhance the local search ability and improve the accuracy and reliability of the results. The performance of the proposed hybrid algorithm is investigated on the Lorenz chaotic system under both noiseless and noisy conditions. The numerical results demonstrate that the method can estimate parameters efficiently and accurately in both cases. Finally, the results are compared with those of the traditional cuckoo search algorithm, a genetic algorithm, and a particle swarm optimization algorithm. Simulation results demonstrate the effectiveness and superior performance of the proposed algorithm.
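One plausible shape of such a hybrid is sketched below in Python: Levy-flight moves generate candidate nests, a temperature-scaled step size stands in for the adaptive parameter adjustment, and worse candidates may still be accepted with the Metropolis probability. The step-size rule, cooling schedule, and toy misfit are our assumptions, not the authors' exact formulation.

```python
import numpy as np
from math import gamma, pi, sin

rng = np.random.default_rng(1)

def levy_step(dim, beta=1.5):
    # Mantegna's algorithm for Levy-stable step lengths
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    return rng.normal(0, sigma, dim) / np.abs(rng.normal(0, 1, dim)) ** (1 / beta)

def cuckoo_sa(f, lo, hi, n_nests=15, iters=300, t0=1.0, cool=0.99):
    nests = rng.uniform(lo, hi, (n_nests, lo.size))
    costs = np.array([f(x) for x in nests])
    temp = t0
    for _ in range(iters):
        for i in range(n_nests):
            step = 0.1 * temp * levy_step(lo.size)   # step shrinks as T drops
            trial = np.clip(nests[i] + step, lo, hi)
            dE = f(trial) - costs[i]
            # Metropolis acceptance lets occasional uphill moves through
            if dE < 0 or rng.random() < np.exp(-dE / temp):
                nests[i], costs[i] = trial, costs[i] + dE
        temp *= cool
    best = costs.argmin()
    return nests[best], costs[best]

# Toy "parameter estimation": recover (10, 28) from a quadratic misfit
f = lambda p: (p[0] - 10.0) ** 2 + (p[1] - 28.0) ** 2
print(cuckoo_sa(f, np.zeros(2), np.full(2, 50.0)))
```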
Adaptively resizing populations: Algorithm, analysis, and first results
NASA Technical Reports Server (NTRS)
Smith, Robert E.; Smuda, Ellen
1993-01-01
Deciding on an appropriate population size for a given genetic algorithm (GA) application can often be critical to the algorithm's success. Too small, and the GA can fall victim to sampling error, affecting the efficacy of its search. Too large, and the GA wastes computational resources. Although advice exists for sizing GA populations, much of it involves theoretical aspects that are not accessible to the novice user. An algorithm for adaptively resizing GA populations is suggested. This algorithm is based on recent theoretical developments that relate population size to schema fitness variance. The suggested algorithm is developed theoretically and simulated with expected-value equations. The algorithm is then tested on a problem where population sizing can mislead the GA. The work presented suggests that the population sizing algorithm may be a viable way to eliminate the population sizing decision from the application of GAs.
Summarizing Simulation Results using Causally-relevant States
Parikh, Nidhi; Marathe, Madhav; Swarup, Samarth
2016-01-01
As increasingly large-scale multiagent simulations are being implemented, new methods are becoming necessary to make sense of their results. Even concisely summarizing the results of a given simulation run is a challenge. Here we pose this as the problem of simulation summarization: how to extract the causally relevant descriptions of the trajectories of the agents in the simulation. We present a simple algorithm to compress agent trajectories through state space by identifying the state transitions which are relevant to determining the distribution of outcomes at the end of the simulation. We present a toy example to illustrate the working of the algorithm, and then apply it to a complex simulation of a major disaster in an urban area. PMID:28042620
The Aquarius Salinity Retrieval Algorithm: Early Results
NASA Technical Reports Server (NTRS)
Meissner, Thomas; Wentz, Frank J.; Lagerloef, Gary; LeVine, David
2012-01-01
The Aquarius L-band radiometer/scatterometer system is designed to provide monthly salinity maps at 150 km spatial scale to a 0.2 psu accuracy. The sensor was launched on June 10, 2011, aboard the Argentine CONAE SAC-D spacecraft. The L-band radiometers and the scatterometer have been taking science data observations since August 25, 2011. The first part of this presentation gives an overview of the Aquarius salinity retrieval algorithm. The instrument calibration converts Aquarius radiometer counts into antenna temperatures (TA). The salinity retrieval algorithm converts those TA into brightness temperatures (TB) at a flat ocean surface. As a first step, contributions arising from the intrusion of solar, lunar, and galactic radiation are subtracted. The antenna pattern correction (APC) removes the effects of cross-polarization contamination and spillover. The Aquarius radiometer measures the 3rd Stokes parameter in addition to vertical (v) and horizontal (h) polarizations, which allows for easy removal of ionospheric Faraday rotation. The atmospheric absorption at L-band is almost entirely due to O2, which can be calculated from auxiliary input fields from numerical weather prediction models and then removed from the TB. The final step in the TA to TB conversion is the correction for the roughness of the sea surface due to wind, based on the radar backscatter measurements of the scatterometer. The TB of the flat ocean surface can then be matched to a salinity value using a surface emission model that is based on a model for the dielectric constant of sea water and an auxiliary field for the sea surface temperature. In the current processing (as of writing this abstract) only v-pol TB are used for this last step, and NCEP winds are used for the roughness correction. Before the salinity algorithm can be operationally implemented and its accuracy assessed by comparison against in situ measurements, an extensive calibration and validation
Algorithm for Simulating Atmospheric Turbulence and Aeroelastic Effects on Simulator Motion Systems
NASA Technical Reports Server (NTRS)
Ercole, Anthony V.; Cardullo, Frank M.; Kelly, Lon C.; Houck, Jacob A.
2012-01-01
Atmospheric turbulence produces high-frequency accelerations in aircraft, typically greater than the response to pilot input. Motion-system-equipped flight simulators must present cues representative of the aircraft response to turbulence in order to maintain the integrity of the simulation. Currently, turbulence motion cueing produced by flight simulator motion systems has been less than satisfactory because the turbulence profiles have been attenuated by the motion cueing algorithms. This report presents a new turbulence motion cueing algorithm, referred to as the augmented turbulence channel. Like the previous turbulence algorithms, the output of the channel augments only the vertical degree of freedom of motion. This algorithm employs a parallel aircraft model and an optional high-bandwidth cueing filter. Simulation of aeroelastic effects is also an area where frequency content must be preserved by the cueing algorithm. The current aeroelastic implementation uses a similar secondary channel that supplements the primary motion cue. Two studies were conducted using the NASA Langley Visual Motion Simulator and Cockpit Motion Facility to evaluate the effect of the turbulence channel and aeroelastic model on pilot control input. Results indicate that the pilot is better correlated with the aircraft response when the augmented channel is in place.
Simulated annealing algorithm applied in adaptive near field beam shaping
NASA Astrophysics Data System (ADS)
Yu, Zhan; Ma, Hao-tong; Du, Shao-jun
2010-11-01
Laser beam shaping is required in many applications to improve the efficiency of laser systems. In this paper, near-field beam shaping based on the combination of a simulated annealing algorithm and Zernike polynomials is demonstrated. Since the phase distribution can be represented by an expansion in Zernike polynomials, the search for an appropriate phase distribution can be recast as the optimization of a vector of Zernike coefficients. The feasibility of this method is validated theoretically by translating a Gaussian beam into a square quasi-flattop beam in the near field. Finally, a closed control loop consisting of a phase-only liquid crystal spatial light modulator and the simulated annealing algorithm is used to prove the validity of the technique. The experimental results show that the system can generate laser beams with desired intensity distributions.
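The recasting of the problem as optimization over a coefficient vector can be illustrated with a short Python sketch. Here a low-order polynomial basis stands in for the Zernike polynomials and an FFT-based far-field propagator stands in for the paper's near-field model, so every element of the forward model is an illustrative assumption; only the overall structure (SA over a coefficient vector) follows the abstract.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 64
x = np.linspace(-1, 1, N)
X, Y = np.meshgrid(x, x)
gauss = np.exp(-(X**2 + Y**2) / 0.25)                           # input Gaussian beam
target = ((np.abs(X) < 0.3) & (np.abs(Y) < 0.3)).astype(float)  # flattop target

# Low-order polynomial basis as a stand-in for Zernike polynomials
basis = np.array([X, Y, X**2 - Y**2, X * Y, X**2 + Y**2])

def intensity(coeffs):
    # Toy forward model (our assumption): phase screen -> FFT -> intensity
    phase = np.tensordot(coeffs, basis, axes=1)
    field = np.fft.fftshift(np.fft.fft2(gauss * np.exp(1j * phase)))
    inten = np.abs(field) ** 2
    return inten / inten.max()

def cost(coeffs):
    return np.sum((intensity(coeffs) - target) ** 2)

# Plain SA over the coefficient vector
c = np.zeros(len(basis))
best = cost(c)
temp = 1.0
for _ in range(500):
    trial = c + rng.normal(0, 0.1, c.shape)
    d = cost(trial) - best
    if d < 0 or rng.random() < np.exp(-d / temp):
        c, best = trial, best + d
    temp *= 0.99
print("final cost:", best)
```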
Potts-model grain growth simulations: Parallel algorithms and applications
Wright, S.A.; Plimpton, S.J.; Swiler, T.P.
1997-08-01
Microstructural morphology and grain boundary properties often control the service properties of engineered materials. This report uses the Potts model to simulate the development of microstructures in realistic materials. Three areas of microstructural morphology simulation were studied: the development of massively parallel algorithms for Potts-model grain growth simulations, modeling of mass transport via diffusion in these simulated microstructures, and the development of a gradient-dependent Hamiltonian to simulate columnar grain growth. Potts grain growth models for massively parallel supercomputers were developed for the conventional Potts model in both two and three dimensions. Simulations using these parallel codes showed self-similar grain growth and no finite-size effects for previously unapproachable large-scale problems. In addition, new enhancements to the conventional Metropolis algorithm used in the Potts model were developed to accelerate the calculations. These techniques enable both the sequential and parallel algorithms to run faster and to use an essentially infinite number of grain orientation values to avoid non-physical grain coalescence events. Mass transport phenomena in polycrystalline materials were studied in two dimensions using numerical diffusion techniques on microstructures generated with the Potts model. The results of the mass transport modeling showed excellent quantitative agreement with one-dimensional diffusion problems; however, the results also suggest that transient multi-dimensional diffusion effects cannot be parameterized as the product of the grain boundary diffusion coefficient and the grain boundary width; both properties are required. Gradient-dependent grain growth mechanisms were included in the Potts model by adding an extra term to the Hamiltonian. Under normal grain growth, the primary driving term is the curvature of the grain boundary, which is included in the standard Potts-model Hamiltonian.
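A minimal serial Potts grain-growth kernel, for orientation, looks as follows in Python; the parallel decomposition, the acceleration techniques, and the gradient-dependent Hamiltonian described in the report are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)

def potts_growth(L=48, q=48, sweeps=20, kT=0.0):
    # Each site holds one of q grain orientations; unlike neighbour
    # bonds cost unit energy.
    spins = rng.integers(q, size=(L, L))
    for _ in range(sweeps * L * L):
        i, j = rng.integers(L, size=2)
        nbrs = [spins[(i + 1) % L, j], spins[(i - 1) % L, j],
                spins[i, (j + 1) % L], spins[i, (j - 1) % L]]
        new = nbrs[rng.integers(4)]  # propose a neighbour's orientation
        # gain = (unlike bonds now) - (unlike bonds after the flip)
        gain = sum(spins[i, j] != n for n in nbrs) - sum(new != n for n in nbrs)
        if gain >= 0 or (kT > 0 and rng.random() < np.exp(gain / kT)):
            spins[i, j] = new
    return spins

grains = potts_growth()
print("orientations remaining:", len(np.unique(grains)))
```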
Bio-inspired algorithms applied to molecular docking simulations.
Heberlé, G; de Azevedo, W F
2011-01-01
Nature as a source of inspiration has been shown to have a great beneficial impact on the development of new computational methodologies. In this scenario, analyses of the interactions between a protein target and a ligand can be simulated by biologically inspired algorithms (BIAs). These algorithms mimic biological systems to create new paradigms for computation, such as neural networks, evolutionary computing, and swarm intelligence. This review provides a description of the main concepts behind BIAs applied to molecular docking simulations. Special attention is devoted to evolutionary algorithms, guided-directed evolutionary algorithms, and Lamarckian genetic algorithms. Recent applications of these methodologies to protein targets identified in the Mycobacterium tuberculosis genome are described.
Kriging-approximation simulated annealing algorithm for groundwater modeling
NASA Astrophysics Data System (ADS)
Shen, C. H.
2015-12-01
Optimization algorithms are often applied to search for the best parameters of complex groundwater models. Running a complex groundwater model to evaluate the objective function can be time-consuming. This research proposes a Kriging-approximation simulated annealing algorithm. Kriging is a spatial statistics method used to interpolate unknown values from surrounding data. In the algorithm, the Kriging method is used to approximate the complicated objective function and is incorporated into simulated annealing. The contribution of the Kriging-approximation simulated annealing algorithm is to reduce calculation time and increase efficiency.
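A sketch of the idea in Python is given below: a simple kriging predictor with a Gaussian covariance model replaces most calls to the expensive model inside the annealing loop, with an occasional exact evaluation to refresh the training set. The refresh schedule, covariance model, and stand-in objective are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

def expensive_model(p):
    # Stand-in for a slow groundwater model run (hypothetical)
    return (p[0] - 1.0) ** 2 + 2.0 * (p[1] + 0.5) ** 2

def krige(xs, ys, q, length=1.0):
    # Simple kriging predictor with a Gaussian covariance model
    d = lambda a, b: np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    K = np.exp(-(d(xs, xs) / length) ** 2) + 1e-10 * np.eye(len(xs))
    k = np.exp(-(d(q[None, :], xs)[0] / length) ** 2)
    return k @ np.linalg.solve(K, ys)

# SA on the surrogate, refreshed with a true model run every few steps
xs = rng.uniform(-3, 3, (20, 2))
ys = np.array([expensive_model(p) for p in xs])
cur, cur_c, temp = xs[np.argmin(ys)], ys.min(), 1.0
for it in range(300):
    trial = np.clip(cur + rng.normal(0, 0.3, 2), -3, 3)
    c = krige(xs, ys, trial)                  # cheap surrogate evaluation
    if it % 20 == 0:                          # occasional exact evaluation
        c = expensive_model(trial)
        xs = np.vstack([xs, trial])
        ys = np.append(ys, c)
    if c < cur_c or rng.random() < np.exp(-(c - cur_c) / temp):
        cur, cur_c = trial, c
    temp *= 0.99
print(cur, cur_c)
```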
An improved simulated annealing algorithm for standard cell placement
NASA Technical Reports Server (NTRS)
Jones, Mark; Banerjee, Prithviraj
1988-01-01
Simulated annealing is a general-purpose Monte Carlo optimization technique that was applied to the problem of placing standard logic cells in a VLSI chip so that the total interconnection wire length is minimized. An improved standard cell placement algorithm that takes advantage of the performance enhancements that appear to come from parallelizing the uniprocessor simulated annealing algorithm is presented. An outline of this algorithm is given.
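The serial baseline of such an algorithm is easy to state: swap two cells and accept the swap by the Metropolis criterion on the change in wirelength. A minimal Python sketch using half-perimeter wirelength as the cost (our choice of metric; the paper's cost model and parallelization are not reproduced):

```python
import math
import random

random.seed(0)

def hpwl(placement, nets):
    # Half-perimeter wirelength summed over all nets
    total = 0
    for net in nets:
        xs = [placement[c][0] for c in net]
        ys = [placement[c][1] for c in net]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

def anneal_placement(n_cells, nets, rows=4, cols=4, iters=5000):
    slots = [(r, c) for r in range(rows) for c in range(cols)]
    placement = {i: slots[i] for i in range(n_cells)}
    cost, temp = hpwl(placement, nets), 10.0
    for _ in range(iters):
        a, b = random.sample(range(n_cells), 2)
        placement[a], placement[b] = placement[b], placement[a]  # trial swap
        new = hpwl(placement, nets)
        if new <= cost or random.random() < math.exp(-(new - cost) / temp):
            cost = new
        else:
            placement[a], placement[b] = placement[b], placement[a]  # undo
        temp *= 0.999
    return placement, cost

nets = [(0, 1, 2), (2, 3, 4), (4, 5, 6), (6, 7, 0)]  # toy netlist
print("final wirelength:", anneal_placement(8, nets)[1])
```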
Daylighting simulation: methods, algorithms, and resources
Carroll, William L.
1999-12-01
This document presents work conducted as part of Subtask C, "Daylighting Design Tools", Subgroup C2, "New Daylight Algorithms", of the IEA SHC Task 21 and the ECBCS Program Annex 29 "Daylight in Buildings". The search for and collection of daylighting analysis methods and algorithms led to two important observations. First, there is a wide range of needs for different types of methods to produce a complete analysis tool. These include: geometry; light modeling; characterization of the natural illumination resource; materials and component properties and representations; and usability issues (interfaces, interoperability, representation of analysis results, etc.). Second, very advantageously, there have been rapid advances in many basic methods in these areas, driven in part by other forces: the commercial computer graphics community (commerce, entertainment); the lighting industry; architectural rendering and visualization for projects; and academia (course materials, research). This has led to a very rich set of information resources with direct applicability to the small daylighting analysis community, much of it in fact available online. Because much of the information about methods and algorithms is now online, an innovative reporting strategy was used: the core formats are electronic, and a printed form is produced only secondarily. The electronic forms include both online WWW pages and a downloadable .PDF file with the same appearance and content. Both electronic forms include live primary and indirect links to actual information sources on the WWW. In most cases, little additional commentary is provided regarding the information links or citations; this in turn allows the report to be very concise. The links are expected to speak for themselves. The report consists of only about 10+ pages, with about 100+ primary links, but with potentially thousands of indirect links. For purposes of
Motion Cueing Algorithm Modification for Improved Turbulence Simulation
NASA Technical Reports Server (NTRS)
Ercole, Anthony V.; Cardullo, Frank M.; Zaychik, Kirill; Kelly, Lon C.; Houck, Jacob
2009-01-01
Atmospheric turbulence cueing produced by flight simulator motion systems has been less than satisfactory because the turbulence profiles have been attenuated by the motion cueing algorithms. Cardullo and Ellor initially addressed this problem by directly porting the turbulence model output to the motion system. Reid and Robinson addressed the problem by employing a parallel aircraft model, which is stimulated only by the turbulence inputs, and adding a filter specially designed to pass the higher turbulence frequencies. There have been advances in motion cueing algorithm development at the Man-Machine Systems Laboratory at SUNY Binghamton. In particular, the system used to generate turbulence cues has been studied. The Reid approach, implemented by Telban and Cardullo, was employed to augment the optimal motion cueing algorithm installed at the NASA LaRC Simulation Laboratory, driving the Visual Motion Simulator. In this implementation, the output of the primary flight channel was added to the output of the turbulence channel and then sent through a nonlinear cueing filter. The cueing filter is an adaptive filter; therefore, it is not desirable for the output of the turbulence channel to be augmented by this type of filter. The likelihood of the signal becoming divergent was also an issue in this design. After on-site testing it became apparent that the architecture of the turbulence algorithm was generating unacceptable cues. As mentioned above, this cueing algorithm comprised a filter designed to operate at low bandwidth; the turbulence was therefore also filtered, altering the cues generated by the model. If any filtering is to be done to the turbulence, it should use a filter with a much higher bandwidth, above the frequencies produced by the aircraft response to turbulence. The authors have developed an implementation wherein only the signal from the primary flight channel passes through the nonlinear cueing filter. This paper discusses three
Developments in Human Centered Cueing Algorithms for Control of Flight Simulator Motion Systems
NASA Technical Reports Server (NTRS)
Houck, Jacob A.; Telban, Robert J.; Cardullo, Frank M.
1997-01-01
The authors conducted further research with cueing algorithms for control of flight simulator motion systems. A variation of the so-called optimal algorithm was formulated using simulated aircraft angular velocity input as a basis. Models of the human vestibular sensation system, i.e. the semicircular canals and otoliths, are incorporated within the algorithm. Comparisons of angular velocity cueing responses showed a significant improvement over a formulation using angular acceleration input. Results also compared favorably with the coordinated adaptive washout algorithm, yielding similar results for angular velocity cues while eliminating false cues and reducing the tilt rate for longitudinal cues. These results were confirmed in piloted tests on the current motion system at NASA-Langley, the Visual Motion Simulator (VMS). Proposed future developments by the authors in cueing algorithms are revealed. The new motion system, the Cockpit Motion Facility (CMF), where the final evaluation of the cueing algorithms will be conducted, is also described.
Control algorithm for multiscale flow simulations of water
NASA Astrophysics Data System (ADS)
Kotsalis, Evangelos M.; Walther, Jens H.; Kaxiras, Efthimios; Koumoutsakos, Petros
2009-04-01
We present a multiscale algorithm to couple atomistic water models with continuum incompressible flow simulations via a Schwarz domain decomposition approach. The coupling introduces an inhomogeneity in the description of the atomistic domain and prevents the use of periodic boundary conditions. The use of a mass-conserving specular wall leads in turn to spurious oscillations in the density profile of the atomistic description of water. These oscillations can be eliminated by using an external boundary force that effectively accounts for the virial component of the pressure. In this Rapid Communication, we extend a control algorithm, previously introduced for monatomic molecules, to the case of atomistic water and demonstrate the effectiveness of this approach. The proposed computational method is validated for the cases of equilibrium and Couette flow of water.
D-leaping: Accelerating stochastic simulation algorithms for reactions with delays
Bayati, Basil; Chatelain, Philippe; Koumoutsakos, Petros
2009-09-01
We propose a novel, accelerated algorithm for the approximate stochastic simulation of biochemical systems with delays. The present work extends existing accelerated algorithms by distributing, in a time-adaptive fashion, the delayed reactions so as to minimize the computational effort while preserving their accuracy. The accuracy of the present algorithm is assessed by comparing its results to those of the corresponding delay differential equations for a representative biochemical system. In addition, the fluctuations produced by the present algorithm are comparable to those from an exact stochastic simulation with delays. The algorithm is used to simulate biochemical systems that model oscillatory gene expression. The results indicate that the present algorithm is competitive with existing works for several benchmark problems while it is orders of magnitude faster for certain systems of biochemical reactions.
Adaptive Mesh and Algorithm Refinement Using Direct Simulation Monte Carlo
NASA Astrophysics Data System (ADS)
Garcia, Alejandro L.; Bell, John B.; Crutchfield, William Y.; Alder, Berni J.
1999-09-01
Adaptive mesh and algorithm refinement (AMAR) embeds a particle method within a continuum method at the finest level of an adaptive mesh refinement (AMR) hierarchy. The coupling between the particle region and the overlaying continuum grid is algorithmically equivalent to that between the fine and coarse levels of AMR. Direct simulation Monte Carlo (DSMC) is used as the particle algorithm embedded within a Godunov-type compressible Navier-Stokes solver. Several examples are presented and compared with purely continuum calculations.
Duality quantum algorithm efficiently simulates open quantum systems
NASA Astrophysics Data System (ADS)
Wei, Shi-Jie; Ruan, Dong; Long, Gui-Lu
2016-07-01
Because of inevitable coupling with the environment, nearly all practical quantum systems are open systems, where the evolution is not necessarily unitary. In this paper, we propose a duality quantum algorithm for simulating the Hamiltonian evolution of an open quantum system. In contrast to unitary evolution in a usual quantum computer, the evolution operator in a duality quantum computer is a linear combination of unitary operators. In this duality quantum algorithm, the time evolution of the open quantum system is realized by using Kraus operators, which are naturally implemented in a duality quantum computer. This duality quantum algorithm has two distinct advantages compared to existing quantum simulation algorithms with unitary evolution operations. First, the query complexity of the algorithm is O(d^3), in contrast to O(d^4) for existing unitary simulation algorithms, where d is the dimension of the open quantum system. Second, by using a truncated Taylor series of the evolution operators, this duality quantum algorithm provides an exponential improvement in precision compared with previous unitary simulation algorithms.
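The Kraus-operator form of the evolution is easy to state concretely. The Python sketch below evaluates rho -> sum_k K_k rho K_k^dagger classically for the standard amplitude-damping channel; the channel is a textbook example chosen for illustration, not an example from the paper, where the sum of operators is realized on the duality quantum computer itself.

```python
import numpy as np

def kraus_evolve(rho, kraus_ops):
    # One open-system step: rho -> sum_k K_k rho K_k^dagger
    return sum(K @ rho @ K.conj().T for K in kraus_ops)

gamma = 0.3  # decay probability per step (illustrative)
K0 = np.array([[1.0, 0.0], [0.0, np.sqrt(1 - gamma)]])
K1 = np.array([[0.0, np.sqrt(gamma)], [0.0, 0.0]])

rho = np.array([[0.0, 0.0], [0.0, 1.0]])  # qubit initially in |1><1|
for _ in range(3):
    rho = kraus_evolve(rho, [K0, K1])
print(np.round(rho, 4))  # population relaxes toward |0><0|
# Completeness: sum_k K_k^dagger K_k must equal the identity
print(np.allclose(K0.conj().T @ K0 + K1.conj().T @ K1, np.eye(2)))
```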
LAWS simulation: Sampling strategies and wind computation algorithms
NASA Technical Reports Server (NTRS)
Emmitt, G. D. A.; Wood, S. A.; Houston, S. H.
1989-01-01
In general, work has continued on developing and evaluating algorithms designed to manage the Laser Atmospheric Wind Sounder (LAWS) lidar pulses and to compute the horizontal wind vectors from the line-of-sight (LOS) measurements. These efforts fall into three categories: Improvements to the shot management and multi-pair algorithms (SMA/MPA); observing system simulation experiments; and ground-based simulations of LAWS.
Dynamic damping control: Implementation issues and simulation results
Anderson, R.J.
1989-01-01
Computed torque algorithms are used to compensate for the changing dynamics of robot manipulators in order to ensure that a constant level of damping is maintained for all configurations. Unfortunately, there are three significant problems with existing computed torque algorithms. First, they are nonpassive and can lead to unstable behavior; second, they make inefficient use of actuator capability; and third, they cannot be used to maintain a constant end-effector stiffness for force control tasks. Recently, we introduced a new control algorithm for robots which, like computed torque, uses a model of the manipulator's dynamics to maintain a constant level of damping in the system, but does so passively. This new class of passive control algorithms has guaranteed stability properties, utilizes actuators more effectively, and can also be used to maintain constant end-effector stiffness. In this paper, this approach is described in detail, implementation issues are discussed, and simulation results are given. 15 refs., 6 figs., 2 tabs.
NASA Technical Reports Server (NTRS)
Krosel, S. M.; Milner, E. J.
1982-01-01
The application of predictor-corrector integration algorithms developed for the digital parallel processing environment is investigated. The algorithms are implemented and evaluated through the use of a software simulator which provides an approximate representation of the parallel processing hardware. Test cases which focus on the use of the algorithms are presented, and a specific application using a linear model of a turbofan engine is considered. Results are presented showing the effects of integration step size and the number of processors on simulation accuracy. Real-time performance, interprocessor communication, and algorithm startup are also discussed.
Efficient Parallel Algorithm For Direct Numerical Simulation of Turbulent Flows
NASA Technical Reports Server (NTRS)
Moitra, Stuti; Gatski, Thomas B.
1997-01-01
A distributed algorithm for a high-order-accurate finite-difference approach to the direct numerical simulation (DNS) of transition and turbulence in compressible flows is described. This work has two major objectives. The first objective is to demonstrate that parallel and distributed-memory machines can be successfully and efficiently used to solve computationally intensive and input/output intensive algorithms of the DNS class. The second objective is to show that the computational complexity involved in solving the tridiagonal systems inherent in the DNS algorithm can be reduced by algorithm innovations that obviate the need to use a parallelized tridiagonal solver.
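For reference, the serial tridiagonal kernel at the heart of such compact finite-difference schemes is the Thomas algorithm, sketched below in Python; it is exactly this serial dependency that the paper's algorithm innovations aim to avoid parallelizing directly.

```python
import numpy as np

def thomas(a, b, c, d):
    # Thomas algorithm: a = sub-diagonal, b = diagonal, c = super-diagonal,
    # d = right-hand side. O(n) forward elimination + back substitution.
    n = len(d)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# 1D Poisson test: -u'' = 1 on a uniform grid with zero boundary values
n, h = 10, 1.0 / 11
a = np.full(n, -1.0); b = np.full(n, 2.0); c = np.full(n, -1.0)
print(thomas(a, b, c, np.full(n, h * h)))
```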
Multidiscontinuity algorithm for world-line Monte Carlo simulations.
Kato, Yasuyuki
2013-01-01
We introduce a multidiscontinuity algorithm for the efficient global update of world-line configurations in Monte Carlo simulations of interacting quantum systems. This algorithm is a generalization of the two-discontinuity algorithms introduced in Refs. [N. Prokof'ev, B. Svistunov, and I. Tupitsyn, Phys. Lett. A 238, 253 (1998)] and [O. F. Syljuåsen and A. W. Sandvik, Phys. Rev. E 66, 046701 (2002)]. This generalization is particularly effective for studying Bose-Einstein condensates (BECs) of composite particles. In particular, we demonstrate the utility of the generalized algorithm by simulating a Hamiltonian for an S=1 antiferromagnet with strong uniaxial single-ion anisotropy. The multidiscontinuity algorithm not only solves the freezing problem that arises in this limit, but also allows the efficient computing of the off-diagonal correlator that characterizes a BEC of composite particles.
A Coulomb collision algorithm for weighted particle simulations
NASA Technical Reports Server (NTRS)
Miller, Ronald H.; Combi, Michael R.
1994-01-01
A binary Coulomb collision algorithm is developed for weighted particle simulations employing Monte Carlo techniques. Charged particles within a given spatial grid cell are pair-wise scattered, explicitly conserving momentum and implicitly conserving energy. A similar algorithm developed by Takizuka and Abe (1977) conserves momentum and energy provided the particles are unweighted (each particle representing equal fractions of the total particle density). If applied as is to simulations incorporating weighted particles, the plasma temperatures equilibrate to an incorrect temperature, as compared to theory. Using the appropriate pairing statistics, a Coulomb collision algorithm is developed for weighted particles. The algorithm conserves energy and momentum and produces the appropriate relaxation time scales as compared to theoretical predictions. Such an algorithm is necessary for future work studying self-consistent multi-species kinetic transport.
Fully explicit algorithms for fluid simulation
NASA Astrophysics Data System (ADS)
Clausen, Jonathan
2011-11-01
Computing hardware is trending towards distributed, massively parallel architectures in order to achieve high computational throughput. For example, Intrepid at Argonne uses 163,840 cores, and next generation machines, such as Sequoia at Lawrence Livermore, will use over one million cores. Harnessing the increasingly parallel nature of computational resources will require algorithms that scale efficiently on these architectures. The advent of GPU-based computation will serve to accelerate this behavior, as a single GPU contains hundreds of processor "cores." Explicit algorithms avoid the communication associated with a linear solve, thus parallel scalability of these algorithms is typically high. This work will explore the efficiency and accuracy of three explicit solution methodologies for the Navier-Stokes equations: traditional artificial compressibility schemes, the lattice-Boltzmann method, and the recently proposed kinetically reduced local Navier-Stokes equations [Borok, Ansumali, and Karlin (2007)]. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
Domain splitting algorithms for the Li-ion battery simulation
NASA Astrophysics Data System (ADS)
Iliev, O.; Zakharov, P. E.
2016-11-01
Numerical simulation of electrochemical processes in rechargeable batteries has important applications in energy technology. In this paper we develop and compare three domain splitting algorithms for Li-ion battery simulation. The simulation is based on a microscopic model containing nonlinear equations for the Li-ion concentration and the potential. On the interface between the electrodes and the electrolyte, the intercalation of lithium ions is described by a nonlinear equation. This nonlinear interface condition affects the Newton-method iterations and the computation time. To simplify the numerical simulations we use domain splitting algorithms, which split the original problem into three independent subproblems in the two electrodes and the electrolyte. We investigate the numerical convergence and efficiency of the algorithms on a 2D model problem.
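The flavor of a Schwarz-type domain splitting can be shown on a linear 1D model problem: each sweep solves one subdomain with Dirichlet data taken from the other subdomain's latest iterate. The Python sketch below uses -u'' = 1 as a stand-in; the battery model's nonlinear interface condition is not reproduced.

```python
import numpy as np

def schwarz_1d(n=41, overlap=6, iters=30):
    # Alternating Schwarz on -u'' = 1, u(0) = u(1) = 0,
    # split into two overlapping subdomains.
    h = 1.0 / (n - 1)
    u = np.zeros(n)
    mid = n // 2
    left = slice(0, mid + overlap)    # subdomain 1
    right = slice(mid - overlap, n)   # subdomain 2

    def solve(sub):
        m = sub.stop - sub.start
        A = (np.diag(np.full(m - 2, 2.0)) -
             np.diag(np.ones(m - 3), 1) - np.diag(np.ones(m - 3), -1))
        rhs = np.full(m - 2, h * h)
        rhs[0] += u[sub.start]        # Dirichlet data from current iterate
        rhs[-1] += u[sub.stop - 1]
        u[sub.start + 1:sub.stop - 1] = np.linalg.solve(A, rhs)

    for _ in range(iters):
        solve(left)
        solve(right)
    return u

u = schwarz_1d()
x = np.linspace(0, 1, len(u))
print(np.max(np.abs(u - 0.5 * x * (1 - x))))  # error vs exact solution
```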
The VIIRS Ocean Data Simulator Enhancements and Results
NASA Technical Reports Server (NTRS)
Robinson, Wayne D.; Patt, Fredrick S.; Franz, Bryan A.; Turpie, Kevin R.; McClain, Charles R.
2011-01-01
The VIIRS Ocean Science Team (VOST) has been developing an Ocean Data Simulator to create realistic VIIRS SDR datasets based on MODIS water-leaving radiances. The simulator is helping to assess instrument performance and scientific processing algorithms. Several changes were made in the last two years to complete the simulator and broaden its usefulness. The simulator is now fully functional and includes all sensor characteristics measured during prelaunch testing, including electronic and optical crosstalk influences, polarization sensitivity, and relative spectral response. Also included is the simulation of cloud and land radiances to make more realistic data sets and to understand their important influence on nearby ocean color data. The atmospheric tables used in the processing, including aerosol and Rayleigh reflectance coefficients, have been modeled using VIIRS relative spectral responses. The capabilities of the simulator were expanded to work in an unaggregated sample mode and to produce scans with additional samples beyond the standard scan. These features improve the capability to realistically add artifacts which act upon individual instrument samples prior to aggregation and which may originate from beyond the actual scan boundaries. The simulator was expanded to simulate all 16 M-bands and the EDR processing was improved to use these bands to make an SST product. The simulator is being used to generate global VIIRS data from and in parallel with the MODIS Aqua data stream. Studies have been conducted using the simulator to investigate the impact of instrument artifacts. This paper discusses the simulator improvements and results from the artifact impact studies.
Milestone M4900: Simulant Mixing Analytical Results
Kaplan, D.I.
2001-07-26
This report addresses Milestone M4900, ''Simulant Mixing Sample Analysis Results,'' and contains the data generated during the ''Mixing of Process Heels, Process Solutions, and Recycle Streams: Small-Scale Simulant'' task. The Task Technical and Quality Assurance Plan for this task is BNF-003-98-0079A. A report with a narrative description and discussion of the data will be issued separately.
An Improved SoC Test Scheduling Method Based on Simulated Annealing Algorithm
NASA Astrophysics Data System (ADS)
Zheng, Jingjing; Shen, Zhihang; Gao, Huaien; Chen, Bianna; Zheng, Weida; Xiong, Xiaoming
2017-02-01
In this paper, we propose an improved SoC test scheduling method based on the simulated annealing algorithm (SA). The method first perturbs the IP core assignment of each TAM to produce a new solution for SA, then allocates the TAM width for each TAM using a greedy algorithm and calculates the corresponding testing time; the core assignment is accepted according to the simulated annealing criterion, and the optimum solution is finally attained. We ran test scheduling experiments on the international reference circuits provided by the International Test Conference 2002 (ITC’02), and the results show that our algorithm is superior to the conventional integer linear programming algorithm (ILP), the simulated annealing algorithm (SA), and the genetic algorithm (GA). When the TAM width reaches 48, 56, and 64, the testing time of our algorithm is less than that of the classic methods, with optimization rates of 30.74%, 3.32%, and 16.13%, respectively. Moreover, the testing time of our algorithm is very close to that of the improved genetic algorithm (IGA), which is the current state of the art.
Convergence Results on Iteration Algorithms to Linear Systems
Wang, Zhuande; Yang, Chuansheng; Yuan, Yubo
2014-01-01
In order to solve large-scale linear systems, backward and Jacobi iteration algorithms are employed. Convergence is the most important issue. In this paper, a unified backward iterative matrix is proposed, and it is shown that some well-known iterative algorithms can be deduced from it. The most important contribution is that the convergence results have been proved. First, the spectral radius of the Jacobi iterative matrix is positive and that of the backward iterative matrix is strongly positive (larger than a positive constant). Second, the two iterations have the same convergence behavior (they converge or diverge simultaneously). Finally, numerical experiments show that the proposed algorithms are correct and have the merit of backward methods. PMID:24991640
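The Jacobi half of the story is compactly expressible: the iteration converges for any starting vector exactly when the spectral radius of the iteration matrix is below one. A minimal Python sketch that runs the iteration and reports that spectral radius (the example matrix is ours):

```python
import numpy as np

def jacobi(A, b, iters=100):
    # Jacobi iteration x <- D^{-1}(b - (A - D)x); convergence is
    # equivalent to rho(D^{-1}(D - A)) < 1.
    D = np.diag(A)
    R = A - np.diag(D)
    M = -R / D[:, None]                     # iteration matrix D^{-1}(D - A)
    rho = max(abs(np.linalg.eigvals(M)))
    x = np.zeros_like(b)
    for _ in range(iters):
        x = (b - R @ x) / D
    return x, rho

A = np.array([[4.0, 1.0, 0.0], [1.0, 4.0, 1.0], [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
x, rho = jacobi(A, b)
print(x, "spectral radius:", rho)  # rho < 1, so the iteration converges
```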
Research on coal-mine gas monitoring system controlled by annealing simulating algorithm
NASA Astrophysics Data System (ADS)
Zhou, Mengran; Li, Zhenbi
2007-12-01
This paper introduces the principle and schematic diagram of a gas monitoring system based on the infrared method. A simulated annealing algorithm is adopted to find the global optimum solution, and the Metropolis criterion is used for iterative combinatorial optimization with a decreasing control parameter, aiming at solving a large-scale combinatorial optimization problem. Experimental results obtained with the algorithm training scheme and flow indicate that simulated annealing applied to gas identification is better than the traditional linear local search method. It makes the algorithm iterate to the optimum value rapidly, so that the quality of the solution is improved efficiently, the CPU time is shortened, and the gas identification rate is increased. For mines with a high risk of gas outbursts, advance forecasting of regional danger and disaster can be realized, improving the reliability of coal-mine safety.
An algorithm to build mock galaxy catalogues using MICE simulations
NASA Astrophysics Data System (ADS)
Carretero, J.; Castander, F. J.; Gaztañaga, E.; Crocce, M.; Fosalba, P.
2015-02-01
We present a method to build mock galaxy catalogues starting from a halo catalogue that uses halo occupation distribution (HOD) recipes as well as the subhalo abundance matching (SHAM) technique. Combining both prescriptions we are able to push the absolute magnitude of the resulting catalogue to fainter luminosities than using just the SHAM technique and can interpret our results in terms of the HOD modelling. We optimize the method by populating with galaxies friends-of-friends dark matter haloes extracted from the Marenostrum Institut de Ciències de l'Espai dark matter simulations and comparing them to observational constraints. Our resulting mock galaxy catalogues manage to reproduce the observed local galaxy luminosity function and the colour-magnitude distribution as observed by the Sloan Digital Sky Survey. They also reproduce the observed galaxy clustering properties as a function of luminosity and colour. In order to achieve that, the algorithm also includes scatter in the halo mass-galaxy luminosity relation derived from direct SHAM and a modified Navarro-Frenk-White mass density profile to place satellite galaxies in their host dark matter haloes. Improving on general usage of the HOD that fits the clustering for given magnitude limited samples, our catalogues are constructed to fit observations at all luminosities considered and therefore for any luminosity subsample. Overall, our algorithm is an economic procedure of obtaining galaxy mock catalogues down to faint magnitudes that are necessary to understand and interpret galaxy surveys.
X-ray simulation algorithms used in ISP
Sullivan, John P.
2016-07-29
ISP is a simulation code which is sometimes used in the USNDS program. ISP is maintained by Sandia National Lab. However, the X-ray simulation algorithm used by ISP was written by scientists at LANL – mainly by Ed Fenimore, with some contributions from John Sullivan, George Neuschaefer, and probably others. In email to John Sullivan on July 25, 2016, Jill Rivera, ISP project lead, said “ISP uses the function xdosemeters_sim from the xgen library.” This is a Fortran subroutine which is also used to simulate the X-ray response in consim (a descendant of xgen). Therefore, no separate documentation of the X-ray simulation algorithms in ISP has been written; the documentation for the consim simulation can be used.
Improved delay-leaping simulation algorithm for biochemical reaction systems with delays
NASA Astrophysics Data System (ADS)
Yi, Na; Zhuang, Gang; Da, Liang; Wang, Yifei
2012-04-01
In biochemical reaction systems dominated by delays, the simulation speed of the stochastic simulation algorithm depends on the size of the wait queue. As a result, it is important to control the size of the wait queue to improve the efficiency of the simulation. An improved accelerated delay stochastic simulation algorithm for biochemical reaction systems with delays, termed the improved delay-leaping algorithm, is proposed in this paper. The update method for the wait queue is effective in reducing the size of the queue as well as shortening the storage and access time, thereby accelerating the simulation. Numerical simulation of two examples indicates that this method not only achieves significantly higher efficiency than existing methods but can also be widely applied to biochemical reaction systems with delays.
Wang, Zhiteng; Zhang, Hongjun; Zhang, Rui; Li, Yong; Zhang, Xuliang
2014-01-01
Service-oriented modeling and simulation are hot issues in the field of modeling and simulation, and service resources need to be invoked while a simulation task workflow is running. How to optimize the allocation of service resources so that tasks complete effectively is an important issue in this area. In the field of military modeling and simulation, it is important to improve the probability of success and the timeliness of simulation task workflows. Therefore, this paper proposes an optimization algorithm for multipath parallel allocation of service resources, in which a multipath service resource parallel allocation model is built and a multiple-chain-coding quantum optimization algorithm is used for its solution. The multiple-chain coding scheme extends the parallel search space to improve search efficiency. Through simulation experiments, this paper investigates the effect of different optimization algorithms, service allocation strategies, and path numbers on the probability of success of the simulation task workflow; the results show that the proposed algorithm is an effective method to improve the probability of success and timeliness of simulation task workflows.
Analogue Simulation and Orbital Solving Algorithm of Astrometric Exoplanet Detection
NASA Astrophysics Data System (ADS)
Huang, P. H.; Ji, J. H.
2016-09-01
Astrometry is an effective method to detect exoplanets. It has many advantages that other detection methods do not, such as providing the three-dimensional planetary orbit and determining the planetary mass. Astrometry will enrich the sample of exoplanets. With the launch of the high-precision astrometric satellite Gaia (Global Astrometry Interferometer for Astrophysics) in 2013, abundant long-period Jupiter-sized planets are expected to be discovered by Gaia. In this paper, we consider the α Centauri A, HD 62509, and GJ 876 systems and generate synthetic astrometric data with the single-measurement precision of Gaia. We then use the Lomb-Scargle periodogram to detect the planetary signatures and a Markov chain Monte Carlo (MCMC) algorithm to fit the planetary orbits. The simulation results coincide well with the initial solutions.
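A minimal version of the period-search step can be written with astropy's Lomb-Scargle implementation, assuming the astropy package is available; the synthetic residuals below are purely illustrative and far simpler than Gaia-like astrometry.

```python
import numpy as np
from astropy.timeseries import LombScargle

rng = np.random.default_rng(7)

# Hypothetical astrometric residuals: a 400-day sinusoid of 30 microarcsec
# amplitude sampled at irregular epochs, plus Gaussian noise.
t = np.sort(rng.uniform(0, 1800, 70))               # days
signal = 30e-6 * np.sin(2 * np.pi * t / 400.0)      # arcsec
y = signal + rng.normal(0, 10e-6, t.size)

freq, power = LombScargle(t, y).autopower()
print("recovered period [d]:", 1.0 / freq[np.argmax(power)])
```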
Thermal Performance Simulation of MWNT/NR composites Based on Levenberg-Marquardt Algorithm
NASA Astrophysics Data System (ADS)
Yu, Z. Z.; Liu, J. S.
2017-02-01
In this paper, the Levenberg-Marquardt algorithm was used to simulate the thermal performance of an aligned carbon-nanotube-filled rubber composite, and the effects of temperature, filler loading, MWNT orientation, and other factors on thermal performance were studied. The results showed that MWNT orientation can greatly improve the thermal conductivity of the composite, and that the improvement from overall orientation is higher than that from local orientation. The volume fraction also affects thermal performance: the thermal conductivity increases with increasing volume fraction. Temperature had no significant effect on the thermal conductivity. The simulation results correlated well with the experimental results, showing that the simulation algorithm is effective and feasible.
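As an illustration of the fitting machinery, the Python sketch below uses SciPy's curve_fit with method="lm", which dispatches to MINPACK's Levenberg-Marquardt implementation. The effective-conductivity model, data, and parameter names are hypothetical stand-ins, not the paper's model.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical effective-thermal-conductivity model: a linear-in-phi law
# with an orientation factor, chosen purely for illustration.
def k_eff(phi, k_matrix, eta):
    return k_matrix * (1.0 + eta * phi)

phi = np.linspace(0.0, 0.10, 11)                    # MWNT volume fraction
k_true = k_eff(phi, 0.15, 40.0)
k_meas = k_true + np.random.default_rng(5).normal(0, 0.01, phi.size)

# method="lm" selects the Levenberg-Marquardt solver
popt, pcov = curve_fit(k_eff, phi, k_meas, p0=[0.1, 10.0], method="lm")
print("fitted k_matrix, eta:", popt)
```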
Preliminary results from the ASF/GPS ice classification algorithm
NASA Technical Reports Server (NTRS)
Cunningham, G.; Kwok, R.; Holt, B.
1992-01-01
The European Space Agency's Remote Sensing Satellite (ERS-1) carried a C-band synthetic aperture radar (SAR) to study the earth's polar regions. The radar returns from sea ice can be used to infer properties of the ice, including ice type. An algorithm has been developed for the Alaska SAR Facility (ASF)/Geophysical Processor System (GPS) to infer ice type from SAR observations over sea ice and open water. The algorithm utilizes look-up tables containing expected backscatter values from various ice types. An analysis has been made of two overlapping strips with 14 SAR images. The backscatter values of specific ice regions were sampled to study the backscatter characteristics of the ice in time and space. Results show both stability of the backscatter values in time and a good separation of multiyear and first-year ice signals, verifying the approach used in the classification algorithm.
Evaluation of registration, compression and classification algorithms. Volume 1: Results
NASA Technical Reports Server (NTRS)
Jayroe, R.; Atkinson, R.; Callas, L.; Hodges, J.; Gaggini, B.; Peterson, J.
1979-01-01
The registration, compression, and classification algorithms were selected on the basis that such a group would include most of the different and commonly used approaches. The results of the investigation indicate clearcut, cost effective choices for registering, compressing, and classifying multispectral imagery.
Gotway, C.A.; Rutherford, B.M.
1993-09-01
Stochastic simulation has been suggested as a viable method for characterizing the uncertainty associated with the prediction of a nonlinear function of a spatially-varying parameter. Geostatistical simulation algorithms generate realizations of a random field with specified statistical and geostatistical properties. A nonlinear function is evaluated over each realization to obtain an uncertainty distribution of a system response that reflects the spatial variability and uncertainty in the parameter. Crucial management decisions, such as potential regulatory compliance of proposed nuclear waste facilities and optimal allocation of resources in environmental remediation, are based on the resulting system response uncertainty distribution. Many geostatistical simulation algorithms have been developed to generate the random fields, and each algorithm will produce fields with different statistical properties. These different properties will result in different distributions for system response, and potentially, different managerial decisions. The statistical properties of the resulting system response distributions are not completely understood, nor is the ability of the various algorithms to generate response distributions that adequately reflect the associated uncertainty. This paper reviews several of the algorithms available for generating random fields. Algorithms are compared in a designed experiment using seven exhaustive data sets with different statistical and geostatistical properties. For each exhaustive data set, a number of realizations are generated using each simulation algorithm. The realizations are used with each of several deterministic transfer functions to produce a cumulative uncertainty distribution function of a system response. The uncertainty distributions are then compared to the single value obtained from the corresponding exhaustive data set.
Forced detection Monte Carlo algorithms for accelerated blood vessel image simulations.
Fredriksson, Ingemar; Larsson, Marcus; Strömberg, Tomas
2009-03-01
Two forced detection (FD) variance reduction Monte Carlo algorithms for image simulations of tissue-embedded objects with matched refractive index are presented. The principle of the algorithms is to force a fraction of the photon weight to the detector at each and every scattering event. The fractional weight is given by the probability for the photon to reach the detector without further interactions. Two imaging setups are applied to a tissue model including blood vessels, where the FD algorithms produce results identical to traditional brute-force simulations while running two orders of magnitude faster. Extending the methods to include refractive-index mismatches is discussed.
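The scoring step the abstract describes can be sketched as follows: at every scattering event, a copy of the photon weight, attenuated by the probability of reaching the detector without further interaction, is tallied, while the photon itself continues its walk. The one-dimensional slab geometry and optical coefficients below are simplifying assumptions, far cruder than the paper's imaging setups.

```python
# Forced-detection scoring sketch in a 1-D slab (crude stand-in for the paper's setup).
import numpy as np

rng = np.random.default_rng(1)
mu_s, mu_a = 10.0, 0.1            # scattering / absorption coefficients (1/mm), invented
mu_t = mu_s + mu_a
depth = 5.0                        # slab thickness (mm); detector plane at z = 0

def run_photon():
    z, w, detected = 0.0, 1.0, 0.0
    for _ in range(1000):
        step = -np.log(1.0 - rng.random()) / mu_t
        z += step * rng.choice([-1.0, 1.0])        # isotropic 1-D direction
        if z < 0.0 or z > depth:
            break                                  # photon leaves the slab
        w *= mu_s / mu_t                           # survive the interaction
        detected += w * np.exp(-mu_t * z)          # forced contribution to detector
    return detected

print("mean detected weight:", np.mean([run_photon() for _ in range(5000)]))
```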
Algorithms for Model Calibration of Ground Water Simulators
2014-11-20
cobian, and Jacobian-vector products are computed with a Monte Carlo simulation. This situation differs from the textbook case [5] in that one does not... Anderson acceleration is a natural method for multi-physics coupling (for example subsurface flow, chemistry, and heat transfer) when the individual physics... Online publication 7/12/2014. [11] J. Nance and C. T. Kelley, A sparse interpolation algorithm for dynamical simulations in computational chemistry
List-Based Simulated Annealing Algorithm for Traveling Salesman Problem
Zhan, Shi-hua; Lin, Juan; Zhang, Ze-jun
2016-01-01
The simulated annealing (SA) algorithm is a popular intelligent optimization algorithm that has been successfully applied in many fields. Parameter setting is a key factor for its performance, but it is also tedious work. To simplify parameter setting, we present a list-based simulated annealing (LBSA) algorithm to solve the traveling salesman problem (TSP). The LBSA algorithm uses a novel list-based cooling schedule to control the decrease of temperature. Specifically, a list of temperatures is created first, and then the maximum temperature in the list is used by the Metropolis acceptance criterion to decide whether to accept a candidate solution. The temperature list is adapted iteratively according to the topology of the solution space of the problem. The effectiveness and the parameter sensitivity of the list-based cooling schedule are illustrated through benchmark TSP problems. The LBSA algorithm, whose performance is robust over a wide range of parameter values, shows competitive performance compared with some other state-of-the-art algorithms.
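A condensed sketch of the list-based schedule on a random TSP instance appears below: the current temperature is the list maximum, and whenever a worse tour is accepted, the temperature that would have made that acceptance marginal, -Δ/ln r, replaces the maximum, so the schedule adapts itself. The instance size, list length, and 2-opt move are illustrative choices, and the list update is simplified relative to the paper.

```python
# Sketch of list-based simulated annealing (LBSA) for a random TSP instance.
# The list update on uphill accepts is a simplified version of the paper's rule.
import heapq, math, random

random.seed(0)
n = 30
pts = [(random.random(), random.random()) for _ in range(n)]
dist = lambda a, b: math.dist(pts[a], pts[b])
tour_len = lambda t: sum(dist(t[i], t[(i + 1) % n]) for i in range(n))

tour = list(range(n))
cur = best = tour_len(tour)
temps = [-(0.05 + 0.10 * random.random()) for _ in range(20)]  # max-heap via negation
heapq.heapify(temps)

for _ in range(20000):
    t = -temps[0]                                   # current temperature = list maximum
    i, j = sorted(random.sample(range(n), 2))
    cand = tour[:i] + tour[i:j][::-1] + tour[j:]    # 2-opt segment reversal
    delta = tour_len(cand) - cur
    r = max(random.random(), 1e-12)
    if delta < 0 or r < math.exp(-delta / t):
        if delta > 0:                               # adapt the list on uphill accepts
            heapq.heapreplace(temps, -(-delta / math.log(r)))
        tour, cur = cand, cur + delta
        best = min(best, cur)

print("best tour length:", round(best, 3))
```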
Multipole Algorithms for Molecular Dynamics Simulation on High Performance Computers.
NASA Astrophysics Data System (ADS)
Elliott, William Dewey
1995-01-01
A fundamental problem in modeling large molecular systems with molecular dynamics (MD) simulations is the underlying N-body problem of computing the interactions between all pairs of N atoms. The simplest algorithm to compute pair-wise atomic interactions scales in runtime O(N^2), making it impractical for interesting biomolecular systems, which can contain millions of atoms. Recently, several algorithms have become available that solve the N-body problem by computing the effects of all pair-wise interactions while scaling in runtime less than O(N^2). One algorithm, which scales O(N) for a uniform distribution of particles, is called the Greengard-Rokhlin Fast Multipole Algorithm (FMA). This work describes an FMA-like algorithm called the Molecular Dynamics Multipole Algorithm (MDMA). The algorithm contains several features that are new to N-body algorithms. MDMA uses new, efficient series expansion equations to compute general 1/r^n potentials to arbitrary accuracy. In particular, the 1/r Coulomb potential and the 1/r^6 portion of the Lennard-Jones potential are implemented. The new equations are based on multivariate Taylor series expansions. In addition, MDMA uses a cell-to-cell interaction region of cells that is closely tied to worst-case error bounds. The worst-case error bounds for MDMA are also derived in this work; these bounds apply to other multipole algorithms as well. Several implementation enhancements are described which apply to MDMA as well as other N-body algorithms such as FMA and tree codes. The mathematics of the cell-to-cell interactions are converted to the Fourier domain for reduced operation count and faster computation. A relative indexing scheme was devised to locate cells in the interaction region, which allows efficient pre-computation of redundant information and prestorage of much of the cell-to-cell interaction. Also, MDMA was integrated into the MD program SIgMA to demonstrate the performance of the program over
Fast stochastic algorithm for simulating evolutionary population dynamics
NASA Astrophysics Data System (ADS)
Tsimring, Lev; Hasty, Jeff; Mather, William
2012-02-01
Evolution and co-evolution of ecological communities are stochastic processes often characterized by vastly different rates of reproduction and mutation and a coexistence of very large and very small sub-populations of co-evolving species. This creates serious difficulties for accurate statistical modeling of evolutionary dynamics. In this talk, we introduce a new exact algorithm for fast fully stochastic simulations of birth/death/mutation processes. It produces a significant speedup compared to the direct stochastic simulation algorithm in a typical case when the total population size is large and the mutation rates are much smaller than birth/death rates. We illustrate the performance of the algorithm on several representative examples: evolution on a smooth fitness landscape, NK model, and stochastic predator-prey system.
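For orientation, the baseline such methods accelerate is the direct stochastic simulation (Gillespie) algorithm; a minimal two-type birth/death/mutation version is sketched below with invented rates. The authors' algorithm achieves its speedup precisely where this loop becomes slow: large populations and rare mutations.

```python
# Minimal direct (Gillespie) simulation of a two-type birth/death/mutation process.
import numpy as np

rng = np.random.default_rng(2)
b, d, mu = 1.0, 0.9, 1e-3         # birth, death, mutation rates (illustrative)
n = np.array([1000, 0])            # wild-type and mutant counts
t, t_end = 0.0, 5.0

while t < t_end and n.sum() > 0:
    rates = np.concatenate([b * n, d * n, mu * n[:1]])  # births, deaths, 0->1 mutation
    total = rates.sum()
    t += rng.exponential(1.0 / total)
    event = rng.choice(rates.size, p=rates / total)
    if event < 2:
        n[event] += 1                  # birth of the selected type
    elif event < 4:
        n[event - 2] -= 1              # death of the selected type
    else:
        n += np.array([-1, 1])         # mutation: wild-type -> mutant

print("final counts at t =", round(t, 2), ":", n)
```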
Ventricular Fibrillation in Mammalian Hearts: Simulation Results
NASA Astrophysics Data System (ADS)
Fenton, Flavio H.
2002-03-01
The computational approach to understanding the initiation and evolution of cardiac arrhythmias forms a necessary link between experiment and theory. Numerical simulations combine useful mathematical models and complex geometry while offering clean and comprehensive data acquisition, reproducible results that can be compared to experiments, and the flexibility of exploring parameter space systematically. However, because cardiac dynamics occurs on many scales (on the order of 10^9 cells of size 10-100 microns with more than 40 ionic currents and time scales as fast as 0.01ms), roughly 10^17 operations are required to simulate just one second of real time. These intense computational requirements lead to significant implementation challenges even on existing supercomputers. Nevertheless, progress over the last decade in understanding the effects of some spatial scales and spatio-temporal dynamics on cardiac cell and tissue behavior justifies the use of certain simplifications which, along with improved models for cellular dynamics and detailed digital models of cardiac anatomy, are allowing simulation studies of full-size ventricles and atria. We describe this simulation problem from a combined numerical, physical and biological point of view, with an emphasis on the dynamics and stability of scroll waves of electrical activity in mammalian hearts and their relation to tachycardia, fibrillation and sudden death. Detailed simulations of electrical activity in ventricles including complex anatomy, anisotropic fiber structure, and electrophysiological effects of two drugs (DAM and CytoD) are presented and compared with experimental results.
Titan's organic chemistry: Results of simulation experiments
NASA Technical Reports Server (NTRS)
Sagan, Carl; Thompson, W. Reid; Khare, Bishun N.
1992-01-01
Recent low-pressure continuous plasma discharge simulations of the auroral-electron-driven organic chemistry in Titan's mesosphere are reviewed. These simulations yielded results in good accord with Voyager observations of gas-phase organic species. Optical constants of the brownish solid tholins produced in similar experiments are in good accord with Voyager observations of the Titan haze. Titan tholins are rich in prebiotic organic constituents; the Huygens entry probe may shed light on some of the processes that led to the origin of life on Earth.
A global optimization algorithm for simulation-based problems via the extended DIRECT scheme
NASA Astrophysics Data System (ADS)
Liu, Haitao; Xu, Shengli; Wang, Xiaofang; Wu, Junnan; Song, Yang
2015-11-01
This article presents a global optimization algorithm via the extension of the DIviding RECTangles (DIRECT) scheme to handle problems with computationally expensive simulations efficiently. The new optimization strategy improves the regular partition scheme of DIRECT to a flexible irregular partition scheme in order to utilize information from irregular points. The metamodelling technique is introduced to work with the flexible partition scheme to speed up the convergence, which is meaningful for simulation-based problems. Comparative results on eight representative benchmark problems and an engineering application with some existing global optimization algorithms indicate that the proposed global optimization strategy is promising for simulation-based problems in terms of efficiency and accuracy.
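For background, the core DIRECT idea can be sketched in one dimension: repeatedly trisect the interval whose centre value, discounted by a bonus for interval size, looks most promising. The single-weight selection rule below is a deliberate simplification of the true potentially-optimal-interval test, and reproduces nothing of the paper's irregular partitioning or metamodelling.

```python
# Simplified 1-D DIRECT-style search: trisect the most promising interval.
# A single size-bonus weight K replaces the full potentially-optimal test.
import math

f = lambda x: math.sin(x) + math.sin(10.0 * x / 3.0)   # standard test objective
intervals = [(2.7, 7.5)]                                # search domain
K = 0.5                                                 # assumed size-bonus weight

for _ in range(40):
    a, b = min(intervals,
               key=lambda ab: f(0.5 * (ab[0] + ab[1])) - K * (ab[1] - ab[0]))
    intervals.remove((a, b))
    third = (b - a) / 3.0
    intervals += [(a, a + third), (a + third, b - third), (b - third, b)]

x_best = min((0.5 * (a + b) for a, b in intervals), key=f)
print("approximate minimizer:", round(x_best, 4), "value:", round(f(x_best), 4))
```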
A novel wavefront-based algorithm for numerical simulation of quasi-optical systems
NASA Astrophysics Data System (ADS)
Zhang, Xiaoling; Lou, Zheng; Hu, Jie; Zhou, Kangmin; Zuo, Yingxi; Shi, Shengcai
2016-11-01
A novel wavefront-based algorithm for the beam simulation of both reflective and refractive optics in a complicated quasi-optical system is proposed. The algorithm can be regarded as an extension of the conventional Physical Optics algorithm to handle dielectrics. Internal reflections are modeled in an accurate fashion, and coatings and lossy materials can be treated in a straightforward manner. A parallel implementation of the algorithm has been developed, and numerical examples show that the algorithm yields sufficient accuracy by comparison with experimental results, while the computational complexity is much less than that of full-wave methods. The algorithm offers an alternative approach to the modeling of quasi-optical systems in addition to Geometrical Optics modeling and full-wave methods.
Understanding disordered systems through numerical simulation and algorithm development
NASA Astrophysics Data System (ADS)
Sweeney, Sean Michael
Disordered systems arise in many physical contexts. Not all matter is uniform, and impurities or heterogeneities can be modeled by fixed random disorder. Numerous complex networks also possess fixed disorder, leading to applications in transportation systems, telecommunications, social networks, and epidemic modeling, to name a few. Due to their random nature and power law critical behavior, disordered systems are difficult to study analytically. Numerical simulation can help overcome this hurdle by allowing for the rapid computation of system states. In order to get precise statistics and extrapolate to the thermodynamic limit, large systems must be studied over many realizations. Thus, innovative algorithm development is essential in order to reduce the memory or running time requirements of simulations. This thesis presents a review of disordered systems, as well as a thorough study of two particular systems through numerical simulation, algorithm development and optimization, and careful statistical analysis of scaling properties. Chapter 1 provides a thorough overview of disordered systems, the history of their study in the physics community, and the development of techniques used to study them. Topics of quenched disorder, phase transitions, the renormalization group, criticality, and scale invariance are discussed. Several prominent models of disordered systems are also explained. Lastly, analysis techniques used in studying disordered systems are covered. In Chapter 2, minimal spanning trees on critical percolation clusters are studied, motivated in part by an analytic perturbation expansion by Jackson and Read that I check against numerical calculations. This system has a direct mapping to the ground state of the strongly disordered spin glass. We compute the path length fractal dimension of these trees in dimensions d = {2, 3, 4, 5} and find our results to be compatible with the analytic results suggested by Jackson and Read. In Chapter 3, the random bond Ising
A performance comparison of integration algorithms in simulating flexible structures
NASA Technical Reports Server (NTRS)
Howe, R. M.
1989-01-01
Asymptotic formulas for the characteristic root errors as well as transfer function gain and phase errors are presented for a number of traditional and new integration methods. Normalized stability regions in the lambda-h plane are compared for the various methods. In particular, it is shown that a modified form of Euler integration with root matching is an especially efficient method for simulating lightly damped structural modes. The method has been used successfully for structural bending modes in the real-time simulation of missiles. Performance of this algorithm is compared with other special algorithms, including the state-transition method. A predictor-corrector version of the modified Euler algorithm permits it to be extended to the simulation of nonlinear models of the type likely to be obtained when using the discretized structure approach. Performance of the different integration methods is also compared for integration step sizes larger than those for which the asymptotic formulas are valid. It is concluded that many traditional integration methods, such as RK-4, are not competitive in the simulation of lightly damped structures.
Time parallelization of plasma simulations using the parareal algorithm
Samaddar, D.; Houlberg, Wayne A; Berry, Lee A; Elwasif, Wael R; Huysmans, G; Batchelor, Donald B
2011-01-01
Simulation of fusion plasmas involves a broad range of timescales. In magnetically confined plasmas, such as in ITER, the timescale associated with the microturbulence responsible for transport and the confinement timescale vary by a factor of 10^6 to 10^9. Simulating this entire range of timescales is currently impossible, even on the most powerful supercomputers available. Space parallelization has so far been the most common approach to solving partial differential equations, but space parallelization alone has led to computational saturation for fluid codes, meaning that the walltime no longer decreases linearly with an increasing number of processors. The application of the parareal algorithm to simulations of fusion plasmas ushers in a new avenue of parallelization, namely temporal parallelization. The algorithm has been successfully applied to plasma turbulence simulations, having previously been applied to other, relatively simpler problems. This work explores the extension of the applicability of the parareal algorithm to ITER-relevant problems, starting with a diffusion-convection model.
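A minimal sketch of the parareal iteration on a scalar decay equation: a cheap coarse propagator G sweeps serially, while an accurate fine propagator F, which in production runs executes in parallel across the time slices, supplies corrections. The propagators and test problem below are illustrative stand-ins.

```python
# Parareal sketch for dy/dt = -lam*y on [0, T]; F would run in parallel over slices.
import numpy as np

lam, T, N = 2.0, 1.0, 10                  # decay rate, horizon, time slices
dT = T / N

def G(y, dt):                             # coarse propagator: one explicit Euler step
    return y * (1.0 - lam * dt)

def F(y, dt, m=100):                      # fine propagator: m Euler substeps
    for _ in range(m):
        y = y * (1.0 - lam * dt / m)
    return y

U = np.zeros(N + 1); U[0] = 1.0
for n in range(N):                        # initial serial coarse sweep
    U[n + 1] = G(U[n], dT)

for k in range(5):                        # parareal correction iterations
    Fv = np.array([F(U[n], dT) for n in range(N)])      # parallelizable stage
    Gv_old = np.array([G(U[n], dT) for n in range(N)])
    for n in range(N):                    # serial update with new coarse values
        U[n + 1] = G(U[n], dT) + Fv[n] - Gv_old[n]

print("parareal:", U[-1], " exact:", np.exp(-lam * T))
```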
Two-Dimensional Inlet Simulation Using a Diagonal Implicit Algorithm
NASA Technical Reports Server (NTRS)
Chaussee, D.S.; Pulliam, T. H.
1981-01-01
A modification of an implicit approximate-factorization finite-difference algorithm applied to the two-dimensional Euler and Navier-Stokes equations in general curvilinear coordinates is presented for supersonic freestream flow about and through inlets. The modification transforms the coupled system of equations into an uncoupled diagonal form which requires less computational work. For steady-state applications the resulting diagonal algorithm retains the stability and accuracy characteristics of the original algorithm. Solutions are given for inviscid and laminar flow about a two-dimensional wedge inlet configuration. Comparisons are made between computed results and exact theory.
The Effect of Pansharpening Algorithms on the Resulting Orthoimagery
NASA Astrophysics Data System (ADS)
Agrafiotis, P.; Georgopoulos, A.; Karantzalos, K.
2016-06-01
This paper evaluates the geometric effects of pansharpening algorithms on automatically generated DSMs, and thus on the resulting orthoimagery, through a quantitative assessment of the accuracy of the end products. The main motivation is that, for automatically generated Digital Surface Models, an image correlation step is employed to extract correspondences between the overlapping images. Their accuracy and reliability are therefore strictly related to image quality, while pansharpening may result in lower image quality, which may affect the DSM generation and the resulting orthoimage accuracy. To this end, an iterative methodology was applied in order to combine the process described by Agrafiotis and Georgopoulos (2015) with different pansharpening algorithms and check the accuracy of orthoimagery resulting from pansharpened data. Results are thoroughly examined and statistically analysed. The overall evaluation indicated that the pansharpening process did not affect the geometric accuracy of the resulting DSM with a 10 m interval, nor the resulting orthoimagery. Although some residuals in the orthoimages were observed, their magnitude cannot adversely affect the accuracy of the final orthoimagery.
A fast 3D image simulation algorithm of moving target for scanning laser radar
NASA Astrophysics Data System (ADS)
Li, Jicheng; Shi, Zhiguang; Chen, Xiao; Chen, Dong
2014-10-01
Scanning laser radar has been widely used in many military and civil areas. Usually there is relative movement between the target and the radar, so moving-target image modeling and simulation is an important research topic in the signal processing and system design of scan-imaging laser radar. In order to improve simulation speed while preserving the accuracy of the image simulation, a novel fast simulation algorithm is proposed in this paper. First, for a moving target or varying scene, an inequality that judges the intersection relations between a pixel and the target bins is obtained by deriving the projection of the target motion trajectories on the image plane. Then, by utilizing time subdivision and approximate treatments, the potential intersection relations of pixels and target bins are determined. Finally, the number of intersection operations is reduced by testing all the potential relations and finding which of them are real intersections. To test the method's performance, we performed computer simulations of both the newly proposed algorithm and an algorithm from the literature for six targets. The simulation results show that the two algorithms yield the same imaging result, whereas the former requires only about 1% as many intersection operations as the latter, a roughly hundredfold gain in calculation efficiency. The acceleration idea can be applied in other, more complex application environments with similar benefit, and it is particularly suitable for producing very large numbers of laser radar images.
Molecular dynamics algorithm enforcing energy conservation for microcanonical simulations.
Salueña, Clara; Avalos, Josep Bonet
2014-05-01
A reversible algorithm, enforced energy conservation (EEC), that enforces total energy conservation for microcanonical simulations is presented. The key point is the introduction of the discrete-gradient method to define the forces from the conservative potentials, instead of the direct use of the force field at the actual position of the particle. We have studied the performance and accuracy of the EEC in two cases, namely a Lennard-Jones fluid and a simple electrolyte model. Truncated potentials, which usually induce inaccuracies in energy conservation, are used; in particular, the reaction-field approach is used in the latter. The EEC is able to preserve energy conservation for a long time and, in addition, performs better than the Verlet algorithm for these kinds of simulations.
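The discrete-gradient idea is easy to state in one dimension: replace the force -V'(x) by the difference quotient -(V(x_{n+1}) - V(x_n))/(x_{n+1} - x_n), which makes the discrete energy balance exact by construction; the resulting implicit step is solved by fixed-point iteration. The quartic potential and parameters below are illustrative, not the paper's fluid or electrolyte models.

```python
# 1-D discrete-gradient integrator sketch: energy-exact by construction.
# Quartic potential and step size are illustrative assumptions.
import numpy as np

V = lambda x: 0.25 * x**4
x, v, dt, m = 1.0, 0.5, 0.05, 1.0
E0 = 0.5 * m * v**2 + V(x)

for _ in range(2000):
    xn, vn = x + dt * v, v                  # predictor for the implicit step
    for _ in range(50):                     # fixed-point iteration
        dg = (V(xn) - V(x)) / (xn - x) if xn != x else 0.0  # discrete gradient
        vn = v - dt * dg / m
        xn = x + 0.5 * dt * (v + vn)
    x, v = xn, vn

E = 0.5 * m * v**2 + V(x)
print("relative energy drift:", (E - E0) / E0)
```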
Improved Contact Algorithms for Implicit FE Simulation of Sheet Forming
NASA Astrophysics Data System (ADS)
Zhuang, S.; Lee, M. G.; Keum, Y. T.; Wagoner, R. H.
2007-05-01
Implicit finite element simulations of sheet forming processes do not always converge, particularly for complex tool geometries and rapidly changing contact. The SHEET-3 program exhibits remarkable stability and strong convergence by use of its special N-CFS algorithm and a sheet normal defined by the mesh, but these features alone do not always guarantee convergence and accuracy. An improved contact capability within the N-CFS algorithm is formulated taking into account sheet thickness within the framework of shell elements. Two imaginary surfaces offset from the mid-plane of shell elements are implemented along the mesh normal direction. An efficient contact searching algorithm based on the mesh-patch tool description is formulated along the mesh normal direction. The contact search includes a general global searching procedure and a new local searching procedure enforcing the contact condition along the mesh normal direction. The processes of unconstrained cylindrical bending and drawing through a drawbead are simulated to verify the accuracy and convergence of the improved contact algorithm.
Predicting patchy particle crystals: Variable box shape simulations and evolutionary algorithms
NASA Astrophysics Data System (ADS)
Bianchi, Emanuela; Doppelbauer, Günther; Filion, Laura; Dijkstra, Marjolein; Kahl, Gerhard
2012-06-01
We consider several patchy particle models that have been proposed in literature and we investigate their candidate crystal structures in a systematic way. We compare two different algorithms for predicting crystal structures: (i) an approach based on Monte Carlo simulations in the isobaric-isothermal ensemble and (ii) an optimization technique based on ideas of evolutionary algorithms. We show that the two methods are equally successful and provide consistent results on crystalline phases of patchy particle systems.
Advanced Dynamically Adaptive Algorithms for Stochastic Simulations on Extreme Scales
Xiu, Dongbin
2016-06-21
The focus of the project is the development of mathematical methods and high-performance computational tools for stochastic simulations, with a particular emphasis on computations at extreme scales. The core of the project revolves around the design of highly efficient and scalable numerical algorithms that can adaptively and accurately resolve, in high-dimensional spaces, stochastic problems with limited smoothness, even those containing discontinuities.
SMMR Simulator radiative transfer calibration model. 2: Algorithm development
NASA Technical Reports Server (NTRS)
Link, S.; Calhoon, C.; Krupp, B.
1980-01-01
Passive microwave measurements performed from Earth orbit can be used to provide global data on a wide range of geophysical and meteorological phenomena. A Scanning Multichannel Microwave Radiometer (SMMR) is being flown on the Nimbus-G satellite. The SMMR Simulator duplicates the frequency bands utilized in the spacecraft instruments through an amalgam of radiometer systems. The algorithm developed utilizes data from the fall 1978 NASA CV-990 Nimbus-G underflight test series and subsequent laboratory testing.
Sampling of general correlators in worm-algorithm based simulations
NASA Astrophysics Data System (ADS)
Rindlisbacher, Tobias; Åkerlund, Oscar; de Forcrand, Philippe
2016-08-01
Using the complex ϕ⁴ model as a prototype for a system which is simulated by a worm algorithm, we show that not only the charged correlator ⟨ϕ*(x)ϕ(y)⟩, but also more general correlators such as ⟨|ϕ(x)||ϕ(y)|⟩ or ⟨arg(ϕ(x)) arg(ϕ(y))⟩, as well as condensates like ⟨|ϕ|⟩, can be measured at every step of the Monte Carlo evolution of the worm, instead of only on closed-worm configurations. The method generalizes straightforwardly to other systems simulated by worms, such as spin or sigma models.
Numerical simulations of catastrophic disruption: Recent results
NASA Technical Reports Server (NTRS)
Benz, W.; Asphaug, E.; Ryan, E. V.
1994-01-01
Numerical simulations have been used to study high-velocity two-body impacts. In this paper, a two-dimensional Lagrangian finite difference hydro-code and a three-dimensional smooth particle hydro-code (SPH) are described and initial results reported. These codes can be, and have been, used to make specific predictions about particular objects in our solar system. But more significantly, they allow us to explore a broad range of collisional events. Certain parameters (size, time) can be studied only over a very restricted range within the laboratory; other parameters (initial spin, low gravity, exotic structure or composition) are difficult to study at all experimentally. The outcomes of numerical simulations lead to a more general and accurate understanding of impacts in their many forms.
Reliable prediction of adsorption isotherms via genetic algorithm molecular simulation.
LoftiKatooli, L; Shahsavand, A
2017-01-01
Conventional molecular simulation techniques such as grand canonical Monte Carlo (GCMC) rely strictly on purely random search inside the simulation box for predicting adsorption isotherms. This blind search is usually extremely time-consuming if it is to provide a faithful approximation of the real isotherm, and in some cases it may lead to non-optimal solutions. A novel approach is presented in this article which does not use any of the classical steps of the standard GCMC method, such as displacement, insertion, and removal. The new approach is based on the well-known genetic algorithm to find the optimal configuration for adsorption of any adsorbate on a structured adsorbent under the prevailing pressure and temperature. The proposed approach treats the molecular simulation problem as a global optimization challenge. A detailed flow chart of our so-called genetic algorithm molecular simulation (GAMS) method is presented, which is entirely different from traditional molecular simulation approaches. Three real case studies (for adsorption of CO2 and H2 over various zeolites) are borrowed from the literature to clearly illustrate the superior performance of the proposed method over the standard GCMC technique. For the present method, the average absolute values of percentage errors are around 11% (RHO-H2), 5% (CHA-CO2), and 16% (BEA-CO2), while they were about 70%, 15%, and 40% for the standard GCMC technique, respectively.
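To convey the flavor of treating configuration search as global optimization, the sketch below runs a generic real-coded genetic algorithm that minimizes the energy of a small Lennard-Jones cluster. It is a toy stand-in only: GAMS would evaluate fitness from the adsorbate-adsorbent energetics at the prevailing pressure and temperature, and its operators are not reproduced here.

```python
# Generic real-coded GA minimizing a Lennard-Jones cluster energy (toy stand-in).
import numpy as np

rng = np.random.default_rng(3)
n_atoms, pop_size, gens = 5, 60, 300

def energy(flat):
    p = flat.reshape(n_atoms, 3)
    e = 0.0
    for i in range(n_atoms):
        r = np.linalg.norm(p[i] - p[i + 1:], axis=1)   # pair distances
        e += np.sum(4.0 * (r**-12 - r**-6))
    return e

pop = rng.uniform(-1.5, 1.5, (pop_size, 3 * n_atoms))
for _ in range(gens):
    fit = np.array([energy(ind) for ind in pop])
    parents = pop[np.argsort(fit)[: pop_size // 2]]              # truncation selection
    kids = parents[rng.integers(0, len(parents), pop_size - len(parents))]
    kids = kids + rng.normal(0.0, 0.05, kids.shape)              # Gaussian mutation
    pop = np.vstack([parents, kids])

print("best energy found:", min(energy(ind) for ind in pop))
```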
An improved sink particle algorithm for SPH simulations
NASA Astrophysics Data System (ADS)
Hubber, D. A.; Walch, S.; Whitworth, A. P.
2013-04-01
Numerical simulations of star formation frequently rely on the implementation of sink particles: (a) to avoid expending computational resource on the detailed internal physics of individual collapsing protostars, (b) to derive mass functions, binary statistics and clustering kinematics (and hence to make comparisons with observation), and (c) to model radiative and mechanical feedback; sink particles are also used in other contexts, for example to represent accreting black holes in galactic nuclei. We present a new algorithm for creating and evolving sink particles in smoothed particle hydrodynamic (SPH) simulations, which appears to represent a significant improvement over existing algorithms - particularly in situations where sinks are introduced after the gas has become optically thick to its own cooling radiation and started to heat up by adiabatic compression. (i) It avoids spurious creation of sinks. (ii) It regulates the accretion of matter on to a sink so as to mitigate non-physical perturbations in the vicinity of the sink. (iii) Sinks accrete matter, but the associated angular momentum is transferred back to the surrounding medium. With the new algorithm - and modulo the need to invoke sufficient resolution to capture the physics preceding sink formation - the properties of sinks formed in simulations are essentially independent of the user-defined parameters of sink creation, or the number of SPH particles used.
Photovoltaic-electrolyzer system transient simulation results
Leigh, R.W.; Metz, P.D.; Michalek, K.
1986-05-01
Brookhaven National Laboratory has developed a Hydrogen Technology Evaluation Center to illustrate advanced hydrogen technology. The first phase of this effort investigated the use of solar energy to produce hydrogen from water via photovoltaic-powered electrolysis. A coordinated program of system testing, computer simulation, and economic analysis has been adopted to characterize and optimize the photovoltaic-electrolyzer system. This paper presents the initial transient simulation results. Innovative features of the modeling include the use of real weather data, detailed hourly modeling of thermal characteristics of the PV array and of system control strategies, and examination of systems over a wide range of power and voltage ratings. The transient simulation system TRNSYS was used, incorporating existing, modified or new component subroutines as required. For directly coupled systems, the authors found the PV array voltage which maximizes hydrogen production to be quite near the nominal electrolyzer voltage for a wide range of PV array powers. The array voltage which maximizes excess electricity production is slightly higher. The use of an ideal (100 percent efficient) maximum power tracking system provides only a six percent increase in annual hydrogen production. An examination of the effect of the PV array tilt indicates, as expected, that annual hydrogen production is insensitive to tilt angle within ±20° of latitude. Summer production greatly exceeds winter generation. Tilting the array, even to 90°, produces no significant increase in winter hydrogen production.
Advanced time integration algorithms for dislocation dynamics simulations of work hardening
Sills, Ryan B.; Aghaei, Amin; Cai, Wei
2016-04-25
Efficient time integration is a necessity for dislocation dynamics simulations of work hardening to achieve experimentally relevant strains. In this work, an efficient time integration scheme using a high order explicit method with time step subcycling and a newly-developed collision detection algorithm are evaluated. First, time integrator performance is examined for an annihilating Frank–Read source, showing the effects of dislocation line collision. The integrator with subcycling is found to significantly out-perform other integration schemes. The performance of the time integration and collision detection algorithms is then tested in a work hardening simulation. The new algorithms show a 100-fold speed-up relative to traditional schemes. As a result, subcycling is shown to improve efficiency significantly while maintaining an accurate solution, and the new collision algorithm allows an arbitrarily large time step size without missing collisions.
An Event-Driven Hybrid Molecular Dynamics and Direct Simulation Monte Carlo Algorithm
Donev, A; Garcia, A L; Alder, B J
2007-07-30
A novel algorithm is developed for the simulation of polymer chains suspended in a solvent. The polymers are represented as chains of hard spheres tethered by square wells and interact with the solvent particles with hard-core potentials. The algorithm uses event-driven molecular dynamics (MD) for the simulation of the polymer chain and the interactions between the chain beads and the surrounding solvent particles. The interactions between the solvent particles themselves are not treated deterministically as in event-driven algorithms; rather, the momentum and energy exchange in the solvent is determined stochastically using the Direct Simulation Monte Carlo (DSMC) method. The coupling between the solvent and the solute is consistently represented at the particle level; however, unlike full MD simulations of both the solvent and the solute, the spatial structure of the solvent is ignored. The algorithm is described in detail and applied to the study of the dynamics of a polymer chain tethered to a hard wall subjected to uniform shear. The algorithm closely reproduces full MD simulations with two orders of magnitude greater efficiency. Results do not confirm the existence of periodic (cycling) motion of the polymer chain.
Massively parallel algorithms for trace-driven cache simulations
NASA Technical Reports Server (NTRS)
Nicol, David M.; Greenberg, Albert G.; Lubachevsky, Boris D.
1991-01-01
Trace-driven cache simulation is central to computer design. A trace is a very long sequence of reference lines from main memory. At the t-th instant, reference x_t is hashed into a set of cache locations, the contents of which are then compared with x_t. If at the t-th instant x_t is not present in the cache, it is said to be a miss and is loaded into the cache set, possibly forcing the replacement of some other memory line and making x_t present for the (t+1)-st instant. The problem of parallel simulation of a subtrace of N references directed to a C-line cache set is considered, with the aim of determining which references are misses and related statistics. A simulation method is presented for the Least Recently Used (LRU) policy which, regardless of the set size C, runs in time O(log N) using N processors on the exclusive-read, exclusive-write (EREW) parallel model. A simpler LRU simulation algorithm is given that runs in O(C log N) time using N/log N processors. Timings are presented of the second algorithm's implementation on the MasPar MP-1, a machine with 16384 processors. A broad class of reference-based line replacement policies is considered, which includes LRU as well as the Least Frequently Used and Random replacement policies. A simulation method is presented for any such policy that, on any trace of length N directed to a C-line set, runs in O(C log N) time with high probability using N processors on the EREW model. The algorithms are simple, have very little space overhead, and are well suited for SIMD implementation.
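For orientation, the sequential computation that these parallel methods accelerate is only a few lines: replay the trace against a C-line LRU set and count misses. The synthetic trace and set size below are illustrative.

```python
# Sequential LRU cache-set simulation: the baseline the parallel algorithms speed up.
from collections import OrderedDict
import random

random.seed(4)
C = 4                                                 # lines in the cache set
trace = [random.randrange(10) for _ in range(100)]    # synthetic reference subtrace

cache, misses = OrderedDict(), 0
for x in trace:
    if x in cache:
        cache.move_to_end(x)              # hit: mark as most recently used
    else:
        misses += 1
        if len(cache) == C:
            cache.popitem(last=False)     # evict the least recently used line
        cache[x] = None

print(f"{misses} misses out of {len(trace)} references")
```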
Fast Plasma Instrument for MMS: Simulation Results
NASA Technical Reports Server (NTRS)
Figueroa-Vinas, Adolfo; Adrian, Mark L.; Lobell, James V.; Simpson, David G.; Barrie, Alex; Winkert, George E.; Yeh, Pen-Shu; Moore, Thomas E.
2008-01-01
The Magnetospheric Multiscale (MMS) mission will study small-scale reconnection structures and their rapid motions from closely spaced platforms using instruments capable of high angular, energy, and time resolution measurements. The Dual Electron Spectrometer (DES) of the Fast Plasma Instrument (FPI) for MMS meets these demanding requirements by acquiring the electron velocity distribution functions (VDFs) for the full sky with high-resolution angular measurements every 30 ms. This will provide unprecedented access to electron-scale dynamics within the reconnection diffusion region. The DES consists of eight half-top-hat energy analyzers, each with a 6 deg. x 11.25 deg. field of view (FOV). Full-sky coverage is achieved by electrostatically stepping the FOV of each of the eight sensors through four discrete deflection look directions. Data compression and burst memory management will provide approximately 30 minutes of high time resolution data during each orbit of the four MMS spacecraft. Each spacecraft will intelligently downlink the data sequences that contain the greatest amount of temporal structure. Here we present the results of a simulation of the DES analyzer measurements, data compression and decompression, and ground-based analysis, using reprocessed Cluster/PEACE electron measurements as a seed. The Cluster/PEACE electron measurements have been reprocessed through virtual DES analyzers with their proper geometrical, energy, and timing scale factors and re-mapped via interpolation to the DES angular and energy phase-space sampling measurements. The results of the simulated DES measurements are analyzed, and the full moments of the simulated VDFs are compared with those obtained from the Cluster/PEACE spectrometer using a standard quadrature moment method, a newly implemented spectral spherical harmonic method, and a singular value decomposition method. Our preliminary moment calculations show a remarkable agreement within the uncertainties of the measurements, with the
NASA Astrophysics Data System (ADS)
O'Malley, Peter; Babbush, Ryan; Kivlichan, Ian; Romero, Jhonathan; McClean, Jarrod; Tranter, Andrew; Barends, Rami; Kelly, Julian; Chen, Yu; Chen, Zijun; Jeffrey, Evan; Fowler, Austin; Megrant, Anthony; Mutus, Josh; Neill, Charles; Quintana, Christopher; Roushan, Pedram; Sank, Daniel; Vainsencher, Amit; Wenner, James; White, Theodore; Love, Peter; Aspuru-Guzik, Alan; Neven, Hartmut; Martinis, John
Quantum simulations of molecules have the potential to calculate industrially important chemical parameters beyond the reach of classical methods with relatively modest quantum resources. Recent years have seen dramatic progress in both superconducting qubits and quantum chemistry algorithms. Here, we present experimental demonstrations of two fully scalable algorithms for finding the dissociation energy of hydrogen: the variational quantum eigensolver and iterative phase estimation. This represents the first calculation of a dissociation energy to chemical accuracy with a non-precompiled algorithm. These results show the promise of chemistry as the ``killer app'' for quantum computers, even before the advent of full error-correction.
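The variational half of this story can be conveyed with a toy example: classically minimize ⟨ψ(θ)|H|ψ(θ)⟩ over a one-parameter ansatz, with a small Hermitian matrix standing in for the qubit-encoded molecular Hamiltonian. The 2×2 matrix below is invented; the experiment itself works with the two-qubit hydrogen Hamiltonian and evaluates the expectation on hardware.

```python
# Toy variational-eigensolver loop: classical optimization of <psi(theta)|H|psi(theta)>.
import numpy as np
from scipy.optimize import minimize_scalar

H = np.array([[0.5, 0.2],                  # invented 2x2 stand-in "Hamiltonian"
              [0.2, -0.8]])

def expectation(theta):
    psi = np.array([np.cos(theta / 2), np.sin(theta / 2)])   # one-parameter ansatz
    return psi @ H @ psi

res = minimize_scalar(expectation, bounds=(0.0, 2 * np.pi), method="bounded")
print("VQE estimate:", res.fun, " exact ground energy:", np.linalg.eigvalsh(H)[0])
```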
A parallel simulated annealing algorithm for standard cell placement on a hypercube computer
NASA Technical Reports Server (NTRS)
Jones, Mark Howard
1987-01-01
A parallel version of a simulated annealing algorithm is presented which is targeted to run on a hypercube computer. A strategy for mapping the cells in a two dimensional area of a chip onto processors in an n-dimensional hypercube is proposed such that both small and large distance moves can be applied. Two types of moves are allowed: cell exchanges and cell displacements. The computation of the cost function in parallel among all the processors in the hypercube is described along with a distributed data structure that needs to be stored in the hypercube to support parallel cost evaluation. A novel tree broadcasting strategy is used extensively in the algorithm for updating cell locations in the parallel environment. Studies on the performance of the algorithm on example industrial circuits show that it is faster and gives better final placement results than the uniprocessor simulated annealing algorithms. An improved uniprocessor algorithm is proposed which is based on the improved results obtained from parallelization of the simulated annealing algorithm.
An Initial Examination for Verifying Separation Algorithms by Simulation
NASA Technical Reports Server (NTRS)
White, Allan L.; Neogi, Natasha; Herencia-Zapana, Heber
2012-01-01
An open question in algorithms for aircraft is what can be validated by simulation where the simulation shows that the probability of undesirable events is below some given level at some confidence level. The problem is including enough realism to be convincing while retaining enough efficiency to run the large number of trials needed for high confidence. The paper first proposes a goal based on the number of flights per year in several regions. The paper examines the probabilistic interpretation of this goal and computes the number of trials needed to establish it at an equivalent confidence level. Since any simulation is likely to consider the algorithms for only one type of event and there are several types of events, the paper examines under what conditions this separate consideration is valid. This paper is an initial effort, and as such, it considers separation maneuvers, which are elementary but include numerous aspects of aircraft behavior. The scenario includes decisions under uncertainty since the position of each aircraft is only known to the other by broadcasting where GPS believes each aircraft to be (ADS-B). Each aircraft operates under feedback control with perturbations. It is shown that a scenario three or four orders of magnitude more complex is feasible. The question of what can be validated by simulation remains open, but there is reason to be optimistic.
NASA Technical Reports Server (NTRS)
Chen, CHIEN-C.; Hui, Elliot; Okamoto, Garret
1992-01-01
Spatial acquisition using the sun-lit Earth as a beacon source provides several advantages over active beacon-based systems for deep-space optical communication systems. However, since the angular extent of the Earth image is large compared to the laser beam divergence, the acquisition subsystem must be capable of resolving the image to derive the proper pointing orientation. The algorithms used must be capable of deducing the receiver location given the blurring introduced by the imaging optics and the large Earth albedo fluctuation. Furthermore, because of the complexity of modelling the Earth and the tracking algorithms, an accurate estimate of the algorithm accuracy can only be made via simulation using realistic Earth images. An image simulator was constructed for this purpose, and the results of the simulation runs are reported.
Simulation Results Related to Stochastic Electrodynamics
NASA Astrophysics Data System (ADS)
Cole, Daniel C.
2006-01-01
Stochastic electrodynamics (SED) is a classical theory of nature advanced significantly in the 1960s by Trevor Marshall and Timothy Boyer. Since then, SED has continued to be investigated by a very small group of physicists. Early investigations seemed promising, as SED was shown to agree with quantum mechanics (QM) and quantum electrodynamics (QED) for a few linear systems. In particular, agreement was found for the simple harmonic electric dipole oscillator, physical systems composed of such oscillators and interacting electromagnetically, and free electromagnetic fields with boundary conditions imposed such as would enter into Casimir-type force calculations. These results were found to hold for both zero-point and non-zero temperature conditions. However, by the late 1970s and then into the early 1980s, researchers found that when investigating nonlinear systems, SED did not appear to provide agreement with the predictions of QM and QED. Boyer and Cole proposed that the reason for this disagreement is that such nonlinear systems are not sufficiently realistic for describing atomic and molecular physical systems, which should be fundamentally based on the Coulombic binding potential. Analytic attempts on these systems have proven to be most difficult. Consequently, in recent years more attention has been placed on numerically simulating the interaction of a classical electron in a Coulombic binding potential, with classical electromagnetic radiation acting on the classical electron. Good agreement was found for this numerical simulation work as compared with predictions from QM. Here this work is reviewed and possible future directions are discussed. Recent simulation work involving subharmonic resonances for the classical hydrogen atom is also discussed; some of the properties of these subharmonic resonances seem quite interesting and unusual.
Gui, Zhipeng; Yu, Manzhu; Yang, Chaowei; Jiang, Yunfeng; Chen, Songqing; Xia, Jizhe; Huang, Qunying; Liu, Kai; Li, Zhenlong; Hassan, Mohammed Anowarul; Jin, Baoxuan
2016-01-01
Dust storms have serious, disastrous impacts on the environment, human health, and assets. The development and application of dust storm models have contributed significantly to better understanding and prediction of the distribution, intensity, and structure of dust storms. However, dust storm simulation is a data- and computing-intensive process. To improve computing performance, high performance computing has been widely adopted, dividing the entire study area into multiple subdomains and allocating each subdomain to different computing nodes in a parallel fashion. Inappropriate allocation may introduce imbalanced task loads and unnecessary communications among computing nodes. Therefore, allocation is a key factor that may impact the efficiency of the parallel process. An allocation algorithm is expected to consider the computing cost and communication cost for each computing node to minimize total execution time and reduce overall communication cost for the entire simulation. This research introduces three algorithms to optimize the allocation by considering the spatial and communicational constraints: 1) an Integer Linear Programming (ILP) based algorithm from a combinatorial optimization perspective; 2) a K-Means and Kernighan-Lin combined heuristic algorithm (K&K) integrating geometric and coordinate-free methods by merging local and global partitioning; 3) an automatic seeded region growing based geometric and local partitioning algorithm (ASRG). The performance and effectiveness of the three algorithms are compared based on different factors. Further, we adopt the K&K algorithm as the demonstration algorithm for the experiment of dust model simulation with the non-hydrostatic mesoscale model (NMM-dust) and compare the performance with the MPI default sequential allocation. The results demonstrate that the K&K method significantly improves the simulation performance with better subdomain allocation. This method can also be adopted for other relevant atmospheric and numerical
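A minimal sketch of the geometric ingredient shared by the K&K and ASRG strategies: cluster subdomain coordinates with k-means so that spatially adjacent subdomains land on the same node, keeping communication local. The grid and node count are illustrative, and a real allocator would additionally weight cells by computational load, as the abstract emphasizes.

```python
# Sketch: k-means grouping of grid subdomains onto compute nodes (geometric step).
# Grid size and node count are invented; real allocators also balance by load.
import numpy as np
from scipy.cluster.vq import kmeans2

nx, ny, nodes = 12, 8, 4
cells = np.array([(i, j) for i in range(nx) for j in range(ny)], dtype=float)

centroids, labels = kmeans2(cells, nodes, minit="points")
for node in range(nodes):
    print(f"node {node}: {np.sum(labels == node)} subdomains")
```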
Parallel simulated annealing algorithms for cell placement on hypercube multiprocessors
NASA Technical Reports Server (NTRS)
Banerjee, Prithviraj; Jones, Mark Howard; Sargent, Jeff S.
1990-01-01
Two parallel algorithms for standard cell placement using simulated annealing are developed to run on distributed-memory message-passing hypercube multiprocessors. The cells can be mapped in a two-dimensional area of a chip onto processors in an n-dimensional hypercube in two ways, such that both small and large cell exchange and displacement moves can be applied. The computation of the cost function in parallel among all the processors in the hypercube is described, along with a distributed data structure that needs to be stored in the hypercube to support the parallel cost evaluation. A novel tree broadcasting strategy is used extensively for updating cell locations in the parallel environment. A dynamic parallel annealing schedule estimates the errors due to interacting parallel moves and adapts the rate of synchronization automatically. Two novel approaches in controlling error in parallel algorithms are described: heuristic cell coloring and adaptive sequence control.
Medical Simulation Practices 2010 Survey Results
NASA Technical Reports Server (NTRS)
McCrindle, Jeffrey J.
2011-01-01
Medical simulation centers are an essential component of our learning infrastructure to prepare doctors and nurses for their careers. Unlike the military and aerospace simulation industry, very little has been published regarding the best practices currently in use within medical simulation centers. This survey attempts to provide insight into the current simulation practices at medical schools, hospitals, university nursing programs, and community college nursing programs. Students within the MBA program at Saint Joseph's University conducted a survey of medical simulation practices during the summer 2010 semester. A total of 115 institutions responded to the survey. The survey results discuss the overall effectiveness of current simulation centers as well as the tools and techniques used to conduct the simulation activity.
Concurrent Algorithm For Particle-In-Cell Simulations
NASA Technical Reports Server (NTRS)
Liewer, Paulett C.; Decyk, Viktor K.
1990-01-01
Separate decompositions are used for the particle-motion and field calculations. The General Concurrent Particle-in-Cell (GCPIC) algorithm is used to implement particle-in-cell (PIC) computer codes on concurrent processors. Such codes simulate the motions of individual plasma particles (ions and electrons) under the influence of the electromagnetic fields generated by the particles themselves, and are used to study a variety of nonlinear problems in plasma physics, including magnetic and inertial fusion, plasmas in outer space, propagation of electron and ion beams, free-electron lasers, and particle accelerators.
A gene network simulator to assess reverse engineering algorithms.
Di Camillo, Barbara; Toffolo, Gianna; Cobelli, Claudio
2009-03-01
In the context of reverse engineering of biological networks, simulators are helpful to test and compare the accuracy of different reverse-engineering approaches in a variety of experimental conditions. A novel gene-network simulator is presented that resembles some of the main features of transcriptional regulatory networks related to topology, interaction among regulators of transcription, and expression dynamics. The simulator generates network topology according to the current knowledge of biological network organization, including scale-free distribution of the connectivity and clustering coefficient independent of the number of nodes in the network. It uses fuzzy logic to represent interactions among the regulators of each gene, integrated with differential equations to generate continuous data, comparable to real data for variety and dynamic complexity. Finally, the simulator accounts for saturation in the response to regulation and transcription activation thresholds and shows robustness to perturbations. It therefore provides a reliable and versatile test bed for reverse engineering algorithms applied to microarray data. Since the simulator describes regulatory interactions and expression dynamics as two distinct, although interconnected aspects of regulation, it can also be used to test reverse engineering approaches that use both microarray and protein-protein interaction data in the process of learning. A first software release is available at http://www.dei.unipd.it/~dicamill/software/netsim as an R programming language package.
Roland, M; Tjardes, T; Otchwemah, R; Bouillon, B; Diebels, S
2015-04-13
An algorithmic strategy to determine the minimal fusion area of a tibia pseudarthrosis needed to achieve mechanical stability is presented. For this purpose, a workflow suitable for implementation into the clinical routine workup of tibia pseudarthrosis was developed using visual computing algorithms for image segmentation, together with a coarsening protocol to reduce computational effort, resulting in an individualized volume mesh based on computed tomography data. An algorithm is developed that detects the minimal amount of fracture union necessary to allow physiological loading without subjecting the implant to stresses and strains that might result in implant failure. The feasibility of the algorithm in terms of computational effort is demonstrated. Numerical finite element simulations show that the minimal fusion area of a tibia pseudarthrosis can be less than 90% of the full circumferential area, given a defined maximal von Mises stress in the implant of 80% of the total stress arising in a complete pseudarthrosis of the tibia.
NASA Astrophysics Data System (ADS)
Liu, Bing-Yi; Wang, Jun-Yang; Liu, Zhi-Shen
2014-11-01
Spaceborne integrated path differential absorption (IPDA) lidar is an active detection system that is able to perform global CO2 measurements with a high accuracy of 1 ppmv, day and night, over ground and clouds. To evaluate the detection performance of the system, a simulation of the ground return signal and a retrieval algorithm for CO2 concentration are presented in this paper. Ground return signals of spaceborne IPDA lidar under various ground surface reflectivities and atmospheric aerosol optical depths are simulated using given system parameters, standard atmosphere profiles, and the HITRAN database, and can be used as a reference for determining system parameters. The simulated signals are further applied to research on the retrieval algorithm for CO2 concentration. The column-weighted dry-air mixing ratio of CO2, denoted XCO2, is obtained. As the deviations of XCO2 between the initial values used for simulation and the results from the retrieval algorithm are within the expected error ranges, the simulation and retrieval algorithm are shown to be reliable.
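At its core, an IPDA retrieval is a two-wavelength ratio: the differential absorption optical depth (DAOD) follows from the on-line and off-line return powers and pulse energies, and XCO2 is the DAOD divided by the integrated weighting function (IWF). The numbers below are invented for illustration; a real retrieval computes the IWF from pressure, temperature, and humidity profiles with HITRAN cross sections.

```python
# IPDA retrieval sketch: XCO2 = DAOD / IWF, with invented numbers.
import numpy as np

P_on, P_off = 0.82, 1.00      # normalized ground-return powers, on/off line
E_on, E_off = 1.00, 1.00      # transmitted pulse energies
IWF = 2.5e-4                  # integrated weighting function per ppmv (invented)

daod = 0.5 * np.log((P_off * E_on) / (P_on * E_off))  # one-way differential optical depth
xco2 = daod / IWF
print(f"DAOD = {daod:.4f}, XCO2 = {xco2:.1f} ppmv")
```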
NASA Astrophysics Data System (ADS)
Chen, Zaigao; Wang, Jianguo; Wang, Yue; Qiao, Hailiang; Zhang, Dianhui; Guo, Weijie
2013-11-01
An optimal design method for a high-power microwave source using particle simulation and parallel genetic algorithms is presented in this paper. The output power of the high-power microwave device, simulated by the fully electromagnetic particle simulation code UNIPIC, is taken as the fitness function, and float-encoded genetic algorithms are used to optimize the high-power microwave devices. Using this method, we encode the heights of the non-uniform slow wave structure in relativistic backward wave oscillators (RBWO), and optimize the parameters on massively parallel processors. Simulation results demonstrate that we can obtain the optimal parameters of the non-uniform slow wave structure in the RBWO, and that the output microwave power increases by 52.6% after the device is optimized.
Planck 2015 results. XII. Full focal plane simulations
NASA Astrophysics Data System (ADS)
Planck Collaboration; Ade, P. A. R.; Aghanim, N.; Arnaud, M.; Ashdown, M.; Aumont, J.; Baccigalupi, C.; Banday, A. J.; Barreiro, R. B.; Bartlett, J. G.; Bartolo, N.; Battaner, E.; Benabed, K.; Benoît, A.; Benoit-Lévy, A.; Bernard, J.-P.; Bersanelli, M.; Bielewicz, P.; Bock, J. J.; Bonaldi, A.; Bonavera, L.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Boulanger, F.; Bucher, M.; Burigana, C.; Butler, R. C.; Calabrese, E.; Cardoso, J.-F.; Castex, G.; Catalano, A.; Challinor, A.; Chamballu, A.; Chiang, H. C.; Christensen, P. R.; Clements, D. L.; Colombi, S.; Colombo, L. P. L.; Combet, C.; Couchot, F.; Coulais, A.; Crill, B. P.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R. D.; Davis, R. J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Delouis, J.-M.; Désert, F.-X.; Dickinson, C.; Diego, J. M.; Dolag, K.; Dole, H.; Donzelli, S.; Doré, O.; Douspis, M.; Ducout, A.; Dupac, X.; Efstathiou, G.; Elsner, F.; Enßlin, T. A.; Eriksen, H. K.; Fergusson, J.; Finelli, F.; Forni, O.; Frailis, M.; Fraisse, A. A.; Franceschi, E.; Frejsel, A.; Galeotta, S.; Galli, S.; Ganga, K.; Ghosh, T.; Giard, M.; Giraud-Héraud, Y.; Gjerløw, E.; González-Nuevo, J.; Górski, K. M.; Gratton, S.; Gregorio, A.; Gruppuso, A.; Gudmundsson, J. E.; Hansen, F. K.; Hanson, D.; Harrison, D. L.; Henrot-Versillé, S.; Hernández-Monteagudo, C.; Herranz, D.; Hildebrandt, S. R.; Hivon, E.; Hobson, M.; Holmes, W. A.; Hornstrup, A.; Hovest, W.; Huffenberger, K. M.; Hurier, G.; Jaffe, A. H.; Jaffe, T. R.; Jones, W. C.; Juvela, M.; Karakci, A.; Keihänen, E.; Keskitalo, R.; Kiiveri, K.; Kisner, T. S.; Kneissl, R.; Knoche, J.; Kunz, M.; Kurki-Suonio, H.; Lagache, G.; Lamarre, J.-M.; Lasenby, A.; Lattanzi, M.; Lawrence, C. R.; Leonardi, R.; Lesgourgues, J.; Levrier, F.; Liguori, M.; Lilje, P. B.; Linden-Vørnle, M.; Lindholm, V.; López-Caniego, M.; Lubin, P. M.; Macías-Pérez, J. F.; Maggio, G.; Maino, D.; Mandolesi, N.; Mangilli, A.; Maris, M.; Martin, P. G.; Martínez-González, E.; Masi, S.; Matarrese, S.; McGehee, P.; Meinhold, P. R.; Melchiorri, A.; Melin, J.-B.; Mendes, L.; Mennella, A.; Migliaccio, M.; Mitra, S.; Miville-Deschênes, M.-A.; Moneti, A.; Montier, L.; Morgante, G.; Mortlock, D.; Moss, A.; Munshi, D.; Murphy, J. A.; Naselsky, P.; Nati, F.; Natoli, P.; Netterfield, C. B.; Nørgaard-Nielsen, H. U.; Noviello, F.; Novikov, D.; Novikov, I.; Oxborrow, C. A.; Paci, F.; Pagano, L.; Pajot, F.; Paoletti, D.; Pasian, F.; Patanchon, G.; Pearson, T. J.; Perdereau, O.; Perotto, L.; Perrotta, F.; Pettorino, V.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Pietrobon, D.; Plaszczynski, S.; Pointecouteau, E.; Polenta, G.; Pratt, G. W.; Prézeau, G.; Prunet, S.; Puget, J.-L.; Rachen, J. P.; Rebolo, R.; Reinecke, M.; Remazeilles, M.; Renault, C.; Renzi, A.; Ristorcelli, I.; Rocha, G.; Roman, M.; Rosset, C.; Rossetti, M.; Roudier, G.; Rubiño-Martín, J. A.; Rusholme, B.; Sandri, M.; Santos, D.; Savelainen, M.; Scott, D.; Seiffert, M. D.; Shellard, E. P. S.; Spencer, L. D.; Stolyarov, V.; Stompor, R.; Sudiwala, R.; Sutton, D.; Suur-Uski, A.-S.; Sygnet, J.-F.; Tauber, J. A.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Tuovinen, J.; Valenziano, L.; Valiviita, J.; Van Tent, B.; Vielva, P.; Villa, F.; Wade, L. A.; Wandelt, B. D.; Wehus, I. K.; Welikala, N.; Yvon, D.; Zacchei, A.; Zonca, A.
2016-09-01
We present the 8th full focal plane simulation set (FFP8), deployed in support of the Planck 2015 results. FFP8 consists of 10 fiducial mission realizations reduced to 18 144 maps, together with the most massive suite of Monte Carlo realizations of instrument noise and CMB ever generated, comprising 10^4 mission realizations reduced to about 10^6 maps. The resulting maps incorporate the dominant instrumental, scanning, and data analysis effects, and the remaining subdominant effects will be included in future updates. Generated at a cost of some 25 million CPU-hours spread across multiple high-performance-computing (HPC) platforms, FFP8 is used to validate and verify analysis algorithms and their implementations, and to remove biases from and quantify uncertainties in the results of analyses of the real data.
A method for data handling numerical results in parallel OpenFOAM simulations
Anton, Alin; Muntean, Sebastian
2015-12-31
Parallel computational fluid dynamics simulations produce vast amounts of numerical result data. This paper introduces a method for reducing the size of the data by replaying the interprocessor traffic. The results are recovered only in certain regions of interest configured by the user. A known test case is used for several mesh partitioning scenarios using the OpenFOAM® toolkit [1]. The space savings obtained with classic algorithms remain constant for more than 60 GB of floating-point data. Our method is most efficient on large simulation meshes and is much better suited for compressing large-scale simulation results than the regular algorithms.
Sensitivity of CO2 Simulation in a GCM to the Convective Transport Algorithms
NASA Technical Reports Server (NTRS)
Zhu, Z.; Pawson, S.; Collatz, G. J.; Gregg, W. W.; Kawa, S. R.; Baker, D.; Ott, L.
2014-01-01
Convection plays an important role in the transport of heat, moisture, and trace gases. In this study, we simulated CO2 concentrations with an atmospheric general circulation model (GCM). Three different convective transport algorithms were used. One is a modified Arakawa-Schubert scheme that was native to the GCM; two others, used in two off-line chemical transport models (CTMs), were added to the GCM here for comparison purposes. Advanced CO2 surface fluxes were used for the simulations. The results were compared to a large quantity of CO2 observation data. We find that the simulation results are sensitive to the convective transport algorithms. Overall, the three simulations are quite realistic and similar to each other in the remote marine regions, but are significantly different in some land regions with strong fluxes, such as the Amazon and Siberia, during the convective seasons. Large biases against CO2 measurements are found in these regions in the control run, which uses the original GCM. The simulation with the simple diffusive algorithm is better. The difference between the two simulations is related to their very different convective transport speeds.
Brunner, Thomas A.; Kalos, Malvin H.; Gentile, Nicholas A.
2005-03-01
Domain decomposed Monte Carlo codes, like other domain-decomposed codes, are difficult to debug. Domain decomposition is prone to error, and interactions between the domain decomposition code and the rest of the algorithm often produce subtle bugs. These bugs are particularly difficult to find in a Monte Carlo algorithm, in which the results have statistical noise. Variations in the results due to statistical noise can mask errors when comparing the results to other simulations or analytic results.
Constant-complexity stochastic simulation algorithm with optimal binning
Sanft, Kevin R.; Othmer, Hans G.
2015-08-21
At the molecular level, biochemical processes are governed by random interactions between reactant molecules, and the dynamics of such systems are inherently stochastic. When the copy numbers of reactants are large, a deterministic description is adequate, but when they are small, such systems are often modeled as continuous-time Markov jump processes that can be described by the chemical master equation. Gillespie’s Stochastic Simulation Algorithm (SSA) generates exact trajectories of these systems, but the amount of computational work required for each step of the original SSA is proportional to the number of reaction channels, leading to computational complexity that scales linearly with the problem size. The original SSA is therefore inefficient for large problems, which has prompted the development of several alternative formulations with improved scaling properties. We describe an exact SSA that uses a table data structure with event time binning to achieve constant computational complexity with respect to the number of reaction channels for weakly coupled reaction networks. We present a novel adaptive binning strategy and discuss optimal algorithm parameters. We compare the computational efficiency of the algorithm to existing methods and demonstrate excellent scaling for large problems. This method is well suited for generating exact trajectories of large weakly coupled models, including those that can be described by the reaction-diffusion master equation that arises from spatially discretized reaction-diffusion processes.
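For contrast with the constant-complexity formulation above, a minimal direct-method SSA step is sketched below; its per-step cost is linear in the number of reaction channels, which is precisely what the table-based binning avoids. Names and signatures are illustrative.

```python
import numpy as np

def ssa_step(x, propensities, stoich, rng):
    """One step of Gillespie's direct SSA (illustrative sketch).

    x: state vector of copy numbers; propensities(x) -> array a_j;
    stoich: (n_reactions, n_species) state-change matrix.
    """
    a = propensities(x)
    a0 = a.sum()
    if a0 == 0.0:
        return x, np.inf                  # no reaction can fire
    tau = rng.exponential(1.0 / a0)       # waiting time to the next event
    j = rng.choice(len(a), p=a / a0)      # O(M) channel selection
    return x + stoich[j], tau
```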
The design and results of an algorithm for intelligent ground vehicles
NASA Astrophysics Data System (ADS)
Duncan, Matthew; Milam, Justin; Tote, Caleb; Riggins, Robert N.
2010-01-01
This paper addresses the design, design method, test platform, and test results of an algorithm used in autonomous navigation for intelligent vehicles. The Bluefield State College (BSC) team created this algorithm for its 2009 Intelligent Ground Vehicle Competition (IGVC) robot called Anassa V. The BSC robotics team comprises undergraduate computer science, engineering technology, and marketing students, and one robotics faculty advisor. The team has participated in the IGVC since the year 2000. A major part of the design process that the BSC team uses each year for the IGVC is a fully documented "Post-IGVC Analysis." Over the nine years since 2000, the lessons the students learned from these analyses have resulted in an ever-improving, highly successful autonomous algorithm. The algorithm employed in Anassa V is a culmination of past successes and new ideas, which resulted in Anassa V earning several excellent IGVC 2009 performance awards, including third place overall. The paper will discuss all aspects of the design of this autonomous robotic system, beginning with the design process and ending with test results for both simulation and real environments.
An adaptive multi-level simulation algorithm for stochastic biological systems.
Lester, C; Yates, C A; Giles, M B; Baker, R E
2015-01-14
Discrete-state, continuous-time Markov models are widely used in the modeling of biochemical reaction networks. Their complexity often precludes analytic solution, and we rely on stochastic simulation algorithms (SSA) to estimate system statistics. The Gillespie algorithm is exact, but computationally costly as it simulates every single reaction. As such, approximate stochastic simulation algorithms such as the tau-leap algorithm are often used. Although potentially more efficient computationally, these methods generate system statistics that suffer from significant bias unless tau is relatively small, in which case the computational time can be comparable to that of the Gillespie algorithm. The multi-level method [Anderson and Higham, "Multi-level Monte Carlo for continuous time Markov chains, with applications in biochemical kinetics," SIAM Multiscale Model. Simul. 10(1), 146-179 (2012)] tackles this problem. A base estimator is computed using many (cheap) sample paths at low accuracy. The bias inherent in this estimator is then reduced using a number of corrections. Each correction term is estimated using a collection of paired sample paths where one path of each pair is generated at a higher accuracy compared to the other (and so is more expensive). By sharing random variables between these paired paths, the variance of each correction estimator can be reduced. This renders the multi-level method very efficient, as only a relatively small number of paired paths are required to calculate each correction term. In the original multi-level method, each sample path is simulated using the tau-leap algorithm with a fixed value of τ. This approach can result in poor performance when the reaction activity of a system changes substantially over the timescale of interest. By introducing a novel adaptive time-stepping approach where τ is chosen according to the stochastic behaviour of each sample path, we extend the applicability of the multi-level method to such cases. We demonstrate the
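For reference, a minimal fixed-τ tau-leap step is sketched below; the adaptive multi-level method described above instead chooses τ per sample path from the current propensities. The names and the negativity caveat are illustrative.

```python
import numpy as np

def tau_leap_step(x, propensities, stoich, tau, rng):
    """One fixed-tau leap (illustrative sketch): each channel fires
    k_j ~ Poisson(a_j * tau) times over the interval."""
    a = propensities(x)
    k = rng.poisson(a * tau)   # firings per channel over the leap
    # Note: a production implementation must guard against negative
    # populations, e.g. by shrinking tau and retrying.
    return x + k @ stoich
```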
Centroid-Based Document Classification Algorithms: Analysis & Experimental Results
2000-03-06
Classifiers are compared in terms of zero-one loss (misclassification rate). Linear classifiers [31] are a family of text categorization learning algorithms. Given a training set and a test set, the error rates of algorithms A and B on the test set are recorded. Let p(i)_A be the error rate of algorithm A and p(i)_B be the error rate of algorithm B during trial i. Then Student's t test can be computed using the statistic t = p̄√n / √(Σ_{i=1}^{n} (p(i) − p̄)² / (n − 1)), where p(i) = p(i)_A − p(i)_B and p̄ is the mean of the p(i).
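A small sketch of that paired t statistic, under the standard reading (an assumption, since the extracted text does not define p̄ explicitly) that p(i) is the per-trial difference in error rates:

```python
import math

def paired_t(p_a, p_b):
    """Student's t statistic matching the formula above:
    p(i) = p_A(i) - p_B(i), p_bar its mean over n trials,
    t = p_bar * sqrt(n) / s, with s the sample standard deviation."""
    p = [a - b for a, b in zip(p_a, p_b)]
    n = len(p)
    p_bar = sum(p) / n
    s = math.sqrt(sum((pi - p_bar) ** 2 for pi in p) / (n - 1))
    return p_bar * math.sqrt(n) / s
```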
Modifications to Axially Symmetric Simulations Using New DSMC (2007) Algorithms
NASA Technical Reports Server (NTRS)
Liechty, Derek S.
2008-01-01
Several modifications aimed at improving physical accuracy are proposed for solving axially symmetric problems building on the DSMC (2007) algorithms introduced by Bird. Originally developed to solve nonequilibrium, rarefied flows, the DSMC method is now regularly used to solve complex problems over a wide range of Knudsen numbers. These new algorithms include features such as nearest neighbor collisions excluding the previous collision partners, separate collision and sampling cells, automatically adaptive variable time steps, a modified no-time counter procedure for collisions, and discontinuous and event-driven physical processes. Axially symmetric solutions require radial weighting for the simulated molecules since the molecules near the axis represent fewer real molecules than those farther away from the axis due to the difference in volume of the cells. In the present methodology, these radial weighting factors are continuous, linear functions that vary with the radial position of each simulated molecule. It is shown that how one defines the number of tentative collisions greatly influences the mean collision time near the axis. The method by which the grid is treated for axially symmetric problems also plays an important role near the axis, especially for scalar pressure. A new method to treat how the molecules are traced through the grid is proposed to alleviate the decrease in scalar pressure at the axis near the surface. Also, a modification to the duplication buffer is proposed to vary the duplicated molecular velocities while retaining the molecular kinetic energy and axially symmetric nature of the problem.
SALTSTONE MATRIX CHARACTERIZATION AND STADIUM SIMULATION RESULTS
Langton, C.
2009-07-30
SIMCO Technologies, Inc. was contracted to evaluate the durability of the saltstone matrix material and to measure saltstone transport properties. This information will be used to: (1) parameterize the STADIUM® service life code, (2) predict the leach rate (degradation rate) for the saltstone matrix over 10,000 years using the STADIUM® concrete service life code, and (3) validate the modeled results by conducting leaching (water immersion) tests. Saltstone durability for this evaluation is limited to changes in the matrix itself and does not include changes in the chemical speciation of the contaminants in the saltstone. This report summarizes results obtained to date, which include characterization data for saltstone cured up to 365 days and characterization of saltstone cured for 137 days and immersed in water for 31 days. Chemicals for preparing simulated non-radioactive salt solution were obtained from chemical suppliers. The saltstone slurry was mixed according to directions provided by SRNL. However, SIMCO Technologies, Inc. personnel made a mistake in the premix proportions: instead of the reference mix proportions of 45 wt% slag, 45 wt% fly ash, and 10 wt% cement, they used 21 wt% slag, 65 wt% fly ash, and 14 wt% cement. The mistake was acknowledged, and new mixes have been prepared and are curing. The results presented in this report are expected to be conservative, since the samples prepared were deficient in slag, which is very reactive in the caustic salt solution, and contained excess fly ash. The hydraulic reactivity of slag is about four times that of fly ash, so the amount of hydrated binder formed per unit volume in the SIMCO saltstone samples is
Exploring Space Physics Concepts Using Simulation Results
NASA Astrophysics Data System (ADS)
Gross, N. A.
2008-05-01
The Center for Integrated Space Weather Modeling (CISM), a Science and Technology Center (STC) funded by the National Science Foundation, has the goal of developing a suite of integrated, physics-based computer models of the space environment that can follow the evolution of a space weather event from the Sun to the Earth. In addition to the research goals, CISM is also committed to training the next generation of space weather professionals who are imbued with a system view of space weather. This view should include an understanding of both heliospheric and geospace phenomena. To this end, CISM offers a yearly Space Weather Summer School targeted at first-year graduate students, although advanced undergraduates and space weather professionals have also attended. This summer school uses a number of innovative pedagogical techniques, including devoting each afternoon to a computer lab exercise that uses results from research-quality simulations and visualization techniques, along with ground-based and satellite data, to explore concepts introduced during the morning lectures. These labs are suitable for use in a wide variety of educational settings, from formal classroom instruction to outreach programs. The goal of this poster is to outline the goals and content of the lab materials so that instructors may evaluate their potential use in the classroom or other settings.
Verifying Algorithms for Autonomous Aircraft by Simulation Generalities and Example
NASA Technical Reports Server (NTRS)
White, Allan L.
2010-01-01
An open question in Air Traffic Management is what procedures can be validated by simulation, where the simulation shows that the probability of undesirable events is below the required level at some confidence level. The problem is including enough realism to be convincing while retaining enough efficiency to run the large number of trials needed for high confidence. The paper first examines the probabilistic interpretation of a typical requirement by a regulatory agency and computes the number of trials needed to establish the requirement at an equivalent confidence level. Since any simulation is likely to consider only one type of event and there are several types of events, the paper examines under what conditions this separate consideration is valid. The paper establishes a separation algorithm at the required confidence level where the aircraft operates under feedback control and is subject to perturbations. There is a discussion in which it is shown that a scenario three or four orders of magnitude more complex is feasible. The question of what can be validated by simulation remains open, but there is reason to be optimistic.
R-leaping: accelerating the stochastic simulation algorithm by reaction leaps.
Auger, Anne; Chatelain, Philippe; Koumoutsakos, Petros
2006-08-28
A novel algorithm is proposed for the acceleration of the exact stochastic simulation algorithm by a predefined number of reaction firings (R-leaping) that may occur across several reaction channels. In the present approach, the numbers of reaction firings are correlated binomial distributions and the sampling procedure is independent of any permutation of the reaction channels. This enables the algorithm to efficiently handle large systems with disparate rates, providing substantial computational savings in certain cases. Several mechanisms for controlling the accuracy and the appearance of negative species are described. The advantages and drawbacks of R-leaping are assessed by simulations on a number of benchmark problems and the results are discussed in comparison with established methods.
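The heart of an R-leap step is distributing a predefined total of L firings among the reaction channels; one common correlated-binomial formulation is sketched below. The function name and loop structure are illustrative, not the paper's implementation.

```python
import numpy as np

def r_leap_counts(a, L, rng):
    """Partition L total firings among channels with propensities a:
    K_j ~ Binomial(L - sum_{i<j} K_i, a_j / sum_{i>=j} a_i).
    In distribution the result does not depend on channel order."""
    k = np.zeros(len(a), dtype=int)
    remaining_L = L
    remaining_a = float(a.sum())
    for j in range(len(a)):
        if remaining_L == 0 or remaining_a <= 0.0:
            break
        p = min(1.0, a[j] / remaining_a)   # guard against float drift
        k[j] = rng.binomial(remaining_L, p)
        remaining_L -= k[j]
        remaining_a -= a[j]
    return k
```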
An improved algorithm of three B-spline curve interpolation and simulation
NASA Astrophysics Data System (ADS)
Zhang, Wanjun; Xu, Dongmei; Meng, Xinhong; Zhang, Feng
2017-03-01
As a key interpolation technique in CNC machine tool systems, the cubic B-spline curve interpolator has been proposed to overcome the drawbacks of linear and circular interpolators, such as longer interpolation times and step errors that are difficult to control. In this paper, an improved algorithm for cubic B-spline curve interpolation is proposed and simulated. The interpolation was implemented in MATLAB 7.0 to verify the proposed modified algorithm experimentally. The simulation results show that the algorithm is correct and consistent with the requirements of cubic B-spline curve interpolation.
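As a point of reference for what such an interpolator computes, the sketch below fits and densely samples a cubic (degree-3) B-spline through a handful of hypothetical toolpath points using SciPy; the data and sampling density are made up for illustration and are unrelated to the paper's algorithm.

```python
import numpy as np
from scipy.interpolate import make_interp_spline

# Hypothetical planar toolpath points parameterized on [0, 1].
t = np.linspace(0.0, 1.0, 8)
path = np.stack([np.cos(2 * np.pi * t), np.sin(2 * np.pi * t)], axis=1)

spline = make_interp_spline(t, path, k=3)  # interpolating cubic B-spline
u = np.linspace(0.0, 1.0, 200)
samples = spline(u)                        # dense positions along the curve
```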
On constructing optimistic simulation algorithms for the discrete event system specification
Nutaro, James J
2008-01-01
This article describes a Time Warp simulation algorithm for discrete event models that are described in terms of the Discrete Event System Specification (DEVS). The article shows how the total state transition and total output function of a DEVS atomic model can be transformed into an event processing procedure for a logical process. A specific Time Warp algorithm is constructed around this logical process, and it is shown that the algorithm correctly simulates a DEVS coupled model that consists entirely of interacting atomic models. The simulation algorithm is presented abstractly; it is intended to provide a basis for implementing efficient and scalable parallel algorithms that correctly simulate DEVS models.
Simulation approach to charge sharing compensation algorithms with experimental cross-check
NASA Astrophysics Data System (ADS)
Krzyżanowska, A.; Deptuch, G.; Maj, P.; Gryboś, P.; Szczygieł, R.
2017-03-01
Hybrid pixel detectors for X-ray imaging, working in single photon counting mode, find applications in a variety of fields, such as medical imaging, material science, and industry. However, charge sharing, which occurs when a photon hits the detector in the area between two or four pixels, becomes more significant with decreasing pixel size. If the charge generated when a photon interacts with the detector is collected by more than one pixel, the photon energy and the event position may be improperly detected. Therefore, algorithms for minimizing the impact of charge sharing on a pixel detector for X-ray detection need to be implemented. First, such algorithms must be assessed at the simulation level. The goal is to implement the simulations in such a way that simulation accuracy and simulation time are optimized, with a model flexible enough to be quickly adapted for other uses. We propose behavioral models implemented in the Cadence® Virtuoso® environment. This solution enables fast validation of the system at a higher level of abstraction, allowing deep verification. A readout channel of a chip is represented using parameterized behavioral blocks of different functionality, such as a charge-sensitive amplifier, shapers, discriminators, and comparators. The inter-pixel connections are taken into account. This approach enables top-down design and optimization of parameters. The model was implemented in particular to test the C8P1 algorithm used in the Chase Jr. chip; however, due to its modular implementation, it can be easily adjusted to test further algorithms. The simulation approach is described, and the simulation results are presented together with the experimental data obtained during synchrotron measurements for the Chase Jr. chip with the C8P1 algorithm implemented.
Evaluation of effective-stress-function algorithm for nuclear fuel simulation
Kim, H. C.; Yang, Y. S.; Koo, Y. H.
2013-07-01
In a pressurized water reactor (PWR), the mechanical integrity of the nuclear fuel is a critical issue, as the fuel is an important barrier against the release of fission products into the environment. The integrity of the zirconium cladding that surrounds the uranium oxide can be threatened during off-normal operation owing to pellet-cladding mechanical interaction (PCMI). To analyze the fuel and cladding behavior during off-normal operation, the fuel performance code should perform an inelastic analysis in two- or three-dimensional calculations. In this paper, the effective-stress-function (ESF) algorithm, based on a two-dimensional FE module, has been implemented to simulate the inelastic behavior of the cladding with stability and accuracy. The ESF algorithm solves the governing equations of the inelastic constitutive behavior by finding the zero of the appropriate effective stress function. To verify the accuracy of the ESF algorithm for inelastic analysis, a code-to-code benchmark was performed against the commercial FE code ANSYS 13.0. To demonstrate the stability and convergence of the implemented algorithm, the number of iterations in the ESF algorithm was compared with that of a sequential algorithm on an inelastic problem. The evaluation results demonstrate that the implemented ESF algorithm improves the efficiency of the computation without loss of accuracy for inelastic analysis. (authors)
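The constitutive update in an ESF-type algorithm reduces to a one-dimensional root solve for the effective stress; purely as a generic illustration (the paper's function and solver details are not reproduced here), a bracketing bisection might look like:

```python
def solve_effective_stress(g, lo, hi, tol=1e-10, max_iter=200):
    """Find sigma_eff in [lo, hi] with g(sigma_eff) = 0 by bisection.
    Assumes g(lo) and g(hi) bracket the root; g encodes the
    (model-dependent) effective stress function."""
    g_lo = g(lo)
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        g_mid = g(mid)
        if abs(g_mid) < tol or (hi - lo) < tol:
            return mid
        if (g_lo < 0.0) == (g_mid < 0.0):
            lo, g_lo = mid, g_mid   # root lies in the upper half
        else:
            hi = mid                # root lies in the lower half
    return 0.5 * (lo + hi)
```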
NASA Technical Reports Server (NTRS)
Mitra, Debasis; Thomas, Ajai; Hemminger, Joseph; Sakowski, Barbara
2001-01-01
In this research we have developed an algorithm for constraint processing that utilizes relational algebraic operators. Van Beek and others have previously investigated this type of constraint processing within a relational algebraic framework, producing some unique results. Apart from providing new theoretical angles, this approach also gives the opportunity to use existing efficient implementations of relational database management systems as the underlying data structures for any relevant algorithm. Our algorithm enhances that framework. The algorithm is quite general in its current form. Weak heuristics (like forward checking) developed within the constraint-satisfaction problem (CSP) area could also easily be plugged into this algorithm for further gains in efficiency. The algorithm as developed here is targeted toward a component-oriented modeling problem that we are currently working on, namely, the problem of interactive modeling for batch-simulation of engineering systems (IMBSES). However, it could be adopted for many other CSP problems as well. The research addresses the algorithm and many aspects of the IMBSES problem that we are currently handling.
Bai, Mingsian R; Hsieh, Ping-Ju; Hur, Kur-Nan
2009-02-01
The performance of the minimum mean-square error noise reduction (MMSE-NR) algorithm in conjunction with time-recursive averaging (TRA) for noise estimation is found to be very sensitive to the choice of two recursion parameters. To address this problem in a more systematic manner, this paper proposes an optimization method to efficiently search for the optimal parameters of the MMSE-TRA-NR algorithms. The objective function is based on a regression model, whereas the optimization process is carried out with the simulated annealing algorithm, which is well suited for problems with many local optima. Another NR algorithm proposed in the paper employs linear prediction coding as a preprocessor for extracting the correlated portion of human speech. Objective and subjective tests were undertaken to compare the optimized MMSE-TRA-NR algorithm with several conventional NR algorithms. The results of the subjective tests were processed using analysis of variance to establish statistical significance. A post hoc test, Tukey's Honestly Significant Difference, was conducted to further assess the pairwise differences between the NR algorithms.
NASA Astrophysics Data System (ADS)
Ajoy, Ashok; Rao, Rama Koteswara; Kumar, Anil; Rungta, Pranaw
2012-03-01
We propose an iterative algorithm to simulate the dynamics generated by any n-qubit Hamiltonian. The simulation entails decomposing the unitary time-evolution operator U into a product of different time-step unitaries. The algorithm product-decomposes U in a chosen operator basis by identifying a certain symmetry of U that is intimately related to the number of gates in the decomposition. We illustrate the algorithm by first obtaining a polynomial decomposition in the Pauli basis of the n-qubit quantum state transfer unitary of Di Franco et al. [Phys. Rev. Lett. 101, 230502 (2008)], which transports quantum information from one end of a spin chain to the other, and then implementing it in nuclear magnetic resonance to demonstrate that the decomposition is experimentally viable. We further experimentally test the resilience of the state transfer to static errors in the coupling parameters of the simulated Hamiltonian. This is done by decomposing and simulating the corresponding imperfect unitaries.
Hybrid Simulated Annealing and Genetic Algorithms for Industrial Production Management Problems
NASA Astrophysics Data System (ADS)
Vasant, Pandian; Barsoum, Nader
2009-08-01
This paper describes the origin of and significant contributions to the development of the hybrid simulated annealing and genetic algorithms (HSAGA) approach for finding global optima. HSAGA provides an insightful approach to solving complex optimization problems. The method combines the meta-heuristic approaches of simulated annealing and novel genetic algorithms for solving a non-linear objective function with uncertain technical coefficients in industrial production management problems. The proposed novel hybrid method is designed to search for the global optimum of the non-linear objective function and for the best feasible solutions of the decision variables. Simulated experiments were carried out rigorously to reflect the advantages of the proposed method. A description of the well-developed method and the advanced computational experiments with the MATLAB technical tool is presented. An industrial production management optimization problem is solved using the HSAGA technique. The results are very promising.
Magnetic Storm Simulation With Multiple Ion Fluids: Algorithm
NASA Astrophysics Data System (ADS)
Toth, G.; Glocer, A.; Gombosi, T.
2008-12-01
We describe our progress in extending the capabilities of the BATS-R-US MHD code to model multiple ion fluids. We solve the full multi-ion equations with no assumptions about the relative motion of the ion fluids. We discuss the numerical difficulties and the algorithmic solutions: the use of a total ion fluid in combination with the individual ion fluids, the use of point-implicit source terms with an analytic Jacobian, a simple criterion to separate the single-ion and multi-ion regions in our magnetosphere applications, and an artificial friction term to limit the relative velocities of the ion fluids to reasonable values. This latter term is used to mimic the effect of two-stream instabilities in a crude manner. The new code is fully integrated into the Space Weather Modeling Framework and has been coupled with the ionosphere, inner magnetosphere, and polar wind models to simulate the May 4, 1998 magnetic storm.
Fawley, William M.
2002-03-25
We discuss the underlying reasoning behind and the details of the numerical algorithm used in the GINGER free-electron laser(FEL) simulation code to load the initial shot noise microbunching on the electron beam. In particular, we point out that there are some additional subtleties which must be followed for multi-dimensional codes which are not necessary for one-dimensional formulations. Moreover, requiring that the higher harmonics of the microbunching also be properly initialized with the correct statistics leads to additional complexities. We present some numerical results including the predicted incoherent, spontaneous emission as tests of the shot noise algorithm's correctness.
Application of Simulated Annealing and Related Algorithms to TWTA Design
NASA Technical Reports Server (NTRS)
Radke, Eric M.
2004-01-01
Simulated annealing (SA) is a stochastic optimization algorithm used to search for global minima in complex design surfaces where exhaustive searches are not computationally feasible. The algorithm is derived by simulating the annealing process, whereby a solid is heated to a liquid state and then cooled slowly to reach thermodynamic equilibrium at each temperature. The idea is that atoms in the solid continually bond and re-bond at various quantum energy levels, and with sufficient cooling time they will rearrange at the minimum energy state to form a perfect crystal. The distribution of energy levels is given by the Boltzmann distribution: as temperature drops, the probability of the presence of high-energy bonds decreases. In searching for an optimal design, local minima and discontinuities are often present in a design surface. SA presents a distinct advantage over other optimization algorithms in its ability to escape from these local minima. Just as high-energy atomic configurations are visited in the actual annealing process in order to eventually reach the minimum energy state, in SA highly non-optimal configurations are visited in order to find otherwise inaccessible global minima. The SA algorithm produces a Markov chain of points in the design space at each temperature, with a monotonically decreasing temperature. The algorithm starts at a random point, and the objective function is evaluated there. A stochastic perturbation is then made to the parameters of the point to arrive at a proposed new point in the design space, at which the objective function is evaluated as well. If the change in objective function values ΔE is negative, the proposed new point is accepted. If ΔE is positive, the proposed new point is accepted according to the Metropolis criterion, with probability P(ΔE) = exp(−ΔE/T), where T is the temperature for the current Markov chain. The process then repeats for the remainder of the Markov chain, after which the temperature is
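A minimal sketch of the scheme just described, assuming a geometric cooling schedule and illustrative defaults in place of the study's actual settings:

```python
import math, random

def simulated_annealing(f, x0, perturb, t0=1.0, alpha=0.95,
                        chain_len=100, n_chains=50):
    """Generic SA sketch: a Markov chain of `chain_len` points per
    temperature, cooled geometrically. `perturb` proposes a stochastic
    neighbor of x; all parameters are illustrative defaults."""
    x, fx = x0, f(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(n_chains):
        for _ in range(chain_len):
            y = perturb(x)
            fy = f(y)
            dE = fy - fx
            # Metropolis criterion: always accept improvements; accept
            # uphill moves with probability exp(-dE / T).
            if dE <= 0 or random.random() < math.exp(-dE / t):
                x, fx = y, fy
                if fx < fbest:
                    best, fbest = x, fx
        t *= alpha  # cool before starting the next Markov chain
    return best, fbest
```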
The weirdest SDSS galaxies: results from an outlier detection algorithm
NASA Astrophysics Data System (ADS)
Baron, Dalya; Poznanski, Dovi
2017-03-01
How can we discover objects we did not know existed within the large data sets that now abound in astronomy? We present an outlier detection algorithm that we developed, based on an unsupervised Random Forest. We test the algorithm on more than two million galaxy spectra from the Sloan Digital Sky Survey and examine the 400 galaxies with the highest outlier scores. We find objects with extreme emission line ratios and abnormally strong absorption lines, and objects with unusual continua, including extremely reddened galaxies. We find galaxy-galaxy gravitational lenses, double-peaked emission line galaxies, and close galaxy pairs. We find galaxies with high-ionization lines, galaxies that host supernovae, and galaxies with unusual gas kinematics. Only a fraction of the outliers we find were reported by previous studies, which used specific and tailored algorithms to find a single class of unusual objects. Our algorithm is general and detects all of these classes, and many more, regardless of what makes them peculiar. It can be executed on imaging, time series, and other spectroscopic data, operates well with thousands of features, is not sensitive to missing values, and is easily parallelizable.
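A simplified sketch of the unsupervised-Random-Forest recipe such methods build on: synthetic data drawn from the product of the feature marginals, a forest trained to separate real from synthetic, and outlier scores from how rarely an object shares leaves with the rest. This is an assumed reconstruction for small data sets, not the authors' code; the O(n²) similarity matrix below would not scale to millions of spectra.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def rf_outlier_scores(X, n_trees=200, seed=0):
    """Unsupervised-RF outlier scores for a small feature matrix X."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # Synthetic sample: shuffle each feature independently, destroying
    # correlations while preserving the marginals.
    X_syn = np.column_stack([rng.permutation(X[:, j]) for j in range(d)])
    Xy = np.vstack([X, X_syn])
    y = np.r_[np.ones(n), np.zeros(n)]
    forest = RandomForestClassifier(n_estimators=n_trees,
                                    random_state=seed).fit(Xy, y)
    leaves = forest.apply(X)   # (n, n_trees) leaf index per tree
    # Similarity = fraction of trees in which two objects share a leaf;
    # memory-heavy, fine only for small n.
    sim = (leaves[:, None, :] == leaves[None, :, :]).mean(axis=2)
    return 1.0 - sim.mean(axis=1)   # higher score = weirder object
```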
Simulating Future GPS Clock Scenarios with Two Composite Clock Algorithms
NASA Technical Reports Server (NTRS)
Suess, Matthias; Matsakis, Demetrios; Greenhall, Charles A.
2010-01-01
Using the GPS Toolkit, the GPS constellation is simulated using 31 satellites (SV) and a ground network of 17 monitor stations (MS). At every 15-minute measurement epoch, the monitor stations measure the time signals of all satellites above a parameterized elevation angle. Once a day, estimates of the station and satellite clocks are computed. The first composite clock (B) is based on the Brown algorithm and is now used by GPS. The second (G) is based on the Greenhall algorithm. The performance of the G and B composite clocks is investigated using three ground-clock models. Model C simulates the current GPS configuration, in which all stations are equipped with cesium clocks, except for masers at the USNO and Alternate Master Clock (AMC) sites. Model M is an improved situation in which every station is equipped with active hydrogen masers. Finally, Models F and O are future scenarios in which the USNO and AMC stations are equipped with fountain clocks instead of masers: Model F uses a rubidium fountain, while Model O uses a more precise but futuristic optical fountain. Each model is evaluated using three performance metrics. The timing-related user range error with all satellites available is the first performance index (PI1). The second performance index (PI2) relates to the stability of the broadcast GPS system time itself. The third performance index (PI3) evaluates the stability of the time scales computed by the two composite clocks. A distinction is made between the "Signal-in-Space" accuracy and that available through a GNSS receiver.
Robotic space simulation integration of vision algorithms into an orbital operations simulation
NASA Technical Reports Server (NTRS)
Bochsler, Daniel C.
1987-01-01
In order to successfully plan and analyze future space activities, computer-based simulations of activities in low earth orbit will be required to model and integrate vision and robotic operations with vehicle dynamics and proximity operations procedures. The orbital operations simulation (OOS) is configured and enhanced as a testbed for robotic space operations. Vision integration algorithms are being developed in three areas: preprocessing, recognition, and attitude/attitude rates. The vision program (Rice University) was modified for use in the OOS. Systems integration testing is now in progress.
NASA Astrophysics Data System (ADS)
Tsukahara, Hiroshi; Iwano, Kaoru; Mitsumata, Chiharu; Ishikawa, Tadashi; Ono, Kanta
2016-10-01
We implement low-communication-frequency three-dimensional fast Fourier transform algorithms in a micromagnetics simulator for calculation of the magnetostatic field, which occupies a significant portion of large-scale micromagnetics simulations. This fast Fourier transform algorithm reduces the number of all-to-all communications from six to two per transform. Simulation times with our simulator show high scalability in parallelization, even when the micromagnetics simulation is performed using 32 768 physical computing cores. This low-communication-frequency fast Fourier transform algorithm enables world-largest-class micromagnetics simulations, with over one billion calculation cells, to be carried out.
NASA Astrophysics Data System (ADS)
Popov, A.; Zolotarev, V.; Bychkov, S.
2016-11-01
This paper examines the results of experimental studies of a previously presented combined algorithm designed to increase the reliability of information systems. Data illustrating the organization and conduct of the studies are provided. As part of the study, the experimental data from simulation modeling were compared with data from the functioning of a real information system. The hypothesis of the homogeneity of the logical structure of information systems was formulated, making it possible to reconfigure the presented algorithm, more specifically, to transform it into a model for the analysis and prediction of arbitrary information systems. The results presented can be used for further research in this direction. The ability to predict the functioning of information systems can be used for strategic and economic planning. The algorithm can also be used as a means of providing information security.
Experiences with serial and parallel algorithms for channel routing using simulated annealing
NASA Technical Reports Server (NTRS)
Brouwer, Randall Jay
1988-01-01
Two algorithms for channel routing using simulated annealing are presented. Simulated annealing is an optimization methodology that allows the solution process to back out of local minima that may be encountered through inappropriate selections. By properly controlling the annealing process, it is very likely that the optimal solution to an NP-complete problem such as channel routing may be found. The algorithm presented imposes very relaxed restrictions on the types of allowable transformations, including overlapping nets. By relaxing that restriction and controlling overlap situations with an appropriate cost function, the algorithm becomes very flexible and can be applied to many extensions of channel routing. The selection of the transformation utilizes a number of heuristics, while still retaining the pseudorandom nature of simulated annealing. The algorithm was implemented as a serial program for a workstation and as a parallel program designed for a hypercube computer. The details of the serial implementation are presented, including many of the heuristics used and some of the resulting solutions.
Adaptive Sampling Algorithms for Probabilistic Risk Assessment of Nuclear Simulations
Diego Mandelli; Dan Maljovec; Bei Wang; Valerio Pascucci; Peer-Timo Bremer
2013-09-01
Nuclear simulations are often computationally expensive, time-consuming, and high-dimensional with respect to the number of input parameters. Thus exploring the space of all possible simulation outcomes is infeasible using finite computing resources. During simulation-based probabilistic risk analysis, it is important to discover the relationship between a potentially large number of input parameters and the output of a simulation using as few simulation trials as possible. This is a typical context for performing adaptive sampling, where a few observations are obtained from the simulation, a surrogate model is built to represent the simulation space, and new samples are selected based on the model constructed. The surrogate model is then updated based on the simulation results of the sampled points. In this way, we attempt to gain the most information possible with a small number of carefully selected sampled points, limiting the number of expensive trials needed to understand features of the simulation space. We analyze the specific use case of identifying the limit surface, i.e., the boundaries in the simulation space between system failure and system success. In this study, we explore several techniques for adaptively sampling the parameter space in order to reconstruct the limit surface. We focus on several adaptive sampling schemes. First, we seek to learn a global model of the entire simulation space using prediction models or neighborhood graphs and extract the limit surface as an iso-surface of the global model. Second, we estimate the limit surface by sampling in the neighborhood of the current estimate based on topological segmentations obtained locally. Our techniques draw inspiration from the topological structure known as the Morse-Smale complex. We highlight the advantages and disadvantages of using a global prediction model versus a local topological view of the simulation space, comparing several different strategies for adaptive sampling in both
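In the global-surrogate flavor described above, one keeps sampling where the surrogate is least certain about failure versus success; a toy loop (all names, defaults, and the k-NN surrogate are illustrative assumptions, not the study's method) might read:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def adaptive_limit_surface(run_sim, bounds, n_init=20, n_iter=100, seed=0):
    """run_sim maps a parameter vector to 0 (success) or 1 (failure);
    bounds is a (d, 2) array of parameter ranges. Assumes the initial
    random sample contains both outcomes."""
    rng = np.random.default_rng(seed)
    d = bounds.shape[0]
    X = rng.uniform(bounds[:, 0], bounds[:, 1], size=(n_init, d))
    y = np.array([run_sim(x) for x in X])
    for _ in range(n_iter):
        model = KNeighborsClassifier(n_neighbors=5).fit(X, y)
        cand = rng.uniform(bounds[:, 0], bounds[:, 1], size=(256, d))
        p_fail = model.predict_proba(cand)[:, 1]
        # Sample where the surrogate is most ambiguous, i.e. nearest
        # the estimated limit surface p_fail = 0.5.
        x_new = cand[np.argmin(np.abs(p_fail - 0.5))]
        X = np.vstack([X, x_new])
        y = np.append(y, run_sim(x_new))
    return X, y
```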
NASA Astrophysics Data System (ADS)
Jokar, Ali; Godarzi, Ali Abbasi; Saber, Mohammad; Shafii, Mohammad Behshad
2016-11-01
In this paper, a novel approach is presented to simulate and optimize pulsating heat pipes (PHPs). The pulsating heat pipe setup used was designed and constructed for this study. Due to the lack of a general mathematical model for the exact analysis of PHPs, a method based on natural algorithms has been applied for simulation and optimization. In this approach, the simulator consists of a multilayer perceptron neural network, which is trained by experimental results obtained from our PHP setup. The results show that the complex behavior of PHPs can be successfully described by the non-linear structure of this simulator. The input variables of the neural network are the input heat flux to the evaporator (q″), the filling ratio (FR), and the inclination angle (IA); its output is the thermal resistance of the PHP. Finally, based upon the simulation results and considering the heat pipe's operating constraints, the optimum operating point of the system is obtained using a genetic algorithm (GA). The experimental results show that the optimum FR (38.25%), input heat flux to the evaporator (39.93 W), and IA (55°) obtained from the GA are acceptable.
Simulation Results for Airborne Precision Spacing along Continuous Descent Arrivals
NASA Technical Reports Server (NTRS)
Barmore, Bryan E.; Abbott, Terence S.; Capron, William R.; Baxley, Brian T.
2008-01-01
This paper describes the results of a fast-time simulation experiment and a high-fidelity simulator validation with merging streams of aircraft flying Continuous Descent Arrivals through generic airspace to a runway at Dallas-Ft Worth. Aircraft made small speed adjustments based on an airborne-based spacing algorithm, so as to arrive at the threshold exactly at the assigned time interval behind their Traffic-To-Follow. The 40 aircraft were initialized at different altitudes and speeds on one of four different routes, and then merged at different points and altitudes while flying Continuous Descent Arrivals. This merging and spacing using flight deck equipment and procedures to augment or implement Air Traffic Management directives is called Flight Deck-based Merging and Spacing, an important subset of a larger Airborne Precision Spacing functionality. This research indicates that Flight Deck-based Merging and Spacing initiated while at cruise altitude and well prior to the Terminal Radar Approach Control entry can significantly contribute to the delivery of aircraft at a specified interval to the runway threshold with a high degree of accuracy and at a reduced pilot workload. Furthermore, previously documented work has shown that using a Continuous Descent Arrival instead of a traditional step-down descent can save fuel, reduce noise, and reduce emissions. Research into Flight Deck-based Merging and Spacing is a cooperative effort between government and industry partners.
The small-voxel tracking algorithm for simulating chemical reactions among diffusing molecules
NASA Astrophysics Data System (ADS)
Gillespie, Daniel T.; Seitaridou, Effrosyni; Gillespie, Carol A.
2014-12-01
Simulating the evolution of a chemically reacting system using the bimolecular propensity function, as is done by the stochastic simulation algorithm and its reaction-diffusion extension, entails making statistically inspired guesses as to where the reactant molecules are at any given time. Those guesses will be physically justified if the system is dilute and well-mixed in the reactant molecules. Otherwise, an accurate simulation will require the extra effort and expense of keeping track of the positions of the reactant molecules as the system evolves. One molecule-tracking algorithm that pays careful attention to the physics of molecular diffusion is the enhanced Green's function reaction dynamics (eGFRD) of Takahashi, Tănase-Nicola, and ten Wolde [Proc. Natl. Acad. Sci. U.S.A. 107, 2473 (2010)]. We introduce here a molecule-tracking algorithm that has the same theoretical underpinnings and strategic aims as eGFRD, but a different implementation procedure. Called the small-voxel tracking algorithm (SVTA), it combines the well known voxel-hopping method for simulating molecular diffusion with a novel procedure for rectifying the unphysical predictions of the diffusion equation on the small spatiotemporal scale of molecular collisions. Indications are that the SVTA might be more computationally efficient than eGFRD for the problematic class of non-dilute systems. A widely applicable, user-friendly software implementation of the SVTA has yet to be developed, but we exhibit some simple examples which show that the algorithm is computationally feasible and gives plausible results.
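The voxel-hopping half of the method is standard: each diffusive step moves a molecule to a random face-adjacent voxel. The sketch below shows only that part; the SVTA's collision-scale correction on small voxels is the novel ingredient and is not reproduced here.

```python
import numpy as np

def voxel_hop(pos, n_steps, lattice_shape, rng):
    """Voxel-hopping random walk on a 3D lattice with periodic
    boundaries: the usual discrete surrogate for the diffusion
    equation. pos: integer voxel coordinates of one molecule."""
    moves = np.array([[1, 0, 0], [-1, 0, 0],
                      [0, 1, 0], [0, -1, 0],
                      [0, 0, 1], [0, 0, -1]])
    for _ in range(n_steps):
        pos = (pos + moves[rng.integers(6)]) % lattice_shape
    return pos
```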
GPU-based single-cluster algorithm for the simulation of the Ising model
NASA Astrophysics Data System (ADS)
Komura, Yukihiro; Okabe, Yutaka
2012-02-01
We present the GPU calculation with the common unified device architecture (CUDA) for the Wolff single-cluster algorithm of the Ising model. Proposing an algorithm for a quasi-block synchronization, we realize the Wolff single-cluster Monte Carlo simulation with CUDA. We perform parallel computations for the newly added spins in the growing cluster. As a result, the GPU calculation speed for the two-dimensional Ising model at the critical temperature with the linear size L = 4096 is 5.60 times as fast as the calculation speed on a current CPU core. For the three-dimensional Ising model with the linear size L = 256, the GPU calculation speed is 7.90 times as fast as the CPU calculation speed. The idea of quasi-block synchronization can be used not only in the cluster algorithm but also in many fields where the synchronization of all threads is required.
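For reference, the serial CPU version of the Wolff single-cluster update that the paper ports to CUDA is sketched below (2D Ising model, illustrative implementation):

```python
import numpy as np
from collections import deque

def wolff_update(spins, beta, rng):
    """One Wolff single-cluster update for the 2D Ising model: grow a
    cluster from a random seed, adding aligned neighbors with
    probability p_add = 1 - exp(-2*beta), then flip the whole cluster."""
    L = spins.shape[0]
    p_add = 1.0 - np.exp(-2.0 * beta)
    seed = (rng.integers(L), rng.integers(L))
    s0 = spins[seed]
    cluster = {seed}
    queue = deque([seed])
    while queue:
        i, j = queue.popleft()
        for ni, nj in ((i + 1) % L, j), ((i - 1) % L, j), \
                      (i, (j + 1) % L), (i, (j - 1) % L):
            if (ni, nj) not in cluster and spins[ni, nj] == s0 \
               and rng.random() < p_add:
                cluster.add((ni, nj))
                queue.append((ni, nj))
    for i, j in cluster:   # flip the entire cluster at once
        spins[i, j] = -s0
```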
NASA Astrophysics Data System (ADS)
Saintillan, David; Darve, Eric; Shaqfeh, Eric S. G.
2005-03-01
Large-scale simulations of non-Brownian rigid fibers sedimenting under gravity at zero Reynolds number have been performed using a fast algorithm. The mathematical formulation follows the previous simulations by Butler and Shaqfeh ["Dynamic simulations of the inhomogeneous sedimentation of rigid fibres," J. Fluid Mech. 468, 205 (2002)]. The motion of the fibers is described using slender-body theory, and the line distribution of point forces along their lengths is approximated by a Legendre polynomial in which only the total force, torque, and particle stresslet are retained. Periodic boundary conditions are used to simulate an infinite suspension, and both far-field hydrodynamic interactions and short-range lubrication forces are considered in all simulations. The calculation of the hydrodynamic interactions, which is typically the bottleneck for large systems with periodic boundary conditions, is accelerated using a smooth particle-mesh Ewald (SPME) algorithm previously used in molecular dynamics simulations. In SPME the slowly decaying Green's function is split into two fast-converging sums: the first involves the distribution of point forces and accounts for the singular short-range part of the interactions, while the second is expressed in terms of the Fourier transform of the force distribution and accounts for the smooth and long-range part. Because of its smoothness, the second sum can be computed efficiently on an underlying grid using the fast Fourier transform algorithm, resulting in a significant speed-up of the calculations. Systems of up to 512 fibers were simulated on a single-processor workstation, providing a different insight into the formation, structure, and dynamics of the inhomogeneities that occur in sedimenting fiber suspensions.
NASA Technical Reports Server (NTRS)
Neal, L.
1981-01-01
A simple numerical algorithm was developed for use in computer simulations of systems which are both stiff and stable. The method is implemented in subroutine form and applied to the simulation of physiological systems.
A new deadlock resolution protocol and message matching algorithm for the extreme-scale simulator
Engelmann, Christian; Naughton, III, Thomas J.
2016-03-22
Investigating the performance of parallel applications at scale on future high-performance computing (HPC) architectures and the performance impact of different HPC architecture choices is an important component of HPC hardware/software co-design. The Extreme-scale Simulator (xSim) is a simulation toolkit for investigating the performance of parallel applications at scale. xSim scales to millions of simulated Message Passing Interface (MPI) processes. The overhead introduced by a simulation tool is an important performance and productivity aspect. This paper documents two improvements to xSim: (1)~a new deadlock resolution protocol to reduce the parallel discrete event simulation overhead and (2)~a new simulated MPI message matching algorithm to reduce the oversubscription management overhead. The results clearly show a significant performance improvement. The simulation overhead for running the NAS Parallel Benchmark suite was reduced from 102% to 0% for the embarrassingly parallel (EP) benchmark and from 1,020% to 238% for the conjugate gradient (CG) benchmark. xSim offers a highly accurate simulation mode for better tracking of injected MPI process failures. Furthermore, with highly accurate simulation, the overhead was reduced from 3,332% to 204% for EP and from 37,511% to 13,808% for CG.
Wang, Jun; Zhou, Bi-hua; Zhou, Shu-dao; Sheng, Zheng
2015-01-01
The paper proposes a novel function expression method to forecast chaotic time series, using an improved genetic-simulated annealing (IGSA) algorithm to establish the optimum function expression that describes the behavior of the time series. To address the weaknesses of the genetic algorithm, the proposed algorithm incorporates the simulated annealing operation, which has strong local search ability, into the genetic algorithm to enhance optimization performance; the fitness function and genetic operators are also improved. Finally, the method is applied to the chaotic time series of the Quadratic and Rossler maps for validation. The effect of noise in the chaotic time series is also studied numerically. The numerical results verify that the method can forecast chaotic time series with high precision and effectiveness, and that the forecasting precision in the presence of noise is also satisfactory. It can be concluded that the IGSA algorithm is efficient and superior.
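A minimal sketch of a genetic/simulated-annealing hybrid in the spirit of IGSA: offspring produced by crossover and mutation are accepted with a Metropolis criterion at a decreasing temperature. The test objective, operators, and cooling schedule below are illustrative assumptions, not the authors' exact design.

```python
# Hybrid GA with simulated-annealing acceptance on a multimodal test function.
import numpy as np

rng = np.random.default_rng(0)

def fitness(x):                      # Rastrigin-like objective (assumed): minimize
    return np.sum(x**2 + 10.0 * (1.0 - np.cos(2.0 * np.pi * x)), axis=-1)

pop = rng.uniform(-5, 5, size=(40, 2))
T = 1.0                              # annealing temperature
for gen in range(200):
    parents = pop[rng.integers(0, len(pop), size=(len(pop), 2))]
    w = rng.random((len(pop), 1))
    children = w * parents[:, 0] + (1 - w) * parents[:, 1]   # blend crossover
    children += rng.normal(0.0, 0.1, children.shape)         # mutation
    dE = fitness(children) - fitness(pop)
    # SA-style acceptance: always take improvements, sometimes take worse moves
    accept = (dE < 0) | (rng.random(len(pop)) < np.exp(np.clip(-dE / T, None, 0)))
    pop[accept] = children[accept]
    T *= 0.98                                                # cooling schedule
print("best fitness:", fitness(pop).min())
```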
The control algorithm improving performance of electric load simulator
NASA Astrophysics Data System (ADS)
Guo, Chenxia; Yang, Ruifeng; Zhang, Peng; Fu, Mengyao
2017-01-01
In order to improve the dynamic performance and signal tracking accuracy of an electric load simulator, the influence of the moment of inertia, stiffness, friction, gaps, and other factors on system performance was analyzed in this paper on the basis of the simulator's working principle. A PID controller based on a wavelet neural network was used to compensate for the friction nonlinearity, while a gap inverse model was used to compensate for the gap nonlinearity. The compensation results were simulated in MATLAB. After compensation, the system's tracking of a sine response improved: the tracking error was significantly reduced, accuracy was greatly improved, and the dynamic performance of the system was enhanced.
NASA Astrophysics Data System (ADS)
Karimabadi, Homa
2012-03-01
Recent advances in simulation technology and hardware are enabling breakthrough science in which many longstanding problems can now be addressed for the first time. In this talk, we focus on kinetic simulations of the Earth's magnetosphere and the magnetic reconnection process, the key mechanism that breaks the protective shield of the Earth's dipole field and allows the solar wind to enter the Earth's magnetosphere. This leads to so-called space weather, where storms on the Sun can affect space-borne and ground-based technological systems on Earth. The talk consists of three parts: (a) an overview of a new multi-scale simulation technique in which each computational grid is updated based on its own unique timestep; (b) a presentation of a new approach to data analysis that we refer to as Physics Mining, which combines data mining and computer vision algorithms with scientific visualization to extract physics from the resulting massive data sets; and (c) a presentation of several recent discoveries in studies of space plasmas, including the role of vortex formation and the resulting turbulence in magnetized plasmas.
Optimized simulations of Olami-Feder-Christensen systems using parallel algorithms
NASA Astrophysics Data System (ADS)
Dominguez, Rachele; Necaise, Rance; Montag, Eric
The sequential nature of the Olami-Feder-Christensen (OFC) model for earthquake simulations limits the benefits of parallel computing approaches because of the frequent communication required between processors. We developed a parallel version of the OFC algorithm for multi-core processors. Our data, even for relatively small system sizes and low numbers of processors, indicate that increasing the number of processors provides significantly faster simulations, producing more efficient results than previous attempts that used network-based Beowulf clusters. Our algorithm optimizes performance by exploiting the multi-core processor architecture, minimizing communication time in contrast to the networked Beowulf-cluster approaches. Our multi-core algorithm is the basis for a new algorithm using GPUs that will drastically increase the number of processors available. Previous studies incorporating realistic structural features of faults into OFC models have revealed spatial and temporal patterns observed in real earthquake systems. The computational advances presented here will allow for studying interacting networks of faults, rather than individual faults, further enhancing our understanding of the relationship between the Earth's structure and the triggering process. Support for this project comes from the Chenery Research Fund, the Rashkind Family Endowment, the Walter Williams Craigie Teaching Endowment, and the Schapiro Undergraduate Research Fellowship.
Klein, Daniel J; Baym, Michael; Eckhoff, Philip
2014-01-01
Decision makers in epidemiology and other disciplines are faced with the daunting challenge of designing interventions that will be successful with high probability and robust against a multitude of uncertainties. To facilitate the decision making process in the context of a goal-oriented objective (e.g., eradicate polio by [Formula: see text]), stochastic models can be used to map the probability of achieving the goal as a function of parameters. Each run of a stochastic model can be viewed as a Bernoulli trial in which "success" is returned if and only if the goal is achieved in simulation. However, each run can take a significant amount of time to complete, and many replicates are required to characterize each point in parameter space, so specialized algorithms are required to locate desirable interventions. To address this need, we present the Separatrix Algorithm, which strategically locates parameter combinations that are expected to achieve the goal with a user-specified probability of success (e.g. 95%). Technically, the algorithm iteratively combines density-corrected binary kernel regression with a novel information-gathering experiment design to produce results that are asymptotically correct and work well in practice. The Separatrix Algorithm is demonstrated on several test problems, and on a detailed individual-based simulation of malaria.
A Non-Intrusive Algorithm for Sensitivity Analysis of Chaotic Flow Simulations
NASA Technical Reports Server (NTRS)
Blonigan, Patrick J.; Wang, Qiqi; Nielsen, Eric J.; Diskin, Boris
2017-01-01
We demonstrate a novel algorithm for computing the sensitivity of statistics in chaotic flow simulations to parameter perturbations. The algorithm is non-intrusive but requires exposing an interface. Based on the principle of shadowing in dynamical systems, this algorithm is designed to reduce the effect of the sampling error in computing sensitivity of statistics in chaotic simulations. We compare the effectiveness of this method to that of the conventional finite difference method.
NASA Astrophysics Data System (ADS)
Roussel, Marc R.; Zhu, Rui
2006-12-01
The quantitative modeling of gene transcription and translation requires a treatment of two key features: stochastic fluctuations due to the limited copy numbers of key molecules (genes, RNA polymerases, ribosomes), and delayed output due to the time required for biopolymer synthesis. Recently proposed algorithms allow for efficient simulations of such systems. However, it is critical to know whether the results of delay stochastic simulations agree with those from more detailed models of the transcription and translation processes. We present a generalization of previous delay stochastic simulation algorithms which allows both for multiple delays and for distributions of delay times. We show that delay stochastic simulations closely approximate simulations of a detailed transcription model except when two-body effects (e.g. collisions between polymerases on a template strand) are important. Finally, we study a delay stochastic model of prokaryotic transcription and translation which reproduces observations from a recent experimental study in which a single gene was expressed under the control of a repressed lac promoter in E. coli cells. This demonstrates our ability to quantitatively model gene expression using these new methods.
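A minimal sketch of a delay stochastic simulation of the kind generalized in this work: initiation events fire at a constant rate, while each product appears only after a random, distribution-drawn delay tracked in a queue of scheduled completions. The rate constant and gamma delay distribution are illustrative assumptions, not the paper's transcription model.

```python
# Delay stochastic simulation: exponential initiation times plus a
# min-heap of scheduled (delayed) completion events.
import heapq
import numpy as np

rng = np.random.default_rng(1)
k_init = 0.5                  # initiation rate (assumed)
t, t_end, n_product = 0.0, 200.0, 0
pending = []                  # min-heap of scheduled completion times

while t < t_end:
    t_next_init = t + rng.exponential(1.0 / k_init)
    # process delayed completions that occur before the next initiation
    while pending and pending[0] <= t_next_init:
        t = heapq.heappop(pending)
        n_product += 1
    t = t_next_init
    delay = rng.gamma(shape=20.0, scale=0.5)   # distributed delay (assumed)
    heapq.heappush(pending, t + delay)

print("products completed:", n_product)
```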
Blocking Moving Window algorithm: Conditioning multiple-point simulations to hydrogeological data
NASA Astrophysics Data System (ADS)
Alcolea, Andres; Renard, Philippe
2010-08-01
Connectivity constraints and measurements of state variables contain valuable information on aquifer architecture. Multiple-point (MP) geostatistics allow one to simulate aquifer architectures, presenting a predefined degree of global connectivity. In this context, connectivity data are often disregarded. The conditioning to state variables is usually carried out by minimizing a suitable objective function (i.e., solving an inverse problem). However, the discontinuous nature of lithofacies distributions and of the corresponding objective function discourages the use of traditional sensitivity-based inversion techniques. This work presents the Blocking Moving Window algorithm (BMW), aimed at overcoming these limitations by conditioning MP simulations to hydrogeological data such as connectivity and heads. The BMW evolves iteratively until convergence: (1) MP simulation of lithofacies from geological/geophysical data and connectivity constraints, where only a random portion of the domain is simulated at every iteration (i.e., the blocking moving window, whose size is user-defined); (2) population of hydraulic properties at the intrafacies; (3) simulation of state variables; and (4) acceptance or rejection of the MP simulation depending on the quality of the fit of measured state variables. The outcome is a stack of MP simulations that (1) resemble a prior geological model depicted by a training image, (2) honor lithological data and connectivity constraints, (3) correlate with geophysical data, and (4) fit available measurements of state variables well. We analyze the performance of the algorithm on a 2-D synthetic example. Results show that (1) the size of the blocking moving window controls the behavior of the BMW, (2) conditioning to state variable data enhances dramatically the initial simulation (which accounts for geological/geophysical data only), and (3) connectivity constraints speed up the convergence but do not enhance the stack if the number of iterations
A Grand Canonical Monte Carlo-Brownian dynamics algorithm for simulating ion channels.
Im, W; Seefeld, S; Roux, B
2000-01-01
A computational algorithm based on Grand Canonical Monte Carlo (GCMC) and Brownian Dynamics (BD) is described to simulate the movement of ions in membrane channels. The proposed algorithm, GCMC/BD, allows the simulation of ion channels with a realistic implementation of boundary conditions of concentration and transmembrane potential. The method is consistent with a statistical mechanical formulation of the equilibrium properties of ion channels (Biophys. J. 77:139-153). The GCMC/BD algorithm is illustrated with simulations of simple test systems and of the OmpF porin of Escherichia coli. The approach provides a framework for simulating ion permeation in the context of detailed microscopic models. PMID:10920012
A parallel algorithm for switch-level timing simulation on a hypercube multiprocessor
NASA Technical Reports Server (NTRS)
Rao, Hariprasad Nannapaneni
1989-01-01
The parallel approach to speeding up simulation is studied, specifically the simulation of digital LSI MOS circuitry on the Intel iPSC/2 hypercube. The simulation algorithm is based on RSIM, an event driven switch-level simulator that incorporates a linear transistor model for simulating digital MOS circuits. Parallel processing techniques based on the concepts of Virtual Time and rollback are utilized so that portions of the circuit may be simulated on separate processors, in parallel for as large an increase in speed as possible. A partitioning algorithm is also developed in order to subdivide the circuit for parallel processing.
MODA: a new algorithm to compute optical depths in multidimensional hydrodynamic simulations
NASA Astrophysics Data System (ADS)
Perego, Albino; Gafton, Emanuel; Cabezón, Rubén; Rosswog, Stephan; Liebendörfer, Matthias
2014-08-01
Aims: We introduce the multidimensional optical depth algorithm (MODA) for the calculation of optical depths in approximate multidimensional radiative transport schemes, equally applicable to neutrinos and photons. Motivated by (but not limited to) neutrino transport in three-dimensional simulations of core-collapse supernovae and neutron star mergers, our method makes no assumptions about the geometry of the matter distribution, apart from expecting optically transparent boundaries. Methods: Based on local information about opacities, the algorithm determines an escape route that tends to minimize the optical depth without assuming any predefined paths for radiation. Its adaptivity makes it suitable for a variety of astrophysical settings with complicated geometry (e.g., core-collapse supernovae, compact binary mergers, tidal disruptions, star formation, etc.). We implement the MODA algorithm in both a Eulerian hydrodynamics code with a fixed, uniform grid and an SPH code, where we use the tree structure that is otherwise used for searching neighbors and calculating gravity. Results: In a series of numerical experiments, we compare the MODA results with analytically known solutions. We also use snapshots from actual 3D simulations and compare the results of MODA with those obtained with other methods, such as the global and local ray-by-ray methods. It turns out that MODA achieves excellent accuracy at moderate computational cost. In the appendix we also discuss implementation details and parallelization strategies.
Material growth in thermoelastic continua: Theory, algorithmics, and simulation
NASA Astrophysics Data System (ADS)
Vignes, Chet Monroe
Within the medical community, there has been increasing interest in understanding material growth in biomaterials. Material growth is the capability of a biomaterial to gain or lose mass. This research interest is driven by the host of health implications and medical problems related to this unique biomaterial property. Health providers are keen to understand the role of growth in healing and recovery so that surgical techniques, medical procedures, and physical therapy may be designed and implemented to stimulate healing and minimize recovery time. With this motivation, research seeks to identify and model mechanisms of material growth as well as growth-inducing factors in biomaterials. To this end, a theoretical formulation of stress-induced volumetric material growth in thermoelastic continua is developed. The theory derives, without the classical continuum mechanics assumption of mass conservation, the balance laws governing the mechanics of solids capable of growth. Also, a proposed extension of classical thermodynamic theory provides a foundation for developing general constitutive relations. The theory is consistent in the sense that classical thermoelastic continuum theory is embedded as a special case. Two growth mechanisms, a kinematic and a constitutive contribution, coupled in the most general case of growth, are identified. This identification allows for the commonly employed special cases of density-preserving growth and volume-preserving growth to be easily recovered. In the theory, material growth is regulated by a three-surface activation criterion and corresponding flow rules. A simple model for rate-independent finite growth is proposed based on this formulation. The associated algorithmic implementation, including a method for solving the underlying differential/algebraic equations for growth, is examined in the context of an implicit finite element method. Selected numerical simulations are presented that showcase the predictive capacity of the
Comparison of simulated quenching algorithms for design of diffractive optical elements.
Liu, J S; Caley, A J; Waddie, A J; Taghizadeh, M R
2008-02-20
We compare the performance of very fast simulated quenching; generalized simulated quenching, which unifies classical Boltzmann simulated quenching and Cauchy fast simulated quenching; and variable step size simulated quenching. The comparison is carried out by applying these algorithms to the design of diffractive optical elements for beam shaping of monochromatic, spatially incoherent light to a tightly focused image spot, whose central lobe should be smaller than the geometrical-optics limit. For generalized simulated quenching we choose values of visiting and acceptance shape parameters recommended by other investigators and use both a one-dimensional and a multidimensional Tsallis random number generator. We find that, under our test conditions, variable step size simulated quenching, which generates each parameter's new states based on the acceptance ratio instead of a certain theoretical probability distribution, produces the best results. Finally, we demonstrate experimentally a tightly focused image spot, with a central lobe 0.22-0.68 times the geometrical-optics limit and a relative sidelobe intensity 55%-60% that of the central maximum intensity.
Macro-micro interlocked simulation algorithm: an exemplification for aurora arc evolution
NASA Astrophysics Data System (ADS)
Sato, Tetsuya; Hasegawa, Hiroki; Ohno, Nobuaki
2009-01-01
Using an innovative holistic simulation algorithm that can self-consistently treat a system that evolves as cooperation between macroscopic and microscopic processes, the evolution of a colorful aurora arc is beautifully reproduced as the result of cooperation between the global field-aligned feedback instability of the coupled magnetosphere-ionosphere system and the ensuing microscopic ion-acoustic instability that generates electric double layers and accelerates aurora electrons. These results are in agreement with rocket and satellite observations. This shows that the proposed holistic algorithm could be a reliable tool to reveal complex real dramatic events and become, in the near future, a viable scientifically secure prediction tool for natural disasters such as earthquakes, landslides and floods caused by typhoons.
NASA Astrophysics Data System (ADS)
Yang, Yu; Xuping, Zhang
2007-03-01
A morphological definition of the similarity degree of gray-scale images and a general definition of morphological correlation (GMC) are proposed. Hardware and software designs for a compact joint transform correlator are presented in order to implement the GMC. Two modified general morphological correlation algorithms are proposed. The gray-scale image is decomposed into a set of binary image slices using a chosen decomposition method. In the first algorithm, the edge of each binary joint image slice is detected, the adjustability of the edge width is investigated, and the joint power spectra of the edges are summed. In the second algorithm, the joint power spectrum of each pair is binarized or thinned and then summed in one variant; in the other variant, the summation of the joint power spectra of the pairs is binarized or thinned. Computer-simulation results and recognition results on real face images indicate that the modified algorithms can improve the discrimination capability for gray-scale face images of high similarity.
Design and simulation of imaging algorithm for Fresnel telescopy imaging system
NASA Astrophysics Data System (ADS)
Lv, Xiao-yu; Liu, Li-ren; Yan, Ai-min; Sun, Jian-feng; Dai, En-wen; Li, Bing
2011-06-01
Fresnel telescopy (short for Fresnel telescopy full-aperture synthesized imaging ladar) is a new high-resolution active laser imaging technique. It is a variant of Fourier telescopy and optical scanning holography that uses Fresnel zone plates to scan the target. Compared with synthetic aperture imaging ladar (SAIL), Fresnel telescopy avoids the problems of time and space synchronization, which reduces the technical difficulty. In the one-dimensional (1D) scanning mode for a moving target, the spatial distribution of the sampling data after time-to-space transformation is non-uniform because of the relative motion between the target and the scanning beam. However, because the subsequent matched-filtering imaging algorithm relies on the fast Fourier transform (FFT), the data must lie on a regular, uniform grid. We use resampling interpolation to map the data onto a two-dimensional (2D) uniform grid, and the accuracy of the resampling interpolation largely determines the reconstruction quality. Imaging algorithms with different resampling interpolation schemes are analyzed, and computer simulations are also given. We obtain good reconstructions of the target, which proves that the designed imaging algorithm for the Fresnel telescopy imaging system is effective. This work is found to have substantial practical value and offers significant benefit for high-resolution Fresnel telescopy laser imaging ladar systems.
A fast algorithm for voxel-based deterministic simulation of X-ray imaging
NASA Astrophysics Data System (ADS)
Li, Ning; Zhao, Hua-Xia; Cho, Sang-Hyun; Choi, Jung-Gil; Kim, Myoung-Hee
2008-04-01
Deterministic methods based on the ray tracing technique are known as a powerful alternative to the Monte Carlo approach for virtual X-ray imaging. Algorithm speed is a critical issue when hundreds of images must be simulated, notably to simulate tomographic acquisition or, even more, X-ray radiographic video recordings. We present an algorithm for voxel-based deterministic simulation of X-ray imaging using voxel-driven forward and backward perspective projection operations and minimum bounding rectangles (MBRs). The algorithm is fast, easy to implement, and creates high-quality simulated radiographs; simulated radiographs can typically be obtained in split seconds on a simple personal computer. Program summary: Program title: X-ray. Catalogue identifier: AEAD_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAD_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 416 257. No. of bytes in distributed program, including test data, etc.: 6 018 263. Distribution format: tar.gz. Programming language: C (Visual C++). Computer: Any PC; tested on a DELL Precision 380 based on a Pentium D 3.20 GHz processor with 3.50 GB of RAM. Operating system: Windows XP. Classification: 14, 21.1. Nature of problem: Radiographic simulation of voxelized objects based on the ray tracing technique. Solution method: The core of the simulation is a fast routine for the calculation of ray-box intersections and minimum bounding rectangles, together with voxel-driven forward and backward perspective projection operations. Restrictions: Memory constraints. There are three programs in all. A. Program for test 3.1(1): Object and detector have axis-aligned orientation; B. Program for test 3.1(2): Object in arbitrary orientation; C. Program for test 3.2: Simulation of X-ray video
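The central geometric primitive such a simulator relies on is the ray/axis-aligned-box intersection. Below is a minimal sketch of the standard slab method, offered as an illustration only; the distributed program's actual routine also handles MBRs and perspective projections.

```python
# Slab-method ray/axis-aligned-box intersection test.
import numpy as np

def ray_box_intersect(origin, direction, box_min, box_max):
    """Return (t_near, t_far) along the ray, or None if it misses the box."""
    inv = 1.0 / direction                    # assumes no exactly-zero components
    t1 = (box_min - origin) * inv
    t2 = (box_max - origin) * inv
    t_near = np.max(np.minimum(t1, t2))      # last entry across the three slabs
    t_far = np.min(np.maximum(t1, t2))       # first exit across the three slabs
    if t_near <= t_far and t_far >= 0.0:
        return t_near, t_far
    return None

hit = ray_box_intersect(np.array([-2.0, 0.5, 0.5]),
                        np.array([1.0, 1e-9, 1e-9]),   # nearly axis-aligned ray
                        np.zeros(3), np.ones(3))
print(hit)   # entry/exit parameters along the ray, here (2.0, 3.0)
```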
Xu, Lin
2014-10-01
Sudden cardiac arrest is one of the critical clinical syndromes in emergency medicine, and cardiopulmonary resuscitation (CPR) is a necessary treatment for patients in sudden cardiac arrest. In order to effectively simulate human hemodynamics under AEI-CPR (active compression-decompression CPR coupled with enhanced external counterpulsation and an inspiratory impedance threshold valve), and to study the physiological parameters of each part of the lower limbs in more detail, a CPR simulation model established by Babbs was refined. The lower-limb compartment was divided into iliac, thigh, and calf segments, described by 15 physiological parameters. These 15 physiological parameters were then optimized using a genetic algorithm, and satisfactory simulation results were finally obtained.
A fast and efficient algorithm for Slater determinant updates in quantum Monte Carlo simulations.
Nukala, Phani K V V; Kent, P R C
2009-05-28
We present an efficient low-rank updating algorithm for updating the trial wave functions used in quantum Monte Carlo (QMC) simulations. The algorithm is based on low-rank updating of the Slater determinants. In particular, the computational complexity of the algorithm is O(kN) during the kth step compared to traditional algorithms that require O(N^2) computations, where N is the system size. For single-determinant trial wave functions the new algorithm is faster than the traditional O(N^2) Sherman-Morrison algorithm for up to O(N) updates. For multideterminant configuration-interaction-type trial wave functions of M+1 determinants, the new algorithm is significantly more efficient, saving both O(MN^2) work and O(MN^2) storage. The algorithm enables more accurate and significantly more efficient QMC calculations using configuration-interaction-type wave functions.
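For context, the classical Sherman-Morrison rank-1 update that the paper improves upon: replacing one row of the Slater matrix updates the inverse in O(N^2), and the determinant ratio needed for the Metropolis step falls out as a by-product. A minimal numerical check, with a random matrix and orbital vector as stand-ins for actual QMC quantities:

```python
# Sherman-Morrison update after replacing row r of A by a new vector v:
#   A_new = A + e_r (v - A[r])^T
import numpy as np

rng = np.random.default_rng(2)
N = 6
A = rng.normal(size=(N, N))
A_inv = np.linalg.inv(A)

r, v = 3, rng.normal(size=N)        # row to replace and its new value
u = np.zeros(N); u[r] = 1.0         # e_r
dv = v - A[r]

denom = 1.0 + dv @ A_inv[:, r]      # 1 + dv^T A^{-1} e_r
A_inv_new = A_inv - np.outer(A_inv @ u, dv @ A_inv) / denom

A_new = A.copy(); A_new[r] = v
assert np.allclose(A_inv_new, np.linalg.inv(A_new))
# the determinant ratio for the acceptance test is exactly `denom`
assert np.isclose(np.linalg.det(A_new) / np.linalg.det(A), denom)
print("determinant ratio:", denom)
```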
Synchronization Algorithms for Co-Simulation of Power Grid and Communication Networks
Ciraci, Selim; Daily, Jeffrey A.; Agarwal, Khushbu; Fuller, Jason C.; Marinovici, Laurentiu D.; Fisher, Andrew R.
2014-09-11
The ongoing modernization of power grids consists of integrating them with communication networks in order to achieve robust and resilient control of grid operations. To understand the operation of the new smart grid, one approach is to use simulation software. Unfortunately, current power grid simulators at best utilize inadequate approximations to simulate communication networks, if at all. Cooperative simulation of specialized power grid and communication network simulators promises to more accurately reproduce the interactions of real smart grid deployments. However, co-simulation is a challenging problem. A co-simulation must manage the exchange of information, including the synchronization of simulator clocks, between all simulators while maintaining adequate computational performance. This paper describes two new conservative algorithms for reducing the overhead of time synchronization, namely Active Set Conservative and Reactive Conservative. We provide a detailed analysis of their performance characteristics with respect to the current state of the art, including both conservative and optimistic synchronization algorithms. In addition, we provide guidelines for selecting the appropriate synchronization algorithm based on the requirements of the co-simulation. The newly proposed algorithms are shown to achieve as much as 14% and 63% improvement, respectively, over the existing conservative algorithm.
Use of a novel Hill-climbing genetic algorithm in protein folding simulations.
Cooper, Lee R; Corne, David W; Crabbe, M James C
2003-12-01
We have developed a novel Hill-climbing genetic algorithm (GA) for simulation of protein folding. The program (written in C) builds a set of Cartesian points to represent an unfolded polypeptide's backbone. The dihedral angles determining the chain's configuration are stored in an array of chromosome structures that is copied and then mutated. The fitness of the mutated chain's configuration is determined by its radius of gyration. A four-helix bundle was used to optimise simulation conditions, and the program was compared with other, larger, genetic algorithms on a variety of structures. The program ran 50% faster than other GA programs. Overall, tests on 100 non-redundant structures gave comparable results to other genetic algorithms, with the Hill-climbing program running from between 20 and 50% faster. Examples including crambin, cytochrome c, cytochrome B and hemerythrin gave good secondary structure fits with overall alpha carbon atom rms deviations of between 5 and 5.6 A with an optimised hydrophobic term in the fitness function.
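A toy version of the paper's setup, offered as an illustration only: a chain encoded by an array of angles (the "chromosome") is hill-climbed with point mutations, using the radius of gyration as the fitness. The 2-D unit-bond chain and mutation scale are assumptions standing in for the published C program's Cartesian backbone construction.

```python
# Hill climbing on chain angles with radius of gyration as fitness.
import numpy as np

rng = np.random.default_rng(3)

def chain_coords(angles):
    """Build a unit-bond 2-D chain from successive turning angles."""
    headings = np.cumsum(angles)
    steps = np.stack([np.cos(headings), np.sin(headings)], axis=1)
    return np.vstack([np.zeros(2), np.cumsum(steps, axis=0)])

def radius_of_gyration(xyz):
    return np.sqrt(np.mean(np.sum((xyz - xyz.mean(axis=0))**2, axis=1)))

angles = rng.uniform(-np.pi, np.pi, size=30)   # the "chromosome"
best = radius_of_gyration(chain_coords(angles))
for step in range(5000):
    trial = angles.copy()
    trial[rng.integers(len(trial))] += rng.normal(0, 0.3)   # point mutation
    rg = radius_of_gyration(chain_coords(trial))
    if rg < best:                               # hill climbing: improvements only
        angles, best = trial, rg
print(f"compact Rg after hill climbing: {best:.3f}")
```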
Simulating Future GPS Clock Scenarios with Two Composite Clock Algorithms
2010-11-01
Fast Plasma Instrument for MMS: Data Compression Simulation Results
NASA Technical Reports Server (NTRS)
Barrie, A.; Adrian, Mark L.; Yeh, P.-S.; Winkert, G. E.; Lobell, J. V.; Vinas, A.F.; Simpson, D. J.; Moore, T. E.
2008-01-01
The Magnetospheric Multiscale (MMS) mission will study small-scale reconnection structures and their rapid motions from closely spaced platforms using instruments capable of high angular, energy, and time resolution measurements. To meet these requirements, the Fast Plasma Instrument (FPI) consists of eight (8) identical half top-hat electron sensors, eight (8) identical ion sensors, and an Instrument Data Processing Unit (IDPU). The sensors (electron or ion) are grouped into pairs whose 6 deg x 180 deg fields-of-view (FOV) are set 90 deg apart. Each sensor is equipped with electrostatic aperture steering to allow the sensor to scan a 45 deg x 180 deg fan about its nominal viewing (0 deg deflection) direction. Each pair of sensors, known as the Dual Electron Spectrometer (DES) and the Dual Ion Spectrometer (DIS), occupies a quadrant on the MMS spacecraft, and the combination of the eight electron/ion sensors, employing aperture steering, images the full sky every 30-ms (electrons) and 150-ms (ions), respectively. In the highest temporal/spatial resolution mode of FPI, the DES complement of a given spacecraft generates 6.5-Mb s-1 of electron data while the DIS generates 1.1-Mb s-1 of ion data, yielding an FPI total data rate of 7.6-Mb s-1. The FPI electron/ion data is collected by the IDPU then transmitted to the Central Data Instrument Processor (CIDP) on the spacecraft for science interest ranking. Only data sequences that contain the greatest amount of temporal/spatial structure will be intelligently down-linked by the spacecraft. Currently, the FPI data rate allocation to the CIDP is 1.5-Mb s-1. Consequently, the FPI-IDPU must employ data/image compression to meet this CIDP telemetry allocation. Here, we present simulations of the CCSDS 122.0-B-1 algorithm-based compression of the FPI-DES electron data. Compression analysis is based upon a seed of re-processed Cluster/PEACE electron measurements. Topics to be discussed include: review of compression algorithm; data quality
Fast Plasma Instrument for MMS: Data Compression Simulation Results
NASA Astrophysics Data System (ADS)
Barrie, A.; Adrian, M. L.; Yeh, P.; Winkert, G.; Lobell, J.; Vinas, A. F.; Simpson, D. G.
2009-12-01
The Magnetospheric Multiscale (MMS) mission will study small-scale reconnection structures and their rapid motions from closely spaced platforms using instruments capable of high angular, energy, and time resolution measurements. To meet these requirements, the Fast Plasma Instrument (FPI) consists of eight (8) identical half top-hat electron sensors, eight (8) identical ion sensors, and an Instrument Data Processing Unit (IDPU). The sensors (electron or ion) are grouped into pairs whose 6° x 180° fields-of-view (FOV) are set 90° apart. Each sensor is equipped with electrostatic aperture steering to allow the sensor to scan a 45° x 180° fan about its nominal viewing (0° deflection) direction. Each pair of sensors, known as the Dual Electron Spectrometer (DES) and the Dual Ion Spectrometer (DIS), occupies a quadrant on the MMS spacecraft, and the combination of the eight electron/ion sensors, employing aperture steering, images the full sky every 30-ms (electrons) and 150-ms (ions), respectively. To probe the diffusion regions of reconnection, the highest temporal/spatial resolution mode of FPI results in the DES complement of a given spacecraft generating 6.5-Mb s-1 of electron data while the DIS generates 1.1-Mb s-1 of ion data, yielding an FPI total data rate of 7.6-Mb s-1. The FPI electron/ion data is collected by the IDPU then transmitted to the Central Data Instrument Processor (CIDP) on the spacecraft for science interest ranking. Only data sequences that contain the greatest amount of temporal/spatial structure will be intelligently down-linked by the spacecraft. Currently, the FPI data rate allocation to the CIDP is 1.5-Mb s-1. Consequently, the FPI-IDPU must employ data/image compression to meet this CIDP telemetry allocation. Here, we present updated simulations of the CCSDS 122.0-B-1 algorithm-based compression of the FPI-DES electron data as well as the FPI-DIS ion data. Compression analysis is based upon a seed of re-processed Cluster
Fast Plasma Instrument for MMS: Data Compression Simulation Results
NASA Astrophysics Data System (ADS)
Barrie, A. C.; Adrian, M. L.; Yeh, P.; Winkert, G. E.; Lobell, J. V.; Viňas, A. F.; Simpson, D. G.; Moore, T. E.
2008-12-01
The Magnetospheric Multiscale (MMS) mission will study small-scale reconnection structures and their rapid motions from closely spaced platforms using instruments capable of high angular, energy, and time resolution measurements. To meet these requirements, the Fast Plasma Instrument (FPI) consists of eight (8) identical half top-hat electron sensors, eight (8) identical ion sensors, and an Instrument Data Processing Unit (IDPU). The sensors (electron or ion) are grouped into pairs whose 6° × 180° fields-of-view (FOV) are set 90° apart. Each sensor is equipped with electrostatic aperture steering to allow the sensor to scan a 45° × 180° fan about its nominal viewing (0° deflection) direction. Each pair of sensors, known as the Dual Electron Spectrometer (DES) and the Dual Ion Spectrometer (DIS), occupies a quadrant on the MMS spacecraft, and the combination of the eight electron/ion sensors, employing aperture steering, images the full sky every 30-ms (electrons) and 150-ms (ions), respectively. To probe the diffusion regions of reconnection, the highest temporal/spatial resolution mode of FPI results in the DES complement of a given spacecraft generating 6.5-Mb s-1 of electron data while the DIS generates 1.1-Mb s-1 of ion data, yielding an FPI total data rate of 7.6-Mb s-1. The FPI electron/ion data is collected by the IDPU then transmitted to the Central Data Instrument Processor (CIDP) on the spacecraft for science interest ranking. Only data sequences that contain the greatest amount of temporal/spatial structure will be intelligently down-linked by the spacecraft. Currently, the FPI data rate allocation to the CIDP is 1.5-Mb s-1. Consequently, the FPI-IDPU must employ data/image compression to meet this CIDP telemetry allocation. Here, we present simulations of the CCSDS 122.0-B-1 algorithm-based compression of the FPI-DES electron data. Compression analysis is based upon a seed of re-processed Cluster/PEACE electron measurements. Topics to be
Ferrauto, Tomassino; Parisi, Domenico; Di Stefano, Gabriele; Baldassarre, Gianluca
2013-01-01
Organisms that live in groups, from microbial symbionts to social insects and schooling fish, exhibit a number of highly efficient cooperative behaviors, often based on role taking and specialization. These behaviors are relevant not only for the biologist but also for the engineer interested in decentralized collective robotics. We address these phenomena by carrying out experiments with groups of two simulated robots controlled by neural networks whose connection weights are evolved by using genetic algorithms. These algorithms and controllers are well suited to autonomously find solutions for decentralized collective robotic tasks based on principles of self-organization. The article first presents a taxonomy of role-taking and specialization mechanisms related to evolved neural network controllers. Then it introduces two cooperation tasks, which can be accomplished by either role taking or specialization, and uses these tasks to compare four different genetic algorithms to evaluate their capacity to evolve a suitable behavioral strategy, which depends on the task demands. Interestingly, only one of the four algorithms, which appears to have more biological plausibility, is capable of evolving role taking or specialization when they are needed. The results are relevant for both collective robotics and biology, as they can provide useful hints on the different processes that can lead to the emergence of specialization in robots and organisms.
LMS learning algorithms: misconceptions and new results on convergence.
Wang, Z Q; Manry, M T; Schiano, J L
2000-01-01
The Widrow-Hoff delta rule is one of the most popular rules used in training neural networks. It was originally proposed for the ADALINE, but has been successfully applied to a few nonlinear neural networks as well. Despite its popularity, there exist a few misconceptions on its convergence properties. In this paper we consider repetitive learning (i.e., a fixed set of samples are used for training) and provide an in-depth analysis in the least mean square (LMS) framework. Our main result is that contrary to common belief, the nonbatch Widrow-Hoff rule does not converge in general. It converges only to a limit cycle.
Kolakowska, A; Novotny, M A; Korniss, G
2003-04-01
We consider parallel simulations for asynchronous systems employing L processing elements that are arranged on a ring. Processors communicate only among the nearest neighbors and advance their local simulated time only if it is guaranteed that this does not violate causality. In simulations with no constraints, in the infinite-L limit the utilization scales [Korniss et al., Phys. Rev. Lett. 84, 1351 (2000)], but the width of the virtual time horizon diverges (i.e., the measurement phase of the algorithm does not scale). In this work, we introduce a moving Delta-window global constraint, which modifies the algorithm so that the measurement phase scales as well. We present results of systematic studies in which the system size (i.e., L and the volume load per processor) as well as the constraint are varied. The Delta constraint eliminates the extreme fluctuations in the virtual time horizon, provides a bound on its width, and controls the average progress rate. The width of the Delta window can serve as a tuning parameter that, for a given volume load per processor, could be adjusted to optimize the utilization, so as to maximize the efficiency. This result may find numerous applications in modeling the evolution of general spatially extended short-range interacting systems with asynchronous dynamics, including dynamic Monte Carlo studies.
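A toy model of the scheme, offered only as an illustration: each processing element advances its local virtual time when it is not ahead of its ring neighbors (the conservative rule) and not more than Delta ahead of the global minimum (the moving window). The window size and time increments below are arbitrary assumptions.

```python
# Conservative ring update with a moving Delta-window global constraint.
import numpy as np

rng = np.random.default_rng(4)
L, steps, delta = 64, 5000, 10.0
tau = np.zeros(L)                       # local virtual times on a ring

utilization = 0.0
for _ in range(steps):
    left, right = np.roll(tau, 1), np.roll(tau, -1)
    # advance only if not ahead of neighbors AND inside the Delta window
    can_update = (tau <= left) & (tau <= right) & (tau <= tau.min() + delta)
    tau[can_update] += rng.exponential(1.0, size=can_update.sum())
    utilization += can_update.mean()

print(f"mean utilization: {utilization/steps:.2f},"
      f" time-horizon width: {tau.std():.2f}")
```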
New Algorithms for Computing the Time-to-Collision in Freeway Traffic Simulation Models
Hou, Jia; List, George F.; Guo, Xiucheng
2014-01-01
Ways to estimate the time-to-collision are explored. In the context of traffic simulation models, classical lane-based notions of vehicle location are relaxed and new, fast, and efficient algorithms are examined. With trajectory conflicts being the main focus, computational procedures are explored which use a two-dimensional coordinate system to track the vehicle trajectories and assess conflicts. Vector-based kinematic variables are used to support the calculations. Algorithms based on boxes, circles, and ellipses are considered. Their performance is evaluated in the context of computational complexity and solution time. Results from these analyses suggest promise for effective and efficient analyses. A combined computation process is found to be very effective. PMID:25628650
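Of the three geometries compared, the circle representation admits a closed form: the time-to-collision is the smallest nonnegative root of |dp + dv t| = r1 + r2, for relative position dp and relative velocity dv. A minimal sketch of that calculation; the vehicle states below are made-up values, not the paper's test data.

```python
# Circle-based time-to-collision via the quadratic |dp + dv*t|^2 = (r1+r2)^2.
import numpy as np

def time_to_collision(p1, v1, r1, p2, v2, r2):
    dp, dv, R = p2 - p1, v2 - v1, r1 + r2
    a, b, c = dv @ dv, 2.0 * dp @ dv, dp @ dp - R * R
    disc = b * b - 4.0 * a * c
    if a == 0.0 or disc < 0.0:
        return np.inf                       # no relative motion or no contact
    t = (-b - np.sqrt(disc)) / (2.0 * a)    # earlier of the two roots
    return t if t >= 0.0 else np.inf        # collision must lie in the future

ttc = time_to_collision(np.array([0.0, 0.0]), np.array([10.0, 0.0]), 1.0,
                        np.array([50.0, 2.0]), np.array([0.0, 0.0]), 1.0)
print(f"TTC: {ttc:.2f} s")   # 5.00 s for this grazing encounter
```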
Farmer, Terry G.; Edgar, Thomas F.
2009-01-01
The effectiveness of closed-loop insulin infusion algorithms is assessed for three different mathematical models describing insulin and glucose dynamics within a Type I diabetes patient. Simulations are performed to assess the effectiveness of proportional plus integral plus derivative (PID) control, feedforward control, and a physiologically-based control system with respect to maintaining normal glucose levels during a meal and during exercise. Control effectiveness is assessed by comparing the simulated response to a simulation of a healthy patient during both a meal and exercise and establishing maximum and minimum glucose levels and insulin infusion levels, as well as maximum duration of hyperglycemia. Controller effectiveness is assessed within the minimal model, the Sorensen model, and the Hovorka model. Results showed that no type of control was able to maintain normal conditions when simulations were performed using the minimal model. For both the Sorensen model and the Hovorka model, proportional control was sufficient to maintain normal glucose levels. Given published clinical data showing the ineffectiveness of PID control in patients, the work demonstrates that controller success based on simulation results can be misleading, and that future work should focus on addressing the model discrepancies. PMID:20161147
Foam flooding reservoir simulation algorithm improvement and application
NASA Astrophysics Data System (ADS)
Wang, Yining; Wu, Xiaodong; Wang, Ruihe; Lai, Fengpeng; Zhang, Hanhan
2014-05-01
As one of the important enhanced oil recovery (EOR) technologies, foam flooding is being used more and more widely in oil field development. In order to describe and predict foam flooding, researchers at home and abroad have established a number of mathematical models of foam flooding (mechanistic, empirical, and semi-empirical models). Empirical models require less data and are convenient to apply, but their accuracy is insufficient. The aggregate equilibrium model can describe foam generation, bursting, and coalescence mechanistically, but it is very difficult to specify accurately. The present work considers the effects of the critical water saturation, the critical foaming-agent concentration, and the critical oil saturation on the sealing ability of the foam, as well as the effect of oil saturation on the resistance factor used to obtain the gas-phase relative permeability; the results were calibrated against laboratory tests, so the accuracy is higher. Conceptual reservoir-development simulations and practical field application show that the resulting calculations are more accurate.
NASA Astrophysics Data System (ADS)
Islam, Sirajul; Talukdar, Bipul
2016-09-01
A Linked Simulation-Optimization (LSO) model based on a Clonal Selection Algorithm (CSA) was formulated for application in conjunctive irrigation management. A series of measures were taken to reduce the computational burden associated with the LSO approach. Certain modifications were made to the formulated CSA so as to decrease the number of function evaluations. In addition, a simple problem-specific code for a two-dimensional groundwater flow simulation model was developed. The flow model was further simplified by a novel area-reduction approach in order to save computational time in simulation. The LSO model was applied to the irrigation command of the Pagladiya Dam Project in Assam, India. To evaluate the performance of the CSA, a Genetic Algorithm (GA) was used as a comparison baseline. The results from the CSA compared well with those from the GA. In fact, the CSA consumed less computational time than the GA while converging to the optimal solution, owing to the modifications incorporated in it.
The Research on Web-Based Testing Environment Using Simulated Annealing Algorithm
2014-01-01
Computerized evaluation is now one of the most important methods of diagnosing learning; with the application of artificial intelligence techniques in the field of evaluation, computerized adaptive testing has gradually become one of the most important evaluation methods. In such a test, the computer dynamically updates the learner's ability estimate and selects tailored items from the item pool. To meet the needs of the test, the system must be implemented with relatively high efficiency. To address this problem, we propose a novel web-based testing environment based on a simulated annealing algorithm. In developing the system, we compared, through a series of experiments, the efficiency and efficacy of the simulated annealing method against other methods. The experimental results show that this method chooses nearly optimal items from the item bank for learners, meets a variety of assessment needs, is reliable, and judges learner ability validly. In addition, using the simulated annealing algorithm to handle the computational complexity of the system greatly improves the efficiency of item selection and yields near-optimal solutions. PMID:24959600
Thermoluminescence curves simulation using genetic algorithm with factorial design
NASA Astrophysics Data System (ADS)
Popko, E. A.; Weinstein, I. A.
2016-05-01
The evolutionary approach is an effective optimization tool for the numerical analysis of thermoluminescence (TL) processes, used to assess the microparameters of kinetic models and to determine their effects on the shape of TL peaks. In this paper, a procedure for tuning a genetic algorithm (GA) is presented. The approach is based on a multifactorial experiment and allows choosing the evolutionary operators that provide the most efficient algorithm performance. The proposed method is tested on the "one trap-one recombination center" (OTOR) model as an example, and its advantages for approximating experimental TL curves are shown.
Dai, Chenyun; Li, Yejin; Christie, Anita; Bonato, Paolo; McGill, Kevin C; Clancy, Edward A
2015-01-01
The reliability of clinical and scientific information provided by algorithms that automatically decompose the electromyogram (EMG) depends on the algorithms' accuracies. We used experimental and simulated data to assess the agreement and accuracy of three publicly available decomposition algorithms-EMGlab (McGill , 2005) (single channel data only), Fuzzy Expert (Erim and Lim, 2008) and Montreal (Florestal , 2009). Data consisted of quadrifilar needle EMGs from the tibialis anterior of 12 subjects at 10%, 20% and 50% maximum voluntary contraction (MVC); single channel needle EMGs from the biceps brachii of 10 controls and 10 patients during contractions just above threshold; and matched simulated data. Performance was assessed via agreement between pairs of algorithms for experimental data and accuracy with respect to the known decomposition for simulated data. For the quadrifilar experimental data, median agreements between the Montreal and Fuzzy Expert algorithms at 10%, 20%, and 50% MVC were 95%, 86%, and 64%, respectively. For the single channel control and patient data, median agreements between the three algorithm pairs were statistically similar at ∼ 97% and ∼ 92%, respectively. Accuracy on the simulated data exceeded this performance. Agreement/accuracy was strongly related to the Decomposability Index (Florestal , 2009). When agreement was high between algorithm pairs applied to simulated data, so was accuracy.
Temporal Gillespie Algorithm: Fast Simulation of Contagion Processes on Time-Varying Networks
Vestergaard, Christian L.; Génois, Mathieu
2015-01-01
Stochastic simulations are one of the cornerstones of the analysis of dynamical processes on complex networks, and are often the only accessible way to explore their behavior. The development of fast algorithms is paramount to allow large-scale simulations. The Gillespie algorithm can be used for fast simulation of stochastic processes, and variants of it have been applied to simulate dynamical processes on static networks. However, its adaptation to temporal networks remains non-trivial. We here present a temporal Gillespie algorithm that solves this problem. Our method is applicable to general Poisson (constant-rate) processes on temporal networks, stochastically exact, and up to multiple orders of magnitude faster than traditional simulation schemes based on rejection sampling. We also show how it can be extended to simulate non-Markovian processes. The algorithm is easily applicable in practice, and as an illustration we detail how to simulate both Poissonian and non-Markovian models of epidemic spreading. Namely, we provide pseudocode and its implementation in C++ for simulating the paradigmatic Susceptible-Infected-Susceptible and Susceptible-Infected-Recovered models and a Susceptible-Infected-Recovered model with non-constant recovery rates. For empirical networks, the temporal Gillespie algorithm is here typically from 10 to 100 times faster than rejection sampling. PMID:26517860
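A compact sketch of the temporal Gillespie idea for SIS spreading, hedged as an illustration rather than the authors' published C++ implementation: a unit-rate exponential "clock" is drawn once, and the total event rate is integrated across network snapshots until the clock is exhausted, at which point one transition is selected proportionally to its rate. The random snapshot contacts and the values of beta and mu are assumptions standing in for empirical data.

```python
# Temporal Gillespie sketch: SIS dynamics on a sequence of edge-list snapshots.
import numpy as np

rng = np.random.default_rng(5)
N, T, dt, beta, mu = 50, 400, 1.0, 0.3, 0.05
snapshots = [[tuple(rng.choice(N, 2, replace=False)) for _ in range(30)]
             for _ in range(T)]              # assumed time-varying contacts

infected = np.zeros(N, dtype=bool)
infected[rng.choice(N, 3, replace=False)] = True
clock = rng.exponential(1.0)                 # integrated-rate threshold

for edges in snapshots:
    t_left = dt                              # time remaining in this snapshot
    while True:
        si = [(u, v) for u, v in edges if infected[u] != infected[v]]
        rates = np.array([beta] * len(si) + [mu] * int(infected.sum()))
        total = rates.sum()
        if total == 0.0 or clock > total * t_left:
            clock -= total * t_left          # no event before the snapshot ends
            break
        t_left -= clock / total              # advance to the event time
        idx = rng.choice(len(rates), p=rates / total)
        if idx < len(si):                    # infection along an S-I contact
            u, v = si[idx]
            infected[u] = infected[v] = True
        else:                                # recovery of one infected node
            infected[np.flatnonzero(infected)[idx - len(si)]] = False
        clock = rng.exponential(1.0)         # draw a fresh clock

print("final prevalence:", infected.mean())
```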
The EM/MPM algorithm for segmentation of textured images: analysis and further experimental results.
Comer, M L; Delp, E J
2000-01-01
In this paper we present new results relative to the "expectation-maximization/maximization of the posterior marginals" (EM/MPM) algorithm for simultaneous parameter estimation and segmentation of textured images. The EM/MPM algorithm uses a Markov random field model for the pixel class labels and alternately approximates the MPM estimate of the pixel class labels and estimates parameters of the observed image model. The goal of the EM/MPM algorithm is to minimize the expected value of the number of misclassified pixels. We present new theoretical results in this paper which show that the algorithm can be expected to achieve this goal, to the extent that the EM estimates of the model parameters are close to the true values of the model parameters. We also present new experimental results demonstrating the performance of the EM/MPM algorithm.
DESIGNING SUSTAINABLE PROCESSES WITH SIMULATION: THE WASTE REDUCTION (WAR) ALGORITHM
The WAR Algorithm, a methodology for determining the potential environmental impact (PEI) of a chemical process, is presented with modifications that account for the PEI of the energy consumed within that process. From this theory, four PEI indexes are used to evaluate the envir...
Simulated annealing algorithm for solving chambering student-case assignment problem
NASA Astrophysics Data System (ADS)
Ghazali, Saadiah; Abdul-Rahman, Syariza
2015-12-01
The project assignment problem is a popular practical problem that arises in many settings. The challenge of solving it grows with the complexity of preferences, the existence of real-world constraints, and increasing problem size. This study focuses on solving a chambering student-case assignment problem, classified as a project assignment problem, using a simulated annealing algorithm. The project assignment problem is considered a hard combinatorial optimization problem, and solving it using a metaheuristic approach has the advantage of returning a good solution in a reasonable time. The problem of assigning chambering students to cases has never been addressed in the literature before. In this setting, law graduates must read in chambers before they are qualified to become legal counsel, so assigning chambering students to cases is critically needed, especially when many preferences are involved. Hence, this study presents a preliminary study of the proposed project assignment problem. The objective of the study is to minimize the total completion time for all students in solving the given cases. A minimum-cost greedy heuristic is employed to construct a feasible initial solution, and the search then proceeds with a simulated annealing algorithm to further improve solution quality. Analysis of the obtained results shows that the proposed simulated annealing algorithm greatly improves the solution constructed by the minimum-cost greedy heuristic. Hence, this research demonstrates the advantages of solving the project assignment problem using metaheuristic techniques.
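A minimal sketch of simulated annealing on a toy version of the assignment problem, shown after the abstract above. Since the study's data and exact objective are not given here, the sketch assumes random case workloads and minimizes the largest per-student load as a stand-in for the total-completion-time objective.

```python
# Simulated annealing for a toy student-case assignment problem.
import numpy as np

rng = np.random.default_rng(6)
n_students, n_cases = 8, 60
work = rng.uniform(1, 10, size=n_cases)          # effort per case (assumed)
assign = rng.integers(0, n_students, size=n_cases)

def objective(a):
    """Largest total load carried by any one student."""
    return np.bincount(a, weights=work, minlength=n_students).max()

cur = objective(assign)
T = 5.0                                          # initial temperature
for it in range(20000):
    trial = assign.copy()
    trial[rng.integers(n_cases)] = rng.integers(n_students)  # move one case
    new = objective(trial)
    if new < cur or rng.random() < np.exp((cur - new) / T):  # Metropolis rule
        assign, cur = trial, new
    T = max(T * 0.9995, 1e-3)                    # geometric cooling

print(f"max student load after SA: {cur:.2f}"
      f" (lower bound {work.sum()/n_students:.2f})")
```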
NASA Astrophysics Data System (ADS)
Albert, J.
2016-12-01
Stochastic simulation of reaction networks is limited by two factors: accuracy and time. The Gillespie algorithm (GA) is a Monte Carlo-type method for constructing probability distribution functions (pdf) from statistical ensembles. Its accuracy is therefore a function of the computing time. The chemical master equation (CME) is a more direct route to obtaining the pdfs, however, solving the CME is generally very difficult for large networks. We propose a method that combines both approaches in order to simulate stochastically a part of a network. The network is first divided into two parts: A and B. Part A is simulated using the GA, while the solution of the CME for part B, with initial conditions imposed by simulation results of part A, is fed back into the GA. This cycle is then repeated a desired number of times. The advantage of this synergy between the two approaches is: 1) the GA needs to simulate only a part of the whole network, and hence is faster, and 2) the CME is necessarily simpler to solve, as the part of the network it describes is smaller. We will demonstrate on two examples - a positive feedback (genetic switch) and oscillations driven by a negative feedback - the utility of this approach.
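A sketch of the CME half of the proposed cycle, under stated assumptions: for a small subnetwork (here a reversible isomerization A <-> B with N molecules and assumed rates), the master equation dp/dt = Qp can be solved directly by exponentiating the generator, with the initial condition supplied by the Gillespie-simulated part of the network.

```python
# Direct CME solution for an isomerization subnetwork via matrix exponential.
import numpy as np
from scipy.linalg import expm

N, k_f, k_b, t = 30, 1.0, 0.5, 2.0      # molecules, rates, time (assumed)
# state n = number of A molecules; generator Q[m, n] = rate of n -> m
Q = np.zeros((N + 1, N + 1))
for n in range(N + 1):
    if n > 0:
        Q[n - 1, n] = k_f * n           # A -> B, rate proportional to #A
    if n < N:
        Q[n + 1, n] = k_b * (N - n)     # B -> A, rate proportional to #B
    Q[n, n] = -(k_f * n + k_b * (N - n))

p0 = np.zeros(N + 1)
p0[N] = 1.0                 # all A initially (would come from the GA part)
p_t = expm(Q * t) @ p0      # full pdf over states at time t

mean = np.arange(N + 1) @ p_t
print(f"mean A at t={t}: {mean:.2f}"
      f" (equilibrium {N * k_b / (k_f + k_b):.2f})")
```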
Measurement and Simulation Results of Ti Coated Microwave Absorber
Sun, Ding; McGinnis, Dave; /Fermilab
1998-11-01
When microwave absorbers are placed in a waveguide, a layer of resistive coating can change the distribution of the E-M fields and affect the attenuation of the signal within the absorbers. In order to study this effect, microwave absorbers (TT2-111) were coated with a titanium thin film. This report documents the coating process and the measurement results. The measurement results have been used to check simulation results from the commercial software HFSS (High Frequency Structure Simulator).
A Linac Simulation Code for Macro-Particles Tracking and Steering Algorithm Implementation
Sun, Yipeng
2012-05-03
In this paper, a linac simulation code written in Fortran90 is presented and several simulation examples are given. The code is optimized to implement linac alignment and steering algorithms and to evaluate accelerator errors such as RF phase and acceleration gradient errors, and quadrupole and BPM misalignments. It can track a single particle or a bunch of particles through normal linear accelerator elements such as quadrupoles, RF cavities, dipole correctors, and drift spaces. A one-to-one steering algorithm and a global alignment (steering) algorithm are implemented in the code.
NASA Astrophysics Data System (ADS)
Lambrakos, S. G.; Boris, J. P.; Oran, E. S.; Chandrasekhar, I.; Nagumo, M.
1989-12-01
We present a new modification of the SHAKE algorithm, MSHAKE, that maintains fixed distances in molecular dynamics simulations of polyatomic molecules. The MSHAKE algorithm, which is applied by modifying the leapfrog algorithm to include forces of constraint, computes an initial estimate of constraint forces, then iteratively corrects the constraint forces required to maintain the fixed distances. Thus MSHAKE should always converge more rapidly than SHAKE. Further, the explicit determination of the constraint forces at each timestep makes MSHAKE convenient for use in molecular dynamics simulations where bond stress is a significant dynamical quantity.
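For reference, a minimal sketch of the classic SHAKE iteration that MSHAKE accelerates: after an unconstrained drift step, positions are corrected iteratively along the old bond direction until the fixed distance is restored. A single diatomic with equal masses and no external forces is assumed; this is an illustration of the iterative scheme, not the MSHAKE algorithm itself.

```python
# SHAKE-style iteration for one fixed bond length within a leapfrog-like step.
import numpy as np

d, dt, m = 1.0, 0.01, 1.0                      # bond length, timestep, mass
r1, r2 = np.array([0.0, 0.0]), np.array([1.0, 0.0])
v1, v2 = np.array([0.0, 1.0]), np.array([0.0, -1.0])   # rigid rotation

for step in range(1000):
    r1_new, r2_new = r1 + dt * v1, r2 + dt * v2         # unconstrained drift
    bond_old = r2 - r1                                  # reference direction
    for _ in range(50):                                 # SHAKE iterations
        diff = r2_new - r1_new
        err = diff @ diff - d * d                       # constraint violation
        if abs(err) < 1e-12:
            break
        # linearized Lagrange multiplier for this constraint
        g = err / (2.0 * (diff @ bond_old) * (1.0/m + 1.0/m))
        r1_new += g / m * bond_old                      # equal and opposite
        r2_new -= g / m * bond_old                      # corrections
    v1, v2 = (r1_new - r1) / dt, (r2_new - r2) / dt     # constrained velocities
    r1, r2 = r1_new, r2_new

print("final bond length:", np.linalg.norm(r2 - r1))    # stays at ~1.0
```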
Electron-cloud simulation results for the SPS and recent results for the LHC
Furman, M.A.; Pivi, M.T.F.
2002-06-19
We present an update of computer simulation results for some features of the electron cloud at the Large Hadron Collider (LHC) and recent simulation results for the Super Proton Synchrotron (SPS). We focus on the sensitivity of the power deposition on the LHC beam screen to the emitted electron spectrum, which we study by means of a refined secondary electron (SE) emission model recently included in our simulation code.
A Super-Resolution Algorithm for Enhancement of FLASH LIDAR Data: Flight Test Results
NASA Technical Reports Server (NTRS)
Bulyshev, Alexander; Amzajerdian, Farzin; Roback, Eric; Reisse, Robert
2014-01-01
This paper describes the results of a 3D super-resolution algorithm applied to the range data obtained from a recent Flash Lidar helicopter flight test. The flight test was conducted by NASA's Autonomous Landing and Hazard Avoidance Technology (ALHAT) project over a simulated lunar terrain facility at NASA Kennedy Space Center. ALHAT is developing the technology for safe autonomous landing on the surface of celestial bodies: the Moon, Mars, and asteroids. One of the test objectives was to verify the ability of the 3D super-resolution technique to generate high-resolution digital elevation models (DEMs) and to determine time-resolved relative positions and orientations of the vehicle. The 3D super-resolution algorithm was developed earlier and tested in computational modeling, in laboratory experiments, and in a few dynamic experiments using a moving truck. Prior to the helicopter flight test campaign, a 100 m x 100 m hazard field was constructed containing most of the relevant extraterrestrial hazards: slopes, rocks, and craters of different sizes. Data were collected during the flight and then processed by the super-resolution code. A detailed DEM of the hazard field was constructed using independent measurements, to be used for comparison. ALHAT navigation system data were used to verify the ability of the super-resolution method to provide accurate relative navigation information; namely, the six-degree-of-freedom state vector of the instrument as a function of time was recovered from the super-resolution data. The comparisons show that the super-resolution method can construct high-quality DEMs and allows hazards such as rocks and craters to be identified in accordance with ALHAT requirements.
Computer simulation results of attitude estimation of earth orbiting satellites
NASA Technical Reports Server (NTRS)
Kou, S. R.
1976-01-01
Computer simulation results of attitude estimation of Earth-orbiting satellites (including the Space Telescope) subjected to environmental disturbances and noise are presented. A decomposed linear recursive filter and a Kalman filter were used as estimation tools. Six programs were developed for this simulation; all were written in the BASIC language and run on HP 9830A and HP 9866A computers. Simulation results show that the decomposed linear recursive filter is accurate in estimation and fast in response time. Furthermore, for higher-order systems, this filter has computational advantages (i.e., smaller integration and roundoff errors) over a Kalman filter.
NASA Astrophysics Data System (ADS)
Roh, Min K.; Daigle, Bernie J.; Gillespie, Dan T.; Petzold, Linda R.
2011-12-01
In recent years there has been substantial growth in the development of algorithms for characterizing rare events in stochastic biochemical systems. Two such algorithms, the state-dependent weighted stochastic simulation algorithm (swSSA) and the doubly weighted SSA (dwSSA) are extensions of the weighted SSA (wSSA) by H. Kuwahara and I. Mura [J. Chem. Phys. 129, 165101 (2008)], 10.1063/1.2987701. The swSSA substantially reduces estimator variance by implementing system state-dependent importance sampling (IS) parameters, but lacks an automatic parameter identification strategy. In contrast, the dwSSA provides for the automatic determination of state-independent IS parameters, thus it is inefficient for systems whose states vary widely in time. We present a novel modification of the dwSSA—the state-dependent doubly weighted SSA (sdwSSA)—that combines the strengths of the swSSA and the dwSSA without inheriting their weaknesses. The sdwSSA automatically computes state-dependent IS parameters via the multilevel cross-entropy method. We apply the method to three examples: a reversible isomerization process, a yeast polarization model, and a lac operon model. Our results demonstrate that the sdwSSA offers substantial improvements over previous methods in terms of both accuracy and efficiency.
Bhattacharjee, Deblina; Paul, Anand; Kim, Jeong Hong; Kim, Mucheol
2016-01-01
The analysis of leukocyte images has drawn interest from the fields of both medicine and computer vision for quite some time, and different techniques have been applied to automate the manual analysis and classification of such images. Manual analysis of blood samples to identify leukocytes is time-consuming and susceptible to error because of the varied morphological features of the cells. In this article, the nature-inspired plant growth simulation algorithm is applied to optimize the image-processing technique of object localization in medical images of leukocytes. The paper presents a random bionic algorithm for the automated detection of white blood cells embedded in cluttered smear and stained images of blood samples; it uses a fitness function that measures the resemblance of a generated candidate solution to an actual leukocyte. The set of candidate solutions evolves through successive iterations as the proposed algorithm proceeds, guaranteeing their fit with the actual leukocytes outlined in the edge map of the image. The higher precision and sensitivity of the proposed scheme compared with existing methods are validated by experimental results on blood cell images. The proposed method reduces the feasible set of growth points in each iteration, thereby reducing the run time required for objective-function evaluation and reaching the goal state in minimum time and within the desired constraints.
Electron-cloud simulation results for the PSR and SNS
Pivi, M.; Furman, M.A.
2002-07-08
We present recent simulation results for the main features of the electron cloud in the storage ring of the Spallation Neutron Source (SNS) at Oak Ridge, and updated results for the Proton Storage Ring (PSR) at Los Alamos. In particular, a complete refined model of the secondary emission process, including the so-called true-secondary, rediffused, and backscattered electrons, has been included in the simulation code.
Simulation of Anderson localization in a random fiber using a fast Fresnel diffraction algorithm
NASA Astrophysics Data System (ADS)
Davis, Jeffrey A.; Cottrell, Don M.
2016-06-01
Anderson localization has been previously demonstrated both theoretically and experimentally for transmission of a Gaussian beam through long distances in an optical fiber consisting of a random array of smaller fibers, each having either a higher or lower refractive index. However, the computational times were extremely long. We show how to simulate these results using a fast Fresnel diffraction algorithm. In each iteration of this approach, the light passes through a phase mask, undergoes Fresnel diffraction over a small distance, and then passes through the same phase mask. We also show results where we use a binary amplitude mask at the input that selectively illuminates either the higher or the lower index fibers. Additionally, we examine imaging of various sized objects through these fibers. In all cases, our results are consistent with other computational methods and experimental results, but with a much reduced computational time.
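The iteration described (mask, short Fresnel step, mask again) is a standard split-step beam-propagation pattern. A minimal sketch follows, with a hypothetical random binary phase mask standing in for the random fiber cross-section; the actual mask, step size, and sampling used by the authors are not given in the abstract.

```python
import numpy as np

def fresnel_step(field, phase_mask, dz, wavelength, dx):
    """One iteration of the scheme described: apply the phase mask,
    Fresnel-diffract over a short distance dz, apply the mask again."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    H = np.exp(-1j * np.pi * wavelength * dz * (FX**2 + FY**2))  # Fresnel kernel
    field = np.fft.ifft2(np.fft.fft2(field * phase_mask) * H)
    return field * phase_mask

# hypothetical random binary phase mask standing in for the fiber array
n, dx = 256, 1e-6
rng = np.random.default_rng(0)
mask = np.exp(1j * np.pi * rng.integers(0, 2, (n, n)) / 2)
g = np.exp(-((np.arange(n) - n / 2) ** 2) / (2 * 15.0 ** 2))
field = np.outer(g, g).astype(complex)        # input Gaussian beam
for _ in range(200):                          # propagate step by step
    field = fresnel_step(field, mask, dz=5e-6, wavelength=633e-9, dx=dx)
```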
On the rejection-based algorithm for simulation and analysis of large-scale reaction networks
Thanh, Vo Hong; Zunino, Roberto; Priami, Corrado
2015-06-28
Stochastic simulation for in silico studies of large biochemical networks requires a great amount of computational time. We recently proposed a new exact simulation algorithm, called the rejection-based stochastic simulation algorithm (RSSA) [Thanh et al., J. Chem. Phys. 141(13), 134116 (2014)], to improve simulation performance by postponing and collapsing as much as possible the propensity updates. In this paper, we analyze the performance of this algorithm in detail, and improve it for simulating large-scale biochemical reaction networks. We also present a new algorithm, called simultaneous RSSA (SRSSA), which generates many independent trajectories simultaneously for the analysis of the biochemical behavior. SRSSA improves simulation performance by utilizing a single data structure across simulations to select reaction firings and forming trajectories. The memory requirement for building and storing the data structure is thus independent of the number of trajectories. The updating of the data structure when needed is performed collectively in a single operation across the simulations. The trajectories generated by SRSSA are exact and independent of each other by exploiting the rejection-based mechanism. We test our new improvement on real biological systems with a wide range of reaction networks to demonstrate its applicability and efficiency.
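The rejection mechanism at the heart of RSSA can be sketched as follows, assuming propensities that are monotone in the species counts so that evaluating them at the bounds of a fluctuation interval around the state yields valid lower and upper bounds. The bound-maintenance and update-postponing machinery of the full algorithm is omitted.

```python
import numpy as np

def rssa_step(x, rates, lo_state, hi_state, rng=np.random.default_rng(0)):
    """One RSSA firing: pick a candidate reaction from propensity upper
    bounds, then accept or reject it, evaluating the exact propensity
    only when the cheap test is inconclusive. The full algorithm also
    recomputes the bounds when x leaves [lo_state, hi_state] (not shown)."""
    a_hi = np.array([r(hi_state) for r in rates])   # upper propensity bounds
    a_lo = np.array([r(lo_state) for r in rates])   # lower propensity bounds
    a0_hi = a_hi.sum()
    tau = 0.0
    while True:
        tau += rng.exponential(1.0 / a0_hi)         # each trial consumes time
        j = rng.choice(len(rates), p=a_hi / a0_hi)  # candidate reaction
        u = rng.random() * a_hi[j]
        if u <= a_lo[j]:                            # accept without evaluation
            return tau, j
        if u <= rates[j](x):                        # exact propensity only here
            return tau, j
```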
2016-02-01
Sensitivity Simulation of Compressed Sensing Based Electronic Warfare Receiver Using Orthogonal Matching Pursuit Algorithm (AFRL-RY-WP-TR-2016-0006), August 2014. Report contains color. Abstract (fragment): The wideband coverage of the traditional fast Fourier transform (FFT)-based electronic warfare...
MIA computer simulation test results report. [space shuttle avionics
NASA Technical Reports Server (NTRS)
Unger, G. E.
1974-01-01
Results of the first noise susceptibility computer simulation tests of the complete MIA receiver analytical model are presented. Computer simulation tests were conducted with both Gaussian and pulse noise inputs. The results of the Gaussian noise tests were compared to results predicted previously and were found to be in substantial agreement. The results of the pulse noise tests will be compared to the results of planned analogous tests in the Data Bus Evaluation Laboratory at a later time. The MIA computer model is considered to be fully operational at this time.
Physics and Algorithm Enhancements for a Validated MCNP/X Monte Carlo Simulation Tool, Phase VII
McKinney, Gregg W
2012-07-17
Currently the US lacks an end-to-end (i.e., source-to-detector) radiation transport simulation code with predictive capability for the broad range of DHS nuclear material detection applications. For example, gaps in the physics, along with inadequate analysis algorithms, make it difficult for Monte Carlo simulations to provide a comprehensive evaluation, design, and optimization of proposed interrogation systems. With the development and implementation of several key physics and algorithm enhancements, along with needed improvements in evaluated data and benchmark measurements, the MCNP/X Monte Carlo codes will provide designers, operators, and systems analysts with a validated tool for developing state-of-the-art active and passive detection systems. This project is currently in its seventh year (Phase VII). This presentation will review thirty enhancements that have been implemented in MCNPX over the last 3 years and were included in the 2011 release of version 2.7.0. These improvements include 12 physics enhancements, 4 source enhancements, 8 tally enhancements, and 6 other enhancements. Examples and results will be provided for each of these features. The presentation will also discuss the eight enhancements that will be migrated into MCNP6 over the upcoming year.
NASA Astrophysics Data System (ADS)
Jiang, Chunhua; Yang, Guobin; Zhu, Peng; Nishioka, Michi; Yokoyama, Tatsuhiro; Zhou, Chen; Song, Huan; Lan, Ting; Zhao, Zhengyu; Zhang, Yuannong
2016-05-01
This paper presents a new method to reconstruct the vertical electron density profile from vertical Total Electron Content (TEC) using a simulated annealing algorithm. The technique uses quasi-parabolic segments (QPS) to model the bottomside ionosphere. The initial parameters of the ionosphere model were determined from both the International Reference Ionosphere (IRI) (Bilitza et al., 2014) and the vertical TEC (vTEC). The simulated annealing algorithm was then used to search for the best-fit parameters of the ionosphere model by comparison with the GPS-derived TEC. The performance and robustness of the technique were verified with ionosonde data. The critical frequency (foF2) and peak height (hmF2) of the F2 layer obtained from ionograms recorded at different locations and on different days were compared with those calculated by the proposed method. The analysis shows that the present method is promising for obtaining foF2 from vTEC; however, the accuracy of hmF2 needs to be improved in future work.
Godfrey, Brendan B.; Vay, Jean-Luc
2013-09-01
Rapidly growing numerical instabilities routinely occur in multidimensional particle-in-cell computer simulations of plasma-based particle accelerators, astrophysical phenomena, and relativistic charged particle beams. Reducing instability growth to acceptable levels has necessitated higher-resolution grids, high-order field solvers, current filtering, and similar measures, except for certain ratios of the time step to the axial cell size, for which numerical growth rates and saturation levels are reduced substantially. This paper derives and solves the cold-beam dispersion relation for numerical instabilities in multidimensional, relativistic, electromagnetic particle-in-cell programs employing either the standard or the Cole-Karkkainen finite-difference field solver on a staggered mesh and the common Esirkepov current-deposition algorithm. Good overall agreement is achieved with previously reported results of the WARP code. In particular, the existence of select time steps for which instabilities are minimized is explained. Additionally, an alternative field interpolation algorithm is proposed for which instabilities are almost completely eliminated for a particular time step in ultra-relativistic simulations.
A new cut-cell algorithm for DSMC simulations of rarefied gas flows around immersed moving objects
NASA Astrophysics Data System (ADS)
Jin, Wenjie; Ommen, J. Ruud van; Kleijn, Chris R.
2017-03-01
Direct Simulation Monte Carlo (DSMC) is a widely applied numerical technique for simulating rarefied gas flows. For flows around immersed moving objects, the use of body-fitted meshes is inefficient, whereas published methods using cut-cells in a fixed background mesh have important limitations. We present a novel cut-cell algorithm which allows for accurate DSMC simulations around arbitrarily shaped moving objects. The molecule-surface interaction occurs exactly at the instantaneous collision point on the moving body surface and accounts for its instantaneous velocity, thus precisely imposing the desired boundary conditions. A simple algorithm to calculate the effective volume of cut cells is presented and shown to converge linearly with grid refinement. The potential and efficiency of the method are demonstrated by calculating rarefied-gas drag forces on steady and moving immersed spheres. The obtained results are in excellent agreement with results obtained with a body-fitted mesh and with analytical approximations for high-Knudsen-number flows.
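The abstract does not specify how the effective cut-cell volume is computed; one simple stand-in is a Monte Carlo sampling estimate like the sketch below (the body shape and names are illustrative only; the paper's own scheme, which converges linearly with grid refinement, may differ).

```python
import numpy as np

def cut_cell_volume(cell_min, cell_max, inside_body, n=20000,
                    rng=np.random.default_rng(0)):
    """Monte Carlo estimate of the gas-accessible volume of a cut cell:
    the fraction of sample points lying outside the immersed body,
    multiplied by the full cell volume."""
    cell_min = np.asarray(cell_min, float)
    cell_max = np.asarray(cell_max, float)
    pts = rng.uniform(cell_min, cell_max, size=(n, 3))
    outside = ~inside_body(pts)                  # boolean mask per sample
    return np.prod(cell_max - cell_min) * outside.mean()

# hypothetical body: a sphere of radius 0.5 centred in the unit cell
sphere = lambda p: np.linalg.norm(p - 0.5, axis=1) < 0.5
print(cut_cell_volume([0, 0, 0], [1, 1, 1], sphere))   # about 1 - pi/6 = 0.476
```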
Room Acoustical Simulation Algorithm Based on the Free Path Distribution
NASA Astrophysics Data System (ADS)
VORLÄNDER, M.
2000-04-01
A new algorithm is presented which provides estimates of impulse responses in rooms. It is applicable to arbitrarily shaped rooms, including non-diffuse spaces such as workrooms or offices. In the latter cases, for instance, sound propagation curves are of interest for noise control applications. In the case of concert halls and opera houses, the method enables very fast prediction of room-acoustical criteria such as reverberation time, strength, or clarity. The method is based on low-resolution ray tracing with recording of the free paths. Estimates of impulse responses are derived by evaluating the free path distribution and the free path transition probabilities.
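The first stage of the method, a low-resolution ray tracer that records free paths, can be sketched for a rectangular room as follows. This is a minimal illustration with specular reflections only; the room dimensions are hypothetical, and the paper's evaluation of transition probabilities is not shown.

```python
import numpy as np

def free_paths(room, n_rays=200, n_reflections=100, rng=np.random.default_rng(0)):
    """Trace rays in a rectangular room, recording the free path travelled
    between successive wall hits (specular reflections)."""
    room = np.asarray(room, float)
    paths = []
    for _ in range(n_rays):
        pos = rng.uniform(0.0, room)                  # random source position
        d = rng.normal(size=3)
        d /= np.linalg.norm(d)                        # random direction
        for _ in range(n_reflections):
            # distance to the nearest wall along each axis
            t = np.where(d > 0, (room - pos) / d,
                         np.where(d < 0, -pos / d, np.inf))
            k = int(np.argmin(t))
            paths.append(t[k])
            pos = np.clip(pos + t[k] * d, 0.0, room)  # advance to the wall
            d[k] = -d[k]                              # specular reflection
    return np.array(paths)

p = free_paths([8.0, 5.0, 3.0])
# the mean free path should approach the classical 4V/S of a diffuse field
print(p.mean(), 4 * (8*5*3) / (2 * (8*5 + 8*3 + 5*3)))
```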
Cell light scattering characteristic numerical simulation research based on FDTD algorithm
NASA Astrophysics Data System (ADS)
Lin, Xiaogang; Wan, Nan; Zhu, Hao; Weng, Lingdong
2017-01-01
In this study, the finite-difference time-domain (FDTD) algorithm is used to solve the cell light-scattering problem. Before the simulation comparison can begin, it is necessary to identify the differences between normal cells and abnormal cells, which may be cancerous or maldeveloped. Preparing the simulation involves building a simple cell model consisting of organelles, a nucleus, and cytoplasm, and choosing a suitable mesh precision. Setting up the total-field/scattered-field source as the excitation and a far-field projection analysis group is also important. Each step is explained by the underlying numerical principles, such as numerical dispersion, the perfectly matched layer boundary condition, and near-to-far-field extrapolation. The simulation results indicate that a change in the position of the nucleus increases the backscattering intensity, and that significant differences in the peak scattering intensity can result from changes in the size of the cytoplasm. The study may help identify regularities in the simulation results that could be meaningful for the early diagnosis of cancers.
Experimental and simulation results on multipacting in a 112 MHz QWR injector
Xin, T.; Ben-Zvi, I.; Belomestnykh, S.; Brutus, J. C.; Skaritka, J.; Wu, Q.; Xiao, B.
2015-05-03
The first RF commissioning of the 112 MHz QWR superconducting electron gun was carried out in late 2014. The coaxial Fundamental Power Coupler (FPC) and the cathode stalk (stalk) were installed and tested for the first time. During this experiment, we observed several multipacting barriers at different gun voltage levels. Simulations were performed over the same voltage range. A comparison between the experimental observations and the simulation results is presented in this paper; the observations during the test are consistent with the simulation predictions. We were able to overcome most of the multipacting barriers and reach a gun voltage of 1.8 MV in pulsed mode after several rounds of conditioning.
Aerosol kinetic code "AERFORM": Model, validation and simulation results
NASA Astrophysics Data System (ADS)
Gainullin, K. G.; Golubev, A. I.; Petrov, A. M.; Piskunov, V. N.
2016-06-01
The aerosol kinetic code "AERFORM" has been modified to simulate droplet and ice-particle formation in mixed clouds. The splitting method is used to calculate condensation and coagulation simultaneously, and the method is calibrated against analytic solutions of the kinetic equations. The condensation kinetic model is based on the cloud-particle growth equation and on mass and heat balance equations. The coagulation kinetic model includes Brownian, turbulent, and precipitation effects. Realistic values are used for the condensation and coagulation growth of water droplets and ice particles. The model and the simulation results for two full-scale cloud experiments are presented. The simulation model and code may be used autonomously or as an element of another code.
A fast algorithm for the simulation of arterial pulse waves
NASA Astrophysics Data System (ADS)
Du, Tao; Hu, Dan; Cai, David
2016-06-01
One-dimensional models have been widely used in studies of the propagation of blood pulse waves in large arterial trees. Under a periodic driving of the heartbeat, traditional numerical methods, such as the Lax-Wendroff method, are employed to obtain asymptotic periodic solutions at large times. However, these methods are severely constrained by the CFL condition due to large pulse wave speed. In this work, we develop a new numerical algorithm to overcome this constraint. First, we reformulate the model system of pulse wave propagation using a set of Riemann variables and derive a new form of boundary conditions at the inlet, the outlets, and the bifurcation points of the arterial tree. The new form of the boundary conditions enables us to design a convergent iterative method to enforce the boundary conditions. Then, after exchanging the spatial and temporal coordinates of the model system, we apply the Lax-Wendroff method in the exchanged coordinate system, which turns the large pulse wave speed from a liability to a benefit, to solve the wave equation in each artery of the model arterial system. Our numerical studies show that our new algorithm is stable and can perform ∼15 times faster than the traditional implementation of the Lax-Wendroff method under the requirement that the relative numerical error of blood pressure be smaller than one percent, which is much smaller than the modeling error.
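For context, the baseline the authors accelerate is the Lax-Wendroff scheme, whose CFL restriction is the constraint discussed above. A minimal sketch for linear advection on a periodic grid follows; the paper's Riemann-variable reformulation and coordinate exchange are not reproduced here.

```python
import numpy as np

def lax_wendroff(u, c, dx, dt, steps):
    """Classic Lax-Wendroff update for u_t + c u_x = 0 on a periodic grid.
    Stable only if the CFL number nu = c*dt/dx satisfies |nu| <= 1, which
    is the restriction large pulse wave speeds make severe."""
    nu = c * dt / dx
    for _ in range(steps):
        up = np.roll(u, -1)   # u[i+1]
        um = np.roll(u, 1)    # u[i-1]
        u = u - 0.5 * nu * (up - um) + 0.5 * nu**2 * (up - 2*u + um)
    return u

x = np.linspace(0, 1, 200, endpoint=False)
u0 = np.exp(-200 * (x - 0.3) ** 2)                      # initial pulse
u = lax_wendroff(u0, c=1.0, dx=x[1] - x[0], dt=0.004, steps=100)  # nu = 0.8
```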
Gentile, N A; Kalos, M H; Brunner, T A
2005-03-22
Domain decomposed Monte Carlo codes, like other domain-decomposed codes, are difficult to debug. Domain decomposition is prone to error, and interactions between the domain decomposition code and the rest of the algorithm often produces subtle bugs. These bugs are particularly difficult to find in a Monte Carlo algorithm, in which the results have statistical noise. Variations in the results due to statistical noise can mask errors when comparing the results to other simulations or analytic results. If a code can get the same result on one domain as on many, debugging the whole code is easier. This reproducibility property is also desirable when comparing results done on different numbers of processors and domains. We describe how reproducibility, to machine precision, is obtained on different numbers of domains in an Implicit Monte Carlo photonics code.
Aubry, Jean-Francois; Beaulieu, Frederic; Sevigny, Caroline; Beaulieu, Luc; Tremblay, Daniel
2006-12-15
Inverse planning in external beam radiotherapy often requires a scalar objective function that incorporates importance factors to mimic the planner's preferences between conflicting objectives. Defining those importance factors is not straightforward, and frequently leads to an iterative process in which the importance factors become variables of the optimization problem. In order to avoid this drawback of inverse planning, optimization using algorithms better suited to multiobjective optimization, such as evolutionary algorithms, has been suggested. However, much inverse planning software, including a package based on simulated annealing developed at our institution, does not include multiobjective-oriented algorithms. This work investigates the performance of a modified simulated annealing algorithm used to drive aperture-based intensity-modulated radiotherapy inverse planning software in a multiobjective optimization framework. For a few test cases involving gastric cancer patients, the use of this new algorithm increases optimization speed by a little more than a factor of 2 over a conventional simulated annealing algorithm, while giving a close approximation of the solutions produced by standard simulated annealing. A simple graphical user interface designed to facilitate the decision-making process that follows an optimization is also presented.
NASA Astrophysics Data System (ADS)
Endres, F.; Steinmann, P.
2014-12-01
Molecular dynamics (MD) simulations of ferroelectric materials have improved tremendously over the last few decades. Specifically, the core-shell model has been commonly used for the simulation of ferroelectric materials such as barium titanate. However, due to the computational costs of MD, the calculation of ferroelectric hysteresis behaviour, and especially the stress-strain relation, has been a computationally intense task. In this work a molecular statics algorithm, similar to a finite element method for nonlinear trusses, has been implemented. From this, an algorithm to calculate the stress dependent continuum deformation of a discrete particle system, such as a ferroelectric crystal, has been devised. Molecular statics algorithms for the atomistic simulation of ferroelectric materials have been previously described. However, in contrast to the prior literature the algorithm proposed in this work is also capable of effectively computing the macroscopic ferroelectric butterfly hysteresis behaviour. Therefore the advocated algorithm is able to calculate the piezoelectric effect as well as the converse piezoelectric effect simultaneously on atomistic and continuum length scales. Barium titanate has been simulated using the core-shell model to validate the developed algorithm.
Conduct of an algorithm in quantifying simulated palatal surface tooth erosion.
Chadwick, R G; Mitchell, H L
2001-05-01
In order to test the ability of an algorithm to quantify simulated palatal erosion, a total of 10 extracted permanent upper central incisors were mounted in brass blocks. Baseline impressions were recorded using an addition-cured silicone impression material in a metal impression tray. Once set and removed from the teeth, the impressions were coated twice with a high-silver-content electroconductive paint, applied with a brush, before being backed with die stone to form an electroconductive replica. Each tooth was then subjected to three treatments: application of phosphoric acid etchant gel for 60 s, application of etchant gel for 120 s, and immersion for 3 h in Diet Coca-Cola. After each treatment the replication process was repeated. All replicas were then mapped using a computer-controlled electrical probe, and the resulting digital terrain models (DTMs) were compared using a surface matching and difference detection algorithm (SMADDA). Surface matching was unsuccessful in only one instance. As the duration of the insult increased, so did the proportion of the surface that underwent change, to a maximum of 33.3%. Anatomical site was significantly (P < 0.05) associated with susceptibility to erosion; the cingulum periphery appeared most resistant. The algorithmic approach offers much scope for monitoring dental erosion, as acid dissolution of the tooth surface appears to occur gradually. The cingulum region appears relatively more resistant to this process than other tooth sites and thus facilitates the process of surface matching. Further testing is, however, required to determine precisely the algorithm's upper tolerance level.
Development and evaluation of a micro-macro algorithm for the simulation of polymer flow
Feigl, Kathleen (E-mail: feigl@mtu.edu); Tanner, Franz X.
2006-07-20
A micro-macro algorithm for the calculation of polymer flow is developed and numerically evaluated. The system being solved consists of the momentum and mass conservation equations from continuum mechanics coupled with a microscopic-based rheological model for polymer stress. Standard finite element techniques are used to solve the conservation equations for velocity and pressure, while stochastic simulation techniques are used to compute polymer stress from the simulated polymer dynamics in the rheological model. The rheological model considered combines aspects of reptation, network and continuum models. Two types of spatial approximation are considered for the configuration fields defining the dynamics in the model: piecewise constant and piecewise linear. The micro-macro algorithm is evaluated by simulating the abrupt planar die entry flow of a polyisobutylene solution described in the literature. The computed velocity and stress fields are found to be essentially independent of mesh size and ensemble size, while there is some dependence of the results on the order of spatial approximation to the configuration fields close to the die entry. Comparison with experimental data shows that the piecewise linear approximation leads to better predictions of the centerline first normal stress difference. Finally, the computational time associated with the piecewise constant spatial approximation is found to be about 2.5 times lower than that associated with the piecewise linear approximation. This is the result of the more efficient time integration scheme that is possible with the former type of approximation due to the pointwise incompressibility guaranteed by the choice of velocity-pressure finite element.
Cardiovascular system and microgravity simulation and inflight results
NASA Astrophysics Data System (ADS)
Pottier, J. M.; Patat, F.; Arbeille, P.; Pourcelot, L.; Massabuau, P.; Guell, A.; Gharib, C.
The main results of the cardiovascular investigation performed with ultrasound methods during the joint French/Soviet flight aboard Salyut VII in June 1982 are compared with variations of the same parameters studied during ground-based simulations on the same subject, or observed by other investigators during various ground-based experiments. Antiorthostatic bed rest partly reproduces microgravity conditions and, despite some differences, seems better suited to reproducing cardiac hemodynamics and the cerebral circulation than the lower-limb circulation.
Interactive Computational Algorithms for Acoustic Simulation in Complex Environments
2015-07-19
Abstract (fragment): ...simulation for urban and other complex propagation environments. The PIs will also collaborate with Stephen Ketcham and Keith Wilson at USACE and... Related publication (fragment): Albert, Keith Wilson, Dinesh Manocha, "Validation of 3D numerical simulation for acoustic pulse propagation in an urban environment," The Journal of...
A process-based algorithm for simulating terraces in SWAT
Technology Transfer Automated Retrieval System (TEKTRAN)
Terraces in crop fields are one of the most important soil and water conservation measures that affect runoff and erosion processes in a watershed. In large hydrological programs such as the Soil and Water Assessment Tool (SWAT), terrace effects are simulated by adjusting the slope length and the US...
Simulating Multivariate Nonnormal Data Using an Iterative Algorithm
ERIC Educational Resources Information Center
Ruscio, John; Kaczetow, Walter
2008-01-01
Simulating multivariate nonnormal data with specified correlation matrices is difficult. One especially popular method is Vale and Maurelli's (1983) extension of Fleishman's (1978) polynomial transformation technique to multivariate applications. This requires the specification of distributional moments and the calculation of an intermediate…
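The polynomial transformation the abstract refers to has the form Y = a + bZ + cZ^2 + dZ^3 for standard normal Z. A sketch that solves Fleishman's univariate moment equations numerically follows; the target moments are illustrative, and Vale and Maurelli's multivariate extension, which additionally adjusts an intermediate correlation matrix, is not shown.

```python
import numpy as np
from scipy.optimize import fsolve

def fleishman_coeffs(skew, ekurt):
    """Solve Fleishman's moment equations for b, c, d (with a = -c) so that
    Y = a + bZ + cZ^2 + dZ^3 has mean 0, variance 1, and the target
    skewness and excess kurtosis."""
    def eqs(p):
        b, c, d = p
        return (b**2 + 6*b*d + 2*c**2 + 15*d**2 - 1,
                2*c*(b**2 + 24*b*d + 105*d**2 + 2) - skew,
                24*(b*d + c**2*(1 + b**2 + 28*b*d)
                    + d**2*(12 + 48*b*d + 141*c**2 + 225*d**2)) - ekurt)
    b, c, d = fsolve(eqs, (1.0, 0.0, 0.0))
    return -c, b, c, d

# illustrative target moments (not all combinations are feasible)
a, b, c, d = fleishman_coeffs(skew=1.0, ekurt=1.5)
z = np.random.default_rng(0).standard_normal(100_000)
y = a + b*z + c*z**2 + d*z**3   # nonnormal sample with the target moments
```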
NASA Astrophysics Data System (ADS)
Romano, Paul Kollath
Monte Carlo particle transport methods are being considered as a viable option for high-fidelity simulation of nuclear reactors. While Monte Carlo methods offer several potential advantages over deterministic methods, there are a number of algorithmic shortcomings that would prevent their immediate adoption for full-core analyses. In this thesis, algorithms are proposed both to ameliorate the degradation in parallel efficiency typically observed for large numbers of processors and to offer a means of decomposing the large tally data that will be needed for reactor analysis. A nearest-neighbor fission bank algorithm was proposed and subsequently implemented in the OpenMC Monte Carlo code. A theoretical analysis of the communication pattern shows that its expected cost is O(sqrt(N)), whereas traditional fission bank algorithms are O(N) at best. The algorithm was tested on two supercomputers, the Intrepid Blue Gene/P and the Titan Cray XK7, and demonstrated nearly linear parallel scaling up to 163,840 processor cores on a full-core benchmark problem. An algorithm for reducing the network communication arising from tally reduction was analyzed and implemented in OpenMC. The proposed algorithm groups particle histories on a single processor into batches for tally purposes; in doing so, it avoids all network communication for tallies until the very end of the simulation. The algorithm was tested, again on a full-core benchmark, and shown to reduce network communication substantially. A model was developed to predict the impact of load imbalances on the performance of domain-decomposed simulations. The analysis demonstrated that load imbalances in domain-decomposed simulations arise from two distinct phenomena: non-uniform particle densities and non-uniform spatial leakage. The dominant performance penalty for domain decomposition was shown to come from these physical effects rather than from insufficient network bandwidth or high latency. The model predictions were verified with
Hyper-X Stage Separation: Simulation Development and Results
NASA Technical Reports Server (NTRS)
Reubush, David E.; Martin, John G.; Robinson, Jeffrey S.; Bose, David M.; Strovers, Brian K.
2001-01-01
This paper provides an overview of stage separation simulation development and results for NASA's Hyper-X program; a focused hypersonic technology effort designed to move hypersonic, airbreathing vehicle technology from the laboratory environment to the flight environment. This paper presents an account of the development of the current 14 degree of freedom stage separation simulation tool (SepSim) and results from use of the tool in a Monte Carlo analysis to evaluate the risk of failure for the separation event. Results from use of the tool show that there is only a very small risk of failure in the separation event.
Building a LiDAR point cloud simulator: Testing algorithms for high resolution topographic change
NASA Astrophysics Data System (ADS)
Carrea, Dario; Abellán, Antonio; Derron, Marc-Henri; Jaboyedoff, Michel
2014-05-01
...(erosion, landslide monitoring, etc.), and we then tested the use of filtering techniques based on 3D moving windows in space and time, which considerably reduces data scattering thanks to the benefits of data redundancy. In conclusion, the simulator allowed us to improve our different algorithms and to understand how instrumental error affects the final results. It also helped improve the scan acquisition methodology, finding the best compromise between point density, positioning, and acquisition time with the best possible accuracy to characterize topographic change.
Advanced time integration algorithms for dislocation dynamics simulations of work hardening
NASA Astrophysics Data System (ADS)
Sills, Ryan B.; Aghaei, Amin; Cai, Wei
2016-05-01
Efficient time integration is a necessity for dislocation dynamics simulations of work hardening to achieve experimentally relevant strains. In this work, an efficient time integration scheme using a high order explicit method with time step subcycling and a newly-developed collision detection algorithm are evaluated. First, time integrator performance is examined for an annihilating Frank-Read source, showing the effects of dislocation line collision. The integrator with subcycling is found to significantly out-perform other integration schemes. The performance of the time integration and collision detection algorithms is then tested in a work hardening simulation. The new algorithms show a 100-fold speed-up relative to traditional schemes. Subcycling is shown to improve efficiency significantly while maintaining an accurate solution, and the new collision algorithm allows an arbitrarily large time step size without missing collisions.
Parallel algorithms for simulating continuous time Markov chains
NASA Technical Reports Server (NTRS)
Nicol, David M.; Heidelberger, Philip
1992-01-01
We have previously shown that the mathematical technique of uniformization can serve as the basis of synchronization for the parallel simulation of continuous-time Markov chains. This paper reviews the basic method and compares five different methods based on uniformization, evaluating their strengths and weaknesses as a function of problem characteristics. The methods vary in their use of optimism, logical aggregation, communication management, and adaptivity. Performance evaluation is conducted on the Intel Touchstone Delta multiprocessor, using up to 256 processors.
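The uniformization technique that underlies these methods converts the CTMC into a discrete-time chain subordinated to a Poisson process. A minimal serial sketch for computing transient state probabilities is below; the parallel synchronization machinery the paper compares is not shown.

```python
import numpy as np
from scipy.stats import poisson

def uniformized_transition(Q, t, tol=1e-12):
    """Transient transition matrix of a CTMC via uniformization: choose a
    rate Lambda >= max |Q_ii|, form the DTMC P = I + Q/Lambda, and sum its
    powers weighted by Poisson(Lambda*t) probabilities."""
    Lam = np.max(-np.diag(Q))
    P = np.eye(Q.shape[0]) + Q / Lam
    out = np.zeros_like(Q, dtype=float)
    term = np.eye(Q.shape[0])                # P^0
    n = 0
    while poisson.sf(n - 1, Lam * t) > tol:  # truncate the Poisson tail
        out += poisson.pmf(n, Lam * t) * term
        term = term @ P
        n += 1
    return out                               # approximates expm(Q*t)

# two-state example with rates 1.0 and 0.5 between the states
Q = np.array([[-1.0, 1.0], [0.5, -0.5]])
print(uniformized_transition(Q, 2.0))
```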
Direct dynamics simulations using Hessian-based predictor-corrector integration algorithms.
Lourderaj, Upakarasamy; Song, Kihyung; Windus, Theresa L; Zhuang, Yu; Hase, William L
2007-01-28
In previous research [J. Chem. Phys. 111, 3800 (1999)] a Hessian-based integration algorithm was derived for performing direct dynamics simulations. In the work presented here, improvements to this algorithm are described. The algorithm has a predictor step based on a local second-order Taylor expansion of the potential in Cartesian coordinates, within a trust radius, and a fifth-order correction to this predicted trajectory. The current algorithm determines the predicted trajectory in Cartesian coordinates, instead of the instantaneous normal mode coordinates used previously, to ensure angular momentum conservation. For the previous algorithm the corrected step was evaluated in rotated Cartesian coordinates. Since the local potential expanded in Cartesian coordinates is not invariant to rotation, the constants of motion are not necessarily conserved during the corrector step. An approximate correction to this shortcoming was made by projecting translation and rotation out of the rotated coordinates. For the current algorithm unrotated Cartesian coordinates are used for the corrected step to assure the constants of motion are conserved. An algorithm is proposed for updating the trust radius to enhance the accuracy and efficiency of the numerical integration. This modified Hessian-based integration algorithm, with its new components, has been implemented into the VENUS/NWChem software package and compared with the velocity-Verlet algorithm for the H2CO -> H2 + CO, O3 + C3H6, and F- + CH3OOH chemical reactions.
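For comparison, the velocity-Verlet baseline mentioned above is only a few lines. A minimal sketch with a hypothetical force function follows; the Hessian-based predictor-corrector itself is considerably more involved.

```python
import numpy as np

def velocity_verlet(x, v, force, mass, dt, steps):
    """Velocity-Verlet integration: half-kick, drift, recompute the force,
    half-kick. This is the baseline integrator named in the abstract."""
    f = force(x)
    for _ in range(steps):
        v_half = v + 0.5 * dt * f / mass
        x = x + dt * v_half
        f = force(x)
        v = v_half + 0.5 * dt * f / mass
    return x, v

# harmonic-oscillator test: energy should be well conserved
x, v = velocity_verlet(np.array([1.0]), np.array([0.0]),
                       lambda x: -x, 1.0, 0.01, 1000)
```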
Murakoshi, Kazushi; Noguchi, Takuya
2005-04-01
Brown and Wagner [Brown, R.T., Wagner, A.R., 1964. Resistance to punishment and extinction following training with shock or nonreinforcement. J. Exp. Psychol. 68, 503-507] investigated rat behaviors with the following features: (1) rats were exposed to reward and punishment at the same time, (2) the environment changed and the rats relearned, and (3) rats were stochastically exposed to reward and punishment. The results are that exposure to nonreinforcement produces resistance to the decremental effects on behavior after a stochastic reward schedule, and that exposure to both punishment and reinforcement produces resistance to the decremental effects on behavior after a stochastic punishment schedule. This paper aims to simulate these rat behaviors with a reinforcement learning algorithm that takes the appearance probabilities of reinforcement signals into account. Earlier reinforcement learning algorithms were unable to simulate behavior of type (3). We improve on them by controlling the learning parameters in consideration of the acquisition probabilities of the reinforcement signals. The proposed algorithm qualitatively reproduces the results of the animal experiment of Brown and Wagner.
Multi-Rate Digital Control Systems with Simulation Applications. Volume II. Computer Algorithms
1980-09-01
Report AFWAL-TR-80-3101, Volume II. Abstract (fragment): ...additional options. The analytical basis for the computer algorithms is discussed in Ref. 12. However, to provide a complete description of the program, some...
Parallel implementation of the FETI-DPEM algorithm for general 3D EM simulations
NASA Astrophysics Data System (ADS)
Li, Yu-Jia; Jin, Jian-Ming
2009-05-01
A parallel implementation of the electromagnetic dual-primal finite element tearing and interconnecting algorithm (FETI-DPEM) is designed for general three-dimensional (3D) electromagnetic large-scale simulations. As a domain decomposition implementation of the finite element method, the FETI-DPEM algorithm provides fully decoupled subdomain problems and an excellent numerical scalability, and thus is well suited for parallel computation. The parallel implementation of the FETI-DPEM algorithm on a distributed-memory system using the message passing interface (MPI) is discussed in detail along with a few practical guidelines obtained from numerical experiments. Numerical examples are provided to demonstrate the efficiency of the parallel implementation.
Cobb, J.W.; Leboeuf, J.N.
1994-10-01
The authors present a particle algorithm to extend simulation capabilities for plasma based materials processing reactors. The orbit integrator uses a syncopated leap-frog algorithm in cylindrical coordinates, which maintains second order accuracy, and minimizes computational complexity. Plasma source terms are accumulated orbit consistently directly in the frequency and azimuthal mode domains. Finally they discuss the numerical analysis of this algorithm. Orbit consistency greatly reduces the computational cost for a given level of precision. The computational cost is independent of the degree of time scale separation.
Advanced Thermal Simulator Testing: Thermal Analysis and Test Results
NASA Technical Reports Server (NTRS)
Bragg-Sitton, Shannon M.; Dickens, Ricky; Dixon, David; Reid, Robert; Adams, Mike; Davis, Joe
2008-01-01
Work at the NASA Marshall Space Flight Center seeks to develop high fidelity, electrically heated thermal simulators that represent fuel elements in a nuclear reactor design to support non-nuclear testing applicable to the development of a space nuclear power or propulsion system. Comparison between the fuel pins and thermal simulators is made at the outer fuel clad surface, which corresponds to the outer sheath surface in the thermal simulator. The thermal simulators that are currently being tested correspond to a SNAP derivative reactor design that could be applied for Lunar surface power. These simulators are designed to meet the geometric and power requirements of a proposed surface power reactor design, accommodate testing of various axial power profiles, and incorporate imbedded instrumentation. This paper reports the results of thermal simulator analysis and testing in a bare element configuration, which does not incorporate active heat removal, and testing in a water-cooled calorimeter designed to mimic the heat removal that would be experienced in a reactor core.
Advanced Thermal Simulator Testing: Thermal Analysis and Test Results
Bragg-Sitton, Shannon M.; Dickens, Ricky; Dixon, David; Reid, Robert; Adams, Mike; Davis, Joe
2008-01-21
Work at the NASA Marshall Space Flight Center seeks to develop high fidelity, electrically heated thermal simulators that represent fuel elements in a nuclear reactor design to support non-nuclear testing applicable to the potential development of a space nuclear power or propulsion system. Comparison between the fuel pins and thermal simulators is made at the outer fuel clad surface, which corresponds to the outer sheath surface in the thermal simulator. The thermal simulators that are currently being tested correspond to a liquid metal cooled reactor design that could be applied for Lunar surface power. These simulators are designed to meet the geometric and power requirements of a proposed surface power reactor design, accommodate testing of various axial power profiles, and incorporate imbedded instrumentation. This paper reports the results of thermal simulator analysis and testing in a bare element configuration, which does not incorporate active heat removal, and testing in a water-cooled calorimeter designed to mimic the heat removal that would be experienced in a reactor core.
Adaptive particle-cell algorithm for Fokker-Planck based rarefied gas flow simulations
NASA Astrophysics Data System (ADS)
Pfeiffer, M.; Gorji, M. H.
2017-04-01
Recently, the Fokker-Planck (FP) kinetic model has been devised on the basis of the Boltzmann equation (Jenny et al., 2010; Gorji et al., 2011), and particle Monte Carlo schemes have been introduced for simulations of rarefied gas flows based on FP kinetics. Here the particles follow independent stochastic paths, and thus a spatio-temporal resolution coarser than the collisional scales becomes possible. In contrast to direct simulation Monte Carlo (DSMC), the computational cost is independent of the Knudsen number, resulting in efficient simulations at moderate and low Knudsen numbers. In order to further exploit the efficiency of the FP method, the required particle-cell resolution should be found, and a cell refinement strategy developed accordingly. In this study, an adaptive particle-cell scheme applicable to general unstructured meshes is derived for the FP model. Virtual subcells are introduced for the adaptive mesh refinement, and a subcell-merging algorithm is provided to honor the minimum required number of particles per cell. For assessment, the 70-degree blunted-cone reentry flow (Allegre et al., 1997) is studied. Excellent agreement between the introduced adaptive FP method and DSMC is achieved.
Relationships between driving simulator performance and driving test results.
de Winter, J C F; de Groot, S; Mulder, M; Wieringa, P A; Dankelman, J; Mulder, J A
2009-02-01
This article is considered relevant because: 1) car driving is an everyday and safety-critical task; 2) simulators are used to an increasing extent for driver training (related topics: training, virtual reality, human-machine interaction); 3) the article addresses relationships between performance in the simulator and driving test results--a relevant topic for those involved in driver training and the virtual reality industries; 4) this article provides new insights about individual differences in young drivers' behaviour. Simulators are being used to an increasing extent for driver training, allowing for the possibility of collecting objective data on driver proficiency under standardised conditions. However, relatively little is known about how learner drivers' simulator measures relate to on-road driving. This study proposes a theoretical framework that quantifies driver proficiency in terms of speed of task execution, violations and errors. This study investigated the relationships between these three measures of learner drivers' (n=804) proficiency during initial simulation-based training and the result of the driving test on the road, occurring an average of 6 months later. A higher chance of passing the driving test the first time was associated with making fewer steering errors on the simulator and could be predicted in regression analysis with a correlation of 0.18. Additionally, in accordance with the theoretical framework, a shorter duration of on-road training corresponded with faster task execution, fewer violations and fewer steering errors (predictive correlation 0.45). It is recommended that researchers conduct more large-scale studies into the reliability and validity of simulator measures and on-road driving tests.
Results from Binary Black Hole Simulations in Astrophysics Applications
NASA Technical Reports Server (NTRS)
Baker, John G.
2007-01-01
Present and planned gravitational wave observatories are opening a new astronomical window on the sky. A key source of gravitational waves is the merger of two black holes. The Laser Interferometer Space Antenna (LISA), in particular, is expected to observe these events with signal-to-noise ratios in the thousands. To fully reap the scientific benefits of these observations requires a detailed understanding, based on numerical simulations, of the predictions of General Relativity for the waveform signals. New techniques for simulating binary black hole mergers, introduced two years ago, have led to dramatic advances in applied numerical simulation work. Over the last two years, numerical relativity researchers have made tremendous strides in understanding the late stages of binary black hole mergers. Simulations have been applied to test much of the basic physics of binary black hole interactions, showing robust results for merger waveform predictions and illuminating phenomena such as spin-precession. Calculations have shown that merging systems can be kicked at up to 2500 km/s by the thrust from asymmetric emission. Recently, long-lasting simulations of ten or more orbits have allowed tests of post-Newtonian (PN) approximation results for radiation from the last orbits of a binary's inspiral. Already, analytic waveform models based on PN techniques with incorporated information from numerical simulations may be adequate for observations with current ground-based observatories. As new advances in simulations continue to rapidly improve our theoretical understanding of these systems, it seems certain that high-precision predictions will be available in time for LISA and other advanced ground-based instruments, and future gravitational wave observatories are expected to make precision measurements.
Cao, Yang (E-mail: ycao@cs.ucsb.edu); Gillespie, Dan (E-mail: GillespieDT@mailaps.org); Petzold, Linda (E-mail: petzold@engineering.ucsb.edu)
2005-07-01
In this paper, we introduce a multiscale stochastic simulation algorithm (MSSA) which makes use of Gillespie's stochastic simulation algorithm (SSA) together with a new stochastic formulation of the partial equilibrium assumption (PEA). This method is much more efficient than SSA alone; it works even with a very small population of fast species. Implementation details are discussed, and an application to the modeling of the heat shock response of E. coli is presented which demonstrates the excellent efficiency and accuracy obtained with the new method.
Turning Simulation into Estimation: Generalized Exchange Algorithms for Exponential Family Models
Maris, Gunter; Bechger, Timo; Glas, Cees
2017-01-01
The Single Variable Exchange algorithm is based on a simple idea: any model that can be simulated can be estimated by producing draws from the posterior distribution. We build on this idea by framing the Exchange algorithm as a mixture of Metropolis transition kernels and propose strategies that automatically select the more efficient transition kernels. In this manner we achieve significant improvements in convergence rate and autocorrelation of the Markov chain without relying on anything more than the ability to simulate from the model. Our focus is on statistical models in the exponential family, and we use two simple models from educational measurement to illustrate the contribution. PMID:28076429
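The basic Single Variable Exchange step the paper builds on can be sketched as follows, assuming a flat prior and a symmetric random-walk proposal; the mixture-of-kernels extension proposed in the paper is not shown. Here f is the unnormalized likelihood and simulate draws one synthetic data set from the model, which is what makes the intractable normalizing constants cancel.

```python
import numpy as np

def exchange_mcmc(x_obs, f, simulate, theta0, prop_sd, n_iter,
                  rng=np.random.default_rng(0)):
    """Single Variable Exchange sampler (flat prior assumed).

    f(x, theta)     -- unnormalized likelihood, normalizer Z(theta) unknown
    simulate(theta) -- draws one synthetic data set from the model
    The auxiliary draw y makes the Z(theta) terms cancel in the ratio.
    """
    theta, chain = theta0, []
    for _ in range(n_iter):
        theta_p = theta + prop_sd * rng.standard_normal()  # symmetric proposal
        y = simulate(theta_p)                              # auxiliary data
        log_a = (np.log(f(x_obs, theta_p)) - np.log(f(x_obs, theta))
                 + np.log(f(y, theta)) - np.log(f(y, theta_p)))
        if np.log(rng.random()) < log_a:
            theta = theta_p
        chain.append(theta)
    return np.array(chain)
```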
NASA Astrophysics Data System (ADS)
Quillen, Alice C.; Moore, A.
2008-09-01
Planetesimal and dust dynamical simulations require collision and nearest neighbor detection. A brute-force implementation for sorting interparticle distances requires O(N^2) computations for N particles, limiting the number of particles that can be simulated. Parallel algorithms recently developed for the GPU (graphics processing unit), such as the radix sort, can run as fast as O(N) and sort distances between a million particles in a few hundred milliseconds. We introduce improvements in collision and nearest neighbor detection algorithms and describe how we have incorporated them into our efficient parallel second-order democratic heliocentric method symplectic integrator, written in NVIDIA's CUDA for the GPU.
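A common way to realize sort-based neighbor detection, sketched here serially in 2D, is to hash particles into grid cells and sort by cell key (the step the O(N) radix sort accelerates on a GPU); whether this matches the authors' exact scheme is not stated in the abstract.

```python
import numpy as np

def neighbor_pairs(pos, box, r_cut):
    """Sort-based neighbor detection: hash particles to grid cells at least
    r_cut wide, sort by cell key, then test distances only against
    particles in the 3x3 block of surrounding cells."""
    nc = max(1, int(box / r_cut))                       # cells per side
    cell = np.clip((pos / (box / nc)).astype(int), 0, nc - 1)
    key = cell[:, 0] * nc + cell[:, 1]
    order = np.argsort(key)                             # radix sort on a GPU
    pos, cell, key = pos[order], cell[order], key[order]
    pairs = []
    for i in range(len(pos)):
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                cx, cy = cell[i, 0] + dx, cell[i, 1] + dy
                if not (0 <= cx < nc and 0 <= cy < nc):
                    continue
                lo, hi = np.searchsorted(key, [cx * nc + cy, cx * nc + cy + 1])
                for j in range(lo, hi):                 # particles in that cell
                    if j > i and np.linalg.norm(pos[i] - pos[j]) < r_cut:
                        pairs.append((i, j))
    return pairs

pts = np.random.default_rng(1).uniform(0, 10.0, (500, 2))
print(len(neighbor_pairs(pts, 10.0, 0.5)))
```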
A sweep algorithm for massively parallel simulation of circuit-switched networks
NASA Technical Reports Server (NTRS)
Gaujal, Bruno; Greenberg, Albert G.; Nicol, David M.
1992-01-01
A new massively parallel algorithm is presented for simulating large asymmetric circuit-switched networks, controlled by a randomized-routing policy that includes trunk-reservation. A single instruction multiple data (SIMD) implementation is described, and corresponding experiments on a 16384 processor MasPar parallel computer are reported. A multiple instruction multiple data (MIMD) implementation is also described, and corresponding experiments on an Intel IPSC/860 parallel computer, using 16 processors, are reported. By exploiting parallelism, our algorithm increases the possible execution rate of such complex simulations by as much as an order of magnitude.
Simulation of diurnal thermal energy storage systems: Preliminary results
NASA Astrophysics Data System (ADS)
Katipamula, S.; Somasundaram, S.; Williams, H. R.
1994-12-01
This report describes the results of a simulation of thermal energy storage (TES) integrated with a simple-cycle gas turbine cogeneration system. Integrating TES with cogeneration can serve the electrical and thermal loads independently while firing all fuel in the gas turbine. The detailed engineering and economic feasibility of diurnal TES systems integrated with cogeneration systems has been described in two previous PNL reports. The objective of this study was to lay the groundwork for optimization of TES system designs using a simulation tool called TRNSYS (TRaNsient SYstem Simulation). TRNSYS is a transient simulation program with a sequential-modular structure developed at the Solar Energy Laboratory, University of Wisconsin-Madison. The two TES systems selected for the base-case simulations were (1) a one-tank storage model representing the oil/rock TES system and (2) a two-tank storage model representing the molten nitrate salt TES system. Results of the study clearly indicate that an engineering optimization of the TES system using TRNSYS is possible. The one-tank stratified oil/rock storage model described here is a good starting point for parametric studies of a TES system. Further additions to the TRNSYS library of available models (economizer, evaporator, gas turbine, etc.) are recommended so that the phase-change processes are accurately treated.
[The utility boiler low NOx combustion optimization based on ANN and simulated annealing algorithm].
Zhou, Hao; Qian, Xinping; Zheng, Ligang; Weng, Anxin; Cen, Kefa
2003-11-01
With increasingly strict environmental protection requirements, more attention has been paid to low-NOx combustion optimization technology because it is cheap and easy to deploy. In this work, field experiments on the NOx emission characteristics of a 600 MW coal-fired boiler were carried out. On the basis of an artificial neural network (ANN) model, a simulated annealing (SA) algorithm was employed to optimize the boiler combustion for a low NOx emission concentration, and a combustion scheme was obtained. Two sets of SA parameters were tried in search of a better SA scheme; the results show that the parameters T0 = 50 K and alpha = 0.6 lead to a better optimization process. This work lays a foundation for on-line low-NOx combustion control technology for utility boilers.
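A sketch of the optimization loop described above, with a stand-in function for the trained ANN and the better-performing parameter set reported in the abstract (T0 = 50, alpha = 0.6); the actual boiler inputs, bounds, and surrogate model are not given there.

```python
import math
import random

def anneal_settings(predict_nox, bounds, t0=50.0, alpha=0.6,
                    n_temps=60, moves=20):
    """Simulated annealing over operating settings against a trained
    surrogate; predict_nox stands in for the ANN model of NOx emissions."""
    x = [random.uniform(lo, hi) for lo, hi in bounds]
    best, best_val = x[:], predict_nox(x)
    t = t0
    for _ in range(n_temps):
        for _ in range(moves):
            cand = [min(hi, max(lo, xi + random.gauss(0, 0.1 * (hi - lo))))
                    for xi, (lo, hi) in zip(x, bounds)]
            delta = predict_nox(cand) - predict_nox(x)
            if delta < 0 or random.random() < math.exp(-delta / t):
                x = cand
                if predict_nox(x) < best_val:
                    best, best_val = x[:], predict_nox(x)
        t *= alpha                      # geometric cooling schedule
    return best, best_val

# hypothetical 3-parameter surrogate and bounds, for illustration only
surrogate = lambda p: (p[0] - 0.3) ** 2 + (p[1] - 0.7) ** 2 + 0.5 * p[2]
print(anneal_settings(surrogate, [(0, 1), (0, 1), (0, 1)]))
```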
NASA Technical Reports Server (NTRS)
Kaushik, Dinesh K.; Baysal, Oktay
1997-01-01
Accurate computation of acoustic wave propagation may be performed more efficiently when the dispersion relations are considered. Consequently, computational algorithms that attempt to preserve these relations have been gaining popularity in recent years. In the present paper, extensions to one such scheme are discussed. By solving the linearized, 2-D Euler and Navier-Stokes equations with such a method for acoustic wave propagation, several issues were investigated. Among them were higher-order accuracy, the choice of boundary conditions and differencing stencils, effects of viscosity, low-storage time integration, generalized curvilinear coordinates, periodic series and their reflections and interference patterns from a flat wall, and scattering from a circular cylinder. The results were found to be promising en route to aeroacoustic simulations of realistic engineering problems.
Simulation of Biochemical Pathway Adaptability Using Evolutionary Algorithms
Bosl, W J
2005-01-26
The systems approach to genomics seeks quantitative and predictive descriptions of cells and organisms. However, both the theoretical and experimental methods necessary for such studies still need to be developed. We are far from understanding even the simplest collective behavior of biomolecules, cells or organisms. A key aspect of all biological problems, including environmental microbiology, the evolution of infectious diseases, and the adaptation of cancer cells, is the evolvability of genomes. This is particularly important for Genomes to Life missions, which tend to focus on the prospect of engineering microorganisms to achieve desired goals in environmental remediation, climate change mitigation, and energy production. All of these will require quantitative tools for understanding the evolvability of organisms. Laboratory biodefense goals will need quantitative tools for predicting complicated host-pathogen interactions and finding counter-measures. In this project, we seek to develop methods to simulate how external and internal signals cause the genetic apparatus to adapt and organize to produce complex biochemical systems to achieve survival. This project is specifically directed toward building a computational methodology for simulating the adaptability of genomes. This project investigated the feasibility of using a novel quantitative approach to studying the adaptability of genomes and biochemical pathways. This effort was intended to be the preliminary part of a larger, long-term effort between key leaders in computational and systems biology at Harvard University and LLNL, with Dr. Bosl as the lead PI. Scientific goals for the long-term project include the development and testing of new hypotheses to explain the observed adaptability of yeast biochemical pathways when the myosin-II gene is deleted and the development of a novel data-driven evolutionary computation as a way to connect exploratory computational simulation with hypothesis
NASA Technical Reports Server (NTRS)
Morrell, F. R.; Motyka, P. R.; Bailey, M. L.
1990-01-01
Flight test results for two sensor fault-tolerant algorithms developed for a redundant strapdown inertial measurement unit are presented. The inertial measurement unit (IMU) consists of four two-degrees-of-freedom gyros and accelerometers mounted on the faces of a semi-octahedron. Fault tolerance is provided by edge vector test and generalized likelihood test algorithms, each of which can provide dual fail-operational capability for the IMU. To detect the wide range of failure magnitudes in inertial sensors, which provide flight-crucial information for flight control and navigation, failure detection and isolation are developed in terms of a multilevel structure. Threshold compensation techniques, developed to enhance the sensitivity of the failure detection process to navigation-level failures, are presented. Four flight tests were conducted in a commercial-transport-type environment to compare and determine the performance of the failure detection and isolation methods. Dual flight processors enabled concurrent tests of the algorithms. Failure signals, such as hard-over, null, or bias shift, were added to the sensor outputs as single or multiple failures during the flights. Both algorithms provided timely detection and isolation of flight-control-level failures. The generalized likelihood test algorithm provided more timely detection of low-level sensor failures, but it produced one false isolation. Both algorithms demonstrated the capability to provide dual fail-operational performance for the skewed array of inertial sensors.
Simulated performance of remote sensing ocean colour algorithms during the 1996 PRIME cruise
NASA Astrophysics Data System (ADS)
Westbrook, A. G.; Pinkerton, M. H.; Aiken, J.; Pilgrim, D. A.
Coincident pigment and underwater radiometric data were collected during a cruise along the 20°W meridian from 60°N to 37°N in the north-eastern Atlantic Ocean as part of the Natural Environment Research Council (NERC) thematic programme: plankton reactivity in the marine environment (PRIME). These data were used to simulate the retrieval of two bio-optical variables from remotely sensed measurements of ocean colour (for example by the NASA Sea-viewing wide field-of-view sensor, SeaWiFS), using two-band semi-empirical algorithms. The variables considered were the diffuse attenuation coefficient at 490 nm (Kd(490), units: m⁻¹) and the phytoplankton pigment concentration expressed as optically-weighted chlorophyll-a concentration (Ca, units: mg m⁻³). There was good agreement between the measured and the retrieved bio-optical values. Algorithms based on the PRIME data were generated to compare the performance of local algorithms (algorithms which apply to a restricted area and/or season) with global algorithms (algorithms developed on data from a wide variety of water masses). The use of local algorithms improved the average accuracy, but not the precision, of the retrievals: errors were still ±36% (Kd) and ±117% (Ca) using local algorithms.
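For illustration, a hedged sketch of the two-band semi-empirical form evaluated in the study: a power law in the ratio of reflectances in a blue and a green band. The coefficients below are placeholders, not the PRIME or SeaWiFS values.

```python
import numpy as np

def band_ratio_retrieval(r_blue, r_green, a, b):
    """Two-band semi-empirical retrieval: quantity = a * (R_blue / R_green) ** b.

    r_blue, r_green: remote-sensing reflectances (or normalized water-leaving
    radiances) in the blue (~490 nm) and green (~555 nm) bands.
    a, b: empirically fitted coefficients (illustrative placeholders here).
    """
    ratio = np.asarray(r_blue) / np.asarray(r_green)
    return a * ratio ** b

# Illustrative use: retrieve Kd(490) and pigment concentration from three pixels.
rrs490 = np.array([0.008, 0.005, 0.003])
rrs555 = np.array([0.004, 0.004, 0.005])
kd490 = band_ratio_retrieval(rrs490, rrs555, a=0.016, b=-1.5)  # m^-1 (illustrative)
ca = band_ratio_retrieval(rrs490, rrs555, a=0.8, b=-2.0)       # mg m^-3 (illustrative)
print(kd490, ca)
```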
Simulation results for the electron-cloud at the PSR
Furman, M.A.; Pivi, M.
2001-06-26
We present a first set of computer simulations for the main features of the electron cloud at the Proton Storage Ring (PSR), particularly its energy spectrum. We compare our results with recent measurements, which have been obtained by means of dedicated probes.
Dong, S.
2015-02-15
We present a family of physical formulations, and a numerical algorithm, based on a class of general order parameters for simulating the motion of a mixture of N (N⩾2) immiscible incompressible fluids with given densities, dynamic viscosities, and pairwise surface tensions. The N-phase formulations stem from a phase field model we developed in a recent work based on the conservation of mass and momentum and the second law of thermodynamics. The introduction of general order parameters leads to an extremely strongly-coupled system of (N−1) phase field equations. On the other hand, the general form enables one to compute the N-phase mixing energy density coefficients in an explicit fashion in terms of the pairwise surface tensions. We show that the increased complexity in the form of the phase field equations associated with general order parameters in actuality does not cause essential computational difficulties. Our numerical algorithm reformulates the (N−1) strongly-coupled phase field equations for general order parameters into 2(N−1) Helmholtz-type equations that are completely de-coupled from one another. This leads to a computational complexity comparable to that for the simplified phase field equations associated with a certain special choice of the order parameters. We demonstrate the capabilities of the method developed herein using several test problems involving multiple fluid phases and large contrasts in densities and viscosities among the multitude of fluids. In particular, by comparing simulation results with the Langmuir–de Gennes theory of floating liquid lenses we show that the method using general order parameters produces physically accurate results for multiple fluid phases.
A novel coupling of noise reduction algorithms for particle flow simulations
NASA Astrophysics Data System (ADS)
Zimoń, M. J.; Reese, J. M.; Emerson, D. R.
2016-09-01
Proper orthogonal decomposition (POD) and its extension based on time-windows have been shown to greatly improve the effectiveness of recovering smooth ensemble solutions from noisy particle data. However, to successfully de-noise any molecular system, a large number of measurements still need to be provided. In order to achieve a better efficiency in processing time-dependent fields, we have combined POD with a well-established signal processing technique, wavelet-based thresholding. In this novel hybrid procedure, the wavelet filtering is applied within the POD domain and referred to as WAVinPOD. The algorithm exhibits promising results when applied to both synthetically generated signals and particle data. In this work, the simulations compare the performance of our new approach with standard POD or wavelet analysis in extracting smooth profiles from noisy velocity and density fields. Numerical examples include molecular dynamics and dissipative particle dynamics simulations of unsteady force- and shear-driven liquid flows, as well as phase separation phenomena. Simulation results confirm that WAVinPOD preserves the dimensionality reduction obtained using POD, while improving its filtering properties through the sparse representation of data in a wavelet basis. This paper shows that WAVinPOD outperforms the other estimators for both synthetically generated signals and particle-based measurements, achieving a higher signal-to-noise ratio from a smaller number of samples. The new filtering methodology offers significant computational savings, particularly for multi-scale applications seeking to couple continuum information with atomistic models. This is the first time that a rigorous analysis has compared de-noising techniques for particle-based fluid simulations.
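A minimal sketch of the hybrid idea under stated assumptions: snapshots arranged as a space-by-time matrix, a POD basis from the thin SVD, wavelet soft-thresholding of the temporal coefficients (via PyWavelets), and reconstruction. The rank, wavelet, and threshold rule are illustrative choices, not the paper's.

```python
import numpy as np
import pywt

def wav_in_pod(snapshots, rank, wavelet="db4", level=3):
    """De-noise a space-by-time snapshot matrix by wavelet thresholding
    applied to the leading POD temporal coefficients (illustrative sketch)."""
    # POD via thin SVD: columns of u are spatial modes,
    # rows of (s * vt) are the temporal coefficients.
    u, s, vt = np.linalg.svd(snapshots, full_matrices=False)
    coeffs_t = s[:rank, None] * vt[:rank]
    filtered = np.empty_like(coeffs_t)
    for k, series in enumerate(coeffs_t):
        c = pywt.wavedec(series, wavelet, level=level)
        # Universal threshold estimated from the finest-scale detail coefficients.
        sigma = np.median(np.abs(c[-1])) / 0.6745
        thr = sigma * np.sqrt(2.0 * np.log(series.size))
        c = [c[0]] + [pywt.threshold(d, thr, mode="soft") for d in c[1:]]
        filtered[k] = pywt.waverec(c, wavelet)[: series.size]
    return u[:, :rank] @ filtered

# Illustrative use on noisy synthetic data (200 grid points, 128 snapshots):
x = np.linspace(0, 1, 200)[:, None]
t = np.linspace(0, 1, 128)[None, :]
clean = np.sin(2 * np.pi * x) * np.cos(4 * np.pi * t)
noisy = clean + 0.3 * np.random.default_rng(0).standard_normal(clean.shape)
print(np.linalg.norm(wav_in_pod(noisy, rank=4) - clean) / np.linalg.norm(clean))
```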
NASA Technical Reports Server (NTRS)
Dagum, Leonardo
1989-01-01
The data parallel implementation of a particle simulation for hypersonic rarefied flow described by Dagum associates a single parallel data element with each particle in the simulation. The simulated space is divided into discrete regions called cells containing a variable and constantly changing number of particles. The implementation requires a global sort of the parallel data elements so as to arrange them in an order that allows immediate access to the information associated with cells in the simulation. Described here is a very fast algorithm for performing the necessary ranking of the parallel data elements. The performance of the new algorithm is compared with that of the microcoded instruction for ranking on the Connection Machine.
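A serial NumPy sketch of what the data-parallel ranking computes, assuming each particle already carries a cell index: a stable sort over cell indices yields each particle's rank in cell-major order, so per-cell particle lists can be read off contiguously.

```python
import numpy as np

def rank_particles_by_cell(cell_of_particle, n_cells):
    """Return each particle's rank in cell-major order plus per-cell offsets."""
    counts = np.bincount(cell_of_particle, minlength=n_cells)
    offsets = np.concatenate(([0], np.cumsum(counts)))  # start index of each cell
    # A stable argsort groups particles of the same cell contiguously.
    order = np.argsort(cell_of_particle, kind="stable")
    rank = np.empty_like(order)
    rank[order] = np.arange(order.size)
    return rank, offsets

cells = np.array([2, 0, 1, 2, 0, 1, 1])
rank, offsets = rank_particles_by_cell(cells, n_cells=3)
# Particles of cell c occupy ranks offsets[c] .. offsets[c+1]-1.
print(rank, offsets)
```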
Comparative Study of Algorithms for the Numerical Simulation of Lattice QCD
Luz, Fernando H. P.; Mendes, Tereza
2010-11-12
Large-scale numerical simulations are the prime method for a nonperturbative study of QCD from first principles. Although the lattice simulation of the pure-gauge (or quenched-QCD) case may be performed very efficiently on parallel machines, there are several additional difficulties in the simulation of the full-QCD case, i.e. when dynamical quark effects are taken into account. We discuss the main aspects of full-QCD simulations, describing the most common algorithms. We present a comparative analysis of performance for two versions of the hybrid Monte Carlo method (the so-called R and RHMC algorithms), as provided in the MILC software package. We consider two degenerate flavors of light quarks in the staggered formulation, having in mind the case of finite-temperature QCD.
Clustering of tethered satellite system simulation data by an adaptive neuro-fuzzy algorithm
NASA Technical Reports Server (NTRS)
Mitra, Sunanda; Pemmaraju, Surya
1992-01-01
Recent developments in neuro-fuzzy systems indicate that the concepts of adaptive pattern recognition, when used to identify appropriate control actions corresponding to clusters of patterns representing system states in dynamic nonlinear control systems, may result in innovative designs. A modular, unsupervised neural network architecture, in which fuzzy learning rules have been embedded, is used for on-line identification of similar states. The architecture and control rules involved in Adaptive Fuzzy Leader Clustering (AFLC) allow this system to be incorporated in control systems for identification of system states corresponding to specific control actions. We have used this algorithm to cluster the simulation data of the Tethered Satellite System (TSS) to estimate the range of delta voltages necessary to maintain the desired length rate of the tether. The AFLC algorithm is capable of on-line estimation of the appropriate control voltages from the corresponding length error and length-rate error without a priori knowledge of their membership functions or familiarity with the behavior of the Tethered Satellite System.
A memory structure adapted simulated annealing algorithm for a green vehicle routing problem.
Küçükoğlu, İlker; Ene, Seval; Aksoy, Aslı; Öztürk, Nursel
2015-03-01
Currently, reduction of carbon dioxide (CO2) emissions and fuel consumption has become a critical environmental problem and has attracted the attention of both academia and the industrial sector. Government regulations and customer demands are making environmental responsibility an increasingly important factor in overall supply chain operations. Within these operations, transportation has the most hazardous effects on the environment, i.e., CO2 emissions, fuel consumption, noise and toxic effects on the ecosystem. This study aims to construct vehicle routes with time windows that minimize the total fuel consumption and CO2 emissions. The green vehicle routing problem with time windows (G-VRPTW) is formulated using a mixed integer linear programming model. A memory structure adapted simulated annealing (MSA-SA) meta-heuristic algorithm is constructed due to the high complexity of the proposed problem and long solution times for practical applications. The proposed models are integrated with a fuel consumption and CO2 emissions calculation algorithm that considers the vehicle technical specifications, vehicle load, and transportation distance in a green supply chain environment. The proposed models are validated using well-known instances with different numbers of customers. The computational results indicate that the MSA-SA heuristic is capable of obtaining good G-VRPTW solutions within a reasonable amount of time by providing reductions in fuel consumption and CO2 emissions.
Proposal of a brand-new gyrokinetic algorithm for global MHD simulation
NASA Astrophysics Data System (ADS)
Naitou, Hiroshi; Kobayashi, Kenichi; Hashimoto, Hiroki; Andachi, Takehisa; Lee, Wei-Li; Tokuda, Shinji; Yagi, Masatoshi
2009-11-01
A new algorithm for the gyrokinetic PIC code is proposed. The basic equations are energy conserving and composed of (1) the gyrokinetic Vlasov (GKV) equation, (2) the vortex equation, and (3) the generalized Ohm's law along the magnetic field. Equation (2) is used to advance the electrostatic potential in time. Equation (3) is used to advance the longitudinal component of the vector potential in time, as well as to estimate the longitudinal induced electric field that accelerates charged particles. The particle information is used to estimate the pressure terms in equation (3). The idea was obtained in the process of reviewing the split-weight-scheme formalism. This algorithm was incorporated in the Gpic-MHD code. Preliminary results for the m=1/n=1 internal kink mode simulation in cylindrical geometry indicate good energy conservation, quite low noise due to particle discreteness, and applicability to larger spatial scales and higher-beta regimes. The advantage of the new Gpic-MHD is that the lower-order moments of the GKV equation are estimated by the moment equation while the particle information is used to evaluate the second-order moment.
NASA Astrophysics Data System (ADS)
Zimoń, M. J.; Prosser, R.; Emerson, D. R.; Borg, M. K.; Bray, D. J.; Grinberg, L.; Reese, J. M.
2016-11-01
Filtering of particle-based simulation data can lead to reduced computational costs and enable more efficient information transfer in multi-scale modelling. This paper compares the effectiveness of various signal processing methods to reduce numerical noise and capture the structures of nano-flow systems. In addition, a novel combination of these algorithms is introduced, showing the potential of hybrid strategies to further improve the de-noising performance for time-dependent measurements. The methods were tested on velocity and density fields, obtained from simulations performed with molecular dynamics and dissipative particle dynamics. Comparisons between the algorithms are given in terms of performance, quality of the results and sensitivity to the choice of input parameters. The results provide useful insights into strategies for the analysis of particle-based data and the reduction of computational costs in obtaining ensemble solutions.
Xie, Lin; Cui, Xiaowei; Zhao, Sihao; Lu, Mingquan
2017-02-13
It is well known that the multipath effect remains a dominant error source affecting the positioning accuracy of Global Navigation Satellite System (GNSS) receivers. Significant efforts have been made by researchers and receiver manufacturers to mitigate multipath error in the past decades. Recently, multipath mitigation using dual-polarization antennas has become a research hotspot because it provides another degree of freedom to distinguish the line-of-sight (LOS) signal from the LOS and multipath composite signal without greatly increasing the complexity of the receiver. A number of multipath mitigation techniques using dual-polarization antennas have been proposed, and all of them report performance improvements over single-polarization methods. However, owing to the unpredictability of multipath, dual-polarization techniques are not always effective, and few studies discuss the conditions under which multipath mitigation using a dual-polarization antenna can outperform that using a single-polarization antenna, a fundamental question for dual-polarization multipath mitigation (DPMM) and the design of multipath mitigation algorithms. In this paper we analyze the characteristics of the signal received by a dual-polarization antenna and use maximum likelihood estimation (MLE) to assess the theoretical performance of DPMM in different received-signal cases. Based on the assessment we answer this fundamental question and establish the dual-polarization antenna's capability in mitigating short-delay multipath, the most challenging type for the majority of multipath mitigation techniques. Considering these effective conditions, we propose a dual-polarization sequential iterative maximum likelihood estimation (DP-SIMLE) algorithm for DPMM. The simulation results verify our theory and show superior performance of the proposed DP-SIMLE algorithm over the traditional one using only an RHCP
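A hedged toy illustration of the maximum-likelihood idea behind such estimators (not the authors' DP-SIMLE): with white Gaussian noise, fitting a LOS-plus-one-reflection model reduces to least squares, so the delays can be found by a grid search in which the complex amplitudes are solved linearly at each candidate delay pair. The sinc-shaped code replica and all parameter values are assumptions.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)

def code_replica(delay, n=200):
    # Hypothetical band-limited chip waveform sampled at unit rate.
    return np.sinc(np.arange(n) - delay)

# Synthesize LOS (delay 40.0) plus one short-delay multipath (delay 42.5).
true = 1.0 * code_replica(40.0) + 0.5 * np.exp(1j * 0.7) * code_replica(42.5)
r = true + 0.05 * (rng.standard_normal(200) + 1j * rng.standard_normal(200))

# Grid-search ML: for each (tau0, tau1), fit the amplitudes by linear least
# squares and keep the pair with the smallest residual (maximum likelihood).
best = (np.inf, None)
grid = np.arange(38.0, 45.0, 0.25)
for tau0, tau1 in product(grid, grid):
    if tau1 <= tau0:
        continue
    A = np.column_stack([code_replica(tau0), code_replica(tau1)]).astype(complex)
    amp, *_ = np.linalg.lstsq(A, r, rcond=None)
    resid = np.linalg.norm(r - A @ amp)
    if resid < best[0]:
        best = (resid, (tau0, tau1, amp))

_, (tau0, tau1, amp) = best
print(f"estimated LOS delay {tau0}, multipath delay {tau1}, |amps| {abs(amp)}")
```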
Simulation of a navigator algorithm for a low-cost GPS receiver
NASA Technical Reports Server (NTRS)
Hodge, W. F.
1980-01-01
The analytical structure of an existing navigator algorithm for a low-cost Global Positioning System receiver is described in detail to facilitate its implementation on in-house digital computers and real-time simulators. The material presented includes a simulation of GPS pseudorange measurements, based on a two-body representation of the NAVSTAR spacecraft orbits, and a four-component model of the receiver bias errors. A simpler test for loss of pseudorange measurements due to spacecraft shielding is also noted.
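A compact sketch of the two ingredients mentioned above, under simplifying assumptions (static receiver, known satellite positions, and a single clock-bias term instead of the four-component bias model): simulated pseudoranges and a Gauss-Newton solution for position and clock bias. The satellite coordinates are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
C = 299792458.0  # speed of light, m/s

# Illustrative ECEF satellite positions (m) and true receiver state.
sats = np.array([[15600e3, 7540e3, 20140e3],
                 [18760e3, 2750e3, 18610e3],
                 [17610e3, 14630e3, 13480e3],
                 [19170e3, 610e3, 18390e3]])
x_true = np.array([1917e3, 6029e3, 0.0])
bias_true = 3.0e-4 * C  # receiver clock bias expressed in metres

# Simulated pseudoranges: geometric range + clock bias + measurement noise.
rho = np.linalg.norm(sats - x_true, axis=1) + bias_true \
      + rng.normal(0.0, 5.0, len(sats))

# Gauss-Newton iteration for the state (x, y, z, b).
state = np.zeros(4)
for _ in range(10):
    d = np.linalg.norm(sats - state[:3], axis=1)
    pred = d + state[3]
    # Jacobian rows: unit vectors from satellite toward receiver, plus 1 for b.
    H = np.hstack([(state[:3] - sats) / d[:, None], np.ones((len(sats), 1))])
    state += np.linalg.lstsq(H, rho - pred, rcond=None)[0]

print("position error (m):", np.linalg.norm(state[:3] - x_true))
print("clock bias error (m):", abs(state[3] - bias_true))
```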
Hierarchical tree algorithm for collisional N-body simulations on GRAPE
NASA Astrophysics Data System (ADS)
Fukushige, Toshiyuki; Kawai, Atsushi
2016-06-01
We present an implementation of the hierarchical tree algorithm on the individual timestep algorithm (the Hermite scheme) for collisional N-body simulations, running on the GRAPE-9 system, a special-purpose hardware accelerator for gravitational many-body simulations. Such a combination of the tree algorithm and the individual timestep algorithm was not easy on the previous GRAPE system, mainly because its memory addressing scheme was limited to sequential access to a full set of particle data. The present GRAPE-9 system has an indirect memory addressing unit and a particle memory large enough to store all the particle data and also the tree node data. The indirect memory addressing unit stores interaction lists for the tree algorithm, which are constructed on the host computer, and, according to the interaction lists, the force pipelines calculate only the interactions necessary. In our implementation, the interaction calculations are significantly reduced compared to direct N² summation in the original Hermite scheme. For example, we achieve a speedup of about a factor of 30 (equivalent to about 17 Tflops) over the Hermite scheme for a simulation of an N = 10⁶ system, using hardware with a peak speed of 0.6 Tflops for the Hermite scheme.
Benetazzo, Flavia; Freddi, Alessandro; Monteriù, Andrea; Longhi, Sauro
2014-09-01
Both the theoretical background and the experimental results of an algorithm developed to perform human respiratory rate measurements without any physical contact are presented. Based on depth image sensing techniques, the respiratory rate is derived by measuring morphological changes of the chest wall. The algorithm identifies the human chest, computes its distance from the camera and compares this value with the instantaneous distance, discerning if it is due to the respiratory act or due to a limited movement of the person being monitored. To experimentally validate the proposed algorithm, the respiratory rate measurements coming from a spirometer were taken as a benchmark and compared with those estimated by the algorithm. Five tests were performed, with five different persons seated in front of the camera. The first test aimed to choose a suitable sampling frequency. The second test was conducted to compare the performance of the proposed system with the gold standard in ideal conditions of light, orientation and clothing. The third, fourth and fifth tests evaluated the algorithm performance under different operating conditions. The experimental results showed that the system can correctly measure the respiratory rate, and it is a viable alternative to monitor the respiratory activity of a person without using invasive sensors.
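A simple sketch of the measurement principle, with assumptions flagged: average the depth values over a chest region of interest per frame, then take the dominant FFT peak in the respiratory band as the rate. The ROI, the 30 Hz frame rate, and the band limits are placeholders; the published chest-detection step is not reproduced.

```python
import numpy as np

def respiratory_rate(depth_frames, fs=30.0, band=(0.1, 0.7)):
    """Estimate breaths/min from a stack of depth frames (frames, h, w)
    restricted to a chest ROI; band limits are in Hz (6-42 breaths/min)."""
    signal = depth_frames.reshape(len(depth_frames), -1).mean(axis=1)
    signal -= signal.mean()
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(signal))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    f_peak = freqs[mask][np.argmax(spectrum[mask])]
    return 60.0 * f_peak

# Synthetic check: 0.25 Hz chest motion (15 breaths/min) plus sensor noise.
t = np.arange(0, 60, 1 / 30.0)
frames = 1.0 + 0.005 * np.sin(2 * np.pi * 0.25 * t)[:, None, None] \
         + 0.001 * np.random.default_rng(0).standard_normal((len(t), 8, 8))
print(respiratory_rate(frames))  # ~15
```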
ANOVA parameters influence in LCF experimental data and simulation results
NASA Astrophysics Data System (ADS)
Delprete, C.; Sesana, R.; Vercelli, A.
2010-06-01
The virtual design of components undergoing thermo-mechanical fatigue (TMF) and plastic strains is usually run in many phases. The numerical finite element method gives a useful instrument which becomes increasingly effective as the geometrical and numerical modelling gets more accurate. The constitutive model definition plays an important role in the effectiveness of the numerical simulation [1, 2], as shown for example in Figure 1, which illustrates how a good cyclic plasticity constitutive model can simulate a cyclic load experiment. The component life estimation is the subsequent phase, and it needs complex damage and life estimation models [3-5] which take into account the several parameters and phenomena contributing to damage and life duration. The calibration of these constitutive and damage models requires an accurate testing activity. In the present paper, the main topic of the research activity is to investigate whether the parameters which prove influential in the experimental activity also influence the numerical simulations, thus defining the effectiveness of the models in accounting for all the phenomena actually influencing the life of the component. To this aim, a procedure to tune the parameters needed to estimate the life of mechanical components undergoing TMF and plastic strains is presented for a commercial steel. This procedure aims to be simple and to allow calibrating both the material constitutive model (for the numerical structural simulation) and the damage and life model (for life assessment). The procedure has been applied to specimens. The experimental activity has been developed on three sets of tests run at several temperatures: static tests, high cycle fatigue (HCF) tests, and low cycle fatigue (LCF) tests. The numerical structural FEM simulations have been run with a commercial nonlinear solver, ABAQUS® 6.8. The simulations replicated the experimental tests. The stress, strain, and thermal results from the thermo-structural FEM
First results of coupled IPS/NIMROD/GENRAY simulations
NASA Astrophysics Data System (ADS)
Jenkins, Thomas; Kruger, S. E.; Held, E. D.; Harvey, R. W.; Elwasif, W. R.; Schnack, D. D.
2010-11-01
The Integrated Plasma Simulator (IPS) framework, developed by the SWIM Project Team, facilitates self-consistent simulations of complicated plasma behavior via the coupling of various codes modeling different spatial/temporal scales in the plasma. Here, we apply this capability to investigate the stabilization of tearing modes by ECCD. Under IPS control, the NIMROD code (MHD) evolves fluid equations to model bulk plasma behavior, while the GENRAY code (RF) calculates the self-consistent propagation and deposition of RF power in the resulting plasma profiles. GENRAY data is then used to construct moments of the quasilinear diffusion tensor (induced by the RF) which influence the dynamics of momentum/energy evolution in NIMROD's equations. We present initial results from these coupled simulations and demonstrate that they correctly capture the physics of magnetic island stabilization [Jenkins et al, PoP 17, 012502 (2010)] in the low-beta limit. We also discuss the process of code verification in these simulations, demonstrating good agreement between NIMROD and GENRAY predictions for the flux-surface-averaged, RF-induced currents. An overview of ongoing model development (synthetic diagnostics/plasma control systems; neoclassical effects; etc.) is also presented. Funded by US DoE.
Akbari, Hamed; Bilello, Michel; Da, Xiao; Davatzikos, Christos
2015-01-01
Evaluating various algorithms for the inter-subject registration of brain magnetic resonance images (MRI) is a necessary topic receiving growing attention. Existing studies evaluated image registration algorithms in specific tasks or using specific databases (e.g., only for skull-stripped images, only for single-site images, etc.). Consequently, the choice of registration algorithms seems task- and usage/parameter-dependent. Nevertheless, recent large-scale, often multi-institutional imaging-related studies create the need and raise the question whether some registration algorithms can 1) generally apply to various tasks/databases posing various challenges; 2) perform consistently well, and while doing so, 3) require minimal or ideally no parameter tuning. In seeking answers to this question, we evaluated 12 general-purpose registration algorithms, for their generality, accuracy and robustness. We fixed their parameters at values suggested by algorithm developers as reported in the literature. We tested them in 7 databases/tasks, which present one or more of 4 commonly-encountered challenges: 1) inter-subject anatomical variability in skull-stripped images; 2) intensity inhomogeneity, noise and large structural differences in raw images; 3) imaging protocol and field-of-view (FOV) differences in multi-site data; and 4) missing correspondences in pathology-bearing images. In total, 7,562 registrations were performed. Registration accuracies were measured by (multi-)expert-annotated landmarks or regions of interest (ROIs). To ensure reproducibility, we used public software tools, public databases (whenever possible), and we fully disclose the parameter settings. We show evaluation results, and discuss the performances in light of algorithms' similarity metrics, transformation models and optimization strategies. We also discuss future directions for the algorithm development and evaluations. PMID:24951685
Simulation of biochemical reactions with time-dependent rates by the rejection-based algorithm
Thanh, Vo Hong; Priami, Corrado
2015-08-07
We address the problem of simulating biochemical reaction networks with time-dependent rates and propose a new algorithm based on our rejection-based stochastic simulation algorithm (RSSA) [Thanh et al., J. Chem. Phys. 141(13), 134116 (2014)]. The selection of the next reaction firing by our time-dependent RSSA (tRSSA) is computationally efficient. Furthermore, the generated trajectory is exact by exploiting the rejection-based mechanism. We benchmark tRSSA on different biological systems with varying forms of reaction rates to demonstrate its applicability and efficiency. We reveal that for nontrivial cases, the selection of reaction firings in existing algorithms introduces approximations because the integration of reaction rates is very computationally demanding and simplifying assumptions are introduced. The selection of the next reaction firing by our approach is easier while preserving the exactness.
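A hedged sketch of the rejection (thinning) idea for time-dependent rates, not the authors' full tRSSA: candidate firing times are proposed from a constant upper bound on the propensity over a lookahead window and accepted with probability a(t)/a_max, so the rate integral is never evaluated. The toy degradation channel and its bound are assumptions.

```python
import math
import random

def thinning_next_firing(a, a_max, t0, horizon):
    """Draw the next firing time of a channel with time-dependent propensity
    a(t) <= a_max on [t0, t0 + horizon] via rejection; None if no firing."""
    t = t0
    while True:
        t += random.expovariate(a_max)          # candidate from the bound
        if t > t0 + horizon:
            return None                         # no firing in this window
        if random.random() < a(t) / a_max:      # accept with ratio a(t)/a_max
            return t

# Toy example: degradation X -> 0 with a sinusoidally modulated rate.
random.seed(3)
x, t, T = 100, 0.0, 10.0
while x > 0 and t < T:
    a = lambda s: x * 0.5 * (1.2 + math.sin(2 * math.pi * s))
    nxt = thinning_next_firing(a, a_max=x * 0.5 * 2.2, t0=t, horizon=1.0)
    if nxt is None:
        t += 1.0                                # advance window, re-bound
        continue
    t = nxt
    x -= 1
print(f"t = {t:.2f}, remaining X = {x}")
```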
NASA Astrophysics Data System (ADS)
Shim, Yunsic; Amar, Jacques G.
2005-03-01
The standard kinetic Monte Carlo algorithm is an extremely efficient method to carry out serial simulations of dynamical processes such as thin film growth. However, in some cases it is necessary to study systems over extended time and length scales, and therefore a parallel algorithm is desired. Here we describe an efficient, semirigorous synchronous sublattice algorithm for parallel kinetic Monte Carlo simulations. The accuracy and parallel efficiency are studied as a function of diffusion rate, processor size, and number of processors for a variety of simple models of epitaxial growth. The effects of fluctuations on the parallel efficiency are also studied. Since only local communications are required, linear scaling behavior is observed, e.g., the parallel efficiency is independent of the number of processors for fixed processor size.
A novel algorithm for non-bonded-list updating in molecular simulations.
Maximova, Tatiana; Keasar, Chen
2006-06-01
Simulations of molecular systems typically handle interactions within non-bonded pairs. Generating and updating a list of these pairs can be the most time-consuming part of energy calculations for large systems. Thus, efficient non-bonded list processing can speed up the energy calculations significantly. While the asymptotic complexity of current algorithms (namely O(N), where N is the number of particles) is probably the lowest possible, a wide space for optimization is still left. This article offers a heuristic extension to the previously suggested grid-based algorithms. We show that, when the average particle movements are slow, simulation time can be reduced considerably. The proposed algorithm has been implemented in the DistanceMatrix class of the molecular modeling package MESHI. MESHI is freely available at
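A minimal cell-grid sketch in the spirit of the grid-based algorithms the article extends (not MESHI's DistanceMatrix implementation): particles are binned into cells with edge at least the cutoff, so every non-bonded partner of a particle lies in one of the 27 adjacent cells.

```python
import numpy as np
from collections import defaultdict
from itertools import product

def build_nonbonded_list(pos, cutoff):
    """Return pairs (i, j), i < j, with |pos_i - pos_j| < cutoff.

    pos: (N, 3) coordinates (no periodic boundaries in this sketch).
    Cells have edge >= cutoff, so all partners lie in the 27 adjacent cells.
    """
    cells = defaultdict(list)
    idx = np.floor(pos / cutoff).astype(int)
    for i, c in enumerate(map(tuple, idx)):
        cells[c].append(i)
    pairs = []
    for i, c in enumerate(map(tuple, idx)):
        for off in product((-1, 0, 1), repeat=3):
            for j in cells.get(tuple(np.add(c, off)), ()):
                if j > i and np.linalg.norm(pos[i] - pos[j]) < cutoff:
                    pairs.append((i, j))
    return pairs

pos = np.random.default_rng(0).uniform(0.0, 20.0, (500, 3))
print(len(build_nonbonded_list(pos, cutoff=4.0)))
```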
Simulating the time-dependent Schrödinger equation with a quantum lattice-gas algorithm
NASA Astrophysics Data System (ADS)
Prezkuta, Zachary; Coffey, Mark
2007-03-01
Quantum computing algorithms promise remarkable improvements in speed or memory for certain applications. Currently, the Type II (or hybrid) quantum computer is the most feasible to build. This consists of a large number of small Type I (pure) quantum computers that compute with quantum logic, but communicate with nearest neighbors in a classical way. The arrangement thus formed is suitable for computations that execute a quantum lattice gas algorithm (QLGA). We report QLGA simulations for both the linear and nonlinear time-dependent Schrödinger equation. The simulations evidence the stable, efficient, and at least second-order convergent properties of the algorithm. The simulation capability provides a computational tool for applications in nonlinear optics, superconducting and superfluid materials, Bose-Einstein condensates, and elsewhere.
Golfing with protons: using research grade simulation algorithms for online games
NASA Astrophysics Data System (ADS)
Harold, J.
2004-12-01
Scientists have long known the power of simulations. By modeling a system in a computer, researchers can experiment at will, developing an intuitive sense of how a system behaves. The rapid increase in the power of personal computers, combined with technologies such as Flash, Shockwave and Java, allows us to bring research simulations into the education world by creating exploratory environments for the public. This approach is illustrated by a project funded by a small grant from NSF's Informal Science Education program, through an opportunity that provides education supplements to existing research awards. Using techniques adapted from a magnetospheric research program, several Flash based interactives have been developed that allow web site visitors to explore the motion of particles in the Earth's magnetosphere. These pieces were folded into a larger Space Weather Center web project at the Space Science Institute (www.spaceweathercenter.org). Rather than presenting these interactives as plasma simulations per se, the research algorithms were used to create games such as "Magneto Mini Golf", where the balls are protons moving in combined electric and magnetic fields. The "holes" increase in complexity, beginning with no fields and progressing towards a simple model of Earth's magnetosphere. The emphasis of the activity is gameplay, but because it is at its core a plasma simulation, the user develops an intuitive sense of charged particle motion as they progress. Meanwhile, the pieces contain embedded assessments that are measurable through a database-driven tracking system. Mining that database not only provides helpful usability information, but also allows us to examine whether users are meeting the learning goals of the activities. We will discuss the development and evaluation results of the project, as well as the potential for these types of activities to shift the expectations of what a web site can and should provide educationally.
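As a sketch of the physics such games reuse, here is a standard Boris push for a proton in uniform electric and magnetic fields; this is a generic charged-particle integrator, not the project's actual code, and the field values are arbitrary.

```python
import numpy as np

Q_P, M_P = 1.602176634e-19, 1.67262192e-27  # proton charge (C) and mass (kg)

def boris_push(x, v, E, B, dt, steps):
    """Advance a proton with the Boris scheme (illustrative field values)."""
    traj = [x.copy()]
    qmdt2 = Q_P * dt / (2.0 * M_P)
    for _ in range(steps):
        v_minus = v + qmdt2 * E                  # half electric kick
        t = qmdt2 * B                            # magnetic rotation vector
        s = 2.0 * t / (1.0 + np.dot(t, t))
        v_prime = v_minus + np.cross(v_minus, t)
        v_plus = v_minus + np.cross(v_prime, s)  # rotation about B
        v = v_plus + qmdt2 * E                   # second half electric kick
        x = x + v * dt
        traj.append(x.copy())
    return np.array(traj)

# E x B drift example: the proton gyrates while drifting in the +x direction.
traj = boris_push(x=np.zeros(3), v=np.array([1e5, 0.0, 0.0]),
                  E=np.array([0.0, 1e3, 0.0]), B=np.array([0.0, 0.0, 0.05]),
                  dt=1e-9, steps=2000)
print(traj[-1])
```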
SIMULATION OF AEROSOL DYNAMICS: A COMPARATIVE REVIEW OF ALGORITHMS USED IN AIR QUALITY MODELS
A comparative review of algorithms currently used in air quality models to simulate aerosol dynamics is presented. This review addresses coagulation, condensational growth, nucleation, and gas/particle mass transfer. Two major approaches are used in air quality models to repres...
Valentini, Paolo; Schwartzentruber, Thomas E.
2009-12-10
A novel combined Event-Driven/Time-Driven (ED/TD) algorithm to speed-up the Molecular Dynamics simulation of rarefied gases using realistic spherically symmetric soft potentials is presented. Due to the low density regime, the proposed method correctly identifies the time that must elapse before the next interaction occurs, similarly to Event-Driven Molecular Dynamics. However, each interaction is treated using Time-Driven Molecular Dynamics, thereby integrating Newton's Second Law using the sufficiently small time step needed to correctly resolve the atomic motion. Although infrequent, many-body interactions are also accounted for with a small approximation. The combined ED/TD method is shown to correctly reproduce translational relaxation in argon, described using the Lennard-Jones potential. For densities between ρ = 10⁻⁴ kg/m³ and ρ = 10⁻¹ kg/m³, comparisons with kinetic theory, Direct Simulation Monte Carlo, and pure Time-Driven Molecular Dynamics demonstrate that the ED/TD algorithm correctly reproduces the proper collision rates and the evolution toward thermal equilibrium. Finally, the combined ED/TD algorithm is applied to the simulation of a Mach 9 shock wave in rarefied argon. Density and temperature profiles as well as molecular velocity distributions accurately match DSMC results, and the shock thickness is within the experimental uncertainty. For the problems considered, the ED/TD algorithm ranged from several hundred to several thousand times faster than conventional Time-Driven MD. Moreover, the force calculation to integrate the molecular trajectories is found to contribute a negligible amount to the overall ED/TD simulation time. Therefore, this method could pave the way for the application of much more refined and expensive interatomic potentials, either classical or first-principles, to Molecular Dynamics simulations of shock waves in rarefied gases, involving vibrational nonequilibrium and chemical reactivity.
Jürgens, Tim
2016-01-01
Frequency selectivity can be quantified using masking paradigms, such as psychophysical tuning curves (PTCs). Normal-hearing (NH) listeners show sharp PTCs that are level- and frequency-dependent, whereas frequency selectivity is strongly reduced in cochlear implant (CI) users. This study aims at (a) assessing individual shapes of PTCs in CI users, (b) comparing these shapes to those of simulated CI listeners (NH listeners hearing through a CI simulation), and (c) increasing the sharpness of PTCs using a biologically inspired dynamic compression algorithm, BioAid, which has been shown to sharpen the PTC shape in hearing-impaired listeners. A three-alternative-forced-choice forward-masking technique was used to assess PTCs in 8 CI users (with their own speech processor) and 11 NH listeners (with and without listening through a vocoder to simulate electric hearing). CI users showed flat PTCs with large interindividual variability in shape, whereas simulated CI listeners had PTCs of the same average flatness, but more homogeneous shapes across listeners. The algorithm BioAid was used to process the stimuli before entering the CI users’ speech processor or the vocoder simulation. This algorithm was able to partially restore frequency selectivity in both groups, particularly in seven out of eight CI users, meaning significantly sharper PTCs than in the unprocessed condition. The results indicate that algorithms can improve the large-scale sharpness of frequency selectivity in some CI users. This finding may be useful for the design of sound coding strategies particularly for situations in which high frequency selectivity is desired, such as for music perception. PMID:27604785
Windblown sand on Venus - Preliminary results of laboratory simulations
NASA Technical Reports Server (NTRS)
Greeley, R.; Iversen, J.; Leach, R.; Marshall, J.; Williams, S.; White, B.
1984-01-01
Small particles and winds of sufficient strength to move them have been detected from Venera and Pioneer-Venus data and suggest the existence of aeolian processes on Venus. The Venus wind tunnel (VWT) was fabricated in order to investigate the behavior of windblown particles in a simulated Venusian environment. Preliminary results show that sand-size material is readily entrained at the wind speeds detected on Venus and that saltating grains achieve velocities closely matching those of the wind. Measurements of saltation threshold and particle flux for various particle sizes have been compared with theoretical models which were developed by extrapolation of findings from Martian and terrestrial simulations. Results are in general agreement with theory, although certain discrepancies are apparent which may be attributed to experimental and/or theoretical-modeling procedures. Present findings enable a better understanding of Venusian surface processes and suggest that aeolian processes are important in the geological evolution of Venus.
ENTROPY PRODUCTION IN COLLISIONLESS SYSTEMS. III. RESULTS FROM SIMULATIONS
Barnes, Eric I.; Egerer, Colin P.
2015-05-20
The equilibria formed by the self-gravitating, collisionless collapse of simple initial conditions have been investigated for decades. We present the results of our attempts to describe the equilibria formed in N-body simulations using thermodynamically motivated models. Previous work has suggested that it is possible to define distribution functions for such systems that describe maximum entropy states. These distribution functions are used to create radial density and velocity distributions for comparison to those from simulations. A wide variety of N-body code conditions are used to reduce the chance that results are biased by numerical issues. We find that a subset of initial conditions studied lead to equilibria that can be accurately described by these models, and that direct calculation of the entropy shows maximum values being achieved.
Key results from SB8 simulant flowsheet studies
Koopman, D. C.
2013-04-26
Key technically reviewed results are presented here in support of the Defense Waste Processing Facility (DWPF) acceptance of Sludge Batch 8 (SB8). This report summarizes results from simulant flowsheet studies of the DWPF Chemical Process Cell (CPC). Results include: Hydrogen generation rate for the Sludge Receipt and Adjustment Tank (SRAT) and Slurry Mix Evaporator (SME) cycles of the CPC on a 6,000 gallon basis; Volume percent of nitrous oxide, N2O, produced during the SRAT cycle; Ammonium ion concentrations recovered from the SRAT and SME off-gas; and, Dried weight percent solids (insoluble, soluble, and total) measurements and density.
Comprehensive simulation of the middle atmospheric climate: some recent results
NASA Astrophysics Data System (ADS)
Hamilton, Kevin
1995-05-01
This study discusses the results of comprehensive time-dependent, three-dimensional numerical modelling of the circulation in the middle atmosphere obtained with the GFDL “SKYHI” troposphere-stratosphere-mesosphere general circulation model (GCM). The climate in a long control simulation with an intermediate resolution version (≈3° in horizontal) is briefly reviewed. While many aspects of the simulation are quite realistic, the focus in this study is on remaining first-order problems with the modelled middle atmospheric general circulation, notably the very cold high latitude temperatures in the Southern Hemisphere (SH) winter/spring, and the virtual absence of a quasi-biennial oscillation (QBO) in the tropical stratosphere. These problems are shared by other extant GCMs. It was noted that the SH cold pole problem is somewhat ameliorated with increasing horizontal resolution in the model. This suggests that improved resolution increases the vertical momentum fluxes from the explicitly resolved gravity waves in the model, a point confirmed by detailed analysis of the spectrum of vertical eddy momentum flux in the winter SH extratropics. This result inspired a series of experiments with the 3° SKYHI model modified by adding a prescribed zonally-symmetric zonal drag on the SH winter westerlies. The form of the imposed momentum source was based on the simple assumption that the mean flow drag produced by unresolved waves has a spatial distribution similar to that of the Eliassen-Palm flux divergence associated with explicitly resolved gravity waves. It was found that an appropriately-chosen drag confined to the top six model levels (above 0.35 mb) can lead to quite realistic simulations of the SH winter flow (including even the stationary wave fields) through August, but that problems still remain in the late-winter/springtime simulation. While the imposed momentum source was largely confined to the extratropics, it produced considerable improvement in the
Image Artifacts Resulting from Gamma-Ray Tracking Algorithms Used with Compton Imagers
Seifert, Carolyn E.; He, Zhong
2005-10-01
For Compton imaging it is necessary to determine the sequence of gamma-ray interactions in a single detector or array of detectors. This can be done by time-of-flight measurements if the interactions are sufficiently far apart. However, in small detectors the time between interactions can be too small to measure, and other means of gamma-ray sequencing must be used. In this work, several popular sequencing algorithms are reviewed for sequences with two observed events and three or more observed events in the detector. These algorithms can result in poor imaging resolution and introduce artifacts in the backprojection images. The effects of gamma-ray tracking algorithms on Compton imaging are explored in the context of the 4π Compton imager built by the University of Michigan.
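A toy version of kinematic sequencing for two-interaction events, hedged: for each candidate order, the Compton formula gives the scattering angle implied by the deposited energies, and orders with unphysical cos θ are discarded. When both orders survive, the ambiguity that produces the image artifacts discussed above remains. Full absorption at the second interaction is assumed.

```python
ME_C2 = 510.999  # electron rest energy, keV

def compton_cos_theta(e_incident, e_deposited):
    """cos(theta) from the Compton formula for a photon of energy e_incident
    depositing e_deposited at the first interaction (energies in keV)."""
    e_scattered = e_incident - e_deposited
    return 1.0 - ME_C2 * (1.0 / e_scattered - 1.0 / e_incident)

def allowed_orders(e1, e2):
    """Return the physically allowed interaction orders for a two-site event
    with deposits e1, e2, assuming the photon is fully absorbed at the end."""
    e_total = e1 + e2
    orders = []
    for first, second in [((e1, 1), (e2, 2)), ((e2, 2), (e1, 1))]:
        c = compton_cos_theta(e_total, first[0])
        if -1.0 <= c <= 1.0:
            orders.append((first[1], second[1], c))
    return orders

# 662 keV photon: both orders can be kinematically allowed -> ambiguity.
for order in allowed_orders(300.0, 362.0):
    print("sites %d -> %d, cos(theta) = %.3f" % order)
```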
On the near space population from simulation results
NASA Astrophysics Data System (ADS)
Tischenko, V. I.
A new computer technology module for studying meteoroid complexes is proposed. The near-space structure is represented by orbital fragments visualized from a simulated cometary nucleus disintegration. A modelled section in the ecliptic plane is shown, presenting the complex's form and its inner structure. This representation can be used to analyse space filling and to establish potentially dangerous regions near the complex, specific planetary orbits, or other object routes. Main results for specific comets are given.
Improving Simulation-Based Algorithms for Fitting ERGMs
Hummel, Ruth M.; Hunter, David R.; Handcock, Mark S.
2015-01-01
Markov chain Monte Carlo methods can be used to approximate the intractable normalizing constants that arise in likelihood calculations for many exponential family random graph models for networks. However, in practice, the resulting approximations degrade as parameter values move away from the value used to define the Markov chain, even in cases where the chain produces perfectly efficient samples. We introduce a new approximation method along with a novel method of moving toward a maximum likelihood estimator (MLE) from an arbitrary starting parameter value in a series of steps based on alternating between the canonical exponential family parameterization and the mean-value parameterization. This technique enables us to find an approximate MLE in many cases where this was previously not possible. We illustrate these methods on a model for a transcriptional regulation network for E. coli, an example where previous attempts to approximate an MLE had failed, and a model for a well-known social network dataset involving friendships among workers in a tailor shop. These methods are implemented in the publicly available ergm package for R, and computer code to duplicate the results of this paper is included in the Supplemental Materials. PMID:26120266
NASA Astrophysics Data System (ADS)
Li, Jinghe; Song, Linping; Liu, Qing Huo
2016-02-01
A simultaneous multiple frequency contrast source inversion (CSI) method is applied to reconstructing hydrocarbon reservoir targets in a complex multilayered medium in two dimensions. It simulates the effects of a salt dome sedimentary formation in the context of reservoir monitoring. In this method, the stabilized biconjugate-gradient fast Fourier transform (BCGS-FFT) algorithm is applied as a fast solver for the 2D volume integral equation for the forward computation. The inversion technique with CSI combines the efficient FFT algorithm to speed up the matrix-vector multiplication and the stable convergence of the simultaneous multiple frequency CSI in the iteration process. As a result, this method is capable of making quantitative conductivity image reconstruction effectively for large-scale electromagnetic oil exploration problems, including the vertical electromagnetic profiling (VEP) survey investigated here. A number of numerical examples have been demonstrated to validate the effectiveness and capacity of the simultaneous multiple frequency CSI method for a limited array view in VEP.
Algorithm for Building a Spectrum for NREL's One-Sun Multi-Source Simulator: Preprint
Moriarty, T.; Emery, K.; Jablonski, J.
2012-06-01
Historically, the tools used at NREL to compensate for the difference between a reference spectrum and a simulator spectrum have been well-matched reference cells and the application of a calculated spectral mismatch correction factor, M. This paper describes the algorithm for adjusting the spectrum of a 9-channel fiber-optic-based solar simulator with a uniform beam size of 9 cm square at 1-sun. The combination of this algorithm and the One-Sun Multi-Source Simulator (OSMSS) hardware reduces NREL's current vs. voltage measurement time for a typical three-junction device from man-days to man-minutes. These time savings may be significantly greater for devices with more junctions.
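For context, a hedged sketch of the classical spectral mismatch factor M mentioned above, which the OSMSS hardware and algorithm are designed to drive toward unity; one common convention is shown, and the trapezoidal integration plus the placeholder spectra and responsivities are assumptions, not NREL data.

```python
import numpy as np

def mismatch_factor(wl, e_ref, e_sim, sr_ref, sr_dut):
    """Spectral mismatch correction factor (one common convention):
    M = [int(E_sim*SR_dut) * int(E_ref*SR_ref)] /
        [int(E_sim*SR_ref) * int(E_ref*SR_dut)]
    wl in nm; spectra in W m^-2 nm^-1; spectral responses in A/W."""
    integ = lambda y: float(np.sum((y[1:] + y[:-1]) * np.diff(wl)) / 2.0)
    return (integ(e_sim * sr_dut) * integ(e_ref * sr_ref)) / \
           (integ(e_sim * sr_ref) * integ(e_ref * sr_dut))

# Placeholder spectra and responsivities on a 300-1200 nm grid.
wl = np.linspace(300, 1200, 901)
e_ref = np.exp(-((wl - 650) / 300) ** 2)          # stand-in reference spectrum
e_sim = np.exp(-((wl - 700) / 260) ** 2)          # stand-in simulator spectrum
sr_ref = np.clip((wl - 300) / 700, 0, 1)          # reference cell response
sr_dut = ((wl > 350) & (wl < 750)).astype(float)  # top-junction response
print(mismatch_factor(wl, e_ref, e_sim, sr_ref, sr_dut))  # 1.0 means matched
```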
NASA Technical Reports Server (NTRS)
Emmitt, G. D.; Wood, S. A.; Morris, M.
1990-01-01
Lidar Atmospheric Wind Sounder (LAWS) Simulation Models (LSM) were developed to evaluate the potential impact of global wind observations on the basic understanding of the Earth's atmosphere and on the predictive skills of current forecast models (GCM and regional scale). Fully integrated top to bottom LAWS Simulation Models for global and regional scale simulations were developed. The algorithm development incorporated the effects of aerosols, water vapor, clouds, terrain, and atmospheric turbulence into the models. Other additions include a new satellite orbiter, signal processor, line of sight uncertainty model, new Multi-Paired Algorithm and wind error analysis code. An atmospheric wind field library containing control fields, meteorological fields, phenomena fields, and new European Center for Medium Range Weather Forecasting (ECMWF) data was also added. The LSM was used to address some key LAWS issues and trades such as accuracy and interpretation of LAWS information, data density, signal strength, cloud obscuration, and temporal data resolution.
Direct Dynamics Simulations using Hessian-based Predictor-corrector Integration Algorithms
Lourderaj, Upakarasamy; Song, Kihyung; Windus, Theresa L; Zhuang, Yu; Hase, William L
2007-01-29
The research described in this product was performed in part in the Environmental Molecular Sciences Laboratory, a national scientific user facility sponsored by the Department of Energy's Office of Biological and Environmental Research and located at Pacific Northwest National Laboratory. In previous research (J. Chem. Phys. 111, 3800 (1999)) a Hessian-based integration algorithm was derived for performing direct dynamics simulations. In the work presented here, improvements to this algorithm are described. The algorithm has a predictor step based on a local second-order Taylor expansion of the potential in Cartesian coordinates, within a trust radius, and a fifth-order correction to this predicted trajectory. The current algorithm determines the predicted trajectory in Cartesian coordinates, instead of the instantaneous normal mode coordinates used previously, to ensure angular momentum conservation. For the previous algorithm the corrected step was evaluated in rotated Cartesian coordinates. Since the local potential expanded in Cartesian coordinates is not invariant to rotation, the constants of motion are not necessarily conserved during the corrector step. An approximate correction to this shortcoming was made by projecting translation and rotation out of the rotated coordinates. For the current algorithm unrotated Cartesian coordinates are used for the corrected step to assure the constants of motion are conserved. An algorithm is proposed for updating the trust radius to enhance the accuracy and efficiency of the numerical integration. This modified Hessian-based integration algorithm, with its new components, has been implemented into the VENUS/NWChem software package and compared with the velocity-Verlet algorithm for the H₂CO → H₂ + CO, O₃ + C₃H₆, and F⁻ + CH₃OOH chemical reactions.
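A hedged sketch of the predictor idea, not the VENUS/NWChem implementation: within a trust radius, the trajectory is propagated on the local second-order Taylor surface of the potential, using the quadratic-model force F(Δx) = −g − HΔx, and the model is re-expanded (gradient and Hessian recomputed) whenever the step leaves the trust region. The 2-D anharmonic test potential is a placeholder.

```python
import numpy as np

def hessian_based_trajectory(grad, hess, x0, v0, dt, n_steps, trust=0.2, mass=1.0):
    """Integrate with velocity Verlet on a local quadratic model of the
    potential, rebuilding the model when |x - x_exp| exceeds the trust radius."""
    x, v = x0.copy(), v0.copy()
    x_exp, g0, H = x.copy(), grad(x), hess(x)       # expansion-point data
    model_force = lambda y: -(g0 + H @ (y - x_exp))
    f = model_force(x)
    traj = [x.copy()]
    for _ in range(n_steps):
        v += 0.5 * dt * f / mass
        x += dt * v
        if np.linalg.norm(x - x_exp) > trust:       # left trust region: re-expand
            x_exp, g0, H = x.copy(), grad(x), hess(x)
        f = model_force(x)
        v += 0.5 * dt * f / mass
        traj.append(x.copy())
    return np.array(traj)

# Placeholder 2-D anharmonic potential V = 0.5*|x|^2 + 0.1*x0^2*x1.
grad = lambda x: x + 0.1 * np.array([2 * x[0] * x[1], x[0] ** 2])
hess = lambda x: np.eye(2) + 0.1 * np.array([[2 * x[1], 2 * x[0]],
                                             [2 * x[0], 0.0]])
traj = hessian_based_trajectory(grad, hess, np.array([1.0, 0.0]),
                                np.array([0.0, 0.8]), dt=0.01, n_steps=2000)
print(traj[-1])
```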
Continuum Level Results from Particle Simulations of Active Suspensions
NASA Astrophysics Data System (ADS)
Delmotte, Blaise; Climent, Eric; Plouraboue, Franck; Keaveny, Eric
2014-11-01
Accurately simulating active suspensions on the lab scale is a technical challenge. It requires considering large numbers of interacting swimmers with well described hydrodynamics in order to obtain representative and reliable statistics of suspension properties. We have developed a computationally scalable model based on an extension of the Force Coupling Method (FCM) to active particles. This tool can handle the many-body hydrodynamic interactions between O(10⁵) swimmers while also accounting for finite-size effects, steady or time-dependent strokes, or variable swimmer aspect ratio. Results from our simulations of steady-stroke microswimmer suspensions coincide with those given by continuum models, but, in certain cases, we observe collective dynamics that these models do not predict. We provide robust statistics of resulting distributions and accurately characterize the growth rates of these instabilities. In addition, we explore the effect of the time-dependent stroke on the suspension properties, comparing with those from the steady-stroke simulations. Authors acknowledge the ANR project Motimo for funding and the Calmip computing centre for technical support.
Loukriz, Abdelhamid; Haddadi, Mourad; Messalti, Sabir
2016-05-01
Improvement of the efficiency of photovoltaic systems based on new maximum power point tracking (MPPT) algorithms is the most promising solution due to its low cost and easy implementation without equipment updating. Many MPPT methods with fixed step size have been developed. However, when atmospheric conditions change rapidly, the performance of conventional algorithms is reduced. In this paper, a new variable step size Incremental Conductance (IC) MPPT algorithm is proposed. Modeling and simulation of the conventional IC method and the proposed method under different operating conditions are presented. The proposed method was developed and tested successfully on a photovoltaic system based on a flyback converter and a control circuit using a dsPIC30F4011. Both simulation and experimental designs are presented in several aspects. A comparative study between the proposed variable step size and the fixed step size IC MPPT methods under similar operating conditions is presented. The obtained results demonstrate the efficiency of the proposed MPPT algorithm in terms of speed of MPP tracking and accuracy.
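As a rough illustration of the idea, the sketch below scales the duty-cycle perturbation by |dP/dV|, which vanishes at the maximum power point. It is not the authors' controller: the gain, limits, and the sign convention relating duty cycle to panel voltage (assumed here for a flyback stage) are all hypothetical.

```python
def ic_mppt_update(v, i, v_prev, i_prev, duty, gain=0.05, d_min=0.1, d_max=0.9):
    """One sampling instant of a variable-step incremental-conductance MPPT loop."""
    dv, di = v - v_prev, i - i_prev
    if dv == 0:
        dp_dv = v * di               # voltage unchanged; use the current change as the error
    else:
        dp_dv = i + v * (di / dv)    # dP/dV = I + V*dI/dV, zero at the MPP
    step = gain * abs(dp_dv)         # large step far from the MPP, small step near it
    if dp_dv > 0:                    # left of the MPP: raise the panel voltage
        duty -= step
    elif dp_dv < 0:                  # right of the MPP: lower the panel voltage
        duty += step
    return min(max(duty, d_min), d_max)
```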
Yang, Sheng; Guo, Li; Shao, Fang; Zhao, Yang; Chen, Feng
2015-01-01
Sequencing is widely used to discover associations between microRNAs (miRNAs) and diseases. However, the negative binomial (NB) distribution and high dimensionality of sequencing data can lead to low-power results and low reproducibility. Several statistical learning algorithms have been proposed to address sequencing data, and although evaluation of these methods is essential, such studies are relatively rare. The performance of seven feature selection (FS) algorithms, including baySeq, DESeq, edgeR, the rank sum test, lasso, particle swarm optimization decision tree, and random forest (RF), was compared by simulation under different conditions based on the difference of the mean, the dispersion parameter of the NB distribution, and the signal-to-noise ratio. Real data were used to evaluate the performance of RF, logistic regression, and support vector machine. Based on the simulation and real data, we discuss the behaviour of the FS and classification algorithms. The Apriori algorithm identified frequent item sets (mir-133a, mir-133b, mir-183, mir-937, and mir-96) from among the deregulated miRNAs of six datasets from The Cancer Genome Atlas. Taking these findings together and considering computational memory requirements, we propose a strategy that combines edgeR and DESeq for large sample sizes.
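A minimal sketch of the kind of simulation design described, assuming the NB parametrization var = mu + disp·mu²; all counts, fractions, and fold changes are illustrative placeholders, not the paper's settings.

```python
import numpy as np

def simulate_mirna_counts(n_feat=800, n_per_group=30, frac_de=0.1,
                          mu=100.0, disp=0.25, fold=2.0, seed=1):
    # Draw an miRNA-seq count matrix per group; a fraction frac_de of
    # features get their group-2 mean shifted by 'fold'.
    rng = np.random.default_rng(seed)
    size = 1.0 / disp                      # NB shape parameter
    mu1 = np.full(n_feat, mu)
    mu2 = mu1.copy()
    de = rng.random(n_feat) < frac_de      # truly differential features
    mu2[de] *= fold
    def draw(m):
        p = size / (size + m)[:, None]     # NB(size, p) then has mean m
        return rng.negative_binomial(size, p, size=(n_feat, n_per_group))
    return draw(mu1), draw(mu2), de
```

Feature selection methods can then be scored against the known indicator `de`, which is how power and reproducibility are typically estimated in such comparisons.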
Airflow Hazard Visualization for Helicopter Pilots: Flight Simulation Study Results
NASA Technical Reports Server (NTRS)
Aragon, Cecilia R.; Long, Kurtis R.
2005-01-01
Airflow hazards such as vortices or low level wind shear have been identified as a primary contributing factor in many helicopter accidents. US Navy ships generate airwakes over their decks, creating potentially hazardous conditions for shipboard rotorcraft launch and recovery. Recent sensor developments may enable the delivery of airwake data to the cockpit, where visualizing the hazard data may improve safety and possibly extend ship/helicopter operational envelopes. A prototype flight-deck airflow hazard visualization system was implemented on a high-fidelity rotorcraft flight dynamics simulator. Experienced helicopter pilots, including pilots from all five branches of the military, participated in a usability study of the system. Data was collected both objectively from the simulator and subjectively from post-test questionnaires. Results of the data analysis are presented, demonstrating a reduction in crash rate and other trends that illustrate the potential of airflow hazard visualization to improve flight safety.
The Geometric Cluster Algorithm: Rejection-Free Monte Carlo Simulation of Complex Fluids
NASA Astrophysics Data System (ADS)
Luijten, Erik
2005-03-01
The study of complex fluids is an area of intense research activity, in which exciting and counter-intuitive behavior continues to be uncovered. Ironically, one of the very factors responsible for such interesting properties, namely the presence of multiple relevant time and length scales, often greatly complicates accurate theoretical calculations and computer simulations that could explain the observations. We have recently developed a new Monte Carlo simulation method (J. Liu and E. Luijten, Phys. Rev. Lett. 92, 035504 (2004); see also Physics Today, March 2004, pp. 25-27) that overcomes this problem for several classes of complex fluids. Our approach can accelerate simulations by orders of magnitude by introducing nonlocal, collective moves of the constituents. Strikingly, these cluster Monte Carlo moves are proposed in such a manner that the algorithm is rejection-free. The identification of the clusters is based upon geometric symmetries and can be considered as the off-lattice generalization of the widely used Swendsen-Wang and Wolff algorithms for lattice spin models. While phrased originally for complex fluids that are governed by the Boltzmann distribution, the geometric cluster algorithm can be used to efficiently sample configurations from an arbitrary underlying distribution function and may thus be applied in a variety of other areas. In addition, I will briefly discuss various extensions of the original algorithm, including methods to influence the size of the clusters that are generated and ways to introduce density fluctuations.
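For hard-core particles the cluster construction is deterministic, which makes a compact sketch possible. The following toy version for hard disks in a periodic box conveys the point-reflection idea; for soft potentials the Liu-Luijten algorithm adds particles to the cluster probabilistically, which is not shown here.

```python
import numpy as np

def geometric_cluster_move(pos, radius, box, rng=np.random.default_rng()):
    # One rejection-free cluster move: reflect a seed particle through a
    # random pivot; any particle overlapping a moved particle is moved
    # (reflected) too, recursively.  Hard-core overlap makes cluster
    # membership deterministic, so every constructed move is accepted.
    n = len(pos)
    pivot = rng.random(2) * box
    stack, in_cluster = [rng.integers(n)], set()
    while stack:
        i = stack.pop()
        if i in in_cluster:
            continue
        in_cluster.add(i)
        pos[i] = (2.0 * pivot - pos[i]) % box      # point reflection
        for j in range(n):
            if j not in in_cluster:
                d = pos[i] - pos[j]
                d -= box * np.round(d / box)       # minimum-image convention
                if np.hypot(d[0], d[1]) < 2.0 * radius:
                    stack.append(j)
    return pos
```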
Comparison of Reconstruction and Control algorithms on the ESO end-to-end simulator OCTOPUS
NASA Astrophysics Data System (ADS)
Montilla, I.; Béchet, C.; Lelouarn, M.; Correia, C.; Tallon, M.; Reyes, M.; Thiébaut, É.
Extremely Large Telescopes are very challenging with regard to their Adaptive Optics requirements. Their diameters, the specifications demanded by the science they are being designed for, and the planned use of Extreme Adaptive Optics systems imply a huge increase in the number of degrees of freedom in the deformable mirrors. It is necessary to study new reconstruction algorithms to implement the real-time control in Adaptive Optics at the required speed. We have studied the performance, applied to the case of the European ELT, of three different algorithms: the matrix-vector multiplication (MVM) algorithm, considered as a reference; the Fractal Iterative Method (FrIM); and the Fourier Transform Reconstructor (FTR). The algorithms have been tested on ESO's OCTOPUS software, which simulates the atmosphere, the deformable mirror, the sensor and the closed-loop control. The MVM is the default reconstruction and control method implemented in OCTOPUS, but it scales as O(N²) operations per loop, so it is not considered a fast algorithm for wave-front reconstruction and control on an Extremely Large Telescope. The two other methods are the fast algorithms studied in the E-ELT Design Study. The performance, as well as the response in the presence of noise and under various atmospheric conditions, has been compared using a Single Conjugate Adaptive Optics configuration for a 42 m diameter ELT with a total of 5402 actuators. These comparisons, made on a common simulator, highlight the pros and cons of the various methods and give us a better understanding of the type of reconstruction algorithm that an ELT demands.
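For orientation, the baseline MVM loop that the fast methods are measured against amounts to one dense matrix-vector product per frame; a minimal sketch follows, in which the reconstructor matrix R, the integrator gain, and the array shapes are generic assumptions rather than OCTOPUS specifics.

```python
import numpy as np

def mvm_closed_loop_step(slopes, R, dm_cmd, gain=0.5):
    # One AO frame: wavefront correction from the precomputed
    # reconstructor, applied through a plain integrator.  With N
    # actuators and O(N) sensor slopes, the product R @ slopes costs
    # O(N^2) per loop, which is what becomes prohibitive at ELT scale
    # and motivates FrIM and FTR.
    return dm_cmd - gain * (R @ slopes)
```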
NASA Astrophysics Data System (ADS)
Wright, Jonathan W.
Experimental satellite attitude simulators have long been used to test and analyze control algorithms in order to drive down risk before implementation on an operational satellite. Ideally, the dynamic response of a terrestrial-based experimental satellite attitude simulator would be similar to that of an on-orbit satellite. Unfortunately, gravitational disturbance torques and poorly characterized moments of inertia introduce uncertainty into the system dynamics, leading to questionable attitude control algorithm experimental results. This research consists of three distinct but related contributions to the field of developing robust satellite attitude simulators. In the first part of this research, existing approaches to estimate mass moments and products of inertia are evaluated, followed by a proposition and evaluation of a new approach that increases both the accuracy and precision of these estimates using typical on-board satellite sensors. Next, in order to better simulate the micro-torque environment of space, a new approach to mass balancing a satellite attitude simulator is presented, experimentally evaluated, and verified. Finally, in the third area of research, we capitalize on the platform improvements to analyze a control moment gyroscope (CMG) singularity avoidance steering law. Several successful experiments were conducted with the CMG array at near-singular configurations. An evaluation process was implemented to verify that the platform remained near the desired test momentum, showing that the first two components of this research were effective in allowing us to conduct singularity avoidance experiments in a representative space-like test environment.
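The abstract does not specify the steering law; a common textbook choice for such experiments is the singularity-robust (damped least-squares) inverse, sketched below under that assumption for a 3xN torque Jacobian.

```python
import numpy as np

def sr_inverse_steering(J, tau_cmd, lam=0.01):
    # Gimbal-rate command from a damped least-squares inverse of the
    # CMG torque Jacobian J.  The damping term, ramped up as the
    # singularity measure sqrt(det(J J^T)) -> 0, trades a small torque
    # error for bounded gimbal rates near singular configurations.
    JJt = J @ J.T
    m = np.sqrt(max(np.linalg.det(JJt), 0.0))   # singularity measure
    W = lam * np.exp(-m) * np.eye(3)
    return J.T @ np.linalg.solve(JJt + W, tau_cmd)
```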
Circuit model of the ITER-like antenna for JET and simulation of its control algorithms
Durodié, Frédéric; Křivská, Alena; Helou, Walid; Collaboration: EUROfusion Consortium
2015-12-10
The ITER-like Antenna (ILA) for JET [1] is a 2 toroidal by 2 poloidal array of Resonant Double Loops (RDL) featuring in-vessel matching capacitors feeding RF current straps in a conjugate-T manner, a low impedance quarter-wave impedance transformer, a service stub allowing hydraulic actuator and water cooling services to reach the aforementioned capacitors, and a 2nd-stage phase-shifter-stub matching circuit allowing one to correct/choose the conjugate-T working impedance. Toroidally adjacent RDLs are fed from a 3 dB hybrid splitter. It was operated at 33, 42 and 47 MHz on plasma (2008-2009), while its presently estimated frequency range is from 29 to 49 MHz. At the time of the design (2001-2004), as well as of the experiments, the circuit models of the ILA were quite basic. The Topica model of the ILA front face and strap array was relatively crude and failed to correctly represent the poloidal central septum, the Faraday Screen attachment, and the segmented antenna central septum limiter. The ILA matching capacitors, T-junction, Vacuum Transmission Line (VTL) and service stubs were represented by lumped circuit elements and simple transmission line models. The assessment of the ILA results carried out to decide on the repair of the ILA identified that achieving routine full-array operation requires a better understanding of the RF circuit, a feedback control algorithm for the 2nd-stage matching, and tighter calibrations of RF measurements. The paper presents the progress in modelling of the ILA, comprising a more detailed Topica model of the front face for various plasma Scrape-Off Layer profiles, a comprehensive HFSS model of the matching capacitors including internal bellows and electrode cylinders, 3D-EM models of the VTL including the vacuum ceramic window and service stub, and a transmission line model of the 2nd-stage matching circuit and main transmission lines including the 3 dB hybrid splitters. A time-evolving simulation using the improved circuit model allowed to design and
Liebert, A; Wabnitz, H; Zołek, N; Macdonald, R
2008-08-18
We present an efficient Monte Carlo algorithm for simulation of time-resolved fluorescence in a layered turbid medium. It is based on the propagation of excitation and fluorescence photon bundles and the assumption of equal reduced scattering coefficients at the excitation and emission wavelengths. In addition to distributions of times of arrival of fluorescence photons at the detector, 3-D spatial generation probabilities were calculated. The algorithm was validated by comparison with the analytical solution of the diffusion equation for time-resolved fluorescence from a homogeneous semi-infinite turbid medium. It was applied to a two-layered model mimicking intra- and extracerebral compartments of the adult human head.
Electron-cloud updated simulation results for the PSR, and recent results for the SNS
Pivi, M.; Furman, M.A.
2002-05-29
Recent simulation results for the main features of the electron cloud in the storage ring of the Spallation Neutron Source (SNS) at Oak Ridge, and updated results for the Proton Storage Ring (PSR) at Los Alamos, are presented in this paper. A refined model for the secondary emission process, including the so-called true secondary, rediffused, and backscattered electrons, has recently been included in the electron-cloud code.
Mori, Yoshiharu; Okumura, Hisashi
2015-12-05
Simulated tempering (ST) is a useful method to enhance sampling in molecular simulations. When ST is used, the Metropolis algorithm, which satisfies the detailed balance condition, is usually applied to calculate the transition probability. Recently, an alternative method that satisfies the global balance condition instead of the detailed balance condition was proposed by Suwa and Todo. In this study, an ST method with the Suwa-Todo algorithm is proposed. Molecular dynamics simulations with ST are performed with three algorithms (the Metropolis, heat bath, and Suwa-Todo algorithms) to calculate the transition probability. Among the three algorithms, the Suwa-Todo algorithm yields the highest acceptance ratio and the shortest autocorrelation time, suggesting that sampling by an ST simulation with the Suwa-Todo algorithm is the most efficient. In addition, because the acceptance ratio of the Suwa-Todo algorithm is higher than that of the Metropolis algorithm, the number of temperature states can be reduced by 25% for the Suwa-Todo algorithm when compared with the Metropolis algorithm.
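For reference, the detailed-balance update that the Suwa-Todo rule replaces looks like the following sketch; the per-temperature weights g are the usual ST free-energy estimates, and the neighbor-only proposal is an assumption.

```python
import numpy as np

def st_metropolis_update(E, m, betas, g, rng=np.random.default_rng()):
    # Metropolis move of the temperature index m in simulated tempering,
    # where the ST weight of state (x, m) is exp(-betas[m]*E + g[m]).
    n = m + rng.choice([-1, 1])
    if n < 0 or n >= len(betas):
        return m
    log_acc = -(betas[n] - betas[m]) * E + (g[n] - g[m])
    return n if np.log(rng.random()) < log_acc else m
```

The Suwa-Todo construction instead allocates transition probability among the candidate temperatures so as to minimize rejection while satisfying only global balance, which is what raises the acceptance ratio reported above.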
Modeling results for a linear simulator of a divertor
Hooper, E.B.; Brown, M.D.; Byers, J.A.; Casper, T.A.; Cohen, B.I.; Cohen, R.H.; Jackson, M.C.; Kaiser, T.B.; Molvik, A.W.; Nevins, W.M.; Nilson, D.G.; Pearlstein, L.D.; Rognlien, T.D.
1993-06-23
A divertor simulator, IDEAL, has been proposed by S. Cohen to study the difficult power-handling requirements of the tokamak program in general and the ITER program in particular. Projections of the power density in the ITER divertor reach approximately 1 GW/m² along the magnetic field lines and > 10 MW/m² on a surface inclined at a shallow angle to the field lines. These power densities are substantially greater than can be handled reliably on the surface, so new techniques are required to reduce the power density to a reasonable level. Although the divertor physics must be demonstrated in tokamaks, a linear device could contribute to the development because of its flexibility, the easy access to the plasma and to tested components, and long pulse operation (essentially cw). However, a decision to build a simulator requires not just the recognition of its programmatic value, but also confidence that it can meet the required parameters at an affordable cost. Accordingly, as reported here, it was decided to examine the physics of the proposed device, including kinetic effects resulting from the intense heating required to reach the plasma parameters, and to conduct an independent cost estimate. The detailed role of the simulator in a divertor program is not explored in this report.
Control of Boolean networks: hardness results and algorithms for tree structured networks.
Akutsu, Tatsuya; Hayashida, Morihiro; Ching, Wai-Ki; Ng, Michael K
2007-02-21
Finding control strategies of cells is a challenging and important problem in the post-genomic era. This paper considers theoretical aspects of the control problem using the Boolean network (BN), which is a simplified model of genetic networks. It is shown that finding a control strategy leading to the desired global state is computationally intractable (NP-hard) in general. Furthermore, this hardness result is extended for BNs with considerably restricted network structures. These results justify existing exponential time algorithms for finding control strategies for probabilistic Boolean networks (PBNs). On the other hand, this paper shows that the control problem can be solved in polynomial time if the network has a tree structure. Then, this algorithm is extended for the case where the network has a few loops and the number of time steps is small. Though this paper focuses on theoretical aspects, biological implications of the theoretical results are also discussed.
Algorithm for simulation of quantum many-body dynamics using dynamical coarse-graining
Khasin, M.; Kosloff, R.
2010-04-15
An algorithm for simulation of quantum many-body dynamics having su(2) spectrum-generating algebra is developed. The algorithm is based on the idea of dynamical coarse-graining. The original unitary dynamics of the target observables--the elements of the spectrum-generating algebra--is simulated by a surrogate open-system dynamics, which can be interpreted as weak measurement of the target observables, performed on the evolving system. The open-system state can be represented by a mixture of pure states, localized in the phase space. The localization reduces the scaling of the computational resources with the Hilbert-space dimension n by a factor of n^{3/2}(ln n)^{-1} compared to conventional sparse-matrix methods. The guidelines for the choice of parameters for the simulation are presented and the scaling of the computational resources with the Hilbert-space dimension of the system is estimated. The algorithm is applied to the simulation of the dynamics of systems of 2×10⁴ and 2×10⁶ cold atoms in a double-well trap, described by the two-site Bose-Hubbard model.
Komarov, Ivan; D'Souza, Roshan M
2012-01-01
The Gillespie Stochastic Simulation Algorithm (GSSA) and its variants are cornerstone techniques for simulating reaction kinetics in situations where the concentration of the reactant is too low to allow deterministic techniques such as differential equations. The inherent limitations of the GSSA include the time required for executing a single run and the need for multiple runs for parameter sweep exercises due to the stochastic nature of the simulation. Even very efficient variants of the GSSA are prohibitively expensive for parameter sweeps. Here we present a novel variant of the exact GSSA that is amenable to acceleration by using graphics processing units (GPUs). We parallelize the execution of a single realization across threads in a warp (fine-grained parallelism). A warp is a collection of threads that are executed synchronously on a single multi-processor. Warps executing in parallel on different multi-processors (coarse-grained parallelism) simultaneously generate multiple trajectories. Novel data structures and algorithms reduce memory traffic, which is the bottleneck in computing the GSSA. Our benchmarks show an 8×-120× performance gain over various state-of-the-art serial algorithms when simulating different types of models.
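The underlying exact method is easiest to see in its serial form; the sketch below is the classic direct-method GSSA, not the paper's GPU variant (which parallelizes the propensity search and update across a warp).

```python
import numpy as np

def gillespie_direct(x0, stoich, propensity, t_end, rng=np.random.default_rng()):
    # Exact direct method: sample the time to the next reaction from the
    # total propensity a0, then pick which reaction fired in proportion
    # to its propensity.  stoich[j] is the state change of reaction j.
    x, t, traj = np.array(x0, float), 0.0, []
    while t < t_end:
        a = propensity(x)
        a0 = a.sum()
        if a0 <= 0.0:
            break                                # no reaction can fire
        t += rng.exponential(1.0 / a0)
        j = np.searchsorted(np.cumsum(a), rng.random() * a0)
        x = x + stoich[j]
        traj.append((t, x.copy()))
    return traj
```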
Automated Algorithms for Quantum-Level Accuracy in Atomistic Simulations: LDRD Final Report.
Thompson, Aidan Patrick; Schultz, Peter Andrew; Crozier, Paul; Moore, Stan Gerald; Swiler, Laura Painton; Stephens, John Adam; Trott, Christian Robert; Foiles, Stephen Martin; Tucker, Garritt J.
2014-09-01
This report summarizes the result of LDRD project 12-0395, titled "Automated Algorithms for Quantum-level Accuracy in Atomistic Simulations." During the course of this LDRD, we have developed an interatomic potential for solids and liquids called Spectral Neighbor Analysis Potential (SNAP). The SNAP potential has a very general form and uses machine-learning techniques to reproduce the energies, forces, and stress tensors of a large set of small configurations of atoms, which are obtained using high-accuracy quantum electronic structure (QM) calculations. The local environment of each atom is characterized by a set of bispectrum components of the local neighbor density projected on to a basis of hyperspherical harmonics in four dimensions. The SNAP coefficients are determined using weighted least-squares linear regression against the full QM training set. This allows the SNAP potential to be fit in a robust, automated manner to large QM data sets using many bispectrum components. The calculation of the bispectrum components and the SNAP potential are implemented in the LAMMPS parallel molecular dynamics code. Global optimization methods in the DAKOTA software package are used to seek out good choices of hyperparameters that define the overall structure of the SNAP potential. FitSnap.py, a Python-based software package interfacing to both LAMMPS and DAKOTA, is used to formulate the linear regression problem, solve it, and analyze the accuracy of the resultant SNAP potential. We describe a SNAP potential for tantalum that accurately reproduces a variety of solid and liquid properties. Most significantly, in contrast to existing tantalum potentials, SNAP correctly predicts the Peierls barrier for screw dislocation motion. We also present results from SNAP potentials generated for indium phosphide (InP) and silica (SiO₂). We describe efficient algorithms for calculating SNAP forces and energies in molecular dynamics simulations using massively parallel computers.
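The regression step at the heart of the fit is compact enough to sketch; here B stacks the bispectrum descriptors row-wise, y the corresponding QM energies/forces/stresses, and w the per-row weights (all names hypothetical, not the FitSnap.py interface).

```python
import numpy as np

def fit_snap_coefficients(B, y, w):
    # Weighted least squares: minimize || sqrt(w) * (B @ beta - y) ||^2.
    sw = np.sqrt(np.asarray(w))
    beta, *_ = np.linalg.lstsq(sw[:, None] * B, sw * y, rcond=None)
    return beta
```

In the full workflow this inner linear solve sits inside DAKOTA's outer search over hyperparameters (cutoffs, group weights, and similar choices) that shape the training set and the regression problem itself.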
Earth resources mission performance studies. Volume 2: Simulation results
NASA Technical Reports Server (NTRS)
1974-01-01
Simulations were made at three month intervals to investigate the EOS mission performance over the four seasons of the year. The basic objectives of the study were: (1) to evaluate the ability of an EOS type system to meet a representative set of specific collection requirements, and (2) to understand the capabilities and limitations of the EOS that influence the system's ability to satisfy certain collection objectives. Although the results were obtained from a consideration of a two sensor EOS system, the analysis can be applied to any remote sensing system having similar optical and operational characteristics. While the category related results are applicable only to the specified requirement configuration, the results relating to general capability and limitations of the sensors can be applied in extrapolating to other U.S. based EOS collection requirements. The TRW general purpose mission simulator and analytic techniques discussed in this report can be applied to a wide range of collection and planning problems of earth orbiting imaging systems.
Multi-frequency Imaging Algorithms and Simulation of Space VLBI Using the VLA
NASA Astrophysics Data System (ADS)
Likhachev, S.; Kogan, L.; Fomalont, E.; Owen, F.
2009-08-01
New Multi-Frequency Synthesis (MFS) algorithms were developed and implemented in the Astro Space Locator (ASL) software operating under the MS Windows system. In November 2005, multi-frequency VLA observations of the radio source M87 were carried out at the following frequencies: 14.7, 15.2, 21.3, 22.2, 23.0, and 23.4 GHz. We used the new MFS algorithms to determine the structure of M87 at the central frequency (19 GHz) and obtained both the image and the spectral index map of the source. Comparison with more straightforward imaging techniques (with single frequency images) shows that the new MFS algorithms increase the fidelity of the image by at least a factor of two and provide accurate spectral indices across the emission. Application to simulated Radioastron data is also shown.
Marsh, Rebeccah E; Riauka, Terence A; McQuarrie, Steve A
2007-01-01
Increasingly, fractals are being incorporated into pharmacokinetic models to describe transport and chemical kinetic processes occurring in confined and heterogeneous spaces. However, fractal compartmental models lead to differential equations with power-law time-dependent kinetic rate coefficients that currently are not accommodated by common commercial software programs. This paper describes a parameter optimization method for fitting individual pharmacokinetic curves based on a simulated annealing (SA) algorithm, which always converged towards the global minimum and was independent of the initial parameter values and parameter bounds. In a comparison using a classical compartmental model, similar fits by the Gauss-Newton and Nelder-Mead simplex algorithms required stringent initial estimates and ranges for the model parameters. The SA algorithm is ideal for fitting a wide variety of pharmacokinetic models to clinical data, especially those for which there is weak prior knowledge of the parameter values, such as the fractal models.
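A generic SA minimizer of the kind described might look like the sketch below; the cooling schedule, proposal width, and iteration budget are arbitrary illustrative choices, not the paper's settings.

```python
import numpy as np

def simulated_annealing(cost, x0, bounds, T0=1.0, cooling=0.999, n_iter=20000,
                        rng=np.random.default_rng()):
    # Accepts uphill moves with Boltzmann probability exp(-dF/T), so the
    # search can escape local minima regardless of the starting point --
    # the property that makes it attractive for fractal PK models with
    # weak prior knowledge of the parameters.
    lo, hi = np.asarray(bounds, float).T
    x = np.asarray(x0, float)
    fx, T = cost(x), T0
    best, fbest = x.copy(), fx
    for _ in range(n_iter):
        cand = np.clip(x + rng.normal(scale=0.1 * T * (hi - lo)), lo, hi)
        fc = cost(cand)
        if fc < fx or rng.random() < np.exp(-(fc - fx) / T):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x.copy(), fx
        T *= cooling                     # geometric cooling schedule
    return best, fbest
```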
Ultrasonic noninvasive temperature estimation using echoshift gradient maps: simulation results.
Techavipoo, Udomchai; Chen, Quan; Varghese, Tomy
2005-07-01
Percutaneous ultrasound-image-guided radiofrequency (rf) ablation is an effective treatment for patients with hepatic malignancies that are excluded from surgical resection due to other complications. However, ablated regions are not clearly differentiated from normal untreated regions using conventional ultrasound imaging due to similar echogenic tissue properties. In this paper, we investigate the statistics that govern the relationship between temperature elevation and the corresponding temperature map obtained from the gradient of the echoshifts obtained using consecutive ultrasound radiofrequency signals. A relationship derived using experimental data on the sound speed and tissue expansion variations measured on canine liver tissue samples at different elevated temperatures is utilized to generate ultrasound radiofrequency simulated data. The simulated data set is then utilized to statistically estimate the accuracy and precision of the temperature distributions obtained. The results show that temperature increases between 37 and 67 degrees C can be estimated with standard deviations of +/- 3 degrees C. Our results also indicate that the correlation coefficient between consecutive radiofrequency signals should be greater than 0.85 to obtain accurate temperature estimates.
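The estimation step itself reduces to differentiating the echo-shift map along depth and applying a tissue calibration; a schematic version follows, in which the calibration constant and array layout are assumptions standing in for the measured canine-liver sound-speed and expansion curves.

```python
import numpy as np

def temperature_from_echoshifts(echo_shift, dz, k_cal, T0=37.0):
    # echo_shift: per-pixel time shifts between consecutive rf frames,
    # with depth along axis 0.  Differentiating along depth removes the
    # cumulative part of the shift; k_cal converts the local shift
    # gradient to a temperature change in degrees C.
    return T0 + k_cal * np.gradient(echo_shift, dz, axis=0)
```

The reported precision and the 0.85 correlation threshold are statements about exactly this pipeline: decorrelation between frames corrupts the echo-shift estimates before they are ever differentiated.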
NASA Astrophysics Data System (ADS)
Vijay Alagappan, A.; Narasimha Rao, K. V.; Krishna Kumar, R.
2015-02-01
Tyre models are a prerequisite for any vehicle dynamics simulation. Tyre models range from the simplest mathematical models that consider only the cornering stiffness to a complex set of formulae. Among all the steady-state tyre models that are in use today, the Magic Formula tyre model is unique and most popular. Though the Magic Formula tyre model is widely used, obtaining the model coefficients from either the experimental or the simulation data is not straightforward due to its nonlinear nature and the presence of a large number of coefficients. A common procedure used for this extraction is the least-squares minimisation that requires considerable experience for initial guesses. Various researchers have tried different algorithms, namely, gradient and Newton-based methods, differential evolution, artificial neural networks, etc. The issues involved in all these algorithms are setting bounds or constraints, sensitivity of the parameters, the features of the input data such as the number of points, noisy data, experimental procedure used such as slip angle sweep or tyre measurement (TIME) procedure, etc. The extracted Magic Formula coefficients are affected by these variants. This paper highlights the issues that are commonly encountered in obtaining these coefficients with different algorithms, namely, least-squares minimisation using trust region algorithms, Nelder-Mead simplex, pattern search, differential evolution, particle swarm optimisation, cuckoo search, etc. A key observation is that not all the algorithms give the same Magic Formula coefficients for a given data. The nature of the input data and the type of the algorithm decide the set of the Magic Formula tyre model coefficients.
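To make the fitting problem concrete, here is a minimal trust-region least-squares extraction of the four primary coefficients; the initial guess and data names are placeholders, and real Magic Formula fits involve many more coefficients and load dependencies.

```python
import numpy as np
from scipy.optimize import least_squares

def magic_formula(alpha, B, C, D, E):
    # Pacejka Magic Formula for, e.g., lateral force versus slip angle.
    return D * np.sin(C * np.arctan(B*alpha - E*(B*alpha - np.arctan(B*alpha))))

def fit_magic_formula(alpha, Fy, p0=(10.0, 1.5, 4000.0, 0.5)):
    # Trust-region-reflective least squares, one of the optimizers the
    # paper compares.
    res = least_squares(lambda p: magic_formula(alpha, *p) - Fy, p0, method="trf")
    return res.x
```

The paper's central caution applies directly here: the answer depends on p0, on any bounds supplied, and on the optimizer itself, so different algorithms legitimately return different coefficient sets for the same data.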
Preliminary Benchmarking Efforts and MCNP Simulation Results for Homeland Security
Robert Hayes
2008-04-18
It is shown in this work that basic measurements made from well-defined source-detector configurations can be readily converted into benchmark-quality results by which Monte Carlo N-Particle (MCNP) input stacks can be validated. Specifically, a recent measurement made in support of national security at the Nevada Test Site (NTS) is described with sufficient detail to be submitted to the American Nuclear Society's (ANS) Joint Benchmark Committee (JBC) for consideration as a radiation measurement benchmark. From this very basic measurement, MCNP input stacks are generated and validated in both predicted signal amplitude and spectral shape. Not modeled at this time are perturbations from the more recent pulse height light (PHL) tally feature, although the spectral deviations that are seen can be largely attributed to not including this small correction. The value of this work is as a proof-of-concept demonstration that well-documented historical testing can be converted into formal radiation measurement benchmarks. This effort would support virtual testing of algorithms and new detector configurations.
Initial Evaluations of LoC Prediction Algorithms Using the NASA Vertical Motion Simulator
NASA Technical Reports Server (NTRS)
Krishnakumar, Kalmanje; Stepanyan, Vahram; Barlow, Jonathan; Hardy, Gordon; Dorais, Greg; Poolla, Chaitanya; Reardon, Scott; Soloway, Donald
2014-01-01
Flying near the edge of the safe operating envelope is an inherently unsafe proposition. Edge of the envelope here implies that small changes or disturbances in system state or system dynamics can take the system out of the safe envelope in a short time and could result in loss-of-control events. This study evaluated approaches to predicting loss-of-control safety margins as the aircraft gets closer to the edge of the safe operating envelope. The goal of the approach is to provide the pilot aural, visual, and tactile cues focused on maintaining the pilot's control action within predicted loss-of-control boundaries. Our predictive architecture combines quantitative loss-of-control boundaries, an adaptive prediction method to estimate in real-time Markov model parameters and associated stability margins, and a real-time data-based predictive control margins estimation algorithm. The combined architecture is applied to a nonlinear transport class aircraft. Evaluations of various feedback cues using both test and commercial pilots in the NASA Ames Vertical Motion-base Simulator (VMS) were conducted in the summer of 2013. The paper presents results of this evaluation focused on effectiveness of these approaches and the cues in preventing the pilots from entering a loss-of-control event.
New exclusive CHIPS-TPT algorithms for simulation of neutron-nuclear reactions
NASA Astrophysics Data System (ADS)
Kosov, M.; Savin, D.
2015-05-01
The CHIPS-TPT physics library for simulation of neutron-nuclear reactions at the new exclusive level is being developed at CFAR VNIIA. The exclusive modeling conserves energy, momentum and quantum numbers in each neutron-nuclear interaction. The CHIPS-TPT algorithms are based on the exclusive CHIPS library, which is compatible with Geant4. Special CHIPS-TPT physics lists in the Geant4 format are provided. The calculation time for an exclusive CHIPS-TPT simulation is comparable to the time of the corresponding Geant4-HP simulation. In addition to the reduction of the deposited energy fluctuations, which is a consequence of the energy conservation, the CHIPS-TPT libraries provide the possibility of simulating correlations between secondary particles, e.g. secondary gammas, and the Doppler broadening of gamma lines in the spectrum, which can be measured by germanium detectors.
A Simulated Annealing Algorithm for the Optimization of Multistage Depressed Collector Efficiency
NASA Technical Reports Server (NTRS)
Vaden, Karl R.; Wilson, Jeffrey D.; Bulson, Brian A.
2002-01-01
The microwave traveling wave tube amplifier (TWTA) is widely used as a high-power transmitting source for space and airborne communications. One critical factor in designing a TWTA is the overall efficiency. However, overall efficiency is highly dependent upon collector efficiency; so collector design is critical to the performance of a TWTA. Therefore, NASA Glenn Research Center has developed an optimization algorithm based on Simulated Annealing to quickly design highly efficient multi-stage depressed collectors (MDC).
A Fourier analysis for a fast simulation algorithm. [for switching converters
NASA Technical Reports Server (NTRS)
King, Roger J.
1988-01-01
This paper presents a derivation of compact expressions for the Fourier series analysis of the steady-state solution of a typical switching converter. The modeling procedure for the simulation and the steady-state solution is described, and some desirable traits for its matrix exponential subroutine are discussed. The Fourier analysis algorithm was tested on a phase-controlled parallel-loaded resonant converter, providing an experimental confirmation.
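The core of such a steady-state computation can be sketched as follows: propagate the state across each switching interval with a matrix exponential, then solve the periodicity condition x(T) = x(0). The segment list and the zero-order-hold input below are generic assumptions, not the paper's converter model.

```python
import numpy as np
from scipy.linalg import expm, solve

def periodic_steady_state(segments, u):
    # segments: list of (A, B, t_k) for each switched topology within one
    # period; u: constant input vector.  Over one segment,
    # x(t_k) = e^{A t_k} x(0) + A^{-1}(e^{A t_k} - I) B u   (A nonsingular).
    n = segments[0][0].shape[0]
    Phi, Gamma = np.eye(n), np.zeros(n)
    for A, B, tk in segments:
        Ek = expm(A * tk)
        Gk = np.linalg.solve(A, (Ek - np.eye(n)) @ (B @ u))
        Phi, Gamma = Ek @ Phi, Ek @ Gamma + Gk   # compose segment maps
    # periodicity: x0 = Phi x0 + Gamma
    return solve(np.eye(n) - Phi, Gamma)
```

With the steady-state waveform known in closed form on each interval, its Fourier coefficients follow by integrating the piecewise-exponential solution over one period, which is what yields the compact expressions the paper derives.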
NASA Technical Reports Server (NTRS)
Carrier, Alain C.; Aubrun, Jean-Noel
1993-01-01
New frequency response measurement procedures, on-line modal tuning techniques, and off-line modal identification algorithms are developed and applied to the modal identification of the Advanced Structures/Controls Integrated Experiment (ASCIE), a generic segmented optics telescope test-bed representative of future complex space structures. The frequency response measurement procedure uses all the actuators simultaneously to excite the structure and all the sensors to measure the structural response so that all the transfer functions are measured simultaneously. Structural responses to sinusoidal excitations are measured and analyzed to calculate spectral responses. The spectral responses in turn are analyzed as the spectral data become available and, which is new, the results are used to maintain high quality measurements. Data acquisition, processing, and checking procedures are fully automated. As the acquisition of the frequency response progresses, an on-line algorithm keeps track of the actuator force distribution that maximizes the structural response, to automatically tune to a structural mode when approaching a resonant frequency. This tuning is insensitive to delays, ill-conditioning, and nonproportional damping. Experimental results show that it is useful for modal surveys even in high modal density regions. For thorough modeling, a constructive procedure is proposed to identify the dynamics of a complex system from its frequency response, with the minimization of a least-squares cost function as a desirable objective. This procedure relies on off-line modal separation algorithms to extract modal information and on least-squares parameter subset optimization to combine the modal results and globally fit the modal parameters to the measured data. The modal separation algorithms resolved a modal density of 5 modes/Hz in the ASCIE experiment. They promise to be useful in many challenging applications.
NASA Astrophysics Data System (ADS)
Tichý, Vladimír; Hudec, René; Němcová, Šárka
2016-06-01
The algorithm presented is intended mainly for lobster eye optics. This type of optics (and some similar types) allows for a simplification of the classical ray-tracing procedure, which requires a great many rays to simulate. The method presented performs the simulation of only a few rays and is therefore extremely efficient. Moreover, to simplify the equations, a specific mathematical formalism is used. Only a few simple equations are needed, so the program code can be simple as well. The paper also outlines how to apply the method to some other reflective optical systems.
A novel Monte Carlo algorithm for simulating crystals with McStas
NASA Astrophysics Data System (ADS)
Alianelli, L.; Sánchez del Río, M.; Felici, R.; Andersen, K. H.; Farhi, E.
2004-07-01
We developed an original Monte Carlo algorithm for the simulation of Bragg diffraction by mosaic, bent and gradient crystals. It has practical applications, as it can be used for simulating imperfect crystals (monochromators, analyzers and perhaps samples) in neutron ray-tracing packages like McStas. The code we describe here provides a detailed description of the particle interaction with the microscopic homogeneous regions composing the crystal; therefore it can also be used for the calculation of quantities of conceptual interest, such as multiple scattering, or for the interpretation of experiments aiming at characterizing crystals, such as diffraction topographs.
Zhao, Yi; Cao, Xiangyu; Gao, Jun; Sun, Yu; Yang, Huanhuan; Liu, Xiao; Zhou, Yulong; Han, Tong; Chen, Wei
2016-01-01
We propose a new strategy to design broadband and wide angle diffusion metasurfaces. An anisotropic structure which has opposite phases under x- and y-polarized incidence is employed as the "0" and "1" elements based on the concept of coding metamaterials. To obtain a uniform backward scattering under normal incidence, a Simulated Annealing algorithm is utilized in this paper to calculate the optimal layout. The proposed method provides an efficient way to design diffusion metasurfaces with a simple structure, which has been proved by both simulations and measurements.
A Comprehensive Study of Three Delay Compensation Algorithms for Flight Simulators
NASA Technical Reports Server (NTRS)
Guo, Liwen; Cardullo, Frank M.; Houck, Jacob A.; Kelly, Lon C.; Wolters, Thomas E.
2005-01-01
This paper summarizes a comprehensive study of three predictors used for compensating the transport delay in a flight simulator: the McFarland, Adaptive, and State Space predictors. The paper presents proof that the stochastic approximation algorithm achieves the best compensation among all four adaptive predictors, and investigates in depth the relationship between the state space predictor's compensation quality and its reference model. Piloted simulation tests show that the adaptive predictor and the state space predictor achieve better compensation of transport delay than the McFarland predictor.
Evaluation of observation-driven evaporation algorithms: results of the WACMOS-ET project
NASA Astrophysics Data System (ADS)
Miralles, Diego G.; Jimenez, Carlos; Ershadi, Ali; McCabe, Matthew F.; Michel, Dominik; Hirschi, Martin; Seneviratne, Sonia I.; Jung, Martin; Wood, Eric F.; (Bob) Su, Z.; Timmermans, Joris; Chen, Xuelong; Fisher, Joshua B.; Mu, Quiaozen; Fernandez, Diego
2015-04-01
Terrestrial evaporation (ET) links the continental water, energy and carbon cycles. Understanding the magnitude and variability of ET at the global scale is an essential step towards reducing uncertainties in our projections of climatic conditions and water availability for the future. However, the requirement for global observational data of ET can be satisfied neither with our sparse global in-situ networks, nor with the existing satellite sensors (which cannot measure evaporation directly from space). This situation has led to the recent rise of several algorithms dedicated to deriving ET fields from satellite data indirectly, based on the combination of ET drivers that can be observed from space (e.g. radiation, temperature, phenological variability, water content, etc.). These algorithms can either be based on physics (e.g. Priestley and Taylor or Penman-Monteith approaches) or be purely statistical (e.g. machine learning). However, and despite the efforts of different initiatives like GEWEX LandFlux (Jimenez et al., 2011; Mueller et al., 2013), the uncertainties inherent in the resulting global ET datasets remain largely unexplored, partly due to a lack of inter-product consistency in forcing data. In response to this need, the ESA WACMOS-ET project started in 2012 with the main objectives of (a) developing a Reference Input Data Set to derive and validate ET estimates, and (b) performing a cross-comparison, error characterization and validation exercise of a group of selected ET algorithms driven by this Reference Input Data Set and by in-situ forcing data. The algorithms tested are SEBS (Su et al., 2002), the Penman-Monteith approach from MODIS (Mu et al., 2011), the Priestley and Taylor JPL model (Fisher et al., 2008), the MPI-MTE model (Jung et al., 2010) and GLEAM (Miralles et al., 2011). In this presentation we will show the first results from the ESA WACMOS-ET project, including the performance of the different algorithms at multiple spatial and temporal scales.
Some results on ethnic conflicts based on evolutionary game simulation
NASA Astrophysics Data System (ADS)
Qin, Jun; Yi, Yunfei; Wu, Hongrun; Liu, Yuhang; Tong, Xiaonian; Zheng, Bojin
2014-07-01
The force of ethnic separatism, essentially originating from the negative effect of ethnic identity, damages the stability and harmony of multiethnic countries. In order to eliminate the foundation of ethnic separatism and set up a harmonious ethnic relationship, some scholars have proposed a viewpoint: ethnic harmony could be promoted by popularizing civic identity. However, this viewpoint has been discussed only from a philosophical perspective and still lacks the support of scientific evidence. Because ethnic groups and ethnic identity are products of evolution, and ethnic identity is a parochialism strategy from the perspective of game theory, this paper proposes an evolutionary game simulation model to study the relationship between civic identity and ethnic conflict. The simulation results indicate that: (1) the ratio of individuals with civic identity has a negative association with the frequency of ethnic conflicts; (2) ethnic conflict will not die out by killing all ethnic members once and for all, and it also cannot be reduced by forcible pressure, i.e., increasing the ratio of individuals with civic identity; (3) the average frequency of conflicts can stay at a low level by promoting civic identity periodically and persistently.
A treatment algorithm for patients with large skull bone defects and first results.
Lethaus, Bernd; Ter Laak, Marielle Poort; Laeven, Paul; Beerens, Maikel; Koper, David; Poukens, Jules; Kessler, Peter
2011-09-01
Large skull bone defects resulting from craniotomies due to cerebral insults, trauma or tumours create functional and aesthetic disturbances for the patient. The reconstruction of large osseous defects is still challenging. A treatment algorithm is presented based on the close interaction of radiologists, computer engineers and cranio-maxillofacial surgeons. From 2004 until today, twelve consecutive patients have been operated on successfully according to this treatment plan. Titanium and polyetheretherketone (PEEK) were used to manufacture the implants. The treatment algorithm proved to be reliable. No corrections had to be made either to the skull bone or to the implant. Short operation and hospitalization periods are essential prerequisites for treatment success and justify the high expenses.
Novascone, S. R.; Spencer, B. W.; Andrs, D.; Williamson, R. L.; Hales, J. D.; Perez, D. M.
2013-07-01
The behavior of nuclear fuel in the reactor environment is affected by multiple physics, most notably heat conduction and solid mechanics, which can have a strong influence on each other. To provide credible solutions, a fuel performance simulation code must have the ability to obtain solutions for each of the physics, including the coupling between them. Solution strategies for solving systems of coupled equations can be categorized as loosely coupled, where the individual physics are solved separately, keeping the solutions for the other physics fixed at each iteration, or tightly coupled, where the nonlinear solver simultaneously drives down the residual for each physics, taking into account the coupling between the physics in each nonlinear iteration. In this paper, we compare the performance of loosely and tightly coupled solution algorithms for thermomechanical problems involving coupled thermal and mechanical contact, which is a primary source of interdependence between thermal and mechanical solutions in fuel performance models. The results indicate that loosely coupled simulations require significantly more nonlinear iterations and may run into convergence trouble when the thermal conductivity of the gap is too small. We also apply the tightly coupled solution strategy to a nuclear fuel simulation of an experiment in a test reactor. The results from these simulations indicate that convergence for either approach may be problem dependent, i.e., there may be problems for which a loosely coupled approach converges where a tightly coupled one does not, and vice versa.
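The distinction is easy to see on a toy two-field problem; the residuals below are hypothetical stand-ins for the thermal and mechanical equations, not anything from the fuel-performance code.

```python
import numpy as np
from scipy.optimize import fsolve

def rT(T, u): return T - 300.0 - 0.1 * u          # "heat conduction" residual
def rU(u, T): return u - 0.01 * (T - 300.0)       # "mechanics" residual

def solve_loose(tol=1e-10, max_it=100):
    # Loosely coupled (Picard): alternate single-physics solves, each
    # holding the other field fixed, until the update stalls.
    T, u = 300.0, 0.0
    for k in range(max_it):
        T_new = float(fsolve(rT, T, args=(u,))[0])
        u_new = float(fsolve(rU, u, args=(T_new,))[0])
        if abs(T_new - T) + abs(u_new - u) < tol:
            return T_new, u_new, k + 1
        T, u = T_new, u_new
    raise RuntimeError("Picard iteration did not converge")

def solve_tight():
    # Tightly coupled: a single Newton-type solve drives the stacked
    # residual of both physics to zero simultaneously.
    F = lambda x: [rT(x[0], x[1]), rU(x[1], x[0])]
    return fsolve(F, [300.0, 0.0])
```

When the inter-field coupling is made strong (large coefficients in rT and rU), the Picard loop slows down or diverges while the tight solve is unaffected, mirroring the small-gap-conductivity behavior reported above.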
Aeolian abrasion on Venus: Preliminary results from the Venus simulator
NASA Technical Reports Server (NTRS)
Marshall, J. R.; Greeley, Ronald; Tucker, D. W.; Pollack, J. B.
1987-01-01
The role of atmospheric pressure on aeolian abrasion was examined in the Venus Simulator with a constant temperature of 737 K. Both the rock target and the impactor were fine-grained basalt. The impactor was a 3 mm diameter angular particle chosen to represent a size of material that is entrainable by the dense Venusian atmosphere and potentially abrasive by virtue of its mass. It was projected at the target 10⁵ times at a velocity of 0.7 m/s. The impactor showed a weight loss of approximately 1.2 × 10⁻⁹ g per impact, with the attrition occurring only at the edges. Results from scanning electron microscope analysis, profilometry, and weight measurement are summarized. It is concluded that particles can incur abrasion at Venusian temperatures even with the low impact velocities expected for Venus.
SLAC E144 Plots, Simulation Results, and Data
The 1997 E144 experiments at the Stanford Linear Accelerator Center (SLAC) utilized extremely high laser intensities and collided huge groups of photons together so violently that positron-electron pairs were briefly created: actual particles of matter and antimatter. Instead of matter exploding into heat and light, light actually became matter. That accomplishment opened a new path into the exploration of the interactions of electrons and photons, or quantum electrodynamics (QED). The E144 information at this website includes Feynman diagrams, simulation results, and data files. See also a series of frames showing the E144 laser colliding with a beam electron and producing an electron-positron pair at http://www.slac.stanford.edu/exp/e144/focpic/focpic.html, and lists of collaborators' papers, theses, and a page of press articles.
Governance of complex systems: results of a sociological simulation experiment.
Adelt, Fabian; Weyer, Johannes; Fink, Robin D
2014-01-01
Social sciences have discussed the governance of complex systems for a long time. The following paper tackles the issue by means of experimental sociology, in order to investigate the performance of different modes of governance empirically. The simulation framework developed is based on Esser's model of sociological explanation as well as on Kroneberg's model of frame selection. The performance of governance has been measured by means of three macro and two micro indicators. Surprisingly, central control mostly performs better than decentralised coordination. However, the results depend not only on the mode of governance; there is also a relation between performance and the composition of actor populations, which has not yet been investigated sufficiently. Practitioner Summary: Practitioners can gain insights into the functioning of complex systems and learn how to better manage them. Additionally, they are provided with indicators to measure the performance of complex systems.
NASA Technical Reports Server (NTRS)
Jain, A.; Man, G. K.
1993-01-01
This paper describes the Dynamics Algorithms for Real-Time Simulation (DARTS) real-time hardware-in-the-loop dynamics simulator for the National Aeronautics and Space Administration's Cassini spacecraft. The spacecraft model consists of a central flexible body with a number of articulated rigid-body appendages. The demanding performance requirements from the spacecraft control system require the use of a high fidelity simulator for control system design and testing. The DARTS algorithm provides a new algorithmic and hardware approach to the solution of this hardware-in-the-loop simulation problem. It is based upon the efficient spatial algebra dynamics for flexible multibody systems. A parallel and vectorized version of this algorithm is implemented on a low-cost, multiprocessor computer to meet the simulation timing requirements.
Speed-up hyperspheres homotopic path tracking algorithm for PWL circuits simulations.
Ramirez-Pinero, A; Vazquez-Leal, H; Jimenez-Fernandez, V M; Sedighi, H M; Rashidi, M M; Filobello-Nino, U; Castaneda-Sheissa, R; Huerta-Chua, J; Sarmiento-Reyes, L A; Laguna-Camacho, J R; Castro-Gonzalez, F
2016-01-01
In the present work, we introduce an improved version of the hyperspheres path tracking method adapted for piecewise linear (PWL) circuits. This enhanced version takes advantage of the PWL characteristics of the homotopic curve, achieving faster path tracking and improving the performance of the homotopy continuation method (HCM). Faster computing time allows the study of circuits with higher complexity; the proposed method also significantly decreases the probability of a divergence problem when using the Newton-Raphson method, because it is applied just twice per linear region on the homotopic path. Equilibrium equations of the studied circuits are obtained by applying modified nodal analysis; this method allows us to propose an algorithm for nonlinear circuit analysis. Besides, a starting-point criterion is proposed to obtain better performance of the HCM, and a technique for avoiding the reversion phenomenon is also proposed. To prove the efficiency of the path tracking method, several case studies with bipolar (BJT) and CMOS transistors are provided. Simulation results show that the proposed approach can be up to twelve times faster than the original path tracking method and also helps to avoid several reversion cases that appeared when the original hyperspheres path tracking scheme was employed.
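A plain Newton-homotopy tracker shows the basic mechanics that the hyperspheres method improves on. This sketch steps a scalar deformation parameter and corrects with Newton (via fsolve), whereas the hyperspheres scheme parametrizes the curve on spheres so it can follow turning points and, per the paper, needs only two Newton-Raphson applications per linear region.

```python
import numpy as np
from scipy.optimize import fsolve

def homotopy_track(f, x0, n_steps=100):
    # Deform the trivial problem x - x0 = 0 into f(x) = 0 along
    # lambda in (0, 1], using each solution as the predictor for the
    # next corrector solve.
    x0 = np.atleast_1d(np.asarray(x0, float))
    x = x0.copy()
    for lam in np.linspace(0.0, 1.0, n_steps + 1)[1:]:
        H = lambda y: lam * np.atleast_1d(f(y)) + (1.0 - lam) * (y - x0)
        x = fsolve(H, x)   # Newton-type corrector at this lambda
    return x
```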
Advanced Discontinuous Galerkin Algorithms and First Open-Field Line Turbulence Simulations
NASA Astrophysics Data System (ADS)
Hammett, G. W.; Hakim, A.; Shi, E. L.
2016-10-01
New versions of Discontinuous Galerkin (DG) algorithms have interesting features that may help with challenging higher-dimensional kinetic problems. We are developing the gyrokinetic code Gkeyll based on DG. DG also has features that may help with the next generation of exascale computers. Higher-order methods do more FLOPS to extract more information per byte, thus reducing memory and communications costs (which are a bottleneck at exascale). DG uses efficient Gaussian quadrature like finite elements, but keeps the calculation local for the kinetic solver, also reducing communication. Sparse grid methods might further reduce the cost significantly in higher dimensions. The inner product norm can be chosen to preserve energy conservation with non-polynomial basis functions (such as Maxwellian-weighted bases), which can be viewed as a Petrov-Galerkin method. This allows a full-F code to benefit from Gaussian quadrature similar to that used in popular δf gyrokinetic codes. Consistent basis functions avoid high-frequency numerical modes from electromagnetic terms. We will show our first results of 3x+2v simulations of open-field-line/SOL turbulence in a simple helical geometry (like Helimak/TORPEX), with parameters from LAPD, TORPEX, and NSTX. Supported by the Max-Planck/Princeton Center for Plasma Physics, the SciDAC Center for the Study of Plasma Microturbulence, and DOE Contract DE-AC02-09CH11466.
Swiler, Laura Painton; Eldred, Michael Scott
2009-09-01
This report documents the results of an FY09 ASC V&V Methods level 2 milestone demonstrating new algorithmic capabilities for mixed aleatory-epistemic uncertainty quantification. Through the combination of stochastic expansions for computing aleatory statistics and interval optimization for computing epistemic bounds, mixed uncertainty analysis studies are shown to be more accurate and efficient than previously achievable. Part I of the report describes the algorithms and presents benchmark performance results. Part II applies these new algorithms to UQ analysis of radiation effects in electronic devices and circuits for the QASPR program.
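The segregated structure described (aleatory statistics inside, epistemic intervals outside) can be sketched generically; the inner loop here uses plain Monte Carlo where the milestone used stochastic expansions, and the model signature is an assumption.

```python
import numpy as np
from scipy.optimize import minimize

def mixed_uq_bounds(model, eps_bounds, n_samples=2000, seed=0):
    # Outer: interval optimization over epistemic variables e.
    # Inner: statistics over aleatory samples x ~ N(0, 1).
    # Returns an interval [lo, hi] on the mean response.
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n_samples)
    stat = lambda e: float(np.mean(model(x, e)))
    e0 = np.mean(eps_bounds, axis=1)              # start at interval midpoints
    lo = minimize(stat, e0, bounds=eps_bounds).fun
    hi = -minimize(lambda e: -stat(e), e0, bounds=eps_bounds).fun
    return lo, hi
```

Replacing the inner Monte Carlo with a polynomial chaos or stochastic collocation surrogate is exactly what makes the combined study tractable at the accuracy reported.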
NASA Astrophysics Data System (ADS)
Zemlyanaya, E. V.; Bashashin, M. V.; Rahmonov, I. R.; Shukrinov, Yu. M.; Atanasova, P. Kh.; Volokhova, A. V.
2016-10-01
We consider a model of a system of long Josephson junctions (LJJ) with inductive and capacitive coupling. The corresponding system of nonlinear partial differential equations is solved by means of the standard three-point finite-difference approximation in the spatial coordinate and the Runge-Kutta method for the solution of the resulting Cauchy problem. A parallel algorithm is developed and implemented on the basis of the MPI (Message Passing Interface) technology. The effect of the coupling between the JJs on the properties of the LJJ system is demonstrated. Numerical results are discussed from the viewpoint of the effectiveness of the parallel implementation.
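The spatial part of such a discretization is compact; the sketch below writes a stack of inductively coupled sine-Gordon equations with the standard three-point Laplacian. The coupling matrix S, damping alpha, bias gamma, and the periodic wrap via np.roll are simplifying assumptions (the physical problem uses open boundaries and includes the capacitive coupling terms omitted here).

```python
import numpy as np

def ljj_rhs(phi, dphi_dt, dx, S, alpha, gamma):
    # phi, dphi_dt: arrays of shape (n_junctions, n_x).
    # Model: phi_tt = S @ phi_xx - sin(phi) - alpha*phi_t + gamma.
    lap = (np.roll(phi, -1, axis=1) - 2.0*phi + np.roll(phi, 1, axis=1)) / dx**2
    return S @ lap - np.sin(phi) - alpha * dphi_dt + gamma
```

Advancing this right-hand side with a Runge-Kutta step and distributing rows (junctions) or spatial blocks across MPI ranks is the natural parallelization the abstract refers to.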
Orion Guidance and Control Ascent Abort Algorithm Design and Performance Results
NASA Technical Reports Server (NTRS)
Proud, Ryan W.; Bendle, John R.; Tedesco, Mark B.; Hart, Jeremy J.
2009-01-01
During the ascent flight phase of NASA's Constellation Program, the Ares launch vehicle propels the Orion crew vehicle to an agreed-to insertion target. If a failure occurs at any point in time during ascent, then a system must be in place to abort the mission and return the crew to a safe landing with a high probability of success. To achieve continuous abort coverage, one of two sets of effectors is used. Either the Launch Abort System (LAS), consisting of the Attitude Control Motor (ACM) and the Abort Motor (AM), or the Service Module (SM), consisting of the SM Orion Main Engine (OME), Auxiliary (Aux) jets, and Reaction Control System (RCS) jets, is used. The LAS effectors are used for aborts from liftoff through the first 30 seconds of second-stage flight. The SM effectors are used from that point through Main Engine Cutoff (MECO). There are two distinct sets of Guidance and Control (G&C) algorithms that are designed to maximize the performance of these abort effectors. This paper will outline the necessary inputs to the G&C subsystem, the preliminary design of the G&C algorithms, the ability of the algorithms to predict what abort modes are achievable, and the resulting success of the abort system. Abort success will be measured against the Preliminary Design Review (PDR) abort performance metrics, and overall performance will be reported. Finally, potential improvements to the G&C design will be discussed.
Waanders, Bart Van Bloemen
2006-01-01
Chemical/Biological/Radiological (CBR) contamination events pose a considerable threat to our nation's infrastructure, especially in large internal facilities, external flows, and water distribution systems. Because physical security can only be enforced to a limited degree, deployment of early warning systems is being considered. However, to achieve reliable and efficient functionality, several complex questions must be answered: (1) where should sensors be placed; (2) how can sparse sensor information be used efficiently to determine the location of the original intrusion; (3) what are the model and data uncertainties; (4) how should these uncertainties be handled; and (5) how can our algorithms and forward simulations be improved sufficiently to achieve real-time performance? This report presents the results of a three-year algorithmic and application development effort to support the identification, mitigation, and risk assessment of CBR contamination events. The main thrust of this investigation was to develop (1) computationally efficient algorithms for strategically placing sensors, (2) methods for identifying contamination events from sparse observations, (3) characterization of uncertainty through accurate demand forecasts and investigation of uncertain simulation model parameters, (4) risk assessment capabilities, and (5) reduced-order modeling methods. The development effort was focused on water distribution systems, large internal facilities, and outdoor areas.
Guan, Fada; Johns, Jesse M; Vasudevan, Latha; Zhang, Guoqing; Tang, Xiaobin; Poston, John W; Braby, Leslie A
2015-06-01
Coincident counts can be observed in experimental radiation spectroscopy. Accurate quantification of the radiation source requires the detection efficiency of the spectrometer, which is often experimentally determined. However, Monte Carlo analysis can be used to supplement experimental approaches to determine the detection efficiency a priori. The traditional Monte Carlo method overestimates the detection efficiency as a result of omitting coincident counts caused mainly by multiple cascade source particles. In this study, a novel "multi-primary coincident counting" algorithm was developed using the Geant4 Monte Carlo simulation toolkit. A high-purity Germanium detector for ⁶⁰Co gamma-ray spectroscopy problems was accurately modeled to validate the developed algorithm. The simulated pulse height spectrum agreed well qualitatively with the measured spectrum obtained using the high-purity Germanium detector. The developed algorithm can be extended to other applications, with a particular emphasis on challenging radiation fields, such as counting multiple types of coincident radiations released from nuclear fission or used nuclear fuel.
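A toy post-processing sketch of the coincidence effect at issue (illustrative detection probabilities, not Geant4 output): when both ⁶⁰Co cascade gammas are generated in the same event, their deposits sum, producing a 2.50 MeV sum peak that a one-gamma-at-a-time simulation cannot reproduce:

```python
# Toy model of coincident summing for the Co-60 cascade (1.17 + 1.33 MeV).
# The full-energy deposition probability p_full is an assumption.
import numpy as np

rng = np.random.default_rng(1)
n_decays = 100_000
p_full = 0.05                 # assumed full-energy deposition probability

def deposit(e_gamma):
    # full-energy peak with probability p_full, else no deposit
    # (Compton continuum ignored for brevity)
    return np.where(rng.random(n_decays) < p_full, e_gamma, 0.0)

per_event = deposit(1.17) + deposit(1.33)   # sum within the same decay
spectrum, edges = np.histogram(per_event[per_event > 0],
                               bins=[1.0, 1.25, 1.4, 2.6])
print(dict(zip(["1.17 peak", "1.33 peak", "2.50 sum peak"], spectrum)))
```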
Basconi, Joseph E; Shirts, Michael R
2013-07-09
Temperature control algorithms in molecular dynamics (MD) simulations are necessary to study isothermal systems. However, these thermostatting algorithms alter the velocities of the particles and thus modify the dynamics of the system with respect to the microcanonical ensemble, which could potentially lead to thermostat-dependent dynamical artifacts. In this study, we investigate how six well-established thermostat algorithms, applied with different coupling strengths and to different degrees of freedom, affect the dynamics of various molecular systems. We consider dynamic processes occurring on different time scales by measuring translational and rotational self-diffusion as well as the shear viscosity of water, diffusion of a small molecule solvated in water, and diffusion and the dynamic structure factor of a polymer chain in water. All of these properties are significantly dampened by thermostat algorithms that randomize particle velocities, such as the Andersen thermostat and Langevin dynamics, when strong coupling is used. For the solvated small molecule and polymer, these dampening effects are reduced somewhat if the thermostats are applied to the solvent alone, such that the solute's temperature is maintained only through thermal contact with solvent particles. Algorithms that operate by scaling the velocities, such as the Berendsen thermostat, the stochastic velocity rescaling approach of Bussi and co-workers, and the Nosé-Hoover thermostat, yield transport properties that are statistically indistinguishable from those of the microcanonical ensemble, provided they are applied globally, i.e., coupled to the system's kinetic energy. When coupled to local kinetic energies, a velocity scaling thermostat can have dampening effects comparable to a velocity randomizing method, as we observe when a massive Nosé-Hoover coupling scheme is used to simulate water. Correct dynamical properties, at least those studied in this paper, are obtained with the Berendsen
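As a reference point for the velocity-scaling class discussed above, a minimal sketch of the Berendsen weak-coupling step (reduced units assumed; the published force fields and systems are not reproduced here):

```python
# Berendsen weak-coupling step: scale all velocities by a factor lambda
# that nudges the instantaneous temperature toward T0. tau controls the
# coupling strength; tau -> infinity recovers microcanonical dynamics.
import numpy as np

kB = 1.0                                  # reduced units (assumed)

def berendsen_scale(vel, masses, T0, dt, tau):
    ndof = vel.size                       # ignoring constraints/COM here
    T_inst = np.sum(masses[:, None] * vel**2) / (ndof * kB)
    lam = np.sqrt(1.0 + (dt / tau) * (T0 / T_inst - 1.0))
    return lam * vel                      # global, kinetic-energy coupling

rng = np.random.default_rng(0)
v = rng.normal(size=(64, 3))
m = np.ones(64)
v = berendsen_scale(v, m, T0=1.5, dt=0.002, tau=0.1)
```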
Drawert, Brian; Lawson, Michael J.; Petzold, Linda; Khammash, Mustafa
2010-01-01
We have developed a computational framework for accurate and efficient simulation of stochastic spatially inhomogeneous biochemical systems. The new computational method employs a fractional step hybrid strategy. A novel formulation of the finite state projection (FSP) method, called the diffusive FSP method, is introduced for the efficient and accurate simulation of diffusive transport. Reactions are handled by the stochastic simulation algorithm. PMID:20170209
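Reactions in the hybrid scheme are handled by the stochastic simulation algorithm; here is a minimal sketch of the Gillespie direct method on an assumed dimerization network (A + A -> A2, A2 -> A + A):

```python
# Gillespie direct method: draw the time to the next reaction from an
# exponential with rate a0, then pick which reaction fires in proportion
# to its propensity. Rate constants are illustrative.
import numpy as np

rng = np.random.default_rng(2)
x = np.array([100, 0])                    # counts of A and A2
k1, k2 = 0.01, 0.1                        # A+A -> A2, A2 -> A+A

t, t_end = 0.0, 10.0
while t < t_end:
    a = np.array([k1 * x[0] * (x[0] - 1) / 2, k2 * x[1]])  # propensities
    a0 = a.sum()
    if a0 == 0:
        break
    t += rng.exponential(1.0 / a0)        # time to next reaction
    if rng.random() < a[0] / a0:          # choose which reaction fires
        x += [-2, 1]
    else:
        x += [2, -1]
print(t, x)
```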
Parallel-vector algorithms for particle simulations on shared-memory multiprocessors
Nishiura, Daisuke; Sakaguchi, Hide
2011-03-01
Over the last few decades, the computational demands of massive particle-based simulations for both scientific and industrial purposes have been continuously increasing. Hence, considerable efforts are being made to develop parallel computing techniques on various platforms. In such simulations, particles freely move within a given space, and so on a distributed-memory system, load balancing, i.e., assigning an equal number of particles to each processor, is not guaranteed. In contrast, shared-memory systems achieve better load balancing for particle models, but they suffer from the intrinsic drawback of memory access competition, particularly during (1) pairing of contact candidates from among neighboring particles and (2) force summation for each particle. Here, novel algorithms are proposed to overcome these two problems. For the first problem, the key is a pre-conditioning process during which particle labels are sorted by the label of the cell in the domain to which the particles belong. Then, a list of contact candidates is constructed by pairing the sorted particle labels. For the latter problem, a table comprising the list indexes of the contact candidate pairs is created and used to sum the contact forces acting on each particle for all contacts according to Newton's third law. With just these methods, memory access competition is avoided without additional redundant procedures. The parallel efficiency and compatibility of these two algorithms were evaluated in discrete element method (DEM) simulations on four types of shared-memory parallel computers: a multicore multiprocessor computer, scalar supercomputer, vector supercomputer, and graphics processing unit. The computational efficiency of a DEM code was found to be drastically improved with our algorithms on all but the scalar supercomputer. Thus, the developed parallel algorithms are useful on shared-memory parallel computers with sufficient memory bandwidth.
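A 1-D sketch of the pre-conditioning idea described above (the paper works in 2-D/3-D): particle labels are sorted by cell label, and contact candidates are then paired from the sorted list, so each thread can read a contiguous, race-free slice:

```python
# Cell-sorted contact-candidate construction: sort particle indices by
# cell label, then pair each particle with later particles in the same
# or the next cell. 1-D cells of unit width are an assumption.
import numpy as np

rng = np.random.default_rng(3)
pos = rng.uniform(0.0, 10.0, 1000)
cell = (pos // 1.0).astype(int)           # cell label per particle
order = np.argsort(cell)                  # particle labels sorted by cell
sorted_cell = cell[order]

pairs = []
for i, p in enumerate(order):             # candidates: same or next cell
    j = i + 1
    while j < len(order) and sorted_cell[j] <= sorted_cell[i] + 1:
        pairs.append((p, order[j]))
        j += 1
print(len(pairs), "contact candidates")
```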
1991-07-01
A Comparison of Direction Finding Results from an FFT Peak Identification Technique with Those from the MUSIC Algorithm (U), by L.E. Montbriand. CRC Report No. 1438, Government of Canada, Ottawa, July 1991.
New simulation and measurement results on gateable DEPFET devices
NASA Astrophysics Data System (ADS)
Bähr, Alexander; Aschauer, Stefan; Hermenau, Katrin; Herrmann, Sven; Lechner, Peter H.; Lutz, Gerhard; Majewski, Petra; Miessner, Danilo; Porro, Matteo; Richter, Rainer H.; Schaller, Gerhard; Sandow, Christian; Schnecke, Martina; Schopper, Florian; Stefanescu, Alexander; Strüder, Lothar; Treis, Johannes
2012-07-01
To improve the signal-to-noise ratio, devices for optical and X-ray astronomy use techniques to suppress background events; well-known examples include shutters and frame-store Charge Coupled Devices (CCDs). Based on the DEpleted P-channel Field Effect Transistor (DEPFET) principle, a so-called Gateable DEPFET detector can be built. Such devices combine the DEPFET principle with a fast built-in electronic shutter usable for optical and X-ray applications. The DEPFET itself is the basic cell of an active pixel sensor built on a fully depleted bulk. It combines internal amplification, readout on demand, analog storage of the signal charge, and low readout noise with full sensitivity over the whole bulk thickness. A Gateable DEPFET has all these benefits and obviates the need for an external shutter. Two concepts of Gateable DEPFET layouts providing a built-in shutter are introduced, and proof-of-principle measurements for both concepts are presented. Using recently produced prototypes, a shielding of the collection anode up to 1 × 10⁻⁴ was achieved. Simulations predict that an optimized geometry should yield values of 1 × 10⁻⁵ and better. With the switching electronics currently in use, a timing evaluation of shutter opening and closing yielded rise and fall times of 100 ns.
An assessment of coupling algorithms for nuclear reactor core physics simulations
Hamilton, Steven; Berrill, Mark; Clarno, Kevin; Pawlowski, Roger; Toth, Alex; Kelley, C.T.; Evans, Thomas; Philip, Bobby
2016-04-15
This paper evaluates the performance of multiphysics coupling algorithms applied to a light water nuclear reactor core simulation. The simulation couples the k-eigenvalue form of the neutron transport equation with heat conduction and subchannel flow equations. We compare Picard iteration (block Gauss–Seidel) to Anderson acceleration and multiple variants of preconditioned Jacobian-free Newton–Krylov (JFNK). The performance of the methods is evaluated over a range of energy group structures and core power levels. A novel physics-based approximation to a Jacobian-vector product has been developed to mitigate the impact of expensive on-line cross section processing steps. Numerical simulations demonstrating the efficiency of JFNK and Anderson acceleration relative to standard Picard iteration are performed on a 3D model of a nuclear fuel assembly. Both criticality (k-eigenvalue) and critical boron search problems are considered.
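For readers unfamiliar with Anderson acceleration, the following minimal sketch (a toy fixed-point problem, not the coupled neutronics/thermal-hydraulics update) shows the windowed residual mixing that distinguishes it from plain Picard iteration:

```python
# Anderson acceleration for a fixed point x = g(x): mix the last m
# iterates with weights alpha that minimize the combined residual norm
# subject to sum(alpha) = 1. g is a toy stand-in for the coupled update.
import numpy as np

def anderson(g, x0, m=3, tol=1e-10, itmax=100):
    x = [x0]
    f = [g(x0) - x0]                      # residual history
    for k in range(itmax):
        mk = min(m, k + 1)
        F = np.column_stack(f[-mk:])      # recent residuals
        ones = np.ones(mk)
        M = F.T @ F + 1e-12 * np.eye(mk)  # small regularization
        alpha = np.linalg.solve(M, ones)  # constrained least squares
        alpha /= alpha.sum()
        X = np.column_stack(x[-mk:])
        x_new = (X + F) @ alpha           # i.e., sum_i alpha_i * g(x_i)
        x.append(x_new)
        f.append(g(x_new) - x_new)
        if np.linalg.norm(f[-1]) < tol:
            return x_new, k
    return x[-1], itmax

g = lambda x: np.cos(x)                   # toy contraction
print(anderson(g, np.array([1.0])))
```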
NASA Technical Reports Server (NTRS)
Lansing, F. L.; Strain, D. M.; Chai, V. W.; Higgins, S.
1979-01-01
The Energy Consumption Computer Program was developed to simulate building heating and cooling loads and to compute thermal and electric energy consumption and cost. This article reports on the new algorithms and modifications made in an effort to widen the areas of application. The program structure was rewritten accordingly to refine and advance the building model and to further reduce the processing time and cost. The program is noted for its very low cost and ease of use compared to other available codes. The accuracy of the computations is not sacrificed, however: the results are expected to lie within ±10% of actual energy meter readings.
Object-Oriented/Data-Oriented Design of a Direct Simulation Monte Carlo Algorithm
NASA Technical Reports Server (NTRS)
Liechty, Derek S.
2014-01-01
Over the past decade, there has been much progress towards improved phenomenological modeling and algorithmic updates for the direct simulation Monte Carlo (DSMC) method, which provides a probabilistic physical simulation of gas flows. These improvements have largely been based on the work of the originator of the DSMC method, Graeme Bird. Of primary importance are improved chemistry, internal energy, and physics modeling and a reduction in time to solution. These allow for an expanded range of possible solutions in altitude and velocity space. NASA's current production code, the DSMC Analysis Code (DAC), is well-established, is based on Bird's 1994 algorithms written in Fortran 77, and has proven difficult to upgrade. A new DSMC code is being developed in the C++ programming language using object-oriented and data-oriented design paradigms to facilitate the inclusion of the recent improvements and future development activities. The development efforts on the new code, the Multiphysics Algorithm with Particles (MAP), are described, and performance comparisons are made with DAC.
Exclusive CHIPS-TPT algorithms for simulation of neutron-nuclear reactions
NASA Astrophysics Data System (ADS)
Kosov, Mikhail; Savin, Dmitriy
2016-09-01
The CHIPS-TPT physics library for simulation of neutron-nuclear reactions at a new, exclusive level is being developed at CFAR VNIIA. Exclusive modeling conserves energy, momentum, and quantum numbers in each neutron-nuclear interaction. The CHIPS-TPT algorithms are based on the exclusive CHIPS library, which is compatible with Geant4, and special CHIPS-TPT physics lists in the Geant4 format are provided. The calculation time for an exclusive CHIPS-TPT simulation is comparable to that of the corresponding inclusive Geant4-HP simulation, and is much shorter for mono-isotopic simulations. In addition to reducing the fluctuations of the deposited energy, a consequence of energy conservation, the CHIPS-TPT libraries make it possible to simulate correlations of secondary particles, e.g. secondary gammas or n-γ correlations, and the Doppler broadening of γ-lines in the simulated spectra, which can be measured by germanium detectors.
NASA Astrophysics Data System (ADS)
Lin, K.-M.; Hu, M.-H.; Hung, C.-T.; Wu, J.-S.; Hwang, F.-N.; Chen, Y.-S.; Cheng, G.
2012-12-01
We present the development of a hybrid numerical algorithm that weakly couples a gas flow model (GFM) and a plasma fluid model (PFM) for simulating an atmospheric-pressure plasma jet (APPJ), together with two approaches for accelerating it. The weak coupling between gas flow and discharge is realized by exchanging the steady-state solution of the GFM and the cycle-averaged solution of the PFM. The overall runtime is reduced by parallel computing of the GFM and PFM solvers and by employing a temporal multi-scale method (TMSM) for the PFM. Parallel computing of both solvers is realized using the domain decomposition method with the message passing interface (MPI) on distributed-memory machines. The TMSM considers only chemical reactions, ignoring the transport terms, when temporally integrating the continuity equations of heavy species at each time step; the transport terms are restored only at an interval of time-marching steps. The total reduction of runtime is 47% when the TMSM is applied to the APPJ example presented in this study. Application of the proposed hybrid algorithm is demonstrated by simulating a parallel-plate helium APPJ impinging onto a substrate, for which cycle-averaged properties of the 200th cycle are presented. The distribution patterns of species densities are strongly correlated with the background gas-flow pattern, which shows that consideration of gas flow is critical in APPJ simulations.
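As a toy illustration of the temporal multi-scale idea described above (a 0-D stand-in with assumed rates, not the paper's PFM), chemistry is advanced every step while the transport update is restored only every n_skip steps, scaled by the elapsed interval:

```python
# Temporal multi-scale sketch: stiff chemistry integrated every step,
# transport applied every n_skip steps with an n_skip*dt weight.
# Rates, source, and n_skip are illustrative assumptions.
import numpy as np

n = 1.0                                   # species density (toy)
k_loss, source, n_skip, dt = 5.0, 1.0, 10, 1e-3
transport = lambda n: -0.2 * n            # stand-in for the transport term

for step in range(1, 5001):
    n += dt * (source - k_loss * n)       # chemistry every step
    if step % n_skip == 0:                # transport restored periodically
        n += n_skip * dt * transport(n)
print(n)
```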
ERIC Educational Resources Information Center
Ceulemans, Eva; Van Mechelen, Iven; Leenen, Iwin
2007-01-01
Hierarchical classes models are quasi-order retaining Boolean decomposition models for N-way N-mode binary data. To fit these models to data, rationally started alternating least squares (or, equivalently, alternating least absolute deviations) algorithms have been proposed. Extensive simulation studies showed that these algorithms succeed quite…
Langmuir Wave Decay in Inhomogeneous Solar Wind Plasmas: Simulation Results
NASA Astrophysics Data System (ADS)
Krafft, C.; Volokitin, A. S.; Krasnoselskikh, V. V.
2015-08-01
Langmuir turbulence excited by electron flows in solar wind plasmas is studied on the basis of numerical simulations. In particular, nonlinear wave decay processes involving ion-sound (IS) waves are considered in order to understand their dependence on external long-wavelength plasma density fluctuations. In the presence of inhomogeneities, it is shown that the decay processes are localized in space and, due to the differences between the group velocities of Langmuir and IS waves, their duration is limited so that a full nonlinear saturation cannot be achieved. The reflection and the scattering of Langmuir wave packets on the ambient and randomly varying density fluctuations lead to crucial effects impacting the development of the IS wave spectrum. Notably, beatings between forward propagating Langmuir waves and reflected ones result in the parametric generation of waves of noticeable amplitudes and in the amplification of IS waves. These processes, repeated at different space locations, form a series of cascades of wave energy transfer, similar to those studied in the frame of weak turbulence theory. The dynamics of such a cascading mechanism and its influence on the acceleration of the most energetic part of the electron beam are studied. Finally, the role of the decay processes in the shaping of the profiles of the Langmuir wave packets is discussed, and the waveforms calculated are compared with those observed recently on board the spacecraft Solar TErrestrial RElations Observatory and WIND.
AGGREGATES: Finding structures in simulation results of solutions.
Bernardes, Carlos E S
2017-04-15
Molecular dynamics and Monte Carlo simulations are widely used to investigate the structure and physical properties of solids and liquids at a molecular level. Tools to extract the most relevant information from the obtained results are, however, in considerable demand. One such tool, the program AGGREGATES, is described in this work. Based on distance criteria, the program searches trajectory files for the presence of molecular clusters and computes several statistical and shape properties for these structures. Tools designed to investigate the local organization and the molecular conformations in the clusters are also available. Among these, a new approach to performing a first-shell analysis is introduced, based on searching for atomic contacts between molecules. These elements are particularly useful for obtaining information on molecular assembly processes (such as the nucleation of crystals or colloidal particles) or for investigating polymorphism in organic compounds. The program features are illustrated here through the investigation of the 4'-hydroxyacetophenone + ethanol system.
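A minimal sketch of distance-criterion cluster detection of the kind AGGREGATES performs (point "molecules" and a toy cutoff stand in for atomic contact tests; this is not the program's implementation):

```python
# Union-find clustering by distance criterion: merge any two molecules
# that come within the cutoff, then enumerate the resulting aggregates.
import numpy as np

rng = np.random.default_rng(4)
pts = rng.uniform(0, 10, (50, 3))
cutoff = 1.5
parent = list(range(len(pts)))

def find(i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]     # path compression
        i = parent[i]
    return i

for i in range(len(pts)):
    for j in range(i + 1, len(pts)):
        if np.linalg.norm(pts[i] - pts[j]) < cutoff:
            parent[find(i)] = find(j)     # merge the two clusters

clusters = {}
for i in range(len(pts)):
    clusters.setdefault(find(i), []).append(i)
print(len(clusters), "aggregates")
```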
Moučka, Filip; Nezbeda, Ivo; Smith, William R
2013-09-28
This paper deals with molecular simulation of the chemical potentials in aqueous electrolyte solutions for the water solvent and its relationship to chemical potential simulation results for the electrolyte solute. We use the Gibbs-Duhem equation linking the concentration dependence of these quantities to test the thermodynamic consistency of separate calculations of each quantity. We consider aqueous NaCl solutions at ambient conditions, using the standard SPC/E force field for water and the Joung-Cheatham force field for the electrolyte. We calculate the water chemical potential using the osmotic ensemble Monte Carlo algorithm by varying the number of water molecules at a constant amount of solute. We demonstrate numerical consistency of these results in terms of the Gibbs-Duhem equation in conjunction with our previous calculations of the electrolyte chemical potential. We present the chemical potential vs molality curves for both solvent and solute in the form of appropriately chosen analytical equations fitted to the simulation data. As a byproduct, in the context of the force fields considered, we also obtain values for the Henry convention standard molar chemical potential for aqueous NaCl using molality as the concentration variable and for the chemical potential of pure SPC/E water. These values are in reasonable agreement with the experimental values.
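For reference, the thermodynamic-consistency test described above rests on the standard isothermal-isobaric Gibbs-Duhem relation (textbook form, not transcribed from the paper), which links the solvent (w) and solute (s) chemical potentials:

```latex
% Gibbs-Duhem relation at constant temperature and pressure: the solvent
% and solute chemical potentials cannot vary independently with
% composition, which is what makes the cross-check possible.
\begin{equation}
  n_\mathrm{w}\,\mathrm{d}\mu_\mathrm{w}
  + n_\mathrm{s}\,\mathrm{d}\mu_\mathrm{s} = 0
  \qquad (\text{constant } T,\ P)
\end{equation}
```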
DeMaere, Matthew Z.
2016-01-01
Background: Chromosome conformation capture, coupled with high-throughput DNA sequencing in protocols like Hi-C and 3C-seq, has been proposed as a viable means of generating data to resolve the genomes of microorganisms living in naturally occurring environments. Metagenomic Hi-C and 3C-seq datasets have begun to emerge, but the feasibility of resolving genomes when closely related organisms (strain-level diversity) are present in the sample has not yet been systematically characterised. Methods: We developed a computational simulation pipeline for metagenomic 3C and Hi-C sequencing to evaluate the accuracy of genomic reconstructions at, above, and below an operationally defined species boundary. We simulated datasets and measured accuracy over a wide range of parameters. Five clustering algorithms were evaluated (2 hard, 3 soft) using an adaptation of the extended B-cubed validation measure. Results: When all genomes in a sample are below 95% sequence identity, all of the tested clustering algorithms performed well. When sequence data contain genomes above 95% identity (our operational definition of strain-level diversity), a naive soft-clustering extension of the Louvain method achieves the highest performance. Discussion: Previously, only hard-clustering algorithms have been applied to metagenomic 3C and Hi-C data, yet none of these perform well when strain-level diversity exists in a metagenomic sample. Our simple extension of the Louvain method performed best in these scenarios; however, accuracy remained well below the levels observed for samples without strain-level diversity. Strain resolution is also highly dependent on the amount of available 3C sequence data, suggesting that depth of sequencing must be carefully considered during experimental design. Finally, there appears to be great scope to improve the accuracy of strain resolution through further algorithm development. PMID:27843713
Simulation of optical diagnostics for crystal growth: models and results
NASA Astrophysics Data System (ADS)
Banish, Michele R.; Clark, Rodney L.; Kathman, Alan D.; Lawson, Shelah M.
1991-12-01
A computer simulation of a two-color holographic interferometric (TCHI) optical system was performed using a physical (wave) optics model. This model accurately simulates propagation through time-varying, 2-D or 3-D concentration and temperature fields as a wave phenomenon and calculates wavefront deformations that can be used to generate fringe patterns. The simulation modeled a proposed triglycine sulphate (TGS) flight experiment by propagating through the simplified, onion-like refractive-index distribution of the growing crystal and calculating the recorded wavefront deformation. The phase of this wavefront was used to generate sample interferograms that map the variation in index of refraction. Two such fringe patterns, generated at different wavelengths, were used to extract the original temperature- and concentration-field characteristics within the growth chamber. This demonstrates the feasibility of the TCHI crystal-growth diagnostic technique, and the simulation provides feedback to the experimental design process.
Springback Simulation and Tool Surface Compensation Algorithm for Sheet Metal Forming
Shen Guozhe; Hu Ping; Zhang Xiangkui; Chen Xiaobin; Li Xiaoda
2005-08-05
Springback is an unavoidable defect in the sheet metal forming process, and calculating it accurately is a major challenge for FEA software. Springback compensation makes the stamped final part conform to the designed part shape by modifying the tool surface, which depends on an accurate springback amount. However, the mesh data produced by numerical simulation are expressed as nodes and elements and cannot be supplied directly as tool-surface CAD data. In this paper, a tool-surface compensation algorithm based on numerical simulation of the springback process is proposed, in which the independently developed dynamic explicit springback algorithm (DESA) is used to simulate the springback amount. During tool-surface compensation, the springback amount at a projected point is obtained by interpolating the springback amounts at the nodes of the projected element, so the modified values of the tool surface can be calculated in reverse. After repeating the springback and compensation calculations 1-3 times, a reasonable tool-surface mesh is obtained. Finally, the FEM data on the compensated tool surface are fitted into a surface by CAD modeling software. Application to a real industrial part shows the validity of the present method.
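A sketch of the compensate-and-resimulate loop described above, with a toy linear springback model standing in for the paper's DESA solver:

```python
# Iterative tool-surface compensation: simulate springback, then push
# the tool surface opposite to the deviation from the design shape.
# simulate_springback is a toy stand-in for the DESA solver.
import numpy as np

design = np.linspace(0.0, 1.0, 50) ** 2        # target part shape (toy)
tool = design.copy()

def simulate_springback(tool_shape):
    # toy model: the part springs away from the tool's mean curvature
    return tool_shape + 0.15 * (tool_shape - tool_shape.mean())

for _ in range(3):                              # 1-3 iterations, as in text
    part = simulate_springback(tool)
    tool -= (part - design)                     # reverse-modify the tool

print(np.max(np.abs(simulate_springback(tool) - design)))  # residual error
```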
NASA Astrophysics Data System (ADS)
Gonzalez-Mancera, Andres; Gonzalez Cardenas, Diego
2014-11-01
Flow in the microcirculation is highly dependent on the mechanical properties of the cells suspended in the plasma. Red blood cells have to deform in order to pass through the smaller sections of the microcirculation, and certain diseases change their mechanical properties, affecting their ability to deform and the rheological behaviour of blood. We developed a hybrid algorithm based on the lattice-Boltzmann and finite element methods to simulate blood flow in small capillaries. Plasma was modeled as a Newtonian fluid and the red blood cells' membrane as a hyperelastic solid, with the fluid-structure interaction handled using the immersed boundary method. We simulated the flow of plasma with suspended red blood cells through cylindrical capillaries and measured the pressure drop as a function of the membrane's rigidity. We also simulated the flow through capillaries with a restriction and identified critical properties for which the suspended particles are unable to flow. The algorithm output was verified by reproducing common features of flow in the microcirculation, such as the Fahraeus-Lindqvist effect.
Results of a Flight Simulation Software Methods Survey
NASA Technical Reports Server (NTRS)
Jackson, E. Bruce
1995-01-01
A ten-page questionnaire was mailed to members of the AIAA Flight Simulation Technical Committee in the spring of 1994. The survey inquired about various aspects of developing and maintaining flight simulation software, as well as a few questions dealing with characterization of each facility. As of this report, 19 completed surveys (out of 74 sent out) have been received. This paper summarizes those responses.
Results from CrIS/ATMS Obtained Using an AIRS "Version-6 like" Retrieval Algorithm
NASA Technical Reports Server (NTRS)
Susskind, Joel; Kouvaris, Louis; Iredell, Lena
2015-01-01
We tested and evaluated Version-6.22 AIRS and Version-6.22 CrIS products on a single day, December 4, 2013, and compared the results to those derived using AIRS Version-6. AIRS and CrIS Version-6.22 O3(p) and q(p) products are both superior to those of AIRS Version-6, and all AIRS and CrIS products agree reasonably well with each other. CrIS Version-6.22 T(p) and q(p) results are slightly poorer than AIRS over land, especially under very cloudy conditions. Both AIRS and CrIS Version-6.22 now run at JPL. Our short-term plans are to analyze many common months at JPL in the near future using Version-6.22, or a further improved algorithm, to assess the compatibility of AIRS and CrIS monthly mean products and their interannual differences. Updates to the calibration of both CrIS and ATMS are still being finalized. JPL plans, in collaboration with the Goddard DISC, to reprocess all AIRS data using a still-to-be-finalized Version-7 retrieval algorithm, and to reprocess all recalibrated CrIS/ATMS data using Version-7 as well.
NASA Astrophysics Data System (ADS)
He, Zhiwei; Tian, Baolin; Zhang, Yousheng; Gao, Fujie
2017-03-01
The present work focuses on the simulation of immiscible compressible multi-material flows with Mie-Grüneisen-type equations of state governed by the non-conservative five-equation model [1]. Although low-order single-fluid schemes have already been adopted to provide feasible results, applying high-order schemes (which introduce relatively little numerical dissipation) to these flows may produce results with severe numerical oscillations. Consequently, attempts to apply interface-sharpening techniques to stop interfaces from smearing progressively over longer simulation times may increase overshoots and, in some cases, lead to convergence to a non-physical solution. This study proposes a characteristic-based interface-sharpening algorithm for performing high-order simulations of such flows by deriving a pressure-equilibrium-consistent intermediate state (augmented with approximations of pressure derivatives) for local characteristic variable reconstruction and by constructing a general framework for interface sharpening. First, by imposing a weak form of the jump condition for the non-conservative five-equation model, we analytically derive an intermediate state with pressure derivatives treated as additional parameters of the linearization procedure. Based on this intermediate state, any well-established high-order reconstruction technique can be employed to provide the state at each cell edge. Second, by designing another state that differs only in the reconstructed values of the interface function at each cell edge, the advection term in the equation of the interface function is discretized twice using any common algorithm; the difference between the two discretizations is employed consistently for interface compression, yielding a general framework for interface sharpening. Coupled with the fifth-order improved accurate monotonicity-preserving scheme [2] for local characteristic variable reconstruction and the tangent of hyperbola
Feng, Yen-Yi; Wu, I-Chin; Chen, Tzu-Li
2017-03-01
The number of emergency cases or emergency room visits rapidly increases annually, thus leading to an imbalance in supply and demand and to the long-term overcrowding of hospital emergency departments (EDs). However, current solutions to increase medical resources and improve the handling of patient needs are either impractical or infeasible in the Taiwanese environment. Therefore, EDs must optimize resource allocation given limited medical resources to minimize the average length of stay of patients and medical resource waste costs. This study constructs a multi-objective mathematical model for medical resource allocation in EDs in accordance with emergency flow or procedure. The proposed mathematical model is complex and difficult to solve because its performance value is stochastic; furthermore, the model considers both objectives simultaneously. Thus, this study develops a multi-objective simulation optimization algorithm by integrating a non-dominated sorting genetic algorithm II (NSGA II) with multi-objective computing budget allocation (MOCBA) to address the challenges of multi-objective medical resource allocation. NSGA II is used to investigate plausible solutions for medical resource allocation, and MOCBA identifies effective sets of feasible Pareto (non-dominated) medical resource allocation solutions in addition to effectively allocating simulation or computation budgets. The discrete event simulation model of ED flow is inspired by a Taiwan hospital case and is constructed to estimate the expected performance values of each medical allocation solution as obtained through NSGA II. Finally, computational experiments are performed to verify the effectiveness and performance of the integrated NSGA II and MOCBA method, as well as to derive non-dominated medical resource allocation solutions from the algorithms.
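As a small illustration of the non-dominated sorting at the core of NSGA-II (illustrative objective values, minimizing both mean stay length and waste cost; this is not the paper's integrated NSGA II/MOCBA procedure):

```python
# Pareto filtering: keep allocations for which no other allocation is at
# least as good in both objectives and strictly better in one.
import numpy as np

rng = np.random.default_rng(5)
objs = rng.uniform(0, 1, (30, 2))         # (mean stay, waste cost) pairs

def non_dominated(points):
    keep = []
    for i, p in enumerate(points):
        dominated = any(np.all(q <= p) and np.any(q < p)
                        for j, q in enumerate(points) if j != i)
        if not dominated:
            keep.append(i)
    return keep

print("Pareto front indices:", non_dominated(objs))
```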
Tahtali, Damla; Bohmann, Ferdinand; Rostek, Peter; Wagner, Marlies; Steinmetz, Helmuth; Pfeilschifter, Waltraud
2017-01-15
Time is of the essence when caring for an acute stroke patient. The ultimate goal is to restore blood flow to the ischemic brain. This can be achieved either by thrombolysis with recombinant tissue-plasminogen activator (rt-PA), the standard therapy for stroke patients who present within the first hours of symptom onset without contraindications, or by an endovascular approach if a proximal brain-vessel occlusion is detected. As the efficacy of both therapies declines over time, every minute saved along the way improves the patient's outcome. This critical situation requires thorough work and precise communication with the patient, the family, and colleagues from different professions to acquire all relevant information and reach the right decision while carefully monitoring the patient. This is a high-fidelity situation. In nonmedical high-fidelity environments such as aviation, Crew Resource Management (CRM) is used to enhance safety and team efficiency. This guide shows how a Stroke Team algorithm, which is transferable to other hospital settings, was established and how regular simulation-based trainings were performed. It requires determination and endurance to maintain these time-consuming simulation trainings on a regular basis, but the resulting improvement in team spirit and excellent door-to-needle times benefit both the patients and the work environment in any hospital. A dedicated Stroke Team of seven persons, who are notified 24/7 by a collective call via speed dial and run a binding algorithm that takes approximately 20 min, was established. To train everybody involved in this algorithm, a simulation-based team training for all new Stroke Team members was conceived and conducted at monthly intervals. This led to a relevant and sustained reduction of the mean door-to-needle time to 25 min and enhanced the feeling of stroke readiness, especially among junior doctors and nurses.
Wong, William W L; Feng, Zeny Z; Thein, Hla-Hla
2016-11-01
Agent-based models (ABMs) are computer simulation models that define interactions among agents and simulate emergent behaviors that arise from the ensemble of local decisions. ABMs have been increasingly used to examine trends in infectious disease epidemiology. However, the main limitation of ABMs is the high computational cost for a large-scale simulation. To improve the computational efficiency for large-scale ABM simulations, we built a parallelizable sliding region algorithm (SRA) for ABM and compared it to a nonparallelizable ABM. We developed a complex agent network and performed two simulations to model hepatitis C epidemics based on the real demographic data from Saskatchewan, Canada. The first simulation used the SRA that processed on each postal code subregion subsequently. The second simulation processed the entire population simultaneously. It was concluded that the parallelizable SRA showed computational time saving with comparable results in a province-wide simulation. Using the same method, SRA can be generalized for performing a country-wide simulation. Thus, this parallel algorithm enables the possibility of using ABM for large-scale simulation with limited computational resources.
Statistical analysis of piloted simulation of real time trajectory optimization algorithms
NASA Technical Reports Server (NTRS)
Price, D. B.
1982-01-01
A simulation of time-optimal intercept algorithms for on-board computation of control commands is described. The effects of three different display modes and two different computation modes on the pilots' ability to intercept a moving target in minimum time were tested. Both computation modes employed singular perturbation theory to help simplify the two-point boundary value problem associated with trajectory optimization. Target intercept time was affected by both the display and computation modes chosen, but the display mode chosen was the only significant influence on the miss distance.
Mars Entry Atmospheric Data System Trajectory Reconstruction Algorithms and Flight Results
NASA Technical Reports Server (NTRS)
Karlgaard, Christopher D.; Kutty, Prasad; Schoenenberger, Mark; Shidner, Jeremy; Munk, Michelle
2013-01-01
The Mars Entry Atmospheric Data System is a part of the Mars Science Laboratory, Entry, Descent, and Landing Instrumentation project. These sensors are a system of seven pressure transducers linked to ports on the entry vehicle forebody to record the pressure distribution during atmospheric entry. These measured surface pressures are used to generate estimates of atmospheric quantities based on modeled surface pressure distributions. Specifically, angle of attack, angle of sideslip, dynamic pressure, Mach number, and freestream atmospheric properties are reconstructed from the measured pressures. Such data allows for the aerodynamics to become decoupled from the assumed atmospheric properties, allowing for enhanced trajectory reconstruction and performance analysis as well as an aerodynamic reconstruction, which has not been possible in past Mars entry reconstructions. This paper provides details of the data processing algorithms that are utilized for this purpose. The data processing algorithms include two approaches that have commonly been utilized in past planetary entry trajectory reconstruction, and a new approach for this application that makes use of the pressure measurements. The paper describes assessments of data quality and preprocessing, and results of the flight data reduction from atmospheric entry, which occurred on August 5th, 2012.
Passification based simple adaptive control of quadrotor attitude: Algorithms and testbed results
NASA Astrophysics Data System (ADS)
Tomashevich, Stanislav; Belyavskyi, Andrey; Andrievsky, Boris
2017-01-01
In this paper, the results of the passification method with the implicit reference model (IRM) approach are applied to the design of a simple adaptive controller for quadrotor attitude. The IRM design technique makes it possible to relax the matching condition known from standard MRAC systems and leads to simple adaptive controllers that ensure fast tuning of the controller gains and high robustness with respect to nonlinearities in the control loop, external disturbances, and unmodeled plant dynamics. For experimental evaluation of the adaptive system's performance, a 2DOF laboratory setup has been created. The testbed allows new control algorithms to be tested safely in a small laboratory space and changes to be made promptly in case of failure. Testing results for simple adaptive control of quadrotor attitude are presented, demonstrating the efficacy of the applied method; the experiments show good performance quality and a high adaptation rate of the simple adaptive control system.
Sobel, E.; Lange, K.; O`Connell, J.R.
1996-12-31
Haplotyping is the logical process of inferring gene flow in a pedigree based on phenotyping results at a small number of genetic loci. This paper formalizes the haplotyping problem and suggests four algorithms for haplotype reconstruction. These algorithms range from exhaustive enumeration of all haplotype vectors to combinatorial optimization by simulated annealing. Application of the algorithms to published genetic analyses shows that manual haplotyping is often erroneous. Haplotyping is employed in screening pedigrees for phenotyping errors and in positional cloning of disease genes from conserved haplotypes in population isolates. 26 refs., 6 figs., 3 tabs.
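A generic sketch of combinatorial optimization by simulated annealing, the fourth class of algorithm mentioned above; the bit-vector configuration and energy function are illustrative stand-ins for haplotype vectors and their pedigree likelihood:

```python
# Simulated annealing: propose a single-entry flip, accept with the
# Metropolis criterion, and cool the temperature geometrically.
import numpy as np

rng = np.random.default_rng(6)
state = rng.integers(0, 2, 40)              # candidate configuration
energy = lambda s: np.sum(s[:-1] != s[1:])  # toy penalty to minimize

T = 1.0
for step in range(5000):
    cand = state.copy()
    cand[rng.integers(len(cand))] ^= 1    # flip one entry
    dE = energy(cand) - energy(state)
    if dE <= 0 or rng.random() < np.exp(-dE / T):
        state = cand                      # Metropolis acceptance
    T *= 0.999                            # geometric cooling schedule
print(energy(state))
```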
NASA Technical Reports Server (NTRS)
Hartley, Craig S.
1990-01-01
To augment the capabilities of the Space Transportation System, NASA has funded studies and developed programs aimed at developing reusable, remotely piloted spacecraft and satellite servicing systems capable of delivering, retrieving, and servicing payloads at altitudes and inclinations beyond the reach of the present Shuttle Orbiters. Since the mid-1970s, researchers at the Martin Marietta Astronautics Group Space Operations Simulation (SOS) Laboratory have been engaged in investigations of remotely piloted and supervised autonomous spacecraft operations. These investigations were based on high-fidelity, real-time simulations and have covered a wide range of human factors issues related to controllability. Among these are: (1) mission conditions, including thruster plume impingements and signal time delays; (2) vehicle performance variables, including control authority, control harmony, minimum impulse, and cross coupling of accelerations; (3) maneuvering task requirements such as target distance and dynamics; (4) control parameters including various control modes and rate/displacement deadbands; and (5) display parameters involving camera placement and function, visual aids, and presentation of operational feedback from the spacecraft. This presentation includes a brief description of the capabilities of the SOS Lab to simulate real-time free-flyer operations using live video, advanced technology ground and on-orbit workstations, and sophisticated computer models of on-orbit spacecraft behavior. Sample results from human factors studies in the five categories cited above are provided.
Byun, Seok Yong; Byun, Seok-Joo; Lee, Jang Kyo; Kim, Jae Wan; Lee, Taek Sung; Sheen, Dongwoo; Cho, Kyuman; Tark, Sung Ju; Kim, Donghwan; Kim, Won Mok
2012-04-01
Optimizing the design of the surface texture is an essential aspect of Si solar cell technology, as it can maximize the light-trapping efficiency of the cells. Proper simulation tools provide an efficient means of designing and analyzing the effects of texture patterns on light confinement in an active medium. In this work, a newly devised algorithm termed Slab-Outline, based on a ray-tracing technique, is reported, and the details of the intersection-searching logic adopted in the Slab-Outline algorithm are discussed. The efficiency of the logic was tested by comparing the computing time of the current algorithm with that of the Constructive Solid Geometry algorithm, demonstrating its superior computing speed. The validity of the new algorithm was verified by comparing the simulated reflectance spectra with spectra measured from a textured Si surface.
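The paper's intersection-searching logic belongs to the reported algorithm, but the textbook slab test that such ray tracers build on looks like the following (a sketch, not the Slab-Outline implementation):

```python
# Slab-method ray/box intersection: intersect the ray with each pair of
# axis-aligned slabs and keep the overlap of the parameter intervals.
import numpy as np

def ray_box(origin, direction, lo, hi):
    """Return (t_near, t_far), or None if the ray misses the box."""
    inv = 1.0 / direction                  # assumes no zero components
    t1, t2 = (lo - origin) * inv, (hi - origin) * inv
    t_near = np.max(np.minimum(t1, t2))    # latest slab entry
    t_far = np.min(np.maximum(t1, t2))     # earliest slab exit
    return (t_near, t_far) if t_near <= t_far and t_far >= 0 else None

print(ray_box(np.array([0., 0., -5.]), np.array([0., 0., 1.]),
              np.array([-1., -1., -1.]), np.array([1., 1., 1.])))
```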
Quantum mechanical NMR simulation algorithm for protein-size spin systems.
Edwards, Luke J; Savostyanov, D V; Welderufael, Z T; Lee, Donghan; Kuprov, Ilya
2014-06-01
Nuclear magnetic resonance spectroscopy is one of the few remaining areas of physical chemistry for which polynomially scaling quantum mechanical simulation methods have not so far been available. In this communication we adapt the restricted state space approximation to protein NMR spectroscopy and illustrate its performance by simulating common 2D and 3D liquid state NMR experiments (including accurate description of relaxation processes using Bloch-Redfield-Wangsness theory) on isotopically enriched human ubiquitin - a protein containing over a thousand nuclear spins forming an irregular polycyclic three-dimensional coupling lattice. The algorithm uses careful tailoring of the density operator space to only include nuclear spin states that are populated to a significant extent. The reduced state space is generated by analysing spin connectivity and decoherence properties: rapidly relaxing states as well as correlations between topologically remote spins are dropped from the basis set.
Variable timestep algorithm for molecular dynamics simulation of non-equilibrium processes
NASA Astrophysics Data System (ADS)
Marks, Nigel A.; Robinson, Marc
2015-06-01
A simple yet robust variable-timestep algorithm is developed for use in molecular dynamics simulations of energetic processes. Single-particle Kepler orbits are used to study the relationship between trajectory properties and the critical timestep for constant integration error. Over a wide variety of conditions, the magnitude of the maximum force is found to correlate linearly with the inverse critical timestep. Other quantities used in the literature, such as the time derivative of the force and the product of the velocity and force, also show reasonable correlations, but not to the same extent. Application of the corresponding metric ‖Fmax‖Δt in molecular dynamics simulations of radiation damage in graphite shows that the scheme is both straightforward to implement and effective. In tests on a 1 keV cascade, the timestep varies by over two orders of magnitude with minimal loss of energy conservation.
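A minimal sketch of the timestep selection rule suggested by the reported correlation, holding ‖Fmax‖Δt near a constant (the constant and the clamp are assumptions, not the paper's calibrated values):

```python
# Variable timestep from the maximum force: dt = kappa / ||F_max||,
# clamped to dt_max so quiescent systems use a normal MD step.
import numpy as np

def choose_dt(forces, kappa=0.01, dt_max=1.0):
    f_max = np.max(np.linalg.norm(forces, axis=1))
    return min(dt_max, kappa / f_max) if f_max > 0 else dt_max

f = np.array([[0.0, 0.0, 50.0], [1.0, 0.0, 0.0]])   # per-atom forces
print(choose_dt(f))              # small dt while the cascade is violent
```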
An explicit algorithm for fully flexible unit cell simulation with recursive thermostat chains.
Jung, Kwangsub; Cho, Maenghyo
2008-10-28
Through the combination of the recursive multiple thermostat (RMT) Nosé-Poincaré and Parrinello-Rahman methods, the recursive-multiple-thermostat-chained fully flexible unit cell (RMT-NσT) molecular dynamics method is proposed for isothermal-isobaric simulation. The RMT method is known to have the advantage of achieving the ergodicity that is required for canonical sampling of the harmonic oscillator. Thus, an explicit time integration algorithm is developed for RMT-NσT. We examine the ergodicity for various parameters of RMT-NσT using bulk and thin-film structures with different numbers of copper atoms and thicknesses in various environments. Through the numerical simulations, we conclude that the RMT-NσT method is advantageous at lower temperatures.
Exploring Scheduling Algorithms and Analysis Tools for the LSST Operations Simulations
NASA Astrophysics Data System (ADS)
Petry, Catherine E.; Miller, M.; Cook, K. H.; Ridgway, S.; Chandrasekharan, S.; Jones, R. L.; Krughoff, K. S.; Ivezic, Z.; Krabbendam, V.
2012-01-01
The LSST Operations Simulator models the telescope's design-specific opto-mechanical system performance and site-specific conditions to simulate how observations may be obtained during a 10-year survey. We have found that a remarkable range of science programs are compatible with a single feasible cadence. The current version, OpSim v2.5, incorporates detailed models of the telescope and dome, the camera, weather and a more realistic model for scheduled and unscheduled downtime, as well as a scheduling strategy based on ranking requests for observations from a small number of observing modes attempting to optimize the key science objectives. Each observing mode is driven by a specific algorithm which ranks field-filter combinations of target fields to observe next. The output of the simulator is a detailed record of the activity of the telescope - such as position on the sky, slew activities, weather and various types of downtime - stored in a mySQL database. Sophisticated tools are required to mine this database in order to assess the degree of success of any simulated survey in some detail. An analysis pipeline has been created (SSTAR) which generates a standard report describing the basic characteristics of a simulated survey; a new analysis framework is being designed to allow for the inter-comparison of one or more simulated surveys and to perform more complex analyses in a pipeline fashion. Proprietary software is being used to interactively explore the database and to prototype reports for the new analysis pipeline, and we are working with the ASCOT team (http://ascot.astro.washington.edu) to determine the feasibility of creating our own interactive tools. The next phase of simulator development is being planned to include look-ahead to continue investigating the trade-offs of addressing multiple science goals within a single LSST survey.
NASA Technical Reports Server (NTRS)
Fleming, E. L.; Jackman, C. H.; Stolarski, R. S.; Considine, D. B.
1998-01-01
We have developed a new empirically-based transport algorithm for use in our GSFC two-dimensional transport and chemistry model. The new algorithm contains planetary wave statistics, and parameterizations to account for the effects due to gravity waves and equatorial Kelvin waves. As such, this scheme utilizes significantly more information compared to our previous algorithm which was based only on zonal mean temperatures and heating rates. The new model transport captures much of the qualitative structure and seasonal variability observed in long lived tracers, such as: isolation of the tropics and the southern hemisphere winter polar vortex; the well mixed surf-zone region of the winter sub-tropics and mid-latitudes; the latitudinal and seasonal variations of total ozone; and the seasonal variations of mesospheric H2O. The model also indicates a double peaked structure in methane associated with the semiannual oscillation in the tropical upper stratosphere. This feature is similar in phase but is significantly weaker in amplitude compared to the observations. The model simulations of carbon-14 and strontium-90 are in good agreement with observations, both in simulating the peak in mixing ratio at 20-25 km, and the decrease with altitude in mixing ratio above 25 km. We also find mostly good agreement between modeled and observed age of air determined from SF6 outside of the northern hemisphere polar vortex. However, observations inside the vortex reveal significantly older air compared to the model. This is consistent with the model deficiencies in simulating CH4 in the northern hemisphere winter high latitudes and illustrates the limitations of the current climatological zonal mean model formulation. The propagation of seasonal signals in water vapor and CO2 in the lower stratosphere showed general agreement in phase, and the model qualitatively captured the observed amplitude decrease in CO2 from the tropics to midlatitudes. However, the simulated seasonal
SIMULATION OF DNAPL DISTRIBUTION RESULTING FROM MULTIPLE SOURCES
A three-dimensional and three-phase (water, NAPL and gas) numerical simulator, called NAPL, was employed to study the interaction between DNAPL (PCE) plumes in a variably saturated porous media. Several model verification tests have been performed, including a series of 2-D labo...
FINAL SIMULATION RESULTS FOR DEMONSTRATION CASE 1 AND 2
David Sloan; Woodrow Fiveland
2003-10-15
The goal of this DOE Vision-21 project work scope was to develop an integrated suite of software tools that could be used to simulate and visualize advanced plant concepts. Existing process simulation software did not meet the DOE's objective of ''virtual simulation'' which was needed to evaluate complex cycles. The overall intent of the DOE was to improve predictive tools for cycle analysis, and to improve the component models that are used in turn to simulate equipment in the cycle. Advanced component models are available; however, a generic coupling capability that would link the advanced component models to the cycle simulation software remained to be developed. In the current project, the coupling of the cycle analysis and cycle component simulation software was based on an existing suite of programs. The challenge was to develop a general-purpose software and communications link between the cycle analysis software Aspen Plus® (marketed by Aspen Technology, Inc.), and specialized component modeling packages, as exemplified by industrial proprietary codes (utilized by ALSTOM Power Inc.) and the FLUENT® computational fluid dynamics (CFD) code (provided by Fluent Inc). A software interface and controller, based on an open CAPE-OPEN standard, has been developed and extensively tested. Various test runs and demonstration cases have been utilized to confirm the viability and reliability of the software. ALSTOM Power was tasked with the responsibility to select and run two demonstration cases to test the software--(1) a conventional steam cycle (designated as Demonstration Case 1), and (2) a combined cycle test case (designated as Demonstration Case 2). Demonstration Case 1 is a 30 MWe coal-fired power plant for municipal electricity generation, while Demonstration Case 2 is a 270 MWe, natural gas-fired, combined cycle power plant. Sufficient data was available from the operation of both power plants to complete the cycle configurations. Three runs
A Multirate Variable-timestep Algorithm for N-body Solar System Simulations with Collisions
NASA Astrophysics Data System (ADS)
Sharp, P. W.; Newman, W. I.
2016-03-01
We present and analyze the performance of a new algorithm for performing accurate simulations of the solar system when collisions between massive bodies and test particles are permitted. The orbital motion of all bodies at all times is integrated using a high-order variable-timestep explicit Runge-Kutta Nyström (ERKN) method. The variation in the timestep ensures that the orbital motion of test particles on eccentric orbits or close to the Sun is calculated accurately. The test particles are divided into groups and each group is integrated using a different sequence of timesteps, giving a multirate algorithm. The ERKN method uses a high-order continuous approximation to the position and velocity when checking for collisions across a step. We give a summary of the extensive testing of our algorithm. In our largest simulation—that of the Sun, the planets Earth to Neptune and 100,000 test particles over 100 million years—the relative error in the energy after 100 million years was of the order of 10⁻¹¹.
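The grouping idea behind the multirate scheme can be illustrated with a short sketch. The fixed-step leapfrog below is a stand-in for the variable-timestep ERKN integrator, and the grouping rule, substep counts, and force field are assumptions made for illustration only.

```python
import numpy as np

# Sketch of the multirate idea: test particles are binned into groups, and
# each group is advanced with its own timestep sequence across a macro step.
def advance_group(positions, velocities, accel, h, n_sub):
    # Placeholder leapfrog standing in for the variable-step ERKN method.
    for _ in range(n_sub):
        velocities += 0.5 * h * accel(positions)
        positions += h * velocities
        velocities += 0.5 * h * accel(positions)
    return positions, velocities

def multirate_step(groups, accel, H):
    # Each group takes n_sub substeps of size H/n_sub; e.g. eccentric or
    # sun-grazing particles would be assigned more substeps.
    for g in groups:
        h = H / g["n_sub"]
        g["x"], g["v"] = advance_group(g["x"], g["v"], accel, h, g["n_sub"])

# Demo: two groups of particles in a simple harmonic force field.
groups = [{"x": np.ones(3), "v": np.zeros(3), "n_sub": 1},
          {"x": 2 * np.ones(3), "v": np.zeros(3), "n_sub": 8}]
multirate_step(groups, accel=lambda x: -x, H=0.1)
print(groups[0]["x"], groups[1]["x"])
```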
Weisbecker, Hannah; Pierce, David M; Holzapfel, Gerhard A
2014-09-01
Finite element models reconstructed from medical imaging data, for example, computed tomography or MRI scans, generally represent geometries under in vivo load. Classical finite element approaches start from an unloaded reference configuration. We present a generalized prestressing algorithm based on a concept introduced by Gee et al. (Int. J. Num. Meth. Biomed. Eng. 26:52-72, 2012), in which an incremental update of the displacement field in the classical approach is replaced by an incremental update of the deformation gradient field. Our generalized algorithm can be implemented in existing finite element codes with relatively low implementation effort on the element level and is suitable for material models formulated in the current or initial configuration. The algorithm is applicable to any finite element simulation started from a preloaded geometry; we demonstrate it and its convergence properties on an academic example and on a segment of a thoracic aorta meshed from MRI data. Furthermore, we present an example to discuss the influence of neglecting prestresses in geometries obtained from medical images, a topic on which conflicting statements are found in the literature.
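A minimal sketch of the core idea, assuming a toy material law: at a fixed (imaged) geometry, the deformation gradient F is updated incrementally until the stress balances the in vivo load, in place of the displacement update of the classical approach. The increment rule and "stress" function here are illustrative placeholders.

```python
import numpy as np

# Sketch of the prestressing idea at a single integration point: the imaged
# geometry is held fixed while the deformation gradient F is updated
# multiplicatively until the stress balances the in vivo load.
def prestress_point(F, dF_increment, stress, load, tol=1e-8, max_iter=50):
    for _ in range(max_iter):
        residual = stress(F) - load       # imbalance at fixed geometry
        if np.linalg.norm(residual) < tol:
            break
        # Incremental update of F replaces the displacement update of the
        # classical approach (cf. the Gee et al. concept cited above).
        F = dF_increment(residual) @ F
    return F

# Toy 1-D example: find F such that a linear 'stress' k*(F - I) equals the load.
k = 1.0
F = prestress_point(
    F=np.eye(1),
    dF_increment=lambda r: np.eye(1) - r / k,
    stress=lambda F: k * (F - np.eye(1)),
    load=0.5 * np.eye(1))
print(F)   # -> [[1.5]]
```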
Results from the simulations of geopotential coefficient estimation from gravity gradients
NASA Astrophysics Data System (ADS)
Bettadpur, S.; Schutz, B. E.; Lundberg, J. B.
New information on the short- and medium-wavelength components of the geopotential is expected from the measurements of gravity gradients made by the future ESA Aristoteles and NASA Superconducting Gravity Gradiometer missions. In this paper, results are presented from preliminary simulations concerning the estimation of the spherical harmonic coefficients of the geopotential expansion from gravity gradient data. Numerical issues in the brute-force inversion (BFI) of the gravity gradient data are examined, and numerical algorithms are developed that substantially speed up the computation of the potential, acceleration, and gradients, as well as the mapping from the gravity gradients to the geopotential coefficients. The solution of a large least squares problem is also examined, and computational requirements are determined for the implementation of a large-scale inversion. A comparative analysis of the results from the BFI and a symmetry method is reported for test simulations of the estimation of a degree and order 50 gravity field. In the presence of white noise, the results from the two methods compare well. The latter method is implemented on a special, axially symmetric surface that fits the orbit to within 380 meters.
Blodgett, Douglas; Behnke, Michael; Erdman, William
2016-08-01
The National Renewable Energy Laboratory (NREL) and NREL Next-Generation Drivetrain Partners are developing a next-generation drivetrain (NGD) design as part of a Funding Opportunity Announcement award from the U.S. Department of Energy. The proposed NGD includes comprehensive innovations to the gearbox, generator, and power converter that increase the gearbox reliability and drivetrain capacity, while lowering deployment and operation and maintenance costs. A key task within this development effort is the power converter fault control algorithm design and associated computer simulations using an integrated electromechanical model of the drivetrain. The results of this task will be used in generating the embedded control software to be utilized in the power converter during testing of the NGD in the National Wind Technology Center 2.5-MW dynamometer. A list of issues to be addressed with these algorithms was developed by review of the grid interconnection requirements of various North American transmission system operators, and those requirements that presented the greatest impact to the wind turbine drivetrain design were then selected for mitigation via power converter control algorithms.
NASA Technical Reports Server (NTRS)
Richardson, Albert O.
1997-01-01
This research investigated the use of fuzzy logic, via the Matlab Fuzzy Logic Toolbox, to design optimized controller systems. The engineering system for which the controller was designed and simulated was the container crane. The fuzzy logic algorithm that was investigated was the 'predictive control' algorithm. The plant dynamics of the container crane are representative of many important systems, including robotic arm movements. The container crane that was investigated had a trolley motor and a hoist motor. Total distance to be traveled by the trolley was 15 meters. The obstruction height was 5 meters. Crane height was 17.8 meters. Trolley mass was 7500 kilograms. Load mass was 6450 kilograms. Maximum trolley and rope velocities were 1.25 meters per sec. and 0.3 meters per sec., respectively. The fuzzy logic approach allowed the inclusion, in the controller model, of performance indices that are more effectively defined in linguistic terms. These include 'safety' and 'cargo swaying'. Two fuzzy inference systems were implemented using the Matlab simulation package, namely the Mamdani system (which relates fuzzy input variables to fuzzy output variables) and the Sugeno system (which relates fuzzy input variables to a crisp output variable). It is found that the Sugeno FIS is better suited to including aspects of those plant dynamics whose mathematical relationships can be determined.
Simulation of Propellant Loading System Senior Design Implement in Computer Algorithm
NASA Technical Reports Server (NTRS)
Bandyopadhyay, Alak
2010-01-01
Propellant loading from the Storage Tank to the External Tank is one of the very important and time-consuming pre-launch ground operations for the launch vehicle. The propellant loading system is a complex integrated system involving many physical components, such as the storage tank filled with cryogenic fluid at a very low temperature, the long pipeline connecting the storage tank with the external tank, the external tank along with the flare stack, and vent systems for releasing the excess fuel. Some of the parameters important for design purposes are the predicted pre-chill time, loading time, amount of fuel lost, and maximum pressure rise. The physics is complex to model mathematically: the process is unsteady, phase change occurs as some of the fuel passes from the liquid to the gas state, and there is conjugate heat transfer within the pipe walls as well as between the solid and fluid regions. The simulation is correspondingly tedious and time consuming. Overall, this is a complex system, and the objective of the work is the students' involvement in the parametric study and optimization of the numerical modeling toward the design of such a system. The students must first become familiar with the physical process, the related mathematics, and the numerical algorithm. The work involves exploring (i) improved algorithms to make the transient simulation computationally effective (reduced CPU time) and (ii) parametric studies to evaluate design parameters by changing the operational conditions.
An Adaptive Multigrid Algorithm for Simulating Solid Tumor Growth Using Mixture Models
Wise, S.M.; Lowengrub, J.S.; Cristini, V.
2010-01-01
In this paper we give the details of the numerical solution of a three-dimensional multispecies diffuse interface model of tumor growth, which was derived in (Wise et al., J. Theor. Biol. 253 (2008)) and used to study the development of glioma in (Frieboes et al., NeuroImage 37 (2007)) and tumor invasion in (Bearer et al., Cancer Research 69 (2009)) and (Frieboes et al., J. Theor. Biol. 264 (2010)). The model has a thermodynamic basis, is related to recently developed mixture models, and is capable of providing a detailed description of tumor progression. It utilizes a diffuse interface approach, whereby sharp tumor boundaries are replaced by narrow transition layers that arise due to differential adhesive forces among the cell-species. The model consists of fourth-order nonlinear advection-reaction-diffusion equations (of Cahn-Hilliard-type) for the cell-species coupled with reaction-diffusion equations for the substrate components. Numerical solution of the model is challenging because the equations are coupled, highly nonlinear, and numerically stiff. In this paper we describe a fully adaptive, nonlinear multigrid/finite difference method for efficiently solving the equations. We demonstrate the convergence of the algorithm and we present simulations of tumor growth in 2D and 3D that demonstrate the capabilities of the algorithm in accurately and efficiently simulating the progression of tumors with complex morphologies. PMID:21076663
A novel parallel-rotation algorithm for atomistic Monte Carlo simulation of dense polymer systems
NASA Astrophysics Data System (ADS)
Santos, S.; Suter, U. W.; Müller, M.; Nievergelt, J.
2001-06-01
We develop and test a new elementary Monte Carlo move for use in the off-lattice simulation of polymer systems. This novel Parallel-Rotation algorithm (ParRot) permits very efficient moves of torsion angles deep inside long chains in melts. The parallel-rotation move is extremely simple and is also demonstrated to be computationally efficient and appropriate for Monte Carlo simulation. The ParRot move does not affect the orientation of those parts of the chain outside the moving unit. The move consists of a concerted rotation around four adjacent skeletal bonds. No assumption is made concerning the backbone geometry other than that bond lengths and bond angles are held constant during the elementary move. Properly weighted sampling techniques are needed to ensure detailed balance because the new move involves a correlated change in four degrees of freedom along the chain backbone. The ParRot move is supplemented with the classical Metropolis Monte Carlo, the Continuum-Configurational-Bias, and Reptation techniques in an isothermal-isobaric Monte Carlo simulation of melts of short and long chains. Comparisons are made with the capabilities of other Monte Carlo techniques to move the torsion angles in the middle of the chains. We demonstrate that ParRot constitutes a highly promising Monte Carlo move for the treatment of long polymer chains in the off-lattice simulation of realistic models of dense polymer systems.
An Implicit Algorithm for the Numerical Simulation of Shape-Memory Alloys
Becker, R; Stolken, J; Jannetti, C; Bassani, J
2003-10-16
Shape-memory alloys (SMA) have the potential to be used in a variety of interesting applications due to their unique properties of pseudoelasticity and the shape-memory effect. However, in order to design SMA devices efficiently, a physics-based constitutive model is required to accurately simulate the behavior of shape-memory alloys. The scope of this work is to extend the numerical capabilities of the SMA constitutive model developed by Jannetti et al. (2003) to handle large-scale polycrystalline simulations. The constitutive model is implemented within the finite-element software ABAQUS/Standard using a user defined material subroutine, or UMAT. To improve the efficiency of the numerical simulations, so that polycrystalline specimens of shape-memory alloys can be modeled, a fully implicit algorithm has been implemented to integrate the constitutive equations. Using an implicit integration scheme increases the efficiency of the UMAT over the previously implemented explicit integration method by a factor of more than 100 for single crystal simulations.
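A minimal sketch of what a fully implicit (backward Euler) update of a single internal variable looks like, with Newton iteration on the local residual. The evolution law below is a toy stand-in, not the SMA constitutive model of Jannetti et al.

```python
import numpy as np

# Backward-Euler update of a scalar internal variable xi with evolution law
# dxi/dt = g(xi, strain), solved by Newton iteration at each time step.
def implicit_update(xi_n, strain, dt, g, dg_dxi, tol=1e-10, max_iter=25):
    xi = xi_n                                  # initial guess: previous value
    for _ in range(max_iter):
        r = xi - xi_n - dt * g(xi, strain)     # backward-Euler residual
        if abs(r) < tol:
            break
        xi -= r / (1.0 - dt * dg_dxi(xi, strain))  # Newton step
    return xi

# Example: a linear relaxation law g = -2*xi converges in one Newton step.
print(implicit_update(1.0, 0.0, 0.1, lambda x, e: -2.0 * x,
                      lambda x, e: -2.0))      # -> ~0.8333
```

The appeal of the implicit form, as the abstract notes, is stability at much larger time steps than an explicit update would tolerate, at the cost of a local Newton solve per integration point.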
Hamiltonian and potentials in derivative pricing models: exact results and lattice simulations
NASA Astrophysics Data System (ADS)
Baaquie, Belal E.; Corianò, Claudio; Srikant, Marakani
2004-03-01
The pricing of options, warrants and other derivative securities is one of the great successes of financial economics. These financial products can be modeled and simulated using quantum mechanical instruments based on a Hamiltonian formulation. We show here some applications of these methods for various potentials, which we have simulated via lattice Langevin and Monte Carlo algorithms, to the pricing of options. We focus on barrier or path-dependent options, showing in some detail the computational strategies involved.
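As a hedged illustration of the path-dependent pricing discussed here, the following sketch prices an up-and-out barrier call by plain Monte Carlo under geometric Brownian motion. It stands in for, and is far simpler than, the Hamiltonian/lattice Langevin machinery of the paper; all parameter values are illustrative.

```python
import numpy as np

# Monte Carlo pricing of an up-and-out barrier call: a path is "knocked out"
# (pays nothing) if it ever crosses the barrier B before expiry.
def barrier_call_mc(S0, K, B, r, sigma, T, n_steps=252, n_paths=100_000, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    S = np.full(n_paths, float(S0))
    alive = np.ones(n_paths, dtype=bool)        # paths that never hit B
    for _ in range(n_steps):
        z = rng.standard_normal(n_paths)
        S *= np.exp((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z)
        alive &= S < B                          # knock out crossing paths
    payoff = np.where(alive, np.maximum(S - K, 0.0), 0.0)
    return np.exp(-r * T) * payoff.mean()

print(barrier_call_mc(S0=100, K=100, B=130, r=0.05, sigma=0.2, T=1.0))
```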
McDonnell, Mark D; Mohan, Ashutosh; Stricker, Christian
2013-01-01
The release of neurotransmitter vesicles after arrival of a pre-synaptic action potential (AP) at cortical synapses is known to be a stochastic process, as is the availability of vesicles for release. These processes are known to also depend on the recent history of AP arrivals, and this can be described in terms of time-varying probabilities of vesicle release. Mathematical models of such synaptic dynamics frequently are based only on the mean number of vesicles released by each pre-synaptic AP, since if it is assumed there are sufficiently many vesicle sites, then variance is small. However, it has been shown recently that variance across sites can be significant for neuron and network dynamics, and this suggests the potential importance of studying short-term plasticity using simulations that do generate trial-to-trial variability. Therefore, in this paper we study several well-known conceptual models for stochastic availability and release. We state explicitly the random variables that these models describe and propose efficient algorithms for accurately implementing stochastic simulations of these random variables in software or hardware. Our results are complemented by mathematical analysis and statement of pseudo-code algorithms.
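One of the conceptual models discussed can be sketched directly: binomial release from a finite pool of sites with first-order recovery between action potentials. All parameter values below are illustrative assumptions, and the refill/release rules are one common choice among the models the paper analyzes.

```python
import numpy as np

# Stochastic vesicle availability and release: each of n_sites holds at most
# one vesicle; an arriving AP releases each available vesicle independently
# with probability p_release, and empty sites refill between APs.
def simulate_release(ap_times, n_sites=10, p_release=0.3, tau_recovery=0.5,
                     seed=0):
    rng = np.random.default_rng(seed)
    occupied = n_sites                     # all sites start with a vesicle
    last_t, released = 0.0, []
    for t in ap_times:
        # Each empty site refills independently with prob 1 - exp(-dt/tau).
        p_refill = 1.0 - np.exp(-(t - last_t) / tau_recovery)
        occupied += rng.binomial(n_sites - occupied, p_refill)
        k = rng.binomial(occupied, p_release)   # trial-to-trial variability
        occupied -= k
        released.append(k)
        last_t = t
    return released

print(simulate_release([0.0, 0.1, 0.2, 1.5]))
```

Because each trial draws actual binomial counts rather than propagating means, repeated runs reproduce the trial-to-trial variability that mean-field models suppress.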
Results from CrIS/ATMS Obtained Using an "AIRS Version-6 Like" Retrieval Algorithm
NASA Astrophysics Data System (ADS)
Susskind, J.
2015-12-01
A main objective of AIRS/AMSU on EOS is to provide accurate sounding products that are used to generate climate data sets. Suomi NPP carries CrIS/ATMS that were designed as follow-ons to AIRS/AMSU. Our objective is to generate a long term climate data set of products derived from CrIS/ATMS to serve as a continuation of the AIRS/AMSU products. The Goddard DISC has generated AIRS/AMSU retrieval products, extending from September 2002 through real time, using the AIRS Science Team Version-6 retrieval algorithm. Level-3 gridded monthly mean values of these products, generated using AIRS Version-6, form a state of the art multi-year set of Climate Data Records (CDRs), which is expected to continue through 2022 and possibly beyond, as the AIRS instrument is extremely stable. The goal of this research is to develop and implement a CrIS/ATMS retrieval system to generate CDRs that are compatible with, and are of comparable quality to, those generated operationally using AIRS/AMSU data. The AIRS Science Team has made considerable improvements in AIRS Science Team retrieval methodology and is working on the development of an improved AIRS Science Team Version-7 retrieval methodology to be used to reprocess all AIRS data in the relatively near future. Research is underway by Dr. Susskind and co-workers at the NASA GSFC Sounder Research Team (SRT) towards the finalization of the AIRS Version-7 retrieval algorithm, the current version of which is called SRT AIRS Version-6.22. Dr. Susskind and co-workers have developed analogous retrieval methodology for analysis of CrIS/ATMS data, called SRT CrIS Version-6.22. Results will be presented that show that AIRS and CrIS products derived using a common further improved retrieval algorithm agree closely with each other and are both superior to AIRS Version 6. The goal of the AIRS Science Team is to continue to improve both AIRS and CrIS retrieval products and then use the improved retrieval methodology for the processing of past and
NASA Astrophysics Data System (ADS)
Shimojo, Fuyuki; Hattori, Shinnosuke; Kalia, Rajiv K.; Kunaseth, Manaschai; Mou, Weiwei; Nakano, Aiichiro; Nomura, Ken-ichi; Ohmura, Satoshi; Rajak, Pankaj; Shimamura, Kohei; Vashishta, Priya
2014-05-01
We introduce an extension of the divide-and-conquer (DC) algorithmic paradigm called divide-conquer-recombine (DCR) to perform large quantum molecular dynamics (QMD) simulations on massively parallel supercomputers, in which interatomic forces are computed quantum mechanically in the framework of density functional theory (DFT). In DCR, the DC phase constructs globally informed, overlapping local-domain solutions, which in the recombine phase are synthesized into a global solution encompassing large spatiotemporal scales. For the DC phase, we design a lean divide-and-conquer (LDC) DFT algorithm, which significantly reduces the prefactor of the O(N) computational cost for N electrons by applying a density-adaptive boundary condition at the peripheries of the DC domains. Our globally scalable and locally efficient solver is based on a hybrid real-reciprocal space approach that combines: (1) a highly scalable real-space multigrid to represent the global charge density; and (2) a numerically efficient plane-wave basis for local electronic wave functions and charge density within each domain. Hybrid space-band decomposition is used to implement the LDC-DFT algorithm on parallel computers. A benchmark test on an IBM Blue Gene/Q computer exhibits an isogranular parallel efficiency of 0.984 on 786 432 cores for a 50.3 × 10⁶-atom SiC system. As a test of production runs, LDC-DFT-based QMD simulation involving 16 661 atoms is performed on the Blue Gene/Q to study on-demand production of hydrogen gas from water using LiAl alloy particles. As an example of the recombine phase, LDC-DFT electronic structures are used as a basis set to describe global photoexcitation dynamics with nonadiabatic QMD (NAQMD) and kinetic Monte Carlo (KMC) methods. The NAQMD simulations are based on the linear response time-dependent density functional theory to describe electronic excited states and a surface-hopping approach to describe transitions between the excited states. A series of techniques
Warshawsky, A.S.; Uzelac, M.J.; Pimper, J.E.
1989-05-01
The Crew III algorithm for assessing time- and dose-dependent combat crew performance subsequent to nuclear irradiation was incorporated into the Janus combat simulation system. Battle outcomes using this algorithm were compared to outcomes based on the currently used time-independent "cookie-cutter" assessment methodology. The results illustrate quantifiable differences in battle outcome between the two assessment techniques. Results suggest that tactical nuclear weapons are more effective than currently assumed if performance degradation attributed to radiation doses between 150 and 3000 rad is taken into account. 6 refs., 9 figs.
NASA Astrophysics Data System (ADS)
García-García, J.; Martín, F.; Oriols, X.; Suñé, J.
Because of their high switching speed, low power consumption, and reduced complexity to implement a given function, resonant tunneling diodes (RTDs) have recently been recognized as excellent candidates for digital circuit applications [1]. Device modeling and simulation is thus important, not only to understand mesoscopic transport properties, but also to provide guidance in optimal device design and fabrication. Several approaches have been used to this end. Among kinetic models, those based on the non-equilibrium Green function formalism [2] have gained increasing interest due to their ability to incorporate coherent and incoherent interactions in a unified formulation. The Wigner distribution function approach has also been extensively used to study quantum transport in RTDs [3-6]. The main limitations of this formulation are the semiclassical treatment of carrier-phonon interactions by means of the relaxation time approximation and the huge computational burden associated with the self-consistent solution of the Liouville and Poisson equations. This has imposed severe limitations on spatial domains, these being too small to allow the development of reliable simulation tools. Based on the Wigner function approach, we have developed a simulation tool that allows us to extend the simulation domains up to hundreds of nanometers without a significant increase in computer time [7]. This tool is based on the coupling between the Wigner distribution function (quantum Liouville equation) and the Boltzmann transport equation. The former is applied to the active region of the device including the double barrier, where quantum effects are present (quantum window, QW). The latter is solved by means of a Monte Carlo algorithm and applied to the outer regions of the device, where quantum effects are not expected to occur. Since the classical Monte Carlo algorithm is much less time consuming than the discretized version of the Wigner transport equation, we can considerably
NASA Astrophysics Data System (ADS)
Moghani, Mahdy Malekzadeh; Khomami, Bamin
2017-02-01
The computational efficiency of Brownian dynamics (BD) simulation of the constrained model of a polymeric chain (bead-rod) with n beads and in the presence of hydrodynamic interaction (HI) is reduced to order n^2 via an efficient algorithm which utilizes the conjugate-gradient (CG) method within a Picard iteration scheme. Moreover, the utility of the Barnes and Hut (BH) multipole method in BD simulation of polymeric solutions in the presence of HI, with regard to computational cost, scaling, and accuracy, is discussed. Overall, it is determined that this approach leads to a scaling of O(n^1.2). Furthermore, a stress algorithm is developed which accurately captures the transient stress growth in the startup of flow for the bead-rod model with HI and excluded volume (EV) interaction. Rheological properties of chains up to n = 350 in the presence of EV and HI are computed via the former algorithm. The results depict qualitative differences in the shear-thinning behavior of the polymeric solutions at intermediate values of the Weissenberg number (10
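The solver structure named here, conjugate gradients nested inside a Picard fixed-point iteration, can be sketched generically. The operator and the nonlinearity below are toy placeholders, not the bead-rod constraint equations with hydrodynamic interaction.

```python
import numpy as np

# Matrix-free conjugate gradient for the linear subproblem A x = b.
def conjugate_gradient(apply_A, b, x0, tol=1e-10, max_iter=200):
    x = x0.copy()
    r = b - apply_A(x)
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = apply_A(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Outer Picard iteration: rhs_of(x) carries the nonlinearity (in the bead-rod
# setting, the constraint tensions coupled through HI).
def picard_cg(apply_A, rhs_of, x, n_picard=20):
    for _ in range(n_picard):
        x = conjugate_gradient(apply_A, rhs_of(x), x)
    return x

# Demo: a diagonal operator with a mild nonlinearity in the right-hand side.
apply_A = lambda x: 4.0 * x
rhs_of = lambda x: np.ones(5) + 0.1 * np.tanh(x)
print(picard_cg(apply_A, rhs_of, np.zeros(5)))
```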
Simulations Build Efficacy: Empirical Results from a Four-Week Congressional Simulation
ERIC Educational Resources Information Center
Mariani, Mack; Glenn, Brian J.
2014-01-01
This article describes a four-week congressional committee simulation implemented in upper level courses on Congress and the Legislative process at two liberal arts colleges. We find that the students participating in the simulation possessed high levels of political knowledge and confidence in their political skills prior to the simulation. An…
Direct drive: Simulations and results from the National Ignition Facility
Radha, P. B.; Hohenberger, M.; Edgell, D. H.; Marozas, J. A.; Marshall, F. J.; Michel, D. T.; Rosenberg, M. J.; Seka, W.; Shvydky, A.; Boehly, T. R.; Collins, T. J. B.; Campbell, E. M.; Craxton, R. S.; Delettrez, J. A.; Dixit, S. N.; Frenje, J. A.; Froula, D. H.; Goncharov, V. N.; Hu, S. X.; Knauer, J. P.; McCrory, R. L.; McKenty, P. W.; Meyerhofer, D. D.; Moody, J.; Myatt, J. F.; Petrasso, R. D.; Regan, S. P.; Sangster, T. C.; Sio, H.; Skupsky, S.; Zylstra, A.
2016-04-19
Here, the direct-drive implosion physics is being investigated at the National Ignition Facility. The primary goal of the experiments is twofold: to validate modeling related to implosion velocity and to estimate the magnitude of hot-electron preheat. Implosion experiments indicate that the energetics is well-modeled when cross-beam energy transfer (CBET) is included in the simulation and an overall multiplier to the CBET gain factor is employed; time-resolved scattered light and scattered-light spectra display the correct trends. Trajectories from backlit images are well modeled, although those from measured self-emission images indicate increased shell thickness and reduced shell density relative to simulations. Sensitivity analyses indicate that the most likely cause for the density reduction is nonuniformity growth seeded by laser imprint and not laser-energy coupling. Hot-electron preheat is at tolerable levels in the ongoing experiments, although it is expected to increase after the mitigation of CBET. Future work will include continued model validation, imprint measurements, and mitigation of CBET and hot-electron preheat.
NASA Astrophysics Data System (ADS)
Madura, Thomas; Clementel, Nicola; Kruip, Chael; Icke, Vincent; Gull, Theodore
2014-09-01
We present the first results of full 3D radiative transfer simulations of the colliding stellar winds in a massive binary system. We accomplish this by applying the SIMPLEX algorithm for 3D radiative transfer on an unstructured Delaunay grid to recent 3D smoothed particle hydrodynamics (SPH) simulations of the colliding winds in the binary system η Carinae. We use SIMPLEX to obtain detailed ionization fractions of hydrogen and helium, in 3D, at the resolution of the original SPH simulations. We show how the SIMPLEX simulations can be used to generate synthetic spectral data cubes for comparison to data obtained with the Hubble Space Telescope (HST)/Space Telescope Imaging Spectrograph as part of a multi-cycle program to map changes in η Car's extended interacting wind structures across one binary cycle. Comparison of the HST observations to the SIMPLEX models can help lead to more accurate constraints on the orbital, stellar, and wind parameters of the η Car system, such as the primary's mass-loss rate and the companion's temperature and luminosity. While we initially focus specifically on the η Car binary, the numerical methods employed can be applied to numerous other colliding wind (WR140, WR137, WR19) and dusty 'pinwheel' (WR104, WR98a) binary systems. One of the biggest remaining mysteries is how dust can form and survive in such systems that contain a hot, luminous O star. Coupled with 3D hydrodynamical simulations, SIMPLEX simulations have the potential to help determine the regions where dust can form and survive in these unique objects.
Implementation and Simulation Results using Autonomous Aerobraking Development Software
NASA Technical Reports Server (NTRS)
Maddock, Robert W.; DwyerCianciolo, Alicia M.; Bowes, Angela; Prince, Jill L. H.; Powell, Richard W.
2011-01-01
An Autonomous Aerobraking software system is currently under development with support from the NASA Engineering and Safety Center (NESC) that would move typically ground-based operations functions to onboard an aerobraking spacecraft, reducing mission risk and mission cost. The suite of software that will enable autonomous aerobraking is the Autonomous Aerobraking Development Software (AADS) and consists of an ephemeris model, onboard atmosphere estimator, temperature and loads prediction, and a maneuver calculation. The software calculates the maneuver time, magnitude and direction commands to maintain the spacecraft periapsis parameters within design structural load and/or thermal constraints. The AADS is currently tested in simulations at Mars, with plans to also evaluate feasibility and performance at Venus and Titan.
Polynomial-time quantum algorithm for the simulation of chemical dynamics.
Kassal, Ivan; Jordan, Stephen P; Love, Peter J; Mohseni, Masoud; Aspuru-Guzik, Alán
2008-12-02
The computational cost of exact methods for quantum simulation using classical computers grows exponentially with system size. As a consequence, these techniques can be applied only to small systems. By contrast, we demonstrate that quantum computers could exactly simulate chemical reactions in polynomial time. Our algorithm uses the split-operator approach and explicitly simulates all electron-nuclear and interelectronic interactions in quadratic time. Surprisingly, this treatment is not only more accurate than the Born-Oppenheimer approximation but faster and more efficient as well, for all reactions with more than about four atoms. This is the case even though the entire electronic wave function is propagated on a grid with appropriately short time steps. Although the preparation and measurement of arbitrary states on a quantum computer is inefficient, here we demonstrate how to prepare states of chemical interest efficiently. We also show how to efficiently obtain chemically relevant observables, such as state-to-state transition probabilities and thermal reaction rates. Quantum computers using these techniques could outperform current classical computers with 100 qubits.
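The split-operator step underlying the algorithm has a standard classical form that is easy to exhibit. The following sketch propagates a 1D wave packet with FFTs; it conveys the operator splitting but, of course, none of the quantum-computational aspects, and the grid and potential are illustrative.

```python
import numpy as np

# One Strang-split step: half-step in the potential, full kinetic step in
# momentum space (via FFT), then the second half-step in the potential.
def split_operator_step(psi, V, dx, dt, hbar=1.0, m=1.0):
    n = psi.size
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)      # momentum grid
    half_V = np.exp(-0.5j * V * dt / hbar)         # exp(-i V dt / 2 hbar)
    kinetic = np.exp(-0.5j * hbar * k**2 * dt / m) # exp(-i hbar k^2 dt / 2m)
    psi = half_V * psi
    psi = np.fft.ifft(kinetic * np.fft.fft(psi))
    return half_V * psi

# Example: a Gaussian packet in a harmonic well; the norm is preserved.
x = np.linspace(-10, 10, 512)
dx = x[1] - x[0]
psi = np.exp(-(x - 1.0) ** 2).astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)
for _ in range(100):
    psi = split_operator_step(psi, V=0.5 * x**2, dx=dx, dt=0.01)
print(np.sum(np.abs(psi) ** 2) * dx)               # -> ~1.0
```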
An efficient algorithm for fully resolved simulation of freely swimming bodies
NASA Astrophysics Data System (ADS)
Shirgaonkar, Anup; Patankar, Neelesh; Maciver, Malcolm
2007-11-01
There is a need to better understand the physical principles underlying the extraordinary mobility of swimming and flying animals. To that end, we present a fully resolved simulation scheme for aquatic locomotion that is sufficiently general to potentially function for small flying animals as well. The method combines the rigid particulate scheme of Patankar et al. (IJMF, 2001) with a momentum redistribution scheme to consistently solve for fluid-body forces as well as the swimming velocity. The input to the algorithm is the deforming motion of the fish body or its fins in the frame of reference of the fish. The method is designed to be efficient and parallelizable, and can be easily implemented into existing fluid dynamics codes. We demonstrate that the new method is capable of simulating a variety of fish forms, including flexible bodies such as an eel, or bodies with flexible fins attached to them such as the black ghost knifefish (Apteronotus albifrons). Insights into the hydrodynamics of aquatic locomotion based on our simulations will be summarized. The proposed technique is also applicable to a variety of problems, such as designing underwater vehicles, neuromechanical modeling, understanding the role of hydrodynamics in the evolution of fish forms, and animation.
NASA Astrophysics Data System (ADS)
Tang, Yu-Hang; Karniadakis, George; Crunch Team
2014-03-01
We present a scalable dissipative particle dynamics simulation code, fully implemented on the Graphics Processing Units (GPUs) using a hybrid CUDA/MPI programming model, which achieves 10-30 times speedup on a single GPU over 16 CPU cores and almost linear weak scaling across a thousand nodes. A unified framework is developed within which the efficient generation of the neighbor list and maintaining particle data locality are addressed. Our algorithm generates strictly ordered neighbor lists in parallel, while the construction is deterministic and makes no use of atomic operations or sorting. Such neighbor list leads to optimal data loading efficiency when combined with a two-level particle reordering scheme. A faster in situ generation scheme for Gaussian random numbers is proposed using precomputed binary signatures. We designed custom transcendental functions that are fast and accurate for evaluating the pairwise interaction. Computer benchmarks demonstrate the speedup of our implementation over the CPU implementation as well as strong and weak scalability. A large-scale simulation of spontaneous vesicle formation consisting of 128 million particles was conducted to illustrate the practicality of our code in real-world applications. This work was supported by the new Department of Energy Collaboratory on Mathematics for Mesoscopic Modeling of Materials (CM4). Simulations were carried out at the Oak Ridge Leadership Computing Facility through the INCITE program under project BIP017.
NASA Astrophysics Data System (ADS)
Bonne, F.; Alamir, M.; Bonnay, P.
2017-02-01
This paper deals with multivariable constrained model predictive control for Warm Compression Stations (WCS). WCSs are subject to numerous constraints (limits on pressures, actuators) that need to be satisfied using appropriate algorithms. The strategy is to replace all the PID loops controlling the WCS with an optimally designed model-based multivariable loop. This new strategy leads to high stability and fast disturbance rejection such as those induced by a turbine or a compressor stop, a key-aspect in the case of large scale cryogenic refrigeration. The proposed control scheme can be used to achieve precise control of pressures in normal operation or to avoid reaching stopping criteria (such as excessive pressures) under high disturbances (such as a pulsed heat load expected to take place in future fusion reactors, expected in the cryogenic cooling systems of the International Thermonuclear Experimental Reactor ITER or the Japan Torus-60 Super Advanced fusion experiment JT-60SA). The paper details the simulator used to validate this new control scheme and the associated simulation results on the SBTs WCS. This work is partially supported through the French National Research Agency (ANR), task agreement ANR-13-SEED-0005.
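The constrained MPC structure can be sketched in a few lines. The linear model, horizon, weights, and bounds below are illustrative assumptions standing in for the warm compression station dynamics and its pressure/actuator limits, not the model of the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Toy linear plant x+ = A x + B u standing in for the WCS dynamics.
A = np.array([[1.0, 0.1], [0.0, 0.9]])
B = np.array([[0.0], [0.1]])
N = 10                                   # prediction horizon

def cost(u_seq, x0, x_ref):
    # Quadratic tracking cost over the horizon plus a small input penalty.
    x, J = x0, 0.0
    for u in u_seq:
        x = A @ x + B.flatten() * u
        J += np.sum((x - x_ref) ** 2) + 0.01 * u**2
    return J

def mpc_control(x0, x_ref, u_min=-1.0, u_max=1.0):
    # Bounds play the role of actuator limits; state (pressure) constraints
    # would enter as additional inequality constraints in a full formulation.
    res = minimize(cost, np.zeros(N), args=(x0, x_ref),
                   bounds=[(u_min, u_max)] * N, method="SLSQP")
    return res.x[0]                      # receding horizon: apply first move

print(mpc_control(np.array([0.5, 0.0]), np.array([0.0, 0.0])))
```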
Some Results of Weak Anticipative Concept Applied in Simulation Based Decision Support in Enterprise
NASA Astrophysics Data System (ADS)
Kljajić, Miroljub; Kofjač, Davorin; Kljajić Borštnar, Mirjana; Škraba, Andrej
2010-11-01
The simulation models are used for decision support and learning in enterprises and in schools. Three cases of successful applications demonstrate the usefulness of weak anticipative information. Job shop scheduling with a makespan criterion is presented for a real case of customized flexible furniture production optimization, and the genetic algorithm for job shop scheduling optimization is described. Simulation-based inventory control describes inventory optimization for products with stochastic lead time and demand; dynamic programming and fuzzy control algorithms reduce the total cost without producing stock-outs in most cases. The value of decision-making information based on simulation is also discussed. All three cases are discussed from the optimization, modeling, and learning points of view.
Hsu, Ching-Chi
2013-07-01
Subsidence of interbody devices into the vertebral body might result in serious clinical problems, especially when the devices are not well designed and analyzed. Recently, some novel designs were proposed to reduce the risk of subsidence, but those designs are based on the researcher's experience. The purpose of this study was to discover the interbody device design with excellent subsidence resistance by changing the device's shape. The three-dimensional nonlinear finite element models, which consisted of the interbody device and vertebral body, were created first. Then, the simulation-based genetic algorithm, which combined the finite element model and the searching algorithm, was developed by using ANSYS® Parametric Design Language. Finally, the numerical results were carefully validated with the use of biomechanical tests. The optimum shape design obtained in this study looks like a flower with many petals and it has excellent subsidence resistance when compared with the other designs provided by the past studies. The results of the present study could help surgeons to understand the subsidence resistance of interbody devices in terms of their shapes and has directly provided the design rationales to engineers.
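A generic simulation-based genetic algorithm of the kind described can be outlined as follows. The fitness function is a toy stand-in for the finite element subsidence analysis, and all GA settings (population size, mutation scale, crossover rule) are assumptions for illustration.

```python
import numpy as np

# Generic GA loop: each genome encodes (normalized) shape parameters, and
# fitness(ind) stands in for an expensive FE simulation of the design.
def genetic_search(fitness, n_params, pop_size=30, n_gen=50,
                   mut_sigma=0.1, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(0.0, 1.0, size=(pop_size, n_params))
    for _ in range(n_gen):
        scores = np.array([fitness(ind) for ind in pop])
        order = np.argsort(scores)[::-1]          # maximize fitness
        parents = pop[order[: pop_size // 2]]     # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            i, j = rng.integers(len(parents), size=2)
            cut = rng.integers(1, n_params)       # one-point crossover
            child = np.concatenate([parents[i][:cut], parents[j][cut:]])
            child += rng.normal(0.0, mut_sigma, n_params)  # mutation
            children.append(np.clip(child, 0.0, 1.0))
        pop = np.vstack([parents, children])
    return pop[np.argmax([fitness(ind) for ind in pop])]

# Toy fitness standing in for the FE model: prefer parameters near 0.7.
best = genetic_search(lambda p: -np.sum((p - 0.7) ** 2), n_params=5)
print(best)
```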
Simulating Visual Learning and Optical Illusions via a Network-Based Genetic Algorithm
NASA Astrophysics Data System (ADS)
Siu, Theodore; Vivar, Miguel; Shinbrot, Troy
We present a neural network model that uses a genetic algorithm to identify spatial patterns. We show that the model both learns and reproduces common visual patterns and optical illusions. Surprisingly, we find that the illusions generated are a direct consequence of the network architecture used. We discuss the implications of our results and the insights that we gain on how humans fall for optical illusions.
Jackson, Jennifer N; Hass, Chris J; Fregly, Benjamin J
2015-11-01
Patient-specific gait optimizations capable of predicting post-treatment changes in joint motions and loads could improve treatment design for gait-related disorders. To maximize potential clinical utility, such optimizations should utilize full-body three-dimensional patient-specific musculoskeletal models, generate dynamically consistent gait motions that reproduce pretreatment marker measurements closely, and achieve accurate foot motion tracking to permit deformable foot-ground contact modeling. This study enhances an existing residual elimination algorithm (REA; Remy, C. D., and Thelen, D. G., 2009, "Optimal Estimation of Dynamically Consistent Kinematics and Kinetics for Forward Dynamic Simulation of Gait," ASME J. Biomech. Eng., 131(3), p. 031005) to achieve all three requirements within a single gait optimization framework. We investigated four primary enhancements to the original REA: (1) manual modification of tracked marker weights, (2) automatic modification of tracked joint acceleration curves, (3) automatic modification of algorithm feedback gains, and (4) automatic calibration of model joint and inertial parameter values. We evaluated the enhanced REA using a full-body three-dimensional dynamic skeletal model and movement data collected from a subject who performed four distinct gait patterns: walking, marching, running, and bounding. When all four enhancements were implemented together, the enhanced REA achieved dynamic consistency with lower marker tracking errors for all segments, especially the feet (mean root-mean-square (RMS) errors of 3.1 versus 18.4 mm), compared to the original REA. When the enhancements were implemented separately and in combinations, the most important one was automatic modification of tracked joint acceleration curves, while the least important enhancement was automatic modification of algorithm feedback gains. The enhanced REA provides a framework for future gait optimization studies that seek to predict subject
NASA Astrophysics Data System (ADS)
Chen, Xin; Xing, Pei; Luo, Yong; Zhao, Zongci; Nie, Suping; Huang, Jianbin; Wang, Shaowu; Tian, Qinhua
2015-04-01
A new dataset of annual mean surface temperature over North America in the recent 500 years has been constructed by applying an optimal interpolation (OI) algorithm. In total, 149 series were screened out of the International Tree Ring Data Bank (ITRDB), including 69 maximum latewood density (MXD) and 80 tree-ring width (TRW) chronologies. The simulated annual mean surface temperature derives from the past1000 experiment of the Community Climate System Model version 4 (CCSM4). Unlike existing research that applies data assimilation approaches to General Circulation Model (GCM) simulations, the errors of both the climate model simulation and the tree-ring reconstruction were considered, with a view to combining the two in an optimal way. Variance matching (VM) was employed to calibrate the tree-ring chronologies against CRUTEM4v, and the corresponding errors were estimated through a leave-one-out process. The background error covariance matrix was estimated statistically from samples of the simulation results in a running 30-year window; it was calculated locally within the scanning range (2000 km in this research), so the merging proceeds with a time-varying local gain matrix. The merging method (MM) was tested in two kinds of experiments, and the results indicate that the standard deviation of the errors can be reduced to about 0.3 degree Celsius below that of the tree-ring reconstructions and 0.5 degree Celsius below that of the model simulation. Obvious decadal variability can be identified in the MM results, including an evident cooling (0.10 degree per decade) in the 1940-60s, where the model simulation instead exhibits a weak warming trend (0.05 degree per decade). The MM results reveal a compromise spatial pattern of the linear trend of surface temperature during a typical period (1601-1800 AD) of the Little Ice Age, which basically accords with the phase transitions of the Pacific decadal oscillation (PDO) and
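The optimal interpolation update at the heart of such a merge has a standard closed form, sketched below. The covariances, the observation operator, and all numerical values are illustrative, not those of the study.

```python
import numpy as np

# Optimal interpolation: the model field is the background, the calibrated
# proxy series are the observations, and the gain K weights them by their
# respective error covariances.
def oi_update(x_b, B, y, R, H):
    """x_b: background (model) field; B: background error covariance;
    y: observations (proxy series); R: observation error covariance;
    H: observation operator mapping the field to the proxy locations."""
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)   # gain matrix
    return x_b + K @ (y - H @ x_b)                  # analysis field

# Two grid points, one proxy observing the first point.
x_b = np.array([0.2, -0.1])
B = np.array([[0.25, 0.05], [0.05, 0.25]])
H = np.array([[1.0, 0.0]])
print(oi_update(x_b, B, y=np.array([0.6]), R=np.array([[0.09]]), H=H))
```

Restricting B to a local scanning radius, as described above, makes K a local, time-varying gain rather than one global matrix.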
Koh, Wonryull; Blackwell, Kim T.
2011-01-01
Stochastic simulation of reaction–diffusion systems enables the investigation of stochastic events arising from the small numbers and heterogeneous distribution of molecular species in biological cells. Stochastic variations in intracellular microdomains and in diffusional gradients play a significant part in the spatiotemporal activity and behavior of cells. Although an exact stochastic simulation that simulates every individual reaction and diffusion event gives a most accurate trajectory of the system's state over time, it can be too slow for many practical applications. We present an accelerated algorithm for discrete stochastic simulation of reaction–diffusion systems designed to improve the speed of simulation by reducing the number of time-steps required to complete a simulation run. This method is unique in that it employs two strategies that have not been incorporated in existing spatial stochastic simulation algorithms. First, diffusive transfers between neighboring subvolumes are based on concentration gradients. This treatment necessitates sampling of only the net or observed diffusion events from higher to lower concentration gradients rather than sampling all diffusion events regardless of local concentration gradients. Second, we extend the non-negative Poisson tau-leaping method that was originally developed for speeding up nonspatial or homogeneous stochastic simulation algorithms. This method calculates each leap time in a unified step for both reaction and diffusion processes while satisfying the leap condition that the propensities do not change appreciably during the leap and ensuring that leaping does not cause molecular populations to become negative. Numerical results are presented that illustrate the improvement in simulation speed achieved by incorporating these two new strategies. PMID:21513371
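The (non-spatial) Poisson tau-leaping step that this work extends can be sketched briefly. The fixed leap, the crude guard against negative counts, and the toy birth-death system below are simplifying assumptions relative to the adaptive, non-negative method described above.

```python
import numpy as np

# Poisson tau-leaping: all firings of each reaction channel within a window
# tau are drawn at once, instead of simulating every event individually.
def tau_leap(x, stoich, propensities, tau, t_end, seed=0):
    """x: molecule counts; stoich: (n_reactions, n_species) change vectors;
    propensities(x): per-reaction rates; tau: fixed leap (a real
    implementation adapts tau and rejects leaps that drive counts negative)."""
    rng = np.random.default_rng(seed)
    t = 0.0
    while t < t_end:
        a = propensities(x)
        k = rng.poisson(a * tau)            # firings per reaction channel
        x = np.maximum(x + k @ stoich, 0)   # crude non-negativity guard
        t += tau
    return x

# Toy birth-death system: 0 -> S at rate 5, S -> 0 at rate 0.1*S.
stoich = np.array([[1], [-1]])
props = lambda x: np.array([5.0, 0.1 * x[0]])
print(tau_leap(np.array([0]), stoich, props, tau=0.05, t_end=100.0))
```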
NASA Technical Reports Server (NTRS)
Guo, Liwen; Cardullo, Frank M.; Kelly, Lon C.
2007-01-01
This report summarizes the results of delay measurement and piloted performance tests that were conducted to assess the effectiveness of the adaptive compensator and the state space compensator for alleviating the phase distortion of transport delay in the visual system in the VMS at the NASA Langley Research Center. Piloted simulation tests were conducted to assess the effectiveness of two novel compensators in comparison to the McFarland predictor and the baseline system with no compensation. Thirteen pilots with heterogeneous flight experience executed straight-in and offset approaches, at various delay configurations, on a flight simulator where different predictors were applied to compensate for transport delay. The glideslope and touchdown errors, power spectral density of the pilot control inputs, NASA Task Load Index, and Cooper-Harper rating of the handling qualities were employed for the analyses. The overall analyses show that the adaptive predictor results in slightly poorer compensation for short added delay (up to 48 ms) and better compensation for long added delay (up to 192 ms) than the McFarland compensator. The analyses also show that the state space predictor is fairly superior for short delay and significantly superior for long delay than the McFarland compensator.
Preliminary Analysis of a Breadth-First Parsing Algorithm: Theoretical and Experimental Results.
1981-06-01
Synthetic sequences were also constructed which generate products of two Catalan numbers and the Fibonacci numbers; these are presented in turn. Keywords: parsing, chart parsing, natural language processing, Earley's algorithm.
A TR-induced algorithm for hot spots elimination through CT-scan HIFU simulations
NASA Astrophysics Data System (ADS)
Leduc, Nicolas; Okita, Kohei; Sugiyama, Kazuyasu; Takagi, Shu; Matsumoto, Yoichiro
2011-09-01
Although nowadays widely used for imaging and treatment, HIFU techniques are still limited by the distortion of the wavefront due to refraction and reflection at the inhomogeneous media inside the human body. The CT-scan Time Reversal (TR) procedure has emerged as a promising candidate for focus control. A parallelized finite-difference time-domain code is used to simulate TR-enhanced propagation through elements of the human body and to implement a simple algorithm addressing the issue of grating lobes, i.e., secondary pressure peaks caused by natural diffraction from phased arrays and enhanced by medium heterogeneity. Using an iterative, progressive process combining secondary sound sources and independent signal summation, the primary peak is strengthened while secondary peaks are increasingly obliterated. This method supports the feasibility of precise modification and enhancement of the pressure profile in the targeted area through Time Reversal based solutions.
An infrared achromatic quarter-wave plate designed based on simulated annealing algorithm
NASA Astrophysics Data System (ADS)
Pang, Yajun; Zhang, Yinxin; Huang, Zhanhua; Yang, Huaidong
2017-03-01
Quarter-wave plates are primarily used to change the polarization state of light. Their retardation usually varies with the wavelength of the incident light. In this paper, the design and characteristics of an achromatic quarter-wave plate, formed by a cascaded system of birefringent plates, are studied. For the analysis of the combination, we use the Jones matrix method to derive general expressions for the equivalent retardation and the equivalent azimuth. The infrared achromatic quarter-wave plate is designed based on the simulated annealing (SA) algorithm. The maximum retardation variation and the maximum azimuth variation of this achromatic waveplate are only about 1.8° and 0.5°, respectively, over the entire wavelength range of 1250-1650 nm. This waveplate can change linearly polarized light into circularly polarized light with a less than 3.2% degree of linear polarization (DOLP) over that wide wavelength range.
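A generic simulated annealing loop of the kind used for such a design search is sketched below. The merit function is a stand-in for the retardation-flatness objective over the band, and the cooling schedule and step size are assumptions.

```python
import numpy as np

# Generic simulated annealing: the design vector would hold the thicknesses
# and azimuths of the cascaded plates; objective() returns the design merit
# (lower is better), e.g. worst-case deviation of retardation from 90 degrees.
def simulated_annealing(objective, x0, step=0.05, T0=1.0, cooling=0.995,
                        n_iter=20_000, seed=0):
    rng = np.random.default_rng(seed)
    x, fx, T = x0.copy(), objective(x0), T0
    best_x, best_f = x.copy(), fx
    for _ in range(n_iter):
        cand = x + rng.normal(0.0, step, size=x.shape)
        fc = objective(cand)
        # Accept downhill moves always, uphill moves with Boltzmann probability.
        if fc < fx or rng.random() < np.exp(-(fc - fx) / T):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = x.copy(), fx
        T *= cooling
    return best_x, best_f

# Toy objective standing in for the retardation-flatness merit function.
obj = lambda p: np.sum((p - np.array([0.3, 1.2, 0.8])) ** 2)
print(simulated_annealing(obj, x0=np.zeros(3)))
```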
NASA Technical Reports Server (NTRS)
Longendorfer, B. A.
1976-01-01
The construction of an autonomous roving vehicle requires the development of complex data-acquisition and processing systems, which determine the path along which the vehicle travels. Thus, a vehicle must possess algorithms which can (1) reliably detect obstacles by processing sensor data, (2) maintain a constantly updated model of its surroundings, and (3) direct its immediate actions to further a long range plan. The first function consisted of obstacle recognition. Obstacles may be identified by the use of edge detection techniques. Therefore, the Kalman Filter was implemented as part of a large scale computer simulation of the Mars Rover. The second function consisted of modeling the environment. The obstacle must be reconstructed from its edges, and the vast amount of data must be organized in a readily retrievable form. Therefore, a Terrain Modeller was developed which assembled and maintained a rectangular grid map of the planet. The third function consisted of directing the vehicle's actions.
Dong, Feng; Pierpaoli, Elena; Gunn, James E.; Wechsler, Risa H.
2007-10-29
We present a modified adaptive matched filter algorithm designed to identify clusters of galaxies in wide-field imaging surveys such as the Sloan Digital Sky Survey. The cluster-finding technique is fully adaptive to imaging surveys with spectroscopic coverage, multicolor photometric redshifts, no redshift information at all, and any combination of these within one survey. It works with high efficiency in multi-band imaging surveys where photometric redshifts can be estimated with well-understood error distributions. Tests of the algorithm on realistic mock SDSS catalogs suggest that the detected sample is ≈85% complete and over 90% pure for clusters with masses above 1.0 × 10¹⁴ h⁻¹ M⊙ and redshifts up to z = 0.45. The errors of estimated cluster redshifts from the maximum likelihood method are shown to be small (typically less than 0.01) over the whole redshift range, with photometric redshift errors typical of those found in the Sloan survey. Inside the spherical radius corresponding to a galaxy overdensity of Δ = 200, we find the derived cluster richness Λ₂₀₀ a roughly linear indicator of its virial mass M₂₀₀, which well recovers the relation between total luminosity and cluster mass of the input simulation.
Time controlled descent guidance algorithm for simulation of advanced ATC systems
NASA Technical Reports Server (NTRS)
Lee, H. Q.; Erzberger, H.
1983-01-01
Concepts and computer algorithms for generating time controlled four dimensional descent trajectories are described. The algorithms were implemented in the air traffic control simulator and used by experienced controllers in studies of advanced air traffic flow management procedures. A time controlled descent trajectory comprises a vector function of time, including position, altitude, and heading, that starts at the initial position of the aircraft and ends at touchdown. The trajectory provides a four dimensional reference path which will cause an aircraft tracking it to touchdown at a predetermined time with a minimum of fuel consumption. The problem of constructing such trajectories is divided into three subproblems involving synthesis of horizontal, vertical, and speed profiles. The horizontal profile is constructed as a sequence of turns and straight lines passing through a specified set of waypoints. The vertical profile consists of a sequence of level flight and constant descent angle segments defined by altitude waypoints. The speed profile is synthesized as a sequence of constant Mach number, constant indicated airspeed, and acceleration/deceleration legs. It is generated by integrating point mass differential equations of motion, which include the thrust and drag models of the aircraft.
Stellar populations of stellar halos: Results from the Illustris simulation
NASA Astrophysics Data System (ADS)
Cook, B. A.; Conroy, C.; Pillepich, A.; Hernquist, L.
2016-08-01
The influence of both major and minor mergers is expected to significantly affect gradients of stellar ages and metallicities in the outskirts of galaxies. Measurements of observed gradients are beginning to reach large radii in galaxies, but a theoretical framework for connecting the findings to a picture of galactic build-up is still in its infancy. We analyze stellar populations of a statistically representative sample of quiescent galaxies over a wide mass range from the Illustris simulation. We measure metallicity and age profiles in the stellar halos of quiescent Illustris galaxies ranging in stellar mass from 10^10 to 10^12 M⊙, accounting for observational projection and luminosity-weighting effects. We find wide variance in stellar population gradients between galaxies of similar mass, with typical gradients agreeing with observed galaxies. We show that, at fixed mass, the fraction of stars born in-situ within galaxies is correlated with the metallicity gradient in the halo, confirming that stellar halos contain unique information about the build-up and merger histories of galaxies.
Optimization and Simulation of Plastic Injection Process using Genetic Algorithm and Moldflow
NASA Astrophysics Data System (ADS)
Martowibowo, Sigit Yoewono; Kaswadi, Agung
2017-03-01
The use of plastic-based products is continuously increasing. The increasing demands for thinner products, lower production costs, and higher product quality have triggered an increase in the number of research projects on plastic molding processes. An important branch of such research is focused on the mold cooling system. Conventional cooling systems are most widely used because they are easy to make by using conventional machining processes; however, non-uniform cooling is considered one of their weaknesses. Apart from the conventional systems, there are also conformal cooling systems, which are designed for faster and more uniform plastic mold cooling. In this study, a conformal cooling system is applied to the production of a bowl-shaped product made of PP AZ564. Optimization is conducted over the machine setup parameters, namely the melting temperature, injection pressure, holding pressure, and holding time. The genetic algorithm method and Moldflow were used to optimize the injection process parameters for a minimum cycle time. It is found that an optimum injection molding process could be obtained by setting the parameters to the following values: T_M = 180 °C, P_inj = 20 MPa, P_hold = 16 MPa, and t_hold = 8 s, with a cycle time of 14.11 s. Experiments using the conformal cooling system yielded an average cycle time of 14.19 s. The studied conformal cooling system yielded a volumetric shrinkage of 5.61%, and the wall shear stress was found to be 0.17 MPa. The difference between the cycle times obtained through simulations and experiments using the conformal cooling system was insignificant (below 1%). Thus, combining process-parameter optimization and simulation using the genetic algorithm method with Moldflow can be considered valid.
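As a rough illustration of the optimization loop described above, the sketch below runs a toy genetic algorithm over the four machine-setup parameters; the cycle-time objective is only a placeholder for a Moldflow run, and the parameter bounds are assumptions, not values from the paper.

```python
import random

# Hypothetical bounds for the four machine-setup parameters named above.
BOUNDS = [(160.0, 220.0),   # T_M, melt temperature [deg C]
          (10.0, 40.0),     # P_inj, injection pressure [MPa]
          (8.0, 25.0),      # P_hold, holding pressure [MPa]
          (2.0, 12.0)]      # t_hold, holding time [s]

def cycle_time(params):     # stand-in for a Moldflow simulation call
    return sum((p - lo) / (hi - lo) for p, (lo, hi) in zip(params, BOUNDS))

def evolve(pop_size=30, generations=50, mut=0.1, seed=0):
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in BOUNDS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cycle_time)
        elite = pop[:pop_size // 2]                  # truncation selection
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]           # blend crossover
            child = [min(hi, max(lo, g + rng.gauss(0, mut * (hi - lo))))
                     for g, (lo, hi) in zip(child, BOUNDS)]       # bounded mutation
            children.append(child)
        pop = elite + children
    return min(pop, key=cycle_time)

print(evolve())
```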
Algorithm for direct numerical simulation of emulsion flow through a granular material
NASA Astrophysics Data System (ADS)
Zinchenko, Alexander Z.; Davis, Robert H.
2008-08-01
A multipole-accelerated 3D boundary-integral algorithm capable of modelling an emulsion flow through a granular material by direct multiparticle-multidrop simulations in a periodic box is developed and tested. The particles form a random arrangement at high volume fraction rigidly held in space (including the case of an equilibrium packing in mechanical contact). Deformable drops (with non-deformed diameter comparable with the particle size) squeeze between the particles under a specified average pressure gradient. The algorithm includes recent boundary-integral desingularization tools especially important for drop-solid and drop-drop interactions, the Hebeker representation for solid particle contributions, and unstructured surface triangulations with fixed topology. Multipole acceleration, with two levels of mesh node decomposition (entire drop/solid surfaces and "patches"), is a significant improvement over schemes used in previous, purely multidrop simulations; it remains efficient at very high resolutions (~10^4-10^5 triangular elements per surface) and has no lower limitation on the number of particles or drops. Such resolutions are necessary in the problem to alleviate lubrication difficulties, especially for near-critical squeezing conditions, as well as using ~10^4 time steps and an iterative solution at each step, both for contrast and matching viscosities. Examples are shown for squeezing of 25-40 drops through an array of 9-14 solids, with the total volume fraction of 70% for particles and drops. The flow rates for the drop and continuous phases are calculated. Extensive convergence testing with respect to program parameters (triangulation, multipole truncation, etc.) is made.
NASA Technical Reports Server (NTRS)
Koshak, William; Solakiewicz, Richard
2012-01-01
The ability to estimate the fraction of ground flashes in a set of flashes observed by a satellite lightning imager, such as the future GOES-R Geostationary Lightning Mapper (GLM), would likely improve operational and scientific applications (e.g., severe weather warnings, lightning nitrogen oxides studies, and global electric circuit analyses). A Bayesian inversion method, called the Ground Flash Fraction Retrieval Algorithm (GoFFRA), was recently developed for estimating the ground flash fraction. The method uses a constrained mixed exponential distribution model to describe a particular lightning optical measurement called the Maximum Group Area (MGA). To obtain the optimum model parameters (one of which is the desired ground flash fraction), a scalar function must be minimized. This minimization is difficult because of two problems: (1) Label Switching (LS), and (2) Parameter Identity Theft (PIT). The LS problem is well known in the literature on mixed exponential distributions, and the PIT problem was discovered in this study. Each problem occurs when one allows the numerical minimizer to freely roam through the parameter search space; this allows certain solution parameters to interchange roles which leads to fundamental ambiguities, and solution error. A major accomplishment of this study is that we have employed a state-of-the-art genetic-based global optimization algorithm called Differential Evolution (DE) that constrains the parameter search in such a way as to remove both the LS and PIT problems. To test the performance of the GoFFRA when DE is employed, we applied it to analyze simulated MGA datasets that we generated from known mixed exponential distributions. Moreover, we evaluated the GoFFRA/DE method by applying it to analyze actual MGAs derived from low-Earth orbiting lightning imaging sensor data; the actual MGA data were classified as either ground or cloud flash MGAs using National Lightning Detection Network™ (NLDN) data. Solution error
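The constrained-search idea can be illustrated with a small sketch: fit a two-component mixed exponential to synthetic MGA samples by minimizing the negative log-likelihood with SciPy's differential evolution, using disjoint bounds on the two scale parameters so the roles of the ground- and cloud-flash populations cannot swap. All data and bounds here are synthetic assumptions; this is not GoFFRA itself.

```python
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)
mga = np.concatenate([rng.exponential(20.0, 700),     # synthetic "ground flash" MGAs
                      rng.exponential(120.0, 300)])   # synthetic "cloud flash" MGAs

def neg_log_like(theta):
    f, mu_g, mu_c = theta                 # f plays the role of the ground flash fraction
    pdf = f * np.exp(-mga / mu_g) / mu_g + (1 - f) * np.exp(-mga / mu_c) / mu_c
    return -np.sum(np.log(pdf + 1e-300))  # guard against log(0)

# Disjoint bounds on mu_g and mu_c keep the two components from exchanging roles,
# which is the essence of removing the label-switching ambiguity.
bounds = [(0.0, 1.0), (1.0, 60.0), (60.0, 500.0)]
res = differential_evolution(neg_log_like, bounds, seed=1)
print(res.x)   # estimated fraction and the two scale parameters
```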
Ando, Tadashi; Chow, Edmond; Skolnick, Jeffrey
2013-01-01
Hydrodynamic interactions exert a critical effect on the dynamics of macromolecules. As the concentration of macromolecules increases, by analogy to the behavior of semidilute polymer solutions or the flow in porous media, one might expect hydrodynamic screening to occur. Hydrodynamic screening would have implications both for the understanding of macromolecular dynamics as well as practical implications for the simulation of concentrated macromolecular solutions, e.g., in cells. Stokesian dynamics (SD) is one of the most accurate methods for simulating the motions of N particles suspended in a viscous fluid at low Reynolds number, in that it considers both far-field and near-field hydrodynamic interactions. This algorithm traditionally involves an O(N^3) operation to compute Brownian forces at each time step, although asymptotically faster but more complex SD methods are now available. Motivated by the idea of hydrodynamic screening, the far-field part of the hydrodynamic matrix in SD may be approximated by a diagonal matrix, which is equivalent to assuming that long-range hydrodynamic interactions are completely screened. This approximation allows sparse matrix methods to be used, which can reduce the apparent computational scaling to O(N). Previously there were several simulation studies using this approximation for monodisperse suspensions. Here, we employ newly designed preconditioned iterative methods for both the computation of Brownian forces and the solution of linear systems, and consider the validity of this approximation in polydisperse suspensions. We evaluate the accuracy of the diagonal approximation method using an intracellular-like suspension. The diffusivities of particles obtained with this approximation are close to those with the original method. However, this approximation underestimates intermolecular correlated motions, which is a trade-off between accuracy and computing efficiency. The new method makes it possible to perform large-scale and
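A minimal sketch of the diagonal far-field approximation follows: if long-range hydrodynamic interactions are taken as fully screened, the far-field mobility reduces to each particle's Stokes self-mobility, and the Brownian displacement covariance 2kT·M·Δt factorizes particle by particle in O(N). This is a conceptual illustration, not the authors' preconditioned iterative implementation, and all numbers are assumed.

```python
import numpy as np

# Diagonal (fully screened) far-field approximation: each particle keeps only
# its Stokes self-mobility 1/(6*pi*eta*a_i), so the fluctuation-dissipation
# factorization of the Brownian displacement covariance is trivially O(N).
def brownian_step_diagonal(radii, eta, kT, dt, rng):
    mob = 1.0 / (6.0 * np.pi * eta * radii)    # self-mobility per particle
    sigma = np.sqrt(2.0 * kT * mob * dt)       # per-component displacement std. dev.
    return sigma[:, None] * rng.standard_normal((radii.size, 3))

rng = np.random.default_rng(0)
radii = rng.uniform(1e-9, 5e-9, 1000)          # polydisperse radii [m], assumed
dx = brownian_step_diagonal(radii, eta=1e-3, kT=4.1e-21, dt=1e-9, rng=rng)
```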
NASA Technical Reports Server (NTRS)
Gatski, T. B.; Grosch, C. E.; Rose, M. E.; Spall, R. E.
1987-01-01
A numerical algorithm is presented which is used to solve the unsteady, fully three-dimensional, incompressible Navier-Stokes equations in vorticity-velocity variables. A discussion of the discrete approximation scheme is presented as well as the solution method used to solve the resulting algebraic set of difference equations. Second order spatial and temporal accuracy is verified through solution comparisons with exact results obtained for steady three-dimensional stagnation point flow and unsteady axisymmetric vortex spin-up. In addition, results are presented for the problem of unsteady bubble-type vortex breakdown with emphasis on internal bubble dynamics and structure.
Results from modeling and simulation of chemical downstream etch systems
Meeks, E.; Vosen, S.R.; Shon, J.W.; Larson, R.S.; Fox, C.A.; Buchenauer
1996-05-01
This report summarizes modeling work performed at Sandia in support of Chemical Downstream Etch (CDE) benchmark and tool development programs under a Cooperative Research and Development Agreement (CRADA) with SEMATECH. The Chemical Downstream Etch (CDE) Modeling Project supports SEMATECH Joint Development Projects (JDPs) with Matrix Integrated Systems, Applied Materials, and Astex Corporation in the development of new CDE reactors for wafer cleaning and stripping processes. These dry-etch reactors replace wet-etch steps in microelectronics fabrication, enabling compatibility with other process steps and reducing the use of hazardous chemicals. Models were developed at Sandia to simulate the gas flow, chemistry and transport in CDE reactors. These models address the essential components of the CDE system: a microwave source, a transport tube, a showerhead/gas inlet, and a downstream etch chamber. The models have been used in tandem to determine the evolution of reactive species throughout the system, and to make recommendations for process and tool optimization. A significant part of this task has been in the assembly of a reasonable set of chemical rate constants and species data necessary for successful use of the models. Often the kinetic parameters were uncertain or unknown. For this reason, a significant effort was placed on model validation to obtain industry confidence in the model predictions. Data for model validation were obtained from the Sandia Molecular Beam Mass Spectrometry (MBMS) experiments, from the literature, from the CDE Benchmark Project (also part of the Sandia/SEMATECH CRADA), and from the JDP partners. The validated models were used to evaluate process behavior as a function of microwave-source operating parameters, transport-tube geometry, system pressure, and downstream chamber geometry. In addition, quantitative correlations were developed between CDE tool performance and operation set points.
Time-step Considerations in Particle Simulation Algorithms for Coulomb Collisions in Plasmas
Cohen, B I; Dimits, A; Friedman, A; Caflisch, R
2009-10-29
The accuracy of first-order Euler and higher-order time-integration algorithms for grid-based Langevin-equation collision models in a specific relaxation test problem is assessed. We show that statistical noise errors can overshadow time-step errors and argue that statistical noise errors can be conflated with time-step effects. Using a higher-order integration scheme may not achieve any benefit in accuracy for examples of practical interest. We also investigate the collisional relaxation of an initial electron-ion relative drift and the collisional relaxation to a resistive steady state in which a quasi-steady current is driven by a constant applied electric field, as functions of the time step used to resolve the collision processes, using binary and grid-based test-particle Langevin-equation models. We compare results from two grid-based Langevin-equation collision algorithms to results from a binary collision algorithm for modeling electron-ion collisions. Some guidance is provided regarding how large a time step can be used, compared to the inverse of the characteristic collision frequency, for specific relaxation processes.
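For orientation, the sketch below applies a first-order (Euler-Maruyama) Langevin step to a drifting velocity distribution; a constant collision frequency and thermal speed are assumed here, unlike the velocity-dependent coefficients of the models studied in the paper.

```python
import numpy as np

# Toy first-order Langevin relaxation test: dv = -nu*v*dt + sqrt(2*nu*v_th^2*dt)*xi.
# The stationary spread is v_th and the drift decays as exp(-nu*t). Note that the
# statistical error in sampled moments scales as 1/sqrt(N), which is how noise can
# mask the O(dt) time-step bias discussed above.
def euler_langevin_step(v, nu, v_th, dt, rng):
    drag = -nu * v * dt
    kick = np.sqrt(2.0 * nu * v_th**2 * dt) * rng.standard_normal(v.shape)
    return v + drag + kick

rng = np.random.default_rng(0)
v = rng.normal(1.0, 0.3, 100_000)          # initial drifting distribution
for _ in range(200):
    v = euler_langevin_step(v, nu=1.0, v_th=0.3, dt=0.05, rng=rng)
print(v.mean(), v.std())                   # drift ~0; spread relaxes toward v_th
```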
NASA Astrophysics Data System (ADS)
Jung, Joon-Hee; Arakawa, Akio
2010-04-01
A new framework for modeling the atmosphere, which we call the quasi-3D (Q3D) multi-scale modeling framework (MMF), is developed with the objective of including cloud-scale three-dimensional effects in a GCM without necessarily using a global cloud-resolving model (CRM). It combines a GCM with a Q3D CRM that has the horizontal domain consisting of two perpendicular sets of channels, each of which contains a locally 3D grid-point array. For computing efficiency, the widths of the channels are chosen to be narrow. Thus, it is crucial to select a proper lateral boundary condition to realistically simulate the statistics of cloud and cloud-associated processes. Among the various possibilities, a periodic lateral boundary condition is chosen for the deviations from background fields that are obtained by interpolations from the GCM grid points. Since the deviations tend to vanish as the GCM grid size approaches that of the CRM, the whole system of the Q3D MMF can converge to a fully 3D global CRM. Consequently, the horizontal resolution of the GCM can be freely chosen depending on the objective of application, without changing the formulation of model physics. To evaluate the newly developed Q3D CRM in an efficient way, idealized experiments have been performed using a small horizontal domain. In these tests, the Q3D CRM uses only one pair of perpendicular channels with only two grid points across each channel. Comparing the simulation results with those of a fully 3D CRM, it is concluded that the Q3D CRM can reproduce most of the important statistics of the 3D solutions, including the vertical distributions of cloud water and precipitants, vertical transports of potential temperature and water vapor, and the variances and covariances of dynamical variables. The main improvement from a corresponding 2D simulation appears in the surface fluxes and the vorticity transports that cause the mean wind to change. A comparison with a simulation using a coarse-resolution 3D CRM
Urbina-Villalba, German
2009-03-01
The first algorithm for Emulsion Stability Simulations (ESS) was presented at the V Conferencia Iberoamericana sobre Equilibrio de Fases y Diseño de Procesos [Luis, J.; García-Sucre, M.; Urbina-Villalba, G. Brownian Dynamics Simulation of Emulsion Stability. In: Equifase 99. Libro de Actas, 1st Ed., Tojo J., Arce, A., Eds.; Solucion's: Vigo, Spain, 1999; Volume 2, pp. 364-369]. The former version of the program consisted of a minor modification of the Brownian Dynamics algorithm to account for the coalescence of drops. The present version of the program contains elaborate routines for time-dependent surfactant adsorption, average diffusion constants, and Ostwald ripening.
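In the same spirit, a bare-bones Brownian-dynamics-with-coalescence step might look like the following sketch; interaction forces, surfactant adsorption, and Ostwald ripening are deliberately omitted, and all names and values are illustrative rather than the ESS program's.

```python
import numpy as np

# Free-draining Brownian move, then merge any overlapping pair into a single
# volume-conserving drop (midpoint position used for simplicity). The O(N^2)
# pair loop is fine for a sketch but not for production use.
def bd_coalescence_step(pos, radii, D0_over_r, dt, rng):
    D = D0_over_r / radii                       # Stokes-Einstein: D = kT/(6*pi*eta*r)
    pos = pos + np.sqrt(2.0 * D * dt)[:, None] * rng.standard_normal(pos.shape)
    alive = np.ones(len(radii), dtype=bool)
    for i in range(len(radii)):
        for j in range(i + 1, len(radii)):
            if alive[i] and alive[j] and \
               np.linalg.norm(pos[i] - pos[j]) < radii[i] + radii[j]:
                radii[i] = (radii[i]**3 + radii[j]**3) ** (1.0 / 3.0)  # conserve volume
                pos[i] = (pos[i] + pos[j]) / 2.0
                alive[j] = False
    return pos[alive], radii[alive]

rng = np.random.default_rng(1)
pos, radii = rng.uniform(0, 1e-6, (50, 3)), np.full(50, 5e-8)   # assumed SI values
pos, radii = bd_coalescence_step(pos, radii, D0_over_r=2.2e-19, dt=1e-6, rng=rng)
```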
Diamond-NICAM-SPRINTARS: downscaling and simulation results
NASA Astrophysics Data System (ADS)
Uchida, J.
2012-12-01
As part of the initiative "Research Program on Climate Change Adaptation" (RECCA), which investigates how predicted large-scale climate change may affect local weather and examines the atmospheric hazards that cities may encounter as a result, so as to guide policy makers in implementing new environmental measures, the "Development of Seamless Chemical AssimiLation System and its Application for Atmospheric Environmental Materials" (SALSA) project is funded by the Japanese Ministry of Education, Culture, Sports, Science and Technology and is focused on creating a regional (local) scale assimilation system that can accurately recreate and predict the transport of carbon dioxide and other air pollutants. In this study, a regional version of the next-generation global cloud-resolving model NICAM (Non-hydrostatic ICosahedral Atmospheric Model) (Tomita and Satoh, 2004) is run together with the transport model SPRINTARS (Spectral Radiation Transport Model for Aerosol Species) (Takemura et al., 2000) and the chemical transport model CHASER (Sudo et al., 2002) to simulate aerosols across urban cities (over the Kanto region, including metropolitan Tokyo). The presentation will mainly be on "Diamond-NICAM" (Figure 1), a regional climate model version of the global climate model NICAM, and its dynamical downscaling methodologies. A global NICAM grid can be described as twenty identical equilateral triangular panels covering the entire globe, with grid points at the corners of those panels; to increase the resolution (called the "global-level" in NICAM), additional points are added at the midpoints of existing adjacent point pairs, so the number of panels increases fourfold with each increment of one global-level. A Diamond-NICAM, on the other hand, uses only two of those initial triangular panels and thus covers only part of the globe. In addition, NICAM uses an adaptive mesh scheme and its grid size can gradually decrease, as the grid
Electron transport in the solar wind -results from numerical simulations
NASA Astrophysics Data System (ADS)
Smith, Håkan; Marsch, Eckart; Helander, Per
A conventional fluid approach is in general insufficient for a correct description of electron transport in weakly collisional plasmas such as the solar wind. The classical Spitzer-Härm theory is not valid when the Knudsen number (the mean free path divided by the length scale of temperature variation) is greater than ~10^-2. Despite this, the heat transport from Spitzer-Härm theory is widely used in situations with relatively long mean free paths. For realistic Knudsen numbers in the solar wind, the electron distribution function develops suprathermal tails, and the departure from a local Maxwellian can be significant at the energies which contribute the most to the heat flux moment. To accurately model heat transport, a kinetic approach is therefore more adequate. Different techniques have been used previously, e.g. particle simulations [Landi, 2003], spectral methods [Pierrard, 2001], the so-called 16-moment method [Lie-Svendsen, 2001], and approximation by kappa functions [Dorelli, 2003]. In the present study we solve the Fokker-Planck equation for electrons in one spatial dimension and two velocity dimensions. The distribution function is expanded in Laguerre polynomials in energy, and a finite difference scheme is used to solve the equation in the spatial dimension and the velocity pitch angle. The ion temperature and density profiles are assumed to be known, but the electric field is calculated self-consistently to guarantee quasi-neutrality. The kinetic equation is of a two-way diffusion type, for which the distribution of particles entering the computational domain at both ends of the spatial dimension must be specified, leaving the outgoing distributions to be calculated. The long mean free path of the suprathermal electrons has the effect that the details of the boundary conditions play an important role in determining the particle and heat fluxes as well as the electric potential drop across the domain. Dorelli, J. C., and J. D. Scudder, J. D
NASA Technical Reports Server (NTRS)
Morrell, F. R.; Bailey, M. L.; Motyka, P. R.
1988-01-01
Flight test results of a vector-based fault-tolerant algorithm for a redundant strapdown inertial measurement unit are presented. Because the inertial sensors provide flight-critical information for flight control and navigation, failure detection and isolation is developed in terms of a multi-level structure. Threshold compensation techniques for gyros and accelerometers, developed to enhance the sensitivity of the failure detection process to low-level failures, are presented. Four flight tests, conducted in a commercial transport type environment, were used to determine the ability of the failure detection and isolation algorithm to detect failure signals, such as hard-over, null, or bias-shift failures. The algorithm provided timely detection and correct isolation of flight-control and low-level failures. The flight tests of the vector-based algorithm demonstrated its capability to provide false-alarm-free dual fail-operational performance for the skewed array of inertial sensors.
Yang, Qidong; Zuo, Hongchao; Li, Weidong
2016-01-01
Improving the capability of land-surface process models to simulate soil moisture assists in better understanding the atmosphere-land interaction. In semi-arid regions, due to limited near-surface observational data and large errors in large-scale parameters obtained by the remote sensing method, there exist uncertainties in land surface parameters, which can cause large offsets between the simulated results of land-surface process models and the observational data for the soil moisture. In this study, observational data from the Semi-Arid Climate Observatory and Laboratory (SACOL) station in the semi-arid loess plateau of China were divided into three datasets: summer, autumn, and summer-autumn. By combining the particle swarm optimization (PSO) algorithm and the land-surface process model SHAW (Simultaneous Heat and Water), the soil and vegetation parameters that are related to the soil moisture but difficult to obtain by observations are optimized using the three datasets. On this basis, the SHAW model was run with the optimized parameters to simulate the characteristics of the land-surface process in the semi-arid loess plateau. Simultaneously, the default SHAW model was run with the same atmospheric forcing as a comparison test. Simulation results revealed the following: parameters optimized by the particle swarm optimization algorithm in all simulation tests improved simulations of the soil moisture and latent heat flux; differences between simulated results and observational data are clearly reduced, but simulation tests involving the adoption of optimized parameters cannot simultaneously improve the simulation results for the net radiation, sensible heat flux, and soil temperature. Optimized soil and vegetation parameters based on different datasets have the same order of magnitude but are not identical; soil parameters only vary to a small degree, but the variation range of vegetation parameters is large.
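The PSO component can be sketched generically as below; the objective shown is only a stand-in for running SHAW and scoring simulated against observed soil moisture, and the parameter names and bounds are illustrative assumptions.

```python
import numpy as np

# Bare-bones particle swarm optimization. In the study's setting, 'objective'
# would wrap a SHAW run and return, e.g., the RMSE of simulated vs observed
# soil moisture; here it is a placeholder quadratic for demonstration.
def pso(objective, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    x = rng.uniform(lo, hi, (n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pval = x.copy(), np.array([objective(p) for p in x])
    g = pbest[pval.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)                 # keep particles inside the bounds
        val = np.array([objective(p) for p in x])
        improved = val < pval
        pbest[improved], pval[improved] = x[improved], val[improved]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()

# Hypothetical two-parameter calibration (bounds are illustrative).
bounds = np.array([[0.1, 0.6], [0.01, 1.0]])
best, err = pso(lambda p: float(np.sum((p - 0.3) ** 2)), bounds)
print(best, err)
```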
Covariate-Based Assignment to Treatment Groups: Some Simulation Results.
ERIC Educational Resources Information Center
Jain, Ram B.; Hsu, Tse-Chi
1980-01-01
Six estimators of treatment effect when assignment to treatment groups is based on the covariate are compared in terms of empirical standard errors and percent relative bias. Results show that the simple analysis-of-covariance estimator is not always appropriate. (Author/GK)
Preliminary Benchmarking and MCNP Simulation Results for Homeland Security
Robert Hayes
2008-03-01
The purpose of this article is to create Monte Carlo N-Particle (MCNP) input stacks for benchmarked measurements sufficient for future perturbation studies and analysis. The approach was to utilize historical experimental measurements to recreate the empirical spectral results in MCNP, both qualitatively and quantitatively. Results demonstrate that perturbation analysis of benchmarked MCNP spectra can be used to obtain a better understanding of field measurement results which may be of national interest. If one or more spectral radiation measurements are made in the field and deemed of national interest, the potential source distribution, naturally occurring radioactive material shielding, and interstitial materials can only be estimated in many circumstances. The effects from these factors on the resultant spectral radiation measurements can be very confusing. If benchmarks exist which are sufficiently similar to the suspected configuration, these benchmarks can then be compared to the suspect measurements. Having these benchmarks with validated MCNP input stacks can substantially improve the predictive capability of experts supporting these efforts.
Analysis of Numerical Simulation Results of LIPS-200 Lifetime Experiments
NASA Astrophysics Data System (ADS)
Chen, Juanjuan; Zhang, Tianping; Geng, Hai; Jia, Yanhui; Meng, Wei; Wu, Xianming; Sun, Anbang
2016-06-01
Accelerator grid structural and electron backstreaming failures are the most important factors affecting the ion thruster's lifetime. During the thruster's operation, Charge Exchange Xenon (CEX) ions are generated from collisions between plasma and neutral atoms. Those CEX ions frequently strike the grid's barrel and wall, which causes the failures of the grid system. In order to validate whether the 20 cm Lanzhou Ion Propulsion System (LIPS-200) satisfies China's communication satellite platform's application requirement for North-South Station Keeping (NSSK), this study analyzed the measured depth of the pit/groove on the accelerator grid's wall and the variation of the aperture diameter, and estimated the operating lifetime of the ion thruster. Differing from the previous method, this paper first presents the experimental results after 5500 h of accumulated operation of the LIPS-200 ion thruster. Then, based on these results, theoretical analysis and numerical calculations were performed to predict the on-orbit lifetime of LIPS-200. The results obtained allow a more accurate calculation of the reliability and analysis of the failure modes of the ion thruster. The results indicated that the predicted lifetime of LIPS-200 was about 13218.1 h, which satisfies the required lifetime of 11000 h very well.
Real-Time Simulation for Verification and Validation of Diagnostic and Prognostic Algorithms
NASA Technical Reports Server (NTRS)
Aguilar, Robet; Luu, Chuong; Santi, Louis M.; Sowers, T. Shane
2005-01-01
To verify that a health management system (HMS) performs as expected, a virtual system simulation capability, including interaction with the associated platform or vehicle, very likely will need to be developed. The rationale for developing this capability is discussed and includes the limited capability to seed faults into the actual target system due to the risk of potential damage to high value hardware. The capability envisioned would accurately reproduce the propagation of a fault or failure as observed by sensors located at strategic locations on and around the target system and would also accurately reproduce the control system and vehicle response. In this way, HMS operation can be exercised over a broad range of conditions to verify that it meets requirements for accurate, timely response to actual faults with adequate margin against false and missed detections. An overview is also presented of a real-time rocket propulsion health management system laboratory which is available for future rocket engine programs. The health management elements and approaches of this lab are directly applicable for future space systems. In this paper the various components are discussed and the general fault detection, diagnosis, isolation and the response (FDIR) concept is presented. Additionally, the complexities of V&V (Verification and Validation) for advanced algorithms and the simulation capabilities required to meet the changing state-of-the-art in HMS are discussed.
NASA Technical Reports Server (NTRS)
Clarke, R.; Lintereur, L.; Bahm, C.
2016-01-01
A desire for more complete documentation of the National Aeronautics and Space Administration (NASA) Armstrong Flight Research Center (AFRC), Edwards, California legacy code used in the core simulation has led to this effort to fully document the oblate Earth six-degree-of-freedom equations of motion and integration algorithm. The authors of this report have taken much of the earlier work of the simulation engineering group and used it as a jumping-off point for this report. The largest addition this report makes is that each element of the equations of motion is traced back to first principles and at no point is the reader forced to take an equation on faith alone. There are no discoveries of previously unknown principles contained in this report; this report is a collection and presentation of textbook principles. The value of this report is that those textbook principles are herein documented in standard nomenclature that matches the form of the computer code DERIVC. Previous handwritten notes are much of the backbone of this work; however, in almost every area, derivations are explicitly shown to assure the reader that the equations which make up the oblate Earth version of the computer routine, DERIVC, are correct.
Head Kinematics Resulting from Simulated Blast Loading Scenarios
2012-09-17
... pressure wave and the body, which commonly damages air-filled organs such as the lungs, gastrointestinal tract, and ears. Secondary blast injury ... subsequent impact with surrounding obstacles or the ground. Quaternary injury is the result of other factors, including burns or inhalation of dust and gas.
Diffusion of emergency warning: Comparing empirical and simulation results
Rogers, G.O.; Sorensen, J.H.
1988-10-01
As officials consider emergency warning systems to alert the public to potential danger in areas surrounding hazardous facilities, the issue of warning system effectiveness is of critical importance. The purpose of this paper is to present the results of an analysis on the timing of warning system information dissemination including the alert of the public and delivery of a warning message. A general model of the diffusion of emergency warning is specified as a logistic function. Alternative warning systems are characterized in terms of the parameters of the model, which generally constrain the diffusion process to account for judged maximum penetration of each system for various locations and likelihood of being in those places by time of day. The results indicate that the combination of either telephone ring-down warning systems or tone-alert radio systems combined with sirens provide the most effective warning system under conditions of either very rapid onset, or close proximity or both. These results indicate that single technology systems provide adequate warning effectiveness when available warning time (to the public after detection and the decision to warn) extends to as much as an hour. Moreover, telephone ring-down systems provide similar coverage at approximately 30 minutes of available public warning time. 36 refs., 5 figs., 3 tabs.
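The paper's logistic diffusion model is easy to sketch: the fraction of the public alerted by time t rises logistically toward a system-specific maximum penetration. The parameter values below are illustrative assumptions, not the study's fitted values.

```python
import numpy as np

# Logistic diffusion-of-warning model: fraction alerted by time t, saturating
# at a system-specific maximum penetration p_max. Parameters are illustrative.
def fraction_warned(t, p_max, rate, t_half):
    return p_max / (1.0 + np.exp(-rate * (t - t_half)))

t = np.arange(0, 61, 10)   # minutes after the decision to warn
print(fraction_warned(t, p_max=0.95, rate=0.30, t_half=10.0))  # e.g., ring-down + siren
print(fraction_warned(t, p_max=0.80, rate=0.10, t_half=25.0))  # e.g., single technology
```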
Aeolian Simulations: A Comparison of Numerical and Experimental Results
NASA Astrophysics Data System (ADS)
Mathews, O.; Burr, D. M.; Bridges, N. T.; Lyne, J. E.; Marshall, J. R.; Greeley, R.; White, B. R.; Hills, J.; Smith, K.; Prissel, T. C.; Aliaga-Caro, J. F.
2010-12-01
Aeolian processes are a major geomorphic agent on solid planetary bodies with atmospheres (Earth, Mars, Venus, and Titan). This paper describes preliminary efforts to model aeolian saltation using computational fluid dynamics (CFD) and to compare the results with those obtained in wind tunnel testing conducted in the Planetary Aeolian Laboratory at NASA Ames Research Center at ambient pressure. The end goal of the project is to develop an experimentally validated CFD approach for modeling aeolian sediment transport on Titan and other planetary bodies. The MARSWIT open-circuit tunnel used in this work was specifically designed for atmospheric boundary layer studies. It is a variable-speed, continuous flow tunnel with a test section 1.0 m by 1.2 m in size; the tunnel is able to operate at pressures from 10 millibar to one atmosphere. Flow trips near the tunnel inlet ensure a fully developed, turbulent boundary layer in the test section. Wind speed and axial velocity profiles can be measured with a traversing pitot tube. In this study, sieved walnut shell particles (Greeley et al. 1976) with a density of ~1.1 g/cm^3 were used to scale the low-gravity conditions and low sediment density of a body of interest to terrestrial conditions. This sediment was placed in the tunnel, and the freestream airspeed was raised to 5.4 m/s. A Phantom v12 camera imaged the resulting particle motion at 1000 frames per second, which was analyzed with ImageJ open-source software (Fig. 1). Airflow in the tunnel was modeled with FLUENT, a commercial CFD program. The turbulence scheme used in FLUENT to close the Navier-Stokes equations was a first-order k-epsilon model. These methods produced computational velocity profiles that agree with experimental data to within 5-10%. Once modeling of the flow field had been achieved, an Euler-Lagrangian scheme was employed, treating the particles as spheres and tracking each particle at its center. The particles are assumed to interact with
Numerical Simulation of Micronozzles with Comparison to Experimental Results
NASA Astrophysics Data System (ADS)
Thornber, B.; Chesta, E.; Gloth, O.; Brandt, R.; Schwane, R.; Perigo, D.; Smith, P.
2004-10-01
A numerical analysis of conical micronozzle flows has been conducted using the commercial software package CFD-RC FASTRAN [13]. The numerical results have been validated by comparison with direct thrust and mass flow measurements recently performed in ESTEC Propulsion Laboratory on Polyflex Space Ltd. 10mN Cold-Gas thrusters in the frame of ESA CryoSat mission. The flow is viscous dominated, with a throat Reynolds number of 5000, and the relatively large length of the nozzle causes boundary layer effects larger than usual for nozzles of this size. This paper discusses in detail the flow physics such as boundary layer growth and structure, and the effects of rarefaction. Furthermore a number of different domain sizes and exit boundary conditions are used to determine the optimum combination of computational time and accuracy.
Larsen, Ross E; Bedard-Hearn, Michael J; Schwartz, Benjamin J
2006-10-12
Mixed quantum/classical (MQC) molecular dynamics simulation has become the method of choice for simulating the dynamics of quantum mechanical objects that interact with condensed-phase systems. There are many MQC algorithms available, however, and in cases where nonadiabatic coupling is important, different algorithms may lead to different results. Thus, it has been difficult to reach definitive conclusions about relaxation dynamics using nonadiabatic MQC methods because one is never certain whether any given algorithm includes enough of the necessary physics. In this paper, we explore the physics underlying different nonadiabatic MQC algorithms by comparing and contrasting the excited-state relaxation dynamics of the prototypical condensed-phase MQC system, the hydrated electron, calculated using different algorithms, including: fewest-switches surface hopping, stationary-phase surface hopping, and mean-field dynamics with surface hopping. We also describe in detail how a new nonadiabatic algorithm, mean-field dynamics with stochastic decoherence (MF-SD), is to be implemented for condensed-phase problems, and we apply MF-SD to the excited-state relaxation of the hydrated electron. Our discussion emphasizes the different ways quantum decoherence is treated in each algorithm and the resulting implications for hydrated-electron relaxation dynamics. We find that for three MQC methods that use Tully's fewest-switches criterion to determine surface hopping probabilities, the excited-state lifetime of the electron is the same. Moreover, the nonequilibrium solvent response function of the excited hydrated electron is the same with all of the nonadiabatic MQC algorithms discussed here, so that all of the algorithms would produce similar agreement with experiment. Despite the identical solvent response predicted by each MQC algorithm, we find that MF-SD allows much more mixing of multiple basis states into the quantum wave function than do other methods. This leads to an
NASA Astrophysics Data System (ADS)
Ameli, P.; Detwiler, R. L.; Elkhoury, J. E.; Morris, J. P.
2012-12-01
Fractures are often the main pathways for subsurface fluid flow especially in rocks with low matrix porosity. Therefore, the hydro-mechanical properties of fractures are of fundamental concern for subsurface CO2 sequestration, enhanced geothermal energy production, enhanced oil recovery, and nuclear waste disposal. Chemical and mechanical stresses induced during these applications may lead to significant alteration of the hydro-mechanical properties of fractures. Laboratory experiments aimed at understanding the chemo-hydro-mechanical response of fractures have shown a range of results that contradict simple conceptual models. For example, under conditions favoring mineral dissolution, where one would expect an overall increase in permeability and fracture aperture, permeability increases under some conditions and decreases under others. Recent experiments have attempted to link these core-scale observations to the relevant small-scale processes occurring within fractures. Results suggest that the loss of mechanical strength in asperities due to chemical alteration may cause non-uniform deformation and alteration of fracture apertures. However, it remains difficult to directly measure the coupled chemical and mechanical processes that lead to alteration of contacting fracture surfaces, which challenges our ability to predict the long-term evolution of the hydro-mechanical properties of fractures. Here, we present a computational model that uses micro-scale surface roughness and explicitly couples dissolution and elastic deformation to calculate local alterations in fracture aperture under chemical and mechanical stresses. Chemical alteration of the fracture surfaces is modeled using a depth-averaged algorithm of fracture flow and reactive transport. Then, we deform the resulting altered fracture-surfaces using an algorithm that calculates the elastic deformation. Nonuniform dissolution may cause the location of the resultant force between the two contacting
NASA Astrophysics Data System (ADS)
Yang, Tiantian; Gao, Xiaogang; Sorooshian, Soroosh; Li, Xin
2016-03-01
The controlled outflows from a reservoir or dam are highly dependent on the decisions made by the reservoir operators, rather than on a natural hydrological process. Differences exist between the natural upstream inflows to reservoirs and the controlled outflows from reservoirs that supply the downstream users. With the decision maker's awareness of changing climate, reservoir management requires adaptable means to incorporate more information into decision making, such as water delivery requirements, environmental constraints, dry/wet conditions, etc. In this paper, a robust reservoir outflow simulation model is presented, which incorporates one of the well-developed data-mining models (Classification and Regression Tree, CART) to predict the complicated human-controlled reservoir outflows and extract the reservoir operation patterns. A shuffled cross-validation approach is further implemented to improve CART's predictive performance. An application study of nine major reservoirs in California is carried out. Results produced by the enhanced CART, the original CART, and random forest are compared with observations. The statistical measurements show that the enhanced CART and random forest outperform the CART control run in general, and the enhanced CART algorithm gives better predictive performance than random forest in simulating the peak flows. The results also show that the proposed model is able to consistently and reasonably predict the expert release decisions. Experiments indicate that the release operation in the Oroville Lake is significantly dominated by the SWP allocation amount, and reservoirs with low elevation are more sensitive to inflow amount than others.
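A minimal sketch of the CART-with-shuffled-cross-validation idea follows, using scikit-learn with placeholder data; the study's actual predictors, reservoir records, and tree settings are not reproduced here.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import KFold, cross_val_score

# Placeholder data: X would hold candidate predictors (inflow, storage,
# allocation amount, month, ...) and y the observed daily releases.
rng = np.random.default_rng(0)
X = rng.random((1000, 5))
y = rng.random(1000)

# Shuffled k-fold cross-validation scores a CART regressor; shuffling breaks
# the temporal ordering of records before splitting into folds.
cv = KFold(n_splits=5, shuffle=True, random_state=0)
tree = DecisionTreeRegressor(max_depth=8, min_samples_leaf=20)
scores = cross_val_score(tree, X, y, cv=cv, scoring="neg_root_mean_squared_error")
print(scores.mean())
```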
NASA Astrophysics Data System (ADS)
Kuo, Chin-Hwa; Michel, Anthony N.; Gray, William G.
The placement of pumps and the selection of pumping rates are the most important issues in designing contaminated-groundwater remediation systems that use a pump-and-treat strategy. Three nonlinear optimization formulations are proposed to address these problems. The first formulation considers hydraulic constraints and reduces the plume concentration to a specified regulation standard within a given planning time while minimizing capital cost. The second formulation minimizes residual contaminant in a fixed period under hydraulic constraints only. The third formulation is similar to the second; however, in this formulation the number of pumps is prespecified by using the results from the first formulation. The inclusion of well installation costs in the first problem formulation results in a nonsmooth objective function. For such problems, only local optimum solutions can be expected from conventional nonlinear optimization techniques. In the present paper, the simulated annealing algorithm is used to overcome these difficulties. Specific simulation studies indicate that the method advanced herein is promising and involves acceptable computation times.
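A generic simulated-annealing skeleton of the kind used for such nonsmooth objectives is sketched below; in the paper's setting, the cost function would wrap a groundwater flow and transport simulation plus well-installation and pumping costs, all of which are represented here only by placeholders.

```python
import math
import random

# Generic simulated annealing: accept uphill moves with probability
# exp(-delta/T) so the search can escape local minima of a nonsmooth cost.
def anneal(cost, x0, neighbor, t0=1.0, cooling=0.95, steps=2000, seed=0):
    rng = random.Random(seed)
    x, fx = x0, cost(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(steps):
        y = neighbor(x, rng)
        fy = cost(y)
        if fy < fx or rng.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling                      # geometric cooling schedule
    return best, fbest

# Example move: perturb one decision variable (e.g., one pumping rate).
def neighbor(x, rng):
    y = list(x)
    i = rng.randrange(len(y))
    y[i] += rng.uniform(-0.1, 0.1)
    return y

# Placeholder cost standing in for the simulation-based objective.
print(anneal(lambda x: sum(xi**2 for xi in x), [1.0, -0.5, 0.3], neighbor))
```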
2HOT: An Improved Parallel Hashed Oct-Tree N-Body Algorithm for Cosmological Simulation
Warren, Michael S.
2014-01-01
We report on improvements made over the past two decades to our adaptive treecode N-body method (HOT). A mathematical and computational approach to the cosmological N-body problem is described, with performance and scalability measured up to 256k (2^18) processors. We present error analysis and scientific application results from a series of more than ten 69-billion (4096^3) particle cosmological simulations, accounting for 4×10^20 floating point operations. These results include the first simulations using the new constraints on the standard model of cosmology from the Planck satellite. Our simulations set a new standard for accuracy and scientific throughput, while meeting or exceeding the computational efficiency of the latest generation of hybrid TreePM N-body methods.
Chiang, Yun-Wei; Freed, Jack H.
2011-01-01
The Lanczos algorithm (LA) is a useful iterative method for the reduction of a large matrix to tridiagonal form. It is a storage efficient procedure requiring only the preceding two Lanczos vectors to compute the next. The quasi-minimal residual (QMR) method is a powerful method for the solution of linear equation systems, Ax = b. In this report we provide another application of the QMR method: we incorporate QMR into the LA to monitor the convergence of the Lanczos projections in the reduction of large sparse matrices. We demonstrate that the combined approach of the LA and QMR can be utilized efficiently for the orthogonal transformation of large, but sparse, complex, symmetric matrices, such as are encountered in the simulation of slow-motional 1D- and 2D-electron spin resonance (ESR) spectra. Especially in the 2D-ESR simulations, it is essential that we store all of the Lanczos vectors obtained in the course of the LA recursions and maintain their orthogonality. In the LA-QMR application, the QMR weight matrix mitigates the problem that the Lanczos vectors lose orthogonality after many LA projections. This enables substantially more Lanczos projections, as required to achieve convergence for the more challenging ESR simulations. It, therefore, provides better accuracy for the eigenvectors and the eigenvalues of the large sparse matrices originating in 2D-ESR simulations than does the previously employed method, which is a combined approach of the LA and the conjugate-gradient (CG) methods, as evidenced by the quality and convergence of the 2D-ESR simulations. Our results show that very slow-motional 2D-ESR spectra at W-band (95 GHz) can be reliably simulated using the LA-QMR method, whereas the LA-CG consistently fails. The improvements due to the LA-QMR are of critical importance in enabling the simulation of high-frequency 2D-ESR spectra, which are characterized by their very high resolution to molecular orientation. PMID:21261335
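For reference, the textbook three-term Lanczos recursion looks like the following sketch (written for the real symmetric case for brevity; the ESR problem uses the complex-symmetric analogue with an unconjugated inner product). It illustrates the storage property noted above: only the two most recent Lanczos vectors are needed to generate the next.

```python
import numpy as np

# Three-term Lanczos recursion reducing a symmetric matrix A to tridiagonal
# form: w = A v_j; alpha_j = v_j.w; w -= alpha_j v_j + beta_{j-1} v_{j-1};
# beta_j = ||w||; v_{j+1} = w / beta_j.
def lanczos(A, v0, m):
    n = v0.size
    alpha, beta = np.zeros(m), np.zeros(m - 1)
    v_prev = np.zeros(n)
    v = v0 / np.linalg.norm(v0)
    for j in range(m):
        w = A @ v
        alpha[j] = v @ w
        w -= alpha[j] * v + (beta[j - 1] * v_prev if j > 0 else 0)
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            v_prev, v = v, w / beta[j]
    # T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1) approximates
    # A's extremal spectrum for m << n (no reorthogonalization is done here).
    return alpha, beta
```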
Densmore, J.D.; Park, H.; Wollaber, A.B.; Rauenzahn, R.M.; Knoll, D.A.
2015-03-01
We present a moment-based acceleration algorithm applied to Monte Carlo simulation of thermal radiative-transfer problems. Our acceleration algorithm employs a continuum system of moments to accelerate convergence of stiff absorption–emission physics. The combination of energy-conserving tallies and the use of an asymptotic approximation in optically thick regions remedy the difficulties of local energy conservation and mitigation of statistical noise in such regions. We demonstrate the efficiency and accuracy of the developed method. We also compare directly to the standard linearization-based method of Fleck and Cummings [1]. A factor of 40 reduction in total computational time is achieved with the new algorithm for an equivalent (or more accurate) solution as compared with the Fleck–Cummings algorithm.
NASA Astrophysics Data System (ADS)
Li, Zhi-Hui; Peng, Ao-Ping; Zhang, Han-Xin; Yang, Jaw-Yen
2015-04-01
This article reviews rarefied gas flow computations based on nonlinear model Boltzmann equations using deterministic high-order gas-kinetic unified algorithms (GKUA) in phase space. The nonlinear Boltzmann model equations considered include the BGK model, the Shakhov model, the Ellipsoidal Statistical model and the Morse model. Several high-order gas-kinetic unified algorithms, which combine the discrete velocity ordinate method in velocity space and the compact high-order finite-difference schemes in physical space, are developed. The parallel strategies implemented with the accompanying algorithms are of equal importance. Accurate computations of rarefied gas flow problems using various kinetic models over wide ranges of Mach numbers 1.2-20 and Knudsen numbers 0.0001-5 are reported. The effects of different high resolution schemes on the flow resolution under the same discrete velocity ordinate method are studied. A conservative discrete velocity ordinate method to ensure the kinetic compatibility condition is also implemented. The present algorithms are tested for the one-dimensional unsteady shock-tube problems with various Knudsen numbers, the steady normal shock wave structures for different Mach numbers, the two-dimensional flows past a circular cylinder and a NACA 0012 airfoil to verify the present methodology and to simulate gas transport phenomena covering various flow regimes. Illustrations of large scale parallel computations of three-dimensional hypersonic rarefied flows over the reusable sphere-cone satellite and the re-entry spacecraft using almost the largest computer systems available in China are also reported. The present computed results are compared with the theoretical prediction from gas dynamics, related DSMC results, slip N-S solutions and experimental data, and good agreement can be found. The numerical experience indicates that although the direct model Boltzmann equation solver in phase space can be computationally expensive
Lunar Regolith Characterization for Simulant Design and Evaluation using Figure of Merit Algorithms
NASA Technical Reports Server (NTRS)
Schrader, Christian M.; Rickman, Douglas L.; Melemore, Carole A.; Fikes, John C.; Stoeser, Douglas B.; Wentworth, Susan J.; McKay, David S.
2009-01-01
NASA's Marshall Space Flight Center (MSFC), in conjunction with the United States Geological Survey (USGS) and aided by personnel from the Astromaterials Research and Exploration Science group at Johnson Space Center (ARES-JSC), is implementing a new data acquisition strategy to support the development and evaluation of lunar regolith simulants. The first analyses of lunar regolith samples by the simulant group were carried out in early 2008 on samples from Apollo 16 core 64001/64002. The results of these analyses are combined with data compiled from the literature to generate a reference composition and particle size distribution (PSD) for lunar highlands regolith. In this paper we present the specifics of particle type composition and PSD for this reference composition. Furthermore, we use Figure-of-Merit (FoM) routines to measure the characteristics of a number of lunar regolith simulants against this reference composition. The lunar highlands regolith reference composition and the FoM results are presented to guide simulant producers and simulant users in their research and development processes.
NASA Astrophysics Data System (ADS)
Ki, Won-Tai; Choi, Ji-Hyeon; Kim, Byung-Gook; Woo, Sang-Gyun; Cho, Han-Ku
2008-05-01
As wafer-process design rules shrink below the 50 nm node, the specifications for CDs on a mask become tighter. Therefore, more accurate E-beam lithography simulation is highly required these days. In reality, however, in most E-beam simulation cases there is a trade-off between accuracy and simulation speed. Moreover, the need for full-chip-based simulation has been increasing in order to estimate mask CDs more accurately under real process conditions. Without consideration of long-range correction algorithms, such as the fogging-effect and loading-effect corrections in the E-beam machine, it would be impossible and meaningless to pursue full-chip-based simulation. In this paper, we introduce a breakthrough method to overcome these obstacles of E-beam simulation. The in-house E-beam simulator, ELIS (E-beam LIthography Simulator), has been upgraded to solve these problems. First, a DP (Distributed Processing) strategy was applied to improve calculation speed. Secondly, the long-range correction algorithm of the E-beam machine was applied to compute the exposure intensity on a full-chip (mask) basis. Finally, ELIS-DP has been evaluated for its ability to estimate and analyze CDs on a full-chip basis.
Dual energy exposure control (DEEC) for computed tomography: Algorithm and simulation study
Stenner, Philip; Kachelriess, Marc
2008-11-15
DECT means acquiring the same object at two different energies, i.e. two different tube voltages U_1 and U_2. The raw data q_1 and q_2 undergo a decomposition process of type p = p(q_1, q_2). The raw data p are reconstructed to obtain monochromatic images of the attenuation μ, of the object density ρ, or of a specific material distribution. Recent advances in DECT focus on noise reduction techniques [S. Richard and J. H. Siewerdsen, Med. Phys. 35(2), 586-600 (2008)] and enable high-performance DECT such as lung nodule detection [Shkumat et al., Med. Phys. 35(2), 629-632 (2008)]. Given p and a raw-data-based projection-wise patient dose estimate D(α), the authors determine the optimal tube current curves I_1(α) and I_2(α), with α being the view angle, which minimize image noise for a given patient dose level. DEEC can perform online; I_1(α) and I_2(α) can be determined during the scan. Simulation studies using semianthropomorphic phantom data were carried out. In particular, functions p that generate μ-images and density images were evaluated. Image quality was compared to standard scans at U_0 = 120 kV (clinical CT) and U_0 = 45 kV (micro-CT) that were taken at the same dose level (D_0 = D_1 + D_2) and identical spatial resolution. Appropriate choice of p(q_1, q_2) allows one to obtain μ-images that show fewer artifacts and yield image noise levels comparable to the noise of the standard scan. The authors compared the standard scan to μ-images at 70 keV, which is the effective energy used in clinical CT, and found optimal results with μ-images at 25 keV for micro-CT. A nonoptimal choice of the decomposition function will, however, significantly increase image noise. In particular, μ-images at 511 keV, as needed for PET/CT attenuation correction, exhibit more than twice as much image noise as the standard scan. With DEEC, which
Comparison of image deconvolution algorithms on simulated and laboratory infrared images
Proctor, D.
1994-11-15
We compare Maximum Likelihood, Maximum Entropy, Accelerated Lucy-Richardson, Weighted Goodness of Fit, and Pixon reconstructions of simple scenes as a function of signal-to-noise ratio for simulated images with randomly generated noise. Reconstruction results of infrared images taken with the TAISIR (Temperature and Imaging System InfraRed) are also discussed.
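Of the methods compared, Lucy-Richardson is the most commonly reimplemented; the sketch below is a plain (unaccelerated) Richardson-Lucy iteration under the usual assumptions (normalized, shift-invariant PSF; non-negative data). It is a generic reference implementation, not the TAISIR processing pipeline.

    import numpy as np
    from scipy.signal import fftconvolve

    def richardson_lucy(image, psf, iterations=30):
        """Unaccelerated Richardson-Lucy deconvolution. Assumes a
        normalized, shift-invariant PSF and non-negative data."""
        estimate = np.full(image.shape, image.mean(), dtype=float)
        psf_mirror = psf[::-1, ::-1]
        for _ in range(iterations):
            blurred = fftconvolve(estimate, psf, mode="same")
            ratio = image / np.maximum(blurred, 1e-12)
            estimate = estimate * fftconvolve(ratio, psf_mirror, mode="same")
        return estimate

    # Usage sketch: blur a point source, then recover it.
    h = np.hanning(15)
    psf = np.outer(h, h)
    psf /= psf.sum()
    scene = np.zeros((64, 64))
    scene[32, 32] = 1.0
    restored = richardson_lucy(fftconvolve(scene, psf, mode="same"), psf, 50)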
Configuration of the electron transport algorithm of PENELOPE to simulate ion chambers.
Sempau, J; Andreo, P
2006-07-21
The stability of the electron transport algorithm implemented in the Monte Carlo code PENELOPE with respect to variations of its step length is analysed in the context of the simulation of ion chambers used in photon and electron dosimetry. More precisely, the degree of violation of the Fano theorem is quantified (to the 0.1% level) as a function of the simulation parameters that determine the step size. To meet the premises of the theorem, we define an infinite graphite phantom with a cavity delimited by two parallel planes (i.e., a slab) and filled with a 'gas' that has the same composition as graphite but a mass density a thousand-fold smaller. The cavity walls and the gas have identical cross sections, including the density effect associated with inelastic collisions. Electrons with initial kinetic energies equal to 0.01, 0.1, 1, 10 or 20 MeV are generated in the wall and in the gas with a uniform intensity per unit mass. Two configurations, motivated by the design of pancake- and thimble-type chambers, are considered, namely, with the initial direction of emission perpendicular or parallel to the gas-wall interface. This version of the Fano test avoids the need for photon regeneration and the calculation of photon energy absorption coefficients, two ingredients that are common to some alternative definitions of equivalent tests. In order to reduce the number of variables in the analysis, a new global simulation parameter, the speedup parameter a, is introduced. It is shown that setting a = 0.2, corresponding to values of the usual PENELOPE parameters of C1 = C2 = 0.02 and values of WCC and WCR that depend on the initial and absorption energies, is appropriate for maximum tolerances of the order of 0.2% with respect to an analogue, i.e., interaction-by-interaction, simulation of the same problem. The precise values of WCC and WCR do not seem to be critical to achieve this level of accuracy. The step-size dependence of the absorbed dose is explained in
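The single speedup parameter can be pictured as a small mapping onto PENELOPE's usual step-size parameters. Only the a = 0.2 -> C1 = C2 = 0.02 correspondence is quoted from the abstract; the linear rule and the energy-dependent WCC/WCR expressions below are placeholders, not PENELOPE's documented defaults.

    def penelope_step_params(a, e0_eV, eabs_eV):
        """Map a single speedup parameter a onto PENELOPE's step-size
        parameters. The abstract quotes a = 0.2 -> C1 = C2 = 0.02; the
        linear rule and the WCC/WCR expressions here are hypothetical."""
        c1 = c2 = 0.1 * a
        wcc = max(a * 1e-2 * e0_eV, eabs_eV)  # hypothetical cutoff scaling
        wcr = max(a * 1e-3 * e0_eV, eabs_eV)  # hypothetical cutoff scaling
        return {"C1": c1, "C2": c2, "WCC": wcc, "WCR": wcr}

    print(penelope_step_params(0.2, e0_eV=1.0e6, eabs_eV=1.0e3))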
Determining the Complexity of the Quantum Adiabatic Algorithm using Quantum Monte Carlo Simulations
2012-12-18
The work studied how efficiently a quantum computer could solve optimization problems using the quantum adiabatic algorithm (QAA). Comparisons were made with a classical heuristic algorithm, WalkSAT. A preliminary study was also made to see if the
Subject terms: quantum adiabatic algorithm, optimization, Monte Carlo, quantum computer, satisfiability problems, spin glass.
Hasegawa, Taisuke
2016-11-07
We propose a novel molecular dynamics (MD) algorithm for approximately treating nuclear quantum dynamics in a real-time MD simulation. We have found that the real-time dynamics of an ensemble of classical particles acquires a quantum nature when a constant quantum mechanical uncertainty constraint is imposed on its classical dynamics. The constant uncertainty constraint is handled by the Lagrange multiplier method and implemented in a conventional MD algorithm. The resulting constant uncertainty molecular dynamics (CUMD) is applied to the calculation of quantum position autocorrelation functions on quartic and Morse potentials. The test calculations show that CUMD outperforms ring-polymer MD because of the inclusion of the quantum zero-point energy during the real-time evolution as well as the quantum imaginary-time statistical effect stored in the initial condition. The CUMD approach is a possible starting point for new real-time quantum dynamics simulations in the condensed phase.
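As a rough illustration of the constant-uncertainty idea, the sketch below evolves an ensemble of classical walkers on the quartic potential and, after each velocity-Verlet step, projects the ensemble back onto the surface sigma_x * sigma_p = hbar/2 by rescaling the momentum spread. The projection is a crude stand-in for the paper's Lagrange-multiplier treatment, and all numerical values are illustrative.

    import numpy as np

    HBAR = 1.0  # natural units for the sketch

    def cumd_step(x, p, force, dt, mass=1.0):
        """Velocity-Verlet step for an ensemble of classical walkers,
        followed by a projection back onto the constant-uncertainty
        surface sigma_x * sigma_p = hbar / 2 (a crude stand-in for the
        Lagrange-multiplier treatment)."""
        p = p + 0.5 * dt * force(x)
        x = x + dt * p / mass
        p = p + 0.5 * dt * force(x)
        sx, sp = x.std(), p.std()
        p = p.mean() + (p - p.mean()) * (0.5 * HBAR / sx) / sp
        return x, p

    force = lambda x: -x**3          # quartic potential V(x) = x**4 / 4
    rng = np.random.default_rng(0)
    x = rng.normal(0.0, 0.7, 4096)
    p = rng.normal(0.0, 0.5 * HBAR / 0.7, 4096)
    for _ in range(1000):
        x, p = cumd_step(x, p, force, dt=0.01)
    # Position autocorrelation functions can be accumulated along such
    # a constrained trajectory.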
ERIC Educational Resources Information Center
Martin-Fernandez, Manuel; Revuelta, Javier
2017-01-01
This study compares the performance of two recently introduced estimation algorithms, the Metropolis-Hastings Robbins-Monro (MHRM) and the Hamiltonian MCMC (HMC), with two algorithms well established in the psychometric literature, marginal maximum likelihood via the EM algorithm (MML-EM) and Markov chain Monte Carlo (MCMC), in the estimation of multidimensional…
A Result on the Computational Complexity of Heuristic Estimates for the A* Algorithm.
1983-01-01
compare these algorithms according to the criterion "number of node expansions," which is discussed and generally accepted in the published literature.
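For reference, a compact A* implementation that also reports the number of node expansions, the comparison criterion named above; the grid problem and Manhattan heuristic are illustrative.

    import heapq

    def astar_expansions(start, goal, neighbors, h):
        """A* search that counts node expansions alongside the solution
        cost."""
        open_heap = [(h(start), 0, start)]
        best_g = {start: 0}
        closed = set()
        expansions = 0
        while open_heap:
            _, g, node = heapq.heappop(open_heap)
            if node == goal:
                return g, expansions
            if node in closed:
                continue
            closed.add(node)
            expansions += 1
            for nxt, cost in neighbors(node):
                ng = g + cost
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt] = ng
                    heapq.heappush(open_heap, (ng + h(nxt), ng, nxt))
        return None, expansions

    # Obstacle-free 4-connected grid with the admissible Manhattan heuristic.
    goal = (9, 9)
    def neighbors(n):
        x, y = n
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            if 0 <= x + dx < 10 and 0 <= y + dy < 10:
                yield (x + dx, y + dy), 1
    h = lambda n: abs(n[0] - goal[0]) + abs(n[1] - goal[1])
    print(astar_expansions((0, 0), goal, neighbors, h))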
Mouton, S.; Ledoux, Y.; Teissandier, D.; Sebastian, P.
2010-06-15
A key challenge for the future is to drastically reduce the human impact on the environment. In aeronautics, this challenge translates into optimizing the design of the aircraft to decrease its overall mass, which in turn requires optimizing every part of the airplane. The task is even more delicate when the material used is a composite: a compromise must then be found between the strength, the mass, and the manufacturing cost of the component. Because of these different kinds of design constraints, the engineer must be assisted with a decision support system to determine feasible solutions. In this paper, an approach is proposed based on the coupling of the key characteristics of the design process and on consideration of the failure risk of the component. The originality of this work is that the manufacturing deviations due to the RTM process are integrated into the simulation of the assembly process. Two kinds of deviations are identified: volume impregnation deviations (injection phase of the RTM process) and geometrical deviations (curing and cooling phases). The quantification of these deviations and the related failure-risk calculation are based on finite element simulations (Pam RTM® and Samcef® software). A genetic algorithm is used to estimate the impact of the design choices and their consequences on the failure risk of the component, with the main focus of the paper on the optimization of tool design. In the framework of decision support systems, the failure-risk calculation is used to compare possible industrialization alternatives. The method is applied to a particular part of the airplane structure: a spar unit made of carbon fiber/epoxy composite.
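A toy real-coded genetic algorithm of the kind described, minimizing a stand-in failure-risk function; in the paper the objective would come from the finite element simulations, and all names, bounds, and parameter values below are hypothetical.

    import random

    def genetic_minimize(risk, bounds, pop_size=40, generations=60, mut=0.1):
        """Tiny real-coded GA: truncation selection, midpoint crossover,
        Gaussian mutation. 'risk' stands in for the FE-based failure-risk
        evaluation."""
        dim = len(bounds)
        pop = [[random.uniform(*bounds[i]) for i in range(dim)]
               for _ in range(pop_size)]
        for _ in range(generations):
            elite = sorted(pop, key=risk)[: pop_size // 2]
            children = []
            while len(children) < pop_size - len(elite):
                a, b = random.sample(elite, 2)
                child = [(x + y) / 2 for x, y in zip(a, b)]
                for i in range(dim):
                    if random.random() < mut:
                        lo, hi = bounds[i]
                        child[i] = min(hi, max(lo, child[i] +
                                       random.gauss(0, 0.1 * (hi - lo))))
                children.append(child)
            pop = elite + children
        return min(pop, key=risk)

    # Hypothetical two-parameter tool design: injection pressure (bar)
    # and cure temperature (degrees C).
    risk = lambda d: (d[0] - 3.0) ** 2 + 0.5 * (d[1] - 160.0) ** 2 / 100.0
    best = genetic_minimize(risk, bounds=[(1.0, 6.0), (120.0, 200.0)])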
Baldewijns, Greet; Debard, Glen; Mertes, Gert; Vanrumste, Bart; Croonenborghs, Tom
2016-03-01
Fall incidents are an important health hazard for older adults. Automatic fall detection systems can reduce the consequences of a fall incident by assuring that timely aid is given. The development of these systems is therefore getting a lot of research attention. Real-life data which can help evaluate the results of this research is however sparse. Moreover, research groups that have this type of data are not at liberty to share it. Most research groups thus use simulated datasets. These simulation datasets, however, often do not incorporate the challenges the fall detection system will face when implemented in real-life. In this Letter, a more realistic simulation dataset is presented to fill this gap between real-life data and currently available datasets. It was recorded while re-enacting real-life falls recorded during previous studies. It incorporates the challenges faced by fall detection algorithms in real life. A fall detection algorithm from Debard et al. was evaluated on this dataset. This evaluation showed that the dataset possesses extra challenges compared with other publicly available datasets. In this Letter, the dataset is discussed as well as the results of this preliminary evaluation of the fall detection algorithm. The dataset can be downloaded from www.kuleuven.be/advise/datasets.
NASA Astrophysics Data System (ADS)
Yeh, Mei-Ling
We have performed a parallel decomposition of the fictitious Lagrangian method for molecular dynamics with a tight-binding total energy expression onto a hypercube computer. This is the first time in the literature that the dynamical simulation of semiconducting systems containing more than 512 silicon atoms has become possible with the electrons treated as quantum particles. With the utilization of the Intel Paragon system, our timing analysis predicts that our code should perform realistic simulations of very large systems consisting of thousands of atoms with time requirements of the order of tens of hours. Timing results and a performance analysis of our parallel code are presented in terms of calculation time, communication time, and setup time. The accuracy of the fictitious Lagrangian method in molecular dynamics simulation is also investigated, especially the energy conservation of the total energy of the ions. We find that the accuracy of the fictitious Lagrangian scheme in simulations of small silicon clusters and very large silicon systems remains good for as long as the simulations proceed, even though we quench the electronic coordinates to the Born-Oppenheimer surface only at the beginning of the run. The kinetic energy of the electrons does not increase with time, and the energy conservation of the ionic subsystem remains very good. This means that, as far as the ionic subsystem is concerned, the electrons are on average in the true quantum ground states. We also tie up some remaining questions about the fictitious Lagrangian method, such as the difference between the results obtained with the Gram-Schmidt and SHAKE methods of orthonormalization, and the differences between simulations in which the electrons are quenched to the Born-Oppenheimer surface only once and those with periodic quenching.
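The abstract contrasts Gram-Schmidt and SHAKE orthonormalization of the electronic degrees of freedom; for reference, a minimal classical Gram-Schmidt routine is sketched below (a textbook version, not the parallel hypercube implementation).

    import numpy as np

    def gram_schmidt(vectors):
        """Classical Gram-Schmidt orthonormalization of a set of row
        vectors (here standing in for electronic wavefunction
        coefficients)."""
        ortho = []
        for v in vectors:
            w = v.astype(float)
            for u in ortho:
                w = w - np.dot(w, u) * u
            ortho.append(w / np.linalg.norm(w))
        return np.array(ortho)

    # Orthonormality check on a random set of vectors.
    q = gram_schmidt(np.random.default_rng(0).normal(size=(4, 8)))
    assert np.allclose(q @ q.T, np.eye(4))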
NASA Astrophysics Data System (ADS)
Innocenti, Maria Elena; Beck, Arnaud; Markidis, Stefano; Lapenta, Giovanni
2013-10-01
Particle in Cell (PIC) simulations of plasmas are no longer bound by the stability constraints of explicit algorithms. Semi-implicit and fully implicit methods allow larger grid spacings and time steps. Adaptive Mesh Refinement (AMR) techniques permit local changes in the simulation resolution. The code proposed in Innocenti et al., 2013 and Beck et al., 2013 is, however, the first to combine the advantages of both. The use of the Implicit Moment Method makes it possible to tailor the resolution used in each level to the physical scales of interest and to use high Refinement Factors (RF) between the levels. The Multi Level Multi Domain (MLMD) structure, where all levels are simulated as complete domains, combines algorithmic and practical advantages. The different levels evolve according to the local dynamics and achieve optimal level interlocking. The capabilities of the Object Oriented programming model are also fully exploited. The MLMD algorithm is demonstrated with magnetic reconnection and collisionless shock simulations with very high RFs between the levels. Notable computational gains are achieved with respect to simulations performed on the entire domain at the higher resolution. Beck A. et al. (2013), submitted. Innocenti M. E. et al. (2013). JCP, 238(0):115-140.
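A minimal sketch of the level bookkeeping implied by the MLMD description, assuming each level refines its parent's grid spacing by the refinement factor RF and, as a further assumption for illustration, its time step as well; the actual code's parameter handling may differ.

    def mlmd_levels(dx0, dt0, rf, n_levels):
        """Per-level grid spacing and time step for an MLMD hierarchy in
        which every level is a complete domain refined by rf relative to
        its parent (time refinement is an assumption of this sketch)."""
        return [{"level": l, "dx": dx0 / rf**l, "dt": dt0 / rf**l}
                for l in range(n_levels)]

    # Two levels with an RF of 14, of the order of the high refinement
    # factors the abstract reports.
    for level in mlmd_levels(dx0=1.0, dt0=0.5, rf=14, n_levels=2):
        print(level)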
Spencer, W.A.; Goode, S.R.
1997-10-01
ICP emission analyses are prone to errors due to changes in power level, nebulization rate, plasma temperature, and sample matrix. As a result, accurate analyses of complex samples often require frequent bracketing with matrix-matched standards. The information needed to track and correct matrix errors is contained in the emission spectrum, but most commercial software packages use only the analyte line emission to determine concentrations. Changes in plasma temperature and nebulization rate are reflected in the hydrogen line widths, the oxygen emission, and neutral-to-ion line ratios. Argon and off-line emissions provide a means to correct for the power level and for background scattering in the polychromator. The authors' studies indicated that changes in the intensity of the Ar 404.4 nm line readily flag most matrix and plasma condition modifications. Carbon lines can be used to monitor the impact of organics on the analyses, and calcium and argon lines can be used to correct for spectral drift and alignment. Spectra of contaminated groundwater and simulated defense waste glasses were obtained using a Thermo Jarrell Ash ICP with an echelle CID detector system covering the 190-850 nm range. The echelle images were translated to the FITS data format, which astronomers recommend for data storage. Data reduction packages such as those in the ESO-MIDAS/ECHELLE and DAOPHOT programs were tried with limited success. The radial point spread function was evaluated as a possible improvement on the common pixel-averaging approach to peak intensity measurement used in commercial ICP software. Several algorithms were evaluated to align and automatically scale the background and reference spectra. A new data reduction approach that uses standard reference images, successive subtractions, and residual analyses has been evaluated to correct for matrix effects.
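The alignment and scaling steps described above can be illustrated with two small routines: integer-channel alignment by cross-correlation, and least-squares scaling of a reference spectrum before subtraction. Both are generic signal-processing sketches, not the authors' FITS/echelle pipeline.

    import numpy as np

    def align_shift(sample, reference):
        """Integer-channel shift that best aligns the sample with the
        reference spectrum, estimated by cross-correlation (a simple
        spectral drift correction)."""
        corr = np.correlate(sample - sample.mean(),
                            reference - reference.mean(), mode="full")
        return int(corr.argmax()) - (len(reference) - 1)

    def scale_and_subtract(sample, reference):
        """Least-squares scale factor for a standard reference spectrum,
        plus the residual spectrum used to flag matrix effects."""
        scale = np.dot(sample, reference) / np.dot(reference, reference)
        return scale, sample - scale * reference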
Cline, K; Narayanasamy, G; Obediat, M; Stanley, D; Stathakis, S; Kirby, N; Kim, H
2015-06-15
Purpose: Deformable image registration (DIR) is used routinely in the clinic without a formalized quality assurance (QA) process. Digitally deforming images in a known way and comparing the known deformations with the DIR algorithm's predictions is a powerful technique for DIR QA, but it must also simulate realistic image noise and artifacts, especially between modalities. This study developed an algorithm to create simulated daily kV cone-beam computed-tomography (CBCT) images from CT images for DIR QA between these modalities. Methods: A Catphan and a physical head-and-neck phantom, with known deformations, were used. CT and kV-CBCT images of the Catphan were used to characterize the changes in Hounsfield units, noise, and image cupping that occur between these imaging modalities. The algorithm then imprinted these changes onto a CT image of the deformed head-and-neck phantom, thereby creating a simulated-CBCT image. CT and kV-CBCT images of the undeformed and deformed head-and-neck phantom were also acquired. The Velocity and MIM DIR algorithms were applied between the undeformed CT image and each of the deformed CT, CBCT, and simulated-CBCT images to obtain predicted deformations. The error between the known and predicted deformations was used as a metric to evaluate the quality of the simulated-CBCT image. Ideally, the simulated-CBCT registration would produce the same accuracy as the deformed-CBCT registration. Results: For Velocity, the mean error was 1.4 mm for the CT-CT registration, 1.7 mm for the CT-CBCT registration, and 1.4 mm for the CT-simulated-CBCT registration. The same numbers were 1.5, 4.5, and 5.9 mm, respectively, for MIM. Conclusion: All cases produced similar accuracy for Velocity. MIM produced similar accuracy for CT-CT registration, but was not as accurate for CT-CBCT registrations. The MIM simulated-CBCT registration followed this same trend, but overestimated MIM DIR errors relative to the CT
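A minimal sketch of the imprinting step, assuming the CBCT-like changes can be modeled as a global HU remapping, a radial cupping term, and additive Gaussian noise; all parameter values are illustrative, not the calibration measured from the Catphan images.

    import numpy as np

    def simulate_cbct_slice(ct, hu_scale=0.95, hu_offset=-20.0,
                            cup_depth=40.0, noise_sd=15.0, seed=1):
        """Imprint kV-CBCT-like characteristics onto a CT slice: a global
        HU remapping, a radial cupping artifact, and additional noise.
        All parameter values are illustrative."""
        ny, nx = ct.shape
        y, x = np.mgrid[0:ny, 0:nx]
        r = np.hypot(x - nx / 2, y - ny / 2) / (0.5 * np.hypot(nx, ny))
        cupping = cup_depth * (r**2 - 0.5)   # darker center, brighter rim
        rng = np.random.default_rng(seed)
        return (hu_scale * ct + hu_offset + cupping +
                rng.normal(0.0, noise_sd, ct.shape))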
Wilson, William Edward
1977-01-01
A digital model of two-dimensional ground-water flow was used to simulate projected changes in the Floridan aquifer potentiometric surface in 1985 and 2000, resulting from proposed ground-water developments by the phosphate mining industry in west-central Florida. The model was calibrated under steady-state conditions to simulate the September 1975 potentiometric surface. Under one development plan, existing phosphate mines in Polk County would continue to withdraw ground water at 1975 rates, until phased out as the ore is depleted; no new mines would be introduced. Preliminary results indicate that under this plan, maximum simulated recovery of the potentiometric surface is 11.9 feet by 1985 and 36.5 feet by 2000. Under an alternative plan, all proposed mines in Polk, Hardee, DeSoto, Hillsborough and Manatee Counties would begin operations, in addition to the continuation and phasing out of existing mines. Preliminary results indicate that the potentiometric surface would generally recover in Polk County and decline elsewhere in the modeled area. Maximum simulated recovery is 4.5 feet by 1985 and 29.6 feet by 2000; maximum simulated drawdown is 15.1 feet by 1985 and feet by 2000. All results are preliminary and subject to revision as the investigation continues.