Science.gov

Sample records for algorithms simulation results

  1. Simulation results for the Viterbi decoding algorithm

    NASA Technical Reports Server (NTRS)

    Batson, B. H.; Moorehead, R. W.; Taqvi, S. Z. H.

    1972-01-01

    Concepts involved in determining the performance of coded digital communications systems are introduced. The basic concepts of convolutional encoding and decoding are summarized, and hardware implementations of sequential and maximum likelihood decoders are described briefly. Results of parametric studies of the Viterbi decoding algorithm are summarized. Bit error probability is chosen as the measure of performance and is calculated, by using digital computer simulations, for various encoder and decoder parameters. Results are presented for code rates of one-half and one-third, for constraint lengths of 4 to 8, for both hard-decision and soft-decision bit detectors, and for several important systematic and nonsystematic codes. The effect of decoder block length on bit error rate also is considered, so that a more complete estimate of the relationship between performance and decoder complexity can be made.
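
    As a minimal illustration of maximum-likelihood (Viterbi) decoding with hard-decision inputs, the sketch below implements a rate-1/2, constraint-length-3 convolutional code. The code and its generators (7, 5 octal) are a textbook example, not one of the encoders studied in the report.

      G = [0b111, 0b101]  # rate-1/2 generators (7, 5 octal), constraint length 3

      def encode(bits):
          state = 0
          out = []
          for b in bits:
              reg = (b << 2) | state                  # newest bit plus 2-bit state
              out += [bin(reg & g).count("1") & 1 for g in G]
              state = reg >> 1                        # slide the register
          return out

      def viterbi_decode(received):
          INF = float("inf")
          metric = [0.0, INF, INF, INF]               # path metric per trellis state
          paths = [[], [], [], []]
          for i in range(0, len(received), 2):
              r = received[i:i + 2]
              new_metric = [INF] * 4
              new_paths = [None] * 4
              for s in range(4):
                  if metric[s] == INF:
                      continue
                  for b in (0, 1):
                      reg = (b << 2) | s
                      exp = [bin(reg & g).count("1") & 1 for g in G]
                      d = metric[s] + sum(x != y for x, y in zip(r, exp))
                      ns = reg >> 1
                      if d < new_metric[ns]:          # keep the survivor path
                          new_metric[ns], new_paths[ns] = d, paths[s] + [b]
              metric, paths = new_metric, new_paths
          best = min(range(4), key=lambda s: metric[s])
          return paths[best]

      bits = [1, 0, 1, 1, 0, 0, 1]
      coded = encode(bits)
      coded[3] ^= 1                                   # inject one hard-decision channel error
      print(viterbi_decode(coded) == bits)            # True: the single error is corrected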

  2. Simulation Results of the Huygens Probe Entry and Descent Trajectory Reconstruction Algorithm

    NASA Technical Reports Server (NTRS)

    Kazeminejad, B.; Atkinson, D. H.; Perez-Ayucar, M.

    2005-01-01

    Cassini/Huygens is a joint NASA/ESA mission to explore the Saturnian system. The ESA Huygens probe is scheduled to be released from the Cassini spacecraft on December 25, 2004, enter the atmosphere of Titan in January 2005, and descend to Titan's surface using a sequence of different parachutes. To correctly interpret and correlate results from the probe science experiments and to provide a reference set of data for "ground-truthing" Orbiter remote sensing measurements, it is essential that the probe entry and descent trajectory reconstruction be performed as early as possible in the postflight data analysis phase. The Huygens Descent Trajectory Working Group (DTWG), a subgroup of the Huygens Science Working Team (HSWT), is responsible for developing a methodology and performing the entry and descent trajectory reconstruction. This paper provides an outline of the trajectory reconstruction methodology, preliminary probe trajectory retrieval test results using a simulated synthetic Huygens dataset developed by the Huygens Project Scientist Team at ESA/ESTEC, and a discussion of strategies for recovery from possible instrument failure.

  3. Profiling Wind and Greenhouse Gases by Infrared-laser Occultation: Algorithm and Results from Simulations in Windy Air

    NASA Astrophysics Data System (ADS)

    Plach, Andreas; Proschek, Veronika; Kirchengast, Gottfried

    2014-05-01

    We employ the Low Earth Orbit (LEO-LEO) microwave and infrared-laser occultation (LMIO) method to derive a full set of thermodynamic state variables from microwave signals and climate benchmark profiling of greenhouse gases (GHGs) and line-of-sight (l.o.s.) wind using infrared-laser signals. The focus lies on the upper troposphere/lower stratosphere region (UTLS; 5 km to 35 km). The GHG retrieval errors are generally smaller than 1% to 3% r.m.s., at a vertical resolution of about 1 km. In this study we focus on the infrared-laser part of LMIO, where we introduce a new, advanced wind retrieval algorithm to derive accurate l.o.s. wind profiles. The wind retrieval uses the reasonable assumption of the wind blowing along spherical shells (horizontal winds), so the l.o.s. wind speed can be retrieved by using an Abel integral transform. A 'delta-differential transmission' principle is applied to two carefully selected infrared-laser signals placed on the wings of the highly symmetric C18OO absorption line (nominally ±0.004 cm⁻¹ from the line center near 4767 cm⁻¹) plus a related 'off-line' reference signal. The delta-differential transmission obtained by differencing these signals is free of atmospheric broadband effects and is proportional to the wind-induced Doppler shift; it serves as the integrand of the Abel transform. The Doppler frequency shift calculated along with the wind retrieval is in turn also used in the GHG retrieval to correct the frequency of GHG-sensitive infrared-laser signals for the wind-induced Doppler shift, which enables improved GHG estimation. This step therefore provides the capability to correct potential wind-induced residual errors of the GHG retrieval in case of strong winds. We performed end-to-end simulations to test the performance of the new retrieval in windy air. The simulations used realistic atmospheric conditions (thermodynamic state variables and wind profiles) from an analysis field of the European Centre for Medium-Range Weather Forecasts (ECMWF).
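
    The shell-based retrieval geometry lends itself to a discrete Abel-type inversion. The sketch below is a generic onion-peeling inversion, not the authors' algorithm: it assumes the retrieved quantity is constant within spherical shells, and the function names, radii, and test profile are all illustrative.

      import numpy as np

      def chord_matrix(tangent_radii, r_top):
          """L[i, j] = path length of the ray tangent at tangent_radii[i] inside the
          spherical shell between bounds[j] (outer) and bounds[j+1] (inner)."""
          r = np.asarray(tangent_radii, dtype=float)       # sorted descending
          bounds = np.concatenate(([r_top], r))
          n = len(r)
          L = np.zeros((n, n))
          for i in range(n):
              for j in range(i + 1):                       # ray i crosses shells 0..i
                  outer = np.sqrt(bounds[j] ** 2 - r[i] ** 2)
                  inner = np.sqrt(max(bounds[j + 1] ** 2 - r[i] ** 2, 0.0))
                  L[i, j] = 2.0 * (outer - inner)          # chord length inside shell j
          return L

      # Synthetic round trip: forward-model line integrals, then invert them.
      R_earth = 6371e3
      tangents = R_earth + np.arange(34, 4, -1) * 1e3      # tangent heights 34 km .. 5 km
      L = chord_matrix(tangents, r_top=R_earth + 35e3)
      f_true = np.exp(-(tangents - R_earth) / 8e3)         # made-up shell profile
      g = L @ f_true                                       # simulated observations
      f_rec = np.linalg.solve(L, g)                        # onion-peeled retrieval
      print(np.max(np.abs(f_rec - f_true)))                # round-off level: exact for shell-constant profiles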

  4. A retrodictive stochastic simulation algorithm

    SciTech Connect

    Vaughan, T.G.; Drummond, P.D.; Drummond, A.J.

    2010-05-20

    In this paper we describe a simple method for inferring the initial states of systems evolving stochastically according to master equations, given knowledge of the final states. This is achieved through the use of a retrodictive stochastic simulation algorithm which complements the usual predictive stochastic simulation approach. We demonstrate the utility of this new algorithm by applying it to example problems, including the derivation of likely ancestral states of a gene sequence given a Markovian model of genetic mutation.
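
    For contrast with the retrodictive variant, here is a minimal sketch of the usual predictive stochastic simulation algorithm (Gillespie SSA) that it complements; the birth-death model and rate constants are illustrative, not from the paper.

      import random

      def gillespie_ssa(propensities, stoich, x0, t_max, seed=0):
          """Forward (predictive) SSA: exact sample paths of a chemical master equation.
          propensities: list of functions a_j(x); stoich: per-reaction state changes."""
          rng = random.Random(seed)
          t, x = 0.0, list(x0)
          history = [(t, tuple(x))]
          while t < t_max:
              a = [f(x) for f in propensities]
              a0 = sum(a)
              if a0 == 0.0:                       # no reaction can fire
                  break
              t += rng.expovariate(a0)            # exponential waiting time
              u, j, acc = rng.random() * a0, 0, a[0]
              while acc < u:                      # choose reaction j with probability a_j / a0
                  j += 1
                  acc += a[j]
              x = [xi + s for xi, s in zip(x, stoich[j])]
              history.append((t, tuple(x)))
          return history

      # Birth-death toy model: 0 -> X at rate k1, X -> 0 at rate k2 * x.
      k1, k2 = 5.0, 0.1
      traj = gillespie_ssa([lambda x: k1, lambda x: k2 * x[0]],
                           [(1,), (-1,)], x0=(0,), t_max=100.0)
      print(len(traj), traj[-1])                  # state fluctuates around x = k1/k2 = 50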

  5. Fractal Landscape Algorithms for Environmental Simulations

    NASA Astrophysics Data System (ADS)

    Mao, H.; Moran, S.

    2014-12-01

    Natural science and geographical research are now able to take advantage of environmental simulations that more accurately test experimental hypotheses, resulting in deeper understanding. Experiments affected by the natural environment can benefit from 3D landscape simulations capable of simulating a variety of terrains and environmental phenomena. Such simulations can employ random terrain generation algorithms that dynamically simulate environments to test specific models against a variety of factors. Through the use of noise functions such as Perlin noise, Simplex noise, and the diamond-square algorithm, computers can generate simulations that model a variety of landscapes and ecosystems. This study shows how these algorithms work together to create realistic landscapes. By seeding values into the diamond-square algorithm, one can control the shape of the landscape. Perlin noise and Simplex noise are also used to simulate moisture and temperature. The smooth gradient created by coherent noise allows more realistic landscapes to be simulated. Terrain generation algorithms can be used in environmental studies and physics simulations. Potential studies that would benefit from such simulations include the geophysical impact of flash floods or drought on a particular region, and regional impacts on low-lying areas due to global warming and rising sea levels. Furthermore, terrain generation algorithms also serve as aesthetic tools to display landscapes (e.g., Google Earth) and to simulate planetary landscapes. Hence, they can be used as tools to assist science education. Algorithms used to generate these natural phenomena provide scientists with a different approach to analyzing our world. The random algorithms used in terrain generation not only generate the terrains themselves but are also capable of simulating weather patterns.
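
    A minimal sketch of the diamond-square step described above, with corner seeding controlling the large-scale shape; the grid size, roughness schedule, and seed are illustrative choices, not values from the study.

      import random

      def diamond_square(n, roughness=1.0, seed=0):
          """Generate a (2**n + 1) square heightmap with the diamond-square algorithm.
          Seeding the four corners controls the large-scale shape of the terrain."""
          size = 2 ** n + 1
          rng = random.Random(seed)
          h = [[0.0] * size for _ in range(size)]
          for r, c in ((0, 0), (0, size - 1), (size - 1, 0), (size - 1, size - 1)):
              h[r][c] = rng.uniform(-1, 1)              # corner seeds
          step, scale = size - 1, roughness
          while step > 1:
              half = step // 2
              # Diamond step: center of each square = mean of its corners + noise.
              for r in range(half, size, step):
                  for c in range(half, size, step):
                      avg = (h[r - half][c - half] + h[r - half][c + half] +
                             h[r + half][c - half] + h[r + half][c + half]) / 4
                      h[r][c] = avg + rng.uniform(-scale, scale)
              # Square step: edge midpoints = mean of up to 4 diamond neighbors + noise.
              for r in range(0, size, half):
                  for c in range((r + half) % step, size, step):
                      nb = [h[r2][c2] for r2, c2 in
                            ((r - half, c), (r + half, c), (r, c - half), (r, c + half))
                            if 0 <= r2 < size and 0 <= c2 < size]
                      h[r][c] = sum(nb) / len(nb) + rng.uniform(-scale, scale)
              step, scale = half, scale / 2             # halve step size and noise amplitude
          return h

      terrain = diamond_square(5)                       # 33 x 33 heightmap
      print(len(terrain), min(map(min, terrain)), max(map(max, terrain)))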

  6. The Results of a Simulator Study to Determine the Effects on Pilot Performance of Two Different Motion Cueing Algorithms and Various Delays, Compensated and Uncompensated

    NASA Technical Reports Server (NTRS)

    Guo, Li-Wen; Cardullo, Frank M.; Telban, Robert J.; Houck, Jacob A.; Kelly, Lon C.

    2003-01-01

    A study was conducted employing the Visual Motion Simulator (VMS) at the NASA Langley Research Center, Hampton, Virginia. This study compared two motion cueing algorithms, the NASA adaptive algorithm and a new optimal-control-based algorithm. The study also included the effects of transport delays and their compensation. The delay compensation algorithm employed is one developed by Richard McFarland at NASA Ames Research Center. This paper reports on the analysis of experimental data collected from preliminary simulation tests. This series of tests was conducted to evaluate the protocols and the methodology of data analysis in preparation for more comprehensive tests to be conducted during the spring of 2003; therefore only three pilots were used. Nevertheless, some useful results were obtained. The experimental conditions involved three maneuvers: a straight-in approach with a rotating wind vector, an offset approach with turbulence and gust, and a takeoff with and without an engine failure shortly after liftoff. For each of the maneuvers, the two motion conditions were combined with four delay conditions (0, 50, 100, and 200 ms), with and without compensation.

  7. Formation Algorithms and Simulation Testbed

    NASA Technical Reports Server (NTRS)

    Wette, Matthew; Sohl, Garett; Scharf, Daniel; Benowitz, Edward

    2004-01-01

    Formation flying for spacecraft is a rapidly developing field that will enable a new era of space science. For one of its missions, the Terrestrial Planet Finder (TPF) project has selected a formation flying interferometer design to detect earth-like planets orbiting distant stars. In order to advance technology needed for the TPF formation flying interferometer, the TPF project has been developing a distributed real-time testbed to demonstrate end-to-end operation of formation flying with TPF-like functionality and precision. This is the Formation Algorithms and Simulation Testbed (FAST). The FAST was conceived to bring out issues in timing, data fusion, inter-spacecraft communication, inter-spacecraft sensing, and system-wide formation robustness. In this paper we describe the FAST and show results from a two-spacecraft formation scenario. The two-spacecraft simulation is the first time that precision end-to-end formation flying operation has been demonstrated in a distributed real-time simulation environment.

  8. Recursive Branching Simulated Annealing Algorithm

    NASA Technical Reports Server (NTRS)

    Bolcar, Matthew; Smith, J. Scott; Aronstein, David

    2012-01-01

    This innovation is a variation of a simulated-annealing optimization algorithm that uses a recursive-branching structure to parallelize the search of a parameter space for the globally optimal solution to an objective. The algorithm has been demonstrated to be more effective at searching a parameter space than traditional simulated-annealing methods for a particular problem of interest, and it can readily be applied to a wide variety of optimization problems, including those with a parameter space having both discrete-value parameters (combinatorial) and continuous-variable parameters. It can take the place of a conventional simulated-annealing, Monte-Carlo, or random-walk algorithm. In a conventional simulated-annealing (SA) algorithm, a starting configuration is randomly selected within the parameter space. The algorithm randomly selects another configuration from the parameter space and evaluates the objective function for that configuration. If the objective function value is better than the previous value, the new configuration is adopted as the new point of interest in the parameter space. If the objective function value is worse than the previous value, the new configuration may be adopted, with a probability determined by a temperature parameter, used in analogy to annealing in metals. As the optimization continues, the region of the parameter space from which new configurations can be selected shrinks, and in conjunction with lowering the annealing temperature (and thus lowering the probability for adopting configurations in parameter space with worse objective functions), the algorithm can converge on the globally optimal configuration. The Recursive Branching Simulated Annealing (RBSA) algorithm shares some features with the SA algorithm, notably including the basic principles that a starting configuration is randomly selected from within the parameter space and that the algorithm tests other configurations with the goal of finding the globally optimal configuration.
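
    The conventional SA loop described above is easy to state in code. The following is a minimal sketch under assumed details (linear cooling, a shrinking uniform move radius, and a toy multimodal objective); it illustrates the baseline algorithm, not the recursive-branching variant.

      import math, random

      def simulated_annealing(objective, lower, upper, n_steps=20000, t0=1.0, seed=0):
          """Conventional SA: random start, random moves drawn from a shrinking
          neighborhood, Metropolis acceptance with a decreasing temperature."""
          rng = random.Random(seed)
          d = len(lower)
          x = [rng.uniform(lower[i], upper[i]) for i in range(d)]
          fx = objective(x)
          best, fbest = x[:], fx
          for k in range(n_steps):
              frac = k / n_steps
              temp = t0 * (1 - frac)                   # linear cooling schedule
              radius = 1.0 - frac                      # shrinking search region
              cand = [min(max(x[i] + rng.uniform(-radius, radius) *
                              (upper[i] - lower[i]), lower[i]), upper[i])
                      for i in range(d)]
              fc = objective(cand)
              # Accept improvements always; accept worse moves with Boltzmann probability.
              if fc <= fx or (temp > 0 and rng.random() < math.exp(-(fc - fx) / temp)):
                  x, fx = cand, fc
                  if fx < fbest:
                      best, fbest = x[:], fx
          return best, fbest

      # Example: minimize a 2-D Rastrigin function, which has many local minima.
      f = lambda v: sum(vi * vi - 10 * math.cos(2 * math.pi * vi) + 10 for vi in v)
      print(simulated_annealing(f, [-5.12, -5.12], [5.12, 5.12]))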

  9. Phase unwrapping algorithms in laser propagation simulation

    NASA Astrophysics Data System (ADS)

    Du, Rui; Yang, Lijia

    2013-08-01

    Simulations of laser propagation in the atmosphere usually need to deal with beams in strong turbulence. Simulating the transmission via Fourier transform may lose part of the information, leaving the phase of the beam as a 2-D array wrapped by 2π. An effective unwrapping algorithm is needed for continuous results and faster calculation. The unwrapping algorithms used in atmospheric propagation are similar to, but not the same as, those used in radar or 3-D surface reconstruction. In this article, three classic unwrapping algorithms are tried in wave-front reconstruction simulation: block least squares (BLS), mask-cut (MCUT), and Flynn's minimal discontinuity algorithm (FMD). Each algorithm was tested 100 times under six conditions: low (64x64), medium (128x128), and high (256x256) resolution phase arrays, each with and without noise. Comparing the results leads to the following conclusions. The BLS-based algorithm is the fastest, and its result is acceptable for low-resolution arrays without noise. The MCUT algorithm is more accurate, though it slows as the array resolution increases, and it is sensitive to noise, resulting in large-area errors. Flynn's algorithm has the best accuracy, but it occupies a large amount of memory during calculation. Finally, the article presents a new algorithm based on an Activity-on-Vertex (AOV) network, which builds a logical graph to cut the search space and then finds a minimal-discontinuity solution. In the tests, the AOV-based algorithm is faster than MCUT in dealing with high-resolution phase arrays, with accuracy as good as FMD.
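
    To make the wrapping problem concrete, here is a much simpler scheme than any of the algorithms compared above: Itoh-style sequential unwrapping with NumPy, which works only on clean, residue-free phase maps. The synthetic wavefront is an illustrative assumption.

      import numpy as np

      def unwrap_rows_then_cols(wrapped):
          """Itoh-style sequential unwrapping: unwrap the first column, then every row.
          Valid only when neighboring true-phase differences stay below pi."""
          out = wrapped.copy()
          out[:, 0] = np.unwrap(out[:, 0])          # fix the first column
          return np.unwrap(out, axis=1)             # then unwrap along each row

      # Synthetic smooth phase surface, wrapped into (-pi, pi], then recovered.
      y, x = np.mgrid[0:64, 0:64]
      phi = 0.02 * (x - 20) ** 2 / 8 + 0.05 * y     # made-up smooth wavefront
      wrapped = np.angle(np.exp(1j * phi))          # the 2-D array "wrapped by 2*pi"
      rec = unwrap_rows_then_cols(wrapped)
      rec += phi[0, 0] - rec[0, 0]                  # unwrapping is unique up to a 2*pi*k offset
      print(np.allclose(rec, phi, atol=1e-8))       # True on this residue-free example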

  10. Feedback algorithm for simulation of multi-segmented cracks

    SciTech Connect

    Chady, T.; Napierala, L.

    2011-06-23

    In this paper, a method for obtaining a three-dimensional crack model from a radiographic image is discussed. A genetic algorithm aiming at closely reproducing the crack's shape is presented. Results obtained with the genetic algorithm are compared to those achieved in the authors' previous work. The described algorithm has been tested on both simulated and real-life cracks.

  11. The systems biology simulation core algorithm

    PubMed Central

    2013-01-01

    Background With the increasing availability of high dimensional time course data for metabolites, genes, and fluxes, the mathematical description of dynamical systems has become an essential aspect of research in systems biology. Models are often encoded in formats such as SBML, whose structure is very complex and difficult to evaluate due to many special cases. Results This article describes an efficient algorithm to solve SBML models that are interpreted in terms of ordinary differential equations. We begin our consideration with a formal representation of the mathematical form of the models and explain all parts of the algorithm in detail, including several preprocessing steps. We provide a flexible reference implementation as part of the Systems Biology Simulation Core Library, a community-driven project providing a large collection of numerical solvers and a sophisticated interface hierarchy for the definition of custom differential equation systems. To demonstrate the capabilities of the new algorithm, it has been tested with the entire SBML Test Suite and all models of BioModels Database. Conclusions The formal description of the mathematics behind the SBML format facilitates the implementation of the algorithm within specifically tailored programs. The reference implementation can be used as a simulation backend for Java™-based programs. Source code, binaries, and documentation can be freely obtained under the terms of the LGPL version 3 from http://simulation-core.sourceforge.net. Feature requests, bug reports, contributions, or any further discussion can be directed to the mailing list simulation-core-development@lists.sourceforge.net. PMID:23826941
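
    To make the central step concrete, interpreting a reaction network as an ordinary differential equation system of the form dy/dt = N v(y), here is a toy two-reaction model solved with SciPy; the species, rate laws, and constants are illustrative and not taken from the SBML Test Suite.

      import numpy as np
      from scipy.integrate import solve_ivp

      # Toy reaction system in the spirit of an SBML model:
      #   R1: S1 -> S2 with rate k1*[S1],  R2: S2 -> S1 with rate k2*[S2].
      stoichiometry = np.array([[-1,  1],    # effect of R1, R2 on S1
                                [ 1, -1]])   # effect of R1, R2 on S2
      k1, k2 = 0.3, 0.1

      def rates(t, y):
          v = np.array([k1 * y[0], k2 * y[1]])     # reaction velocities
          return stoichiometry @ v                 # dy/dt = N * v

      sol = solve_ivp(rates, (0.0, 30.0), [10.0, 0.0], rtol=1e-8)
      print(sol.y[:, -1])                          # approx [2.5, 7.5], the k2:k1 equilibrium split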

  12. Clutter discrimination algorithm simulation in pulse laser radar imaging

    NASA Astrophysics Data System (ADS)

    Zhang, Yan-mei; Li, Huan; Guo, Hai-chao; Su, Xuan; Zhu, Fule

    2015-10-01

    Pulse laser radar imaging performance is greatly influenced by different kinds of clutter. Various algorithms have been developed to mitigate clutter; however, estimating the performance of a new algorithm is difficult. Here, a simulation model for evaluating clutter discrimination algorithms is presented. This model consists of laser pulse emission, clutter jamming, laser pulse reception, and target image production. Additionally, a hardware platform was set up to gather clutter data reflected by ground and trees. The logged data serve as the clutter-jamming input to the simulation model. The hardware platform includes a laser diode, a laser detector, and a high-sample-rate data logging circuit. The laser diode transmits short laser pulses (40 ns FWHM) at a 12.5 kHz pulse rate and a 905 nm wavelength. An analog-to-digital converter chip integrated in the sampling circuit works at 250 megasamples per second. The simulation model and the hardware platform together form a clutter discrimination algorithm simulation system. Using this system, after analyzing the logged clutter data, a new compound pulse detection algorithm was developed. This new algorithm combines a matched filter algorithm and a constant fraction discrimination (CFD) algorithm. First, the laser echo pulse signal is processed by the matched filter; the CFD algorithm is then applied. Finally, clutter jamming from ground and trees is discriminated and a target image is produced. Laser radar images were simulated using the CFD algorithm, the matched filter algorithm, and the new algorithm, respectively. The simulation results demonstrate that the new algorithm achieves the best target imaging effect in mitigating clutter reflected by ground and trees.
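
    A minimal sketch of the two-stage detection idea, matched filtering followed by constant fraction discrimination, on a synthetic echo; the pulse shape, noise level, threshold, and CFD parameters are all illustrative assumptions, not values from the paper.

      import numpy as np

      def matched_filter(signal, template):
          """Correlate the echo with the known pulse shape (unit-energy template)."""
          t = template[::-1] / np.sqrt(np.sum(template ** 2))
          return np.convolve(signal, t, mode="same")

      def cfd_crossings(pulse, fraction=0.5, delay=5, threshold=0.0):
          """Constant fraction discrimination: the difference between a delayed copy
          and an attenuated copy crosses zero at a fixed fraction of the pulse rise,
          independent of amplitude. Crossings below the threshold are discarded."""
          shaped = np.roll(pulse, delay) - fraction * pulse
          up = np.where((shaped[:-1] < 0) & (shaped[1:] >= 0))[0]
          return up[pulse[up] > threshold]

      # Synthetic echo: a Gaussian "laser pulse" plus noise standing in for clutter.
      rng = np.random.default_rng(1)
      template = np.exp(-0.5 * ((np.arange(40) - 20) / 6.0) ** 2)
      echo = rng.normal(0.0, 0.5, 400)
      echo[200:240] += 3.0 * template               # target return near sample 220
      filtered = matched_filter(echo, template)
      print(cfd_crossings(filtered, threshold=2.0)) # detection near the true position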

  13. Application of genetic algorithms to autopiloting in aerial combat simulation

    NASA Astrophysics Data System (ADS)

    Kim, Dai Hyun; Erwin, Daniel A.; Kostrzewski, Andrew A.; Kim, Jeongdal; Savant, Gajendra D.

    1998-10-01

    An autopilot algorithm that controls a fighter aircraft in simulated aerial combat is presented. A fitness function, whose arguments are the control settings of the simulated fighter, is continuously maximized by a fuzzified genetic algorithm. Results are presented for one-to-one combat simulated on a personal computer. Generalization to many-to-many combat is discussed.

  14. New Results in Astrodynamics Using Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Coverstone-Carroll, V.; Hartmann, J. W.; Williams, S. N.; Mason, W. J.

    1998-01-01

    Genetic algorithms have gained popularity as an effective procedure for obtaining solutions to traditionally difficult space mission optimization problems. In this paper, a brief survey of the use of genetic algorithms to solve astrodynamics problems is presented and is followed by new results obtained from applying a Pareto genetic algorithm to the optimization of low-thrust interplanetary spacecraft missions.

  15. Machine Protection System algorithm compiler and simulator

    SciTech Connect

    White, G.R.; Sherwin, G.

    1993-04-01

    The Machine Protection System (MPS) component of the SLC's beam selection system, in which integrated current is continuously monitored and limited to safe levels through careful selection and feedback of the beam repetition rate, is described elsewhere in these proceedings. The novel decision-making mechanism by which that system can evaluate "safe levels," and choose an appropriate repetition rate in real time, is described here. The algorithm that this mechanism uses to make its decision is written in text files and expressed in states of the accelerator and its devices, one file per accelerator region. Before being used, a file is "compiled" to a binary format which can be easily processed as a forward-chaining decision tree. It is processed by distributed microcomputers local to the accelerator regions. A parent algorithm evaluates all results and reports directly to the beam control microprocessor. Operators can test new algorithms, or changes they make to them, with an online graphical MPS simulator.

  16. Wake Vortex Algorithm Scoring Results

    NASA Technical Reports Server (NTRS)

    Robins, R. E.; Delisi, D. P.; Hinton, David (Technical Monitor)

    2002-01-01

    This report compares the performance of two models of trailing vortex evolution for which interaction with the ground is not a significant factor. One model uses eddy dissipation rate (EDR) and the other uses the kinetic energy of turbulence fluctuations (TKE) to represent the effect of turbulence. In other respects, the models are nearly identical. The models are evaluated by comparing their predictions of circulation decay, vertical descent, and lateral transport to observations for over four hundred cases from Memphis and Dallas/Fort Worth International Airports. These observations were obtained during deployments in support of NASA's Aircraft Vortex Spacing System (AVOSS). The results of the comparisons show that the EDR model usually performs slightly better than the TKE model.

  17. A simulation algorithm for ultrasound liver backscattered signals.

    PubMed

    Zatari, D; Botros, N; Dunn, F

    1995-11-01

    In this study, we present a simulation algorithm for the backscattered ultrasound signal from liver tissue. The algorithm simulates backscattered signals from normal liver and three different liver abnormalities. The performance of the algorithm has been tested by statistically comparing the simulated signals with corresponding signals obtained from a previous in vivo study. To verify that the simulated signals can be classified correctly we have applied a classification technique based on an artificial neural network. The acoustic features extracted from the spectrum over a 2.5 MHz bandwidth are the attenuation coefficient and the change of speed of sound with frequency (dispersion). Our results show that the algorithm performs satisfactorily. Further testing of the algorithm is conducted by the use of a data acquisition and analysis system designed by the authors, where several simulated signals are stored in memory chips and classified according to their abnormalities. PMID:8560631

  18. Empirical study of parallel LRU simulation algorithms

    NASA Technical Reports Server (NTRS)

    Carr, Eric; Nicol, David M.

    1994-01-01

    This paper reports on the performance of five parallel algorithms for simulating a fully associative cache operating under the LRU (Least-Recently-Used) replacement policy. Three of the algorithms are SIMD, and are implemented on the MasPar MP-2 architecture. Two other algorithms are parallelizations of an efficient serial algorithm on the Intel Paragon. One SIMD algorithm is quite simple, but its cost is linear in the cache size. The two other SIMD algorithms are more complex, but have costs that are independent of the cache size. Both the second and third SIMD algorithms compute all stack distances; the second SIMD algorithm is completely general, whereas the third presumes and takes advantage of bounds on the range of reference tags. Both MIMD algorithms implemented on the Paragon are general and compute all stack distances; they differ in one step that may affect their respective scalability. We assess the strengths and weaknesses of these algorithms as a function of problem size and characteristics, and compare their performance on traces derived from execution of three SPEC benchmark programs.
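
    The parallel algorithms above all compute LRU stack distances, which can be sketched serially in a few lines: a reference hits in a fully associative LRU cache of size C exactly when its stack distance is at most C, so one pass over the trace yields miss ratios for every cache size. The toy trace below is illustrative.

      def lru_stack_distances(trace):
          """Serial stack-distance algorithm: for each reference, the distance is the
          current depth of the tag in an LRU stack (infinity on a first reference)."""
          stack, dist = [], []
          for tag in trace:
              try:
                  d = stack.index(tag) + 1          # depth from the top, 1-based
                  stack.pop(d - 1)
              except ValueError:
                  d = float("inf")                  # cold miss
              dist.append(d)
              stack.insert(0, tag)                  # move tag to the most-recent position
          return dist

      trace = ["a", "b", "c", "b", "a", "d", "a"]
      print(lru_stack_distances(trace))             # [inf, inf, inf, 2, 3, inf, 2]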

  19. A splitting algorithm for Vlasov simulation with filamentation filtration

    NASA Technical Reports Server (NTRS)

    Klimas, A. J.; Farrell, W. M.

    1994-01-01

    A Fourier-Fourier transformed version of the splitting algorithm for simulating solutions of the Vlasov-Poisson system of equations is introduced. It is shown that with the inclusion of filamentation filtration in this transformed algorithm it is both faster and more stable than the standard splitting algorithm. It is further shown that in a scalar computer environment this new algorithm is approximately equal in speed and far less noisy than its particle-in-cell counterpart. It is conjectured that in a multiprocessor environment the filtered splitting algorithm would be faster while producing more precise results.

  20. The Soil Moisture Active Passive Mission (SMAP) Science Data Products: Results of Testing with Field Experiment and Algorithm Testbed Simulation Environment Data

    NASA Technical Reports Server (NTRS)

    Entekhabi, Dara; Njoku, Eni E.; O'Neill, Peggy E.; Kellogg, Kent H.; Entin, Jared K.

    2010-01-01

    Talk outline: (1) derivation of SMAP basic and applied science requirements from the NRC Earth Science Decadal Survey applications; (2) data products and latencies; (3) algorithm highlights; (4) SMAP Algorithm Testbed; (5) SMAP Working Groups and community engagement.

  21. An exact accelerated stochastic simulation algorithm

    PubMed Central

    Mjolsness, Eric; Orendorff, David; Chatelain, Philippe; Koumoutsakos, Petros

    2009-01-01

    An exact method for stochastic simulation of chemical reaction networks, which accelerates the stochastic simulation algorithm (SSA), is proposed. The present “ER-leap” algorithm is derived from analytic upper and lower bounds on the multireaction probabilities sampled by SSA, together with rejection sampling and an adaptive multiplicity for reactions. The algorithm is tested on a number of well-quantified reaction networks and is found experimentally to be very accurate on test problems including a chaotic reaction network. At the same time, ER-leap offers a substantial speedup over SSA, with a simulation time proportional to the 2/3 power of the number of reaction events in a Galton–Watson process. PMID:19368432

  22. Genetic Algorithms for Digital Quantum Simulations

    NASA Astrophysics Data System (ADS)

    Las Heras, U.; Alvarez-Rodriguez, U.; Solano, E.; Sanz, M.

    2016-06-01

    We propose genetic algorithms, which are robust optimization techniques inspired by natural selection, to enhance the versatility of digital quantum simulations. In this sense, we show that genetic algorithms can be employed to increase the fidelity and optimize the resource requirements of digital quantum simulation protocols while adapting naturally to the experimental constraints. Furthermore, this method allows us to reduce not only digital errors but also experimental errors in quantum gates. Indeed, by adding ancillary qubits, we design a modular gate made out of imperfect gates, whose fidelity is larger than the fidelity of any of the constituent gates. Finally, we prove that the proposed modular gates are resilient against different gate errors.

  23. A hierarchical exact accelerated stochastic simulation algorithm

    PubMed Central

    Orendorff, David; Mjolsness, Eric

    2012-01-01

    A new algorithm, “HiER-leap” (hierarchical exact reaction-leaping), is derived which improves on the computational properties of the ER-leap algorithm for exact accelerated simulation of stochastic chemical kinetics. Unlike ER-leap, HiER-leap utilizes a hierarchical or divide-and-conquer organization of reaction channels into tightly coupled “blocks” and is thereby able to speed up systems with many reaction channels. Like ER-leap, HiER-leap is based on the use of upper and lower bounds on the reaction propensities to define a rejection sampling algorithm with inexpensive early rejection and acceptance steps. But in HiER-leap, large portions of intra-block sampling may be done in parallel. An accept/reject step is used to synchronize across blocks. This method scales well when many reaction channels are present and has desirable asymptotic properties. The algorithm is exact, parallelizable and achieves a significant speedup over the stochastic simulation algorithm and ER-leap on certain problems. This algorithm offers a potentially important step towards efficient in silico modeling of entire organisms. PMID:23231214

  24. Acoustic simulation in architecture with parallel algorithm

    NASA Astrophysics Data System (ADS)

    Li, Xiaohong; Zhang, Xinrong; Li, Dan

    2004-03-01

    To address the complexity of architectural environments and the real-time simulation of architectural acoustics, a parallel radiosity algorithm was developed. The distribution of sound energy in a scene is solved with this method. The impulse responses between sources and receivers in each frequency segment, calculated with multiple processes, are then combined into a whole frequency response. A numerical experiment shows that the parallel algorithm can improve the acoustic simulation efficiency for complex scenes.

  25. Extrapolated gradientlike algorithms for molecular dynamics and celestial mechanics simulations.

    PubMed

    Omelyan, I P

    2006-09-01

    A class of symplectic algorithms is introduced to integrate the equations of motion in many-body systems. The algorithms are derived on the basis of an advanced gradientlike decomposition approach. Its main advantage over the standard gradient scheme is the avoidance of time-consuming evaluations of force gradients by force extrapolation without any loss of precision. As a result, the efficiency of the integration improves significantly. The algorithms obtained are analyzed and optimized using an error-function theory. The best among them are tested in actual molecular dynamics and celestial mechanics simulations for comparison with well-known nongradient and gradient algorithms such as the Störmer-Verlet, Runge-Kutta, Cowell-Numerov, Forest-Ruth, Suzuki-Chin, and others. It is demonstrated that for moderate and high accuracy, the extrapolated algorithms should be considered as the most efficient for the integration of motion in molecular dynamics simulations. PMID:17025782

  26. Genetic Algorithms for Digital Quantum Simulations.

    PubMed

    Las Heras, U; Alvarez-Rodriguez, U; Solano, E; Sanz, M

    2016-06-10

    We propose genetic algorithms, which are robust optimization techniques inspired by natural selection, to enhance the versatility of digital quantum simulations. In this sense, we show that genetic algorithms can be employed to increase the fidelity and optimize the resource requirements of digital quantum simulation protocols while adapting naturally to the experimental constraints. Furthermore, this method allows us to reduce not only digital errors but also experimental errors in quantum gates. Indeed, by adding ancillary qubits, we design a modular gate made out of imperfect gates, whose fidelity is larger than the fidelity of any of the constituent gates. Finally, we prove that the proposed modular gates are resilient against different gate errors. PMID:27341220

  27. Computational algorithms for simulations in atmospheric optics.

    PubMed

    Konyaev, P A; Lukin, V P

    2016-04-20

    A computer simulation technique for atmospheric and adaptive optics based on parallel programing is discussed. A parallel propagation algorithm is designed and a modified spectral-phase method for computer generation of 2D time-variant random fields is developed. Temporal power spectra of Laguerre-Gaussian beam fluctuations are considered as an example to illustrate the applications discussed. Implementation of the proposed algorithms using Intel MKL and IPP libraries and NVIDIA CUDA technology is shown to be very fast and accurate. The hardware system for the computer simulation is an off-the-shelf desktop with an Intel Core i7-4790K CPU operating at a turbo-speed frequency up to 5 GHz and an NVIDIA GeForce GTX-960 graphics accelerator with 1024 1.5 GHz processors. PMID:27140113

  28. Piloted simulation of an on-board trajectory optimization algorithm

    NASA Technical Reports Server (NTRS)

    Price, D. B.; Calise, A. J.; Moerder, D. D.

    1981-01-01

    This paper will describe a real time piloted simulation of algorithms designed for on-board computation of time-optimal intercept trajectories for an F-8 aircraft. The algorithms, which were derived using singular perturbation theory, generate commands that are displayed to the pilot on flight director needles on the 8-ball. By flying the airplane so as to zero the horizontal and vertical needles, the pilot flies an approximation to a time-optimal intercept trajectory. The various display and computation modes that are available will be described and results will be presented illustrating the performance of the algorithms with a pilot in the loop.

  29. Open cherry picker simulation results

    NASA Technical Reports Server (NTRS)

    Nathan, C. A.

    1982-01-01

    The simulation program associated with a key piece of support equipment to be used to service satellites directly from the Shuttle is assessed. The Open Cherry Picker (OCP) is a manned platform mounted at the end of the remote manipulator system (RMS) and is used to enhance extra vehicular activities (EVA). The results of simulations performed on the Grumman Large Amplitude Space Simulator (LASS) and at the JSC Water Immersion Facility are summarized.

  30. Fast computation algorithms for speckle pattern simulation

    SciTech Connect

    Nascov, Victor; Samoilă, Cornel; Ursuţiu, Doru

    2013-11-13

    We present our development of a series of efficient computation algorithms, generally usable to calculate light diffraction and particularly for speckle pattern simulation. We use mainly the scalar diffraction theory in the form of the Rayleigh-Sommerfeld diffraction formula and its Fresnel approximation. Our algorithms are based on a special form of the convolution theorem and the Fast Fourier Transform. They are able to evaluate the diffraction formula much faster than direct computation, and we have circumvented the restrictions on the relative sizes of the input and output domains encountered in commonly used procedures. Moreover, the input and output planes can be tilted with respect to each other, and the output domain can be shifted off-axis.
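
    A minimal sketch of the convolution-theorem approach for the Fresnel case: propagate a field by multiplying its angular spectrum by the Fresnel transfer function. The grid, wavelength, aperture, and distance are illustrative; the fixed-size FFT grid used here is exactly the sampling restriction the paper's algorithms relax.

      import numpy as np

      def fresnel_propagate(field, wavelength, dx, z):
          """Fresnel diffraction as an FFT convolution: multiply the angular spectrum
          of the field by the Fresnel transfer function for propagation distance z."""
          n = field.shape[0]
          fx = np.fft.fftfreq(n, d=dx)                # spatial frequency grid
          FX, FY = np.meshgrid(fx, fx)
          H = np.exp(-1j * np.pi * wavelength * z * (FX ** 2 + FY ** 2))
          return np.fft.ifft2(np.fft.fft2(field) * H)

      # 0.5 mm square aperture on a 512 x 512 grid of 10 um pixels, HeNe illumination.
      n, dx = 512, 10e-6
      aperture = np.zeros((n, n))
      aperture[n // 2 - 25:n // 2 + 25, n // 2 - 25:n // 2 + 25] = 1.0
      out = fresnel_propagate(aperture, wavelength=633e-9, dx=dx, z=0.05)
      print(np.abs(out).max())                        # near-field diffraction pattern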

  31. Concluding Report: Quantitative Tomography Simulations and Reconstruction Algorithms

    SciTech Connect

    Aufderheide, M B; Martz, H E; Slone, D M; Jackson, J A; Schach von Wittenau, A E; Goodman, D M; Logan, C M; Hall, J M

    2002-02-01

    In this report we describe the original goals and final achievements of this Laboratory Directed Research and Development project. The Quantitative Tomography Simulations and Reconstruction Algorithms project (99-ERD-015) was funded as a multi-directorate, three-year effort to advance the state of the art in radiographic simulation and tomographic reconstruction by improving simulation and including this simulation in the tomographic reconstruction process. Goals were to improve the accuracy of radiographic simulation, and to couple advanced radiographic simulation tools with a robust, many-variable optimization algorithm. In this project, we were able to demonstrate accuracy in X-ray simulation at the 2% level, which is an improvement of roughly a factor of 5 in accuracy, and we have successfully coupled our simulation tools with the CCG (Constrained Conjugate Gradient) optimization algorithm, allowing reconstructions that include spectral effects and blurring. Another result of the project was the assembly of a low-scatter X-ray imaging facility for use in nondestructive evaluation applications. We conclude with a discussion of future work.

  32. Parallel algorithm strategies for circuit simulation.

    SciTech Connect

    Thornquist, Heidi K.; Schiek, Richard Louis; Keiter, Eric Richard

    2010-01-01

    Circuit simulation tools (e.g., SPICE) have become invaluable in the development and design of electronic circuits. However, they have been pushed to their performance limits in addressing circuit design challenges that come from the technology drivers of smaller feature scales and higher integration. Improving the performance of circuit simulation tools by exploiting new opportunities in widely available multi-processor architectures is a logical next step. Unfortunately, not all traditional simulation applications are inherently parallel, and quickly adapting mature application codes (even codes designed as parallel applications) to new parallel paradigms can be prohibitively difficult. In general, performance is influenced by many choices: hardware platform, runtime environment, languages and compilers used, algorithm choice and implementation, and more. In this complicated environment, the use of mini-applications, small self-contained proxies for real applications, is an excellent approach for rapidly exploring the parameter space of all these choices. In this report we present a multi-core performance study of Xyce, a transistor-level circuit simulation tool, and describe the future development of a mini-application for circuit simulation.

  33. Efficient algorithms for wildland fire simulation

    NASA Astrophysics Data System (ADS)

    Kondratenko, Volodymyr Y.

    In this dissertation, we develop multiple-source shortest path algorithms and examine their importance in real-world applications such as wildfire modeling. The theoretical basis and its implementation in the Weather Research and Forecasting (WRF) model coupled with the fire spread code SFIRE (the WRF-SFIRE model) are described. We present a data assimilation method that gives the fire spread model the ability to start the fire simulation from an observed fire perimeter instead of an ignition point. While the model is running, the fire state in the model changes in accordance with newly arriving data by data assimilation. As the fire state changes, the atmospheric state (which is strongly affected by heat flux) does not stay consistent with the fire state. The main difficulty of this methodology occurs in coupled fire-atmosphere models, because once the fire state is modified to match a given starting perimeter, the atmospheric circulation is no longer in sync with it. One possible solution to this problem is the formation of an artificial ignition-time history from an earlier fire state, which is later used to replay the fire progression to the new perimeter with the proper heat fluxes fed into the atmosphere, so that the fire-induced circulation is established. In this work, we develop efficient algorithms that start from the fire arrival times given at a set of points (called a perimeter) and create an artificial fire ignition-time and fire-spread-rate history. Different algorithms were developed to suit possible demands of the user, such as implementation in parallel programming, minimization of the required number of iterations and memory use, and use of the rate of spread as a time-dependent variable. For the algorithms that deal with a homogeneous rate of spread, it was proven that the fire arrival times they produce are optimal. It was also shown that starting from an arbitrary initial state the algorithms have

  34. Final Technical Report "Multiscale Simulation Algorithms for Biochemical Systems"

    SciTech Connect

    Petzold, Linda R.

    2012-10-25

    Biochemical systems are inherently multiscale and stochastic. In microscopic systems formed by living cells, the small numbers of reactant molecules can result in dynamical behavior that is discrete and stochastic rather than continuous and deterministic. An analysis tool that respects these dynamical characteristics is the stochastic simulation algorithm (SSA, Gillespie, 1976), a numerical simulation procedure that is essentially exact for chemical systems that are spatially homogeneous or well stirred. Despite recent improvements, as a procedure that simulates every reaction event, the SSA is necessarily inefficient for most realistic problems. There are two main reasons for this, both arising from the multiscale nature of the underlying problem: (1) stiffness, i.e. the presence of multiple timescales, the fastest of which are stable; and (2) the need to include in the simulation both species that are present in relatively small quantities and should be modeled by a discrete stochastic process, and species that are present in larger quantities and are more efficiently modeled by a deterministic differential equation (or at some scale in between). This project has focused on the development of fast and adaptive algorithms, and the fundamental theory upon which they must be based, for the multiscale simulation of biochemical systems. Areas addressed by this project include: (1) theoretical and practical foundations for accelerated discrete stochastic simulation (tau-leaping); (2) dealing with stiffness (fast reactions) in an efficient and well-justified manner in discrete stochastic simulation; (3) development of adaptive multiscale algorithms for spatially homogeneous discrete stochastic simulation; and (4) development of high-performance SSA algorithms.

  35. An advanced dispatch simulator with advanced dispatch algorithm

    SciTech Connect

    Kafka, R.J.; Fink, L.H.; Balu, N.J.; Crim, H.G.

    1989-01-01

    This paper reports on an interactive automatic generation control (AGC) simulator. Improved and timely information regarding fossil fired plant performance is potentially useful in the economic dispatch of system generating units. Commonly used economic dispatch algorithms are not able to take full advantage of this information. The dispatch simulator was developed to test and compare economic dispatch algorithms which might be able to show improvement over standard economic dispatch algorithms if accurate unit information were available. This dispatch simulator offers substantial improvements over previously available simulators. In addition, it contains an advanced dispatch algorithm which shows control and performance advantages over traditional dispatch algorithms for both plants and electric systems.

  36. Simulating and Synthesizing Substructures Using Neural Network and Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Liu, Youhua; Kapania, Rakesh K.; VanLandingham, Hugh F.

    1997-01-01

    The feasibility of simulating and synthesizing substructures by computational neural network models is illustrated by investigating a statically indeterminate beam, using both 1-D and 2-D plane-stress modelling. The beam can be decomposed into two cantilevers with free-end loads. By training neural networks to simulate the cantilever responses to different loads, the original beam problem can be solved as a match-up between two subsystems under compatible interface conditions. Genetic algorithms are successfully used to solve the match-up problem. Simulated results are found to be in good agreement with the analytical or FEM solutions.

  37. Parallel conjugate gradient algorithms for manipulator dynamic simulation

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Scheld, Robert E.

    1989-01-01

    Parallel conjugate gradient algorithms for the computation of multibody dynamics are developed for the specialized case of a robot manipulator. For an n-dimensional positive-definite linear system, the classical conjugate gradient (CCG) algorithm is guaranteed to converge in n iterations, each with a computation cost of O(n); this leads to a total computational cost of O(n^2) on a serial processor. Conjugate gradient algorithms are presented that provide greater efficiency by using a preconditioner, which reduces the number of iterations required, and by exploiting parallelism, which reduces the cost of each iteration. Two preconditioned conjugate gradient (PCG) algorithms are proposed which respectively use a diagonal and a tridiagonal matrix, composed of the diagonal and tridiagonal elements of the mass matrix, as preconditioners. Parallel algorithms are developed to compute the preconditioners and their inversions in O(log_2 n) steps using n processors. A parallel algorithm is also presented which, on the same architecture, achieves a computational time of O(log_2 n) for each iteration. Simulation results for a seven-degree-of-freedom manipulator are presented. Variants of the proposed algorithms are also developed which can be efficiently implemented on the Robot Mathematics Processor (RMP).
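
    A serial sketch of the first proposed variant, conjugate gradient with a diagonal (Jacobi) preconditioner built from the mass matrix diagonal; the O(log n) parallel machinery is omitted, and the random 7x7 positive-definite test matrix stands in for a real manipulator mass matrix.

      import numpy as np

      def pcg(A, b, M_inv_diag, tol=1e-10, max_iter=None):
          """Preconditioned conjugate gradient with a diagonal (Jacobi) preconditioner:
          applying M^-1 is an elementwise multiply, so each iteration stays O(n^2)."""
          n = len(b)
          max_iter = max_iter or n
          x = np.zeros(n)
          r = b - A @ x
          z = M_inv_diag * r                      # apply M^-1
          p = z.copy()
          rz = r @ z
          for k in range(max_iter):
              Ap = A @ p
              alpha = rz / (p @ Ap)
              x += alpha * p
              r -= alpha * Ap
              if np.linalg.norm(r) < tol:
                  return x, k + 1
              z = M_inv_diag * r
              rz_new = r @ z
              p = z + (rz_new / rz) * p
              rz = rz_new
          return x, max_iter

      # Example: random symmetric positive-definite "mass matrix" for a 7-DOF arm.
      rng = np.random.default_rng(0)
      Q = rng.normal(size=(7, 7))
      A = Q @ Q.T + 7 * np.eye(7)
      b = rng.normal(size=7)
      x, iters = pcg(A, b, M_inv_diag=1.0 / np.diag(A))
      print(iters, np.linalg.norm(A @ x - b))     # converges in at most n iterations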

  38. Parameter estimation for chaotic systems using a hybrid adaptive cuckoo search with simulated annealing algorithm.

    PubMed

    Sheng, Zheng; Wang, Jun; Zhou, Shudao; Zhou, Bihua

    2014-03-01

    This paper introduces a novel hybrid optimization algorithm to estimate the parameters of chaotic systems. To address the weaknesses of the traditional cuckoo search algorithm, the proposed adaptive cuckoo search with simulated annealing algorithm is presented, which incorporates an adaptive parameter-adjusting operation and a simulated annealing operation into the cuckoo search algorithm. Normally, the parameters of the cuckoo search algorithm are kept constant, which may decrease the efficiency of the algorithm. To balance and enhance the accuracy and convergence rate of the cuckoo search algorithm, the adaptive operation is presented to tune the parameters properly. Moreover, the local search capability of the cuckoo search algorithm is relatively weak, which may decrease the quality of optimization. So the simulated annealing operation is merged into the cuckoo search algorithm to enhance the local search ability and improve the accuracy and reliability of the results. The functionality of the proposed hybrid algorithm is investigated through the Lorenz chaotic system under noiseless and noisy conditions, respectively. The numerical results demonstrate that the method can estimate parameters efficiently and accurately under both conditions. Finally, the results are compared with the traditional cuckoo search algorithm, genetic algorithm, and particle swarm optimization algorithm. Simulation results demonstrate the effectiveness and superior performance of the proposed algorithm. PMID:24697395
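
    A minimal sketch of the hybrid idea, cuckoo search with Levy-flight moves plus an SA-style acceptance rule, not the authors' exact adaptive scheme; the population size, cooling schedule, step scaling, and toy objective are all illustrative assumptions.

      import math, random

      def levy_step(rng, beta=1.5):
          """Mantegna's algorithm for a Levy-stable step length."""
          sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
                   (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
          return rng.gauss(0, sigma) / abs(rng.gauss(0, 1)) ** (1 / beta)

      def cuckoo_sa(objective, lower, upper, n_nests=15, n_iter=2000, pa=0.25, t0=1.0, seed=0):
          """Cuckoo search where a new egg may replace a better nest with Boltzmann
          probability (the SA ingredient), which helps escape local minima."""
          rng = random.Random(seed)
          d = len(lower)
          nests = [[rng.uniform(lower[i], upper[i]) for i in range(d)] for _ in range(n_nests)]
          fit = [objective(x) for x in nests]
          for k in range(n_iter):
              temp = t0 * (1 - k / n_iter) + 1e-12          # cooling schedule
              j = rng.randrange(n_nests)
              cand = [min(max(nests[j][i] + 0.01 * levy_step(rng) * (upper[i] - lower[i]),
                              lower[i]), upper[i]) for i in range(d)]
              fc = objective(cand)
              i = rng.randrange(n_nests)                    # compare against a random nest
              if fc < fit[i] or rng.random() < math.exp(-(fc - fit[i]) / temp):
                  nests[i], fit[i] = cand, fc
              # Abandon a fraction pa of the worst nests, as in standard cuckoo search.
              if rng.random() < pa:
                  w = max(range(n_nests), key=lambda m: fit[m])
                  nests[w] = [rng.uniform(lower[i], upper[i]) for i in range(d)]
                  fit[w] = objective(nests[w])
          best = min(range(n_nests), key=lambda m: fit[m])
          return nests[best], fit[best]

      f = lambda v: sum(vi * vi for vi in v)                # toy objective
      print(cuckoo_sa(f, [-5, -5], [5, 5]))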

  39. Parameter estimation for chaotic systems using a hybrid adaptive cuckoo search with simulated annealing algorithm

    SciTech Connect

    Sheng, Zheng; Wang, Jun; Zhou, Bihua; Zhou, Shudao

    2014-03-15

    This paper introduces a novel hybrid optimization algorithm to estimate the parameters of chaotic systems. To address the weaknesses of the traditional cuckoo search algorithm, the proposed adaptive cuckoo search with simulated annealing algorithm is presented, which incorporates an adaptive parameter-adjusting operation and a simulated annealing operation into the cuckoo search algorithm. Normally, the parameters of the cuckoo search algorithm are kept constant, which may decrease the efficiency of the algorithm. To balance and enhance the accuracy and convergence rate of the cuckoo search algorithm, the adaptive operation is presented to tune the parameters properly. Moreover, the local search capability of the cuckoo search algorithm is relatively weak, which may decrease the quality of optimization. So the simulated annealing operation is merged into the cuckoo search algorithm to enhance the local search ability and improve the accuracy and reliability of the results. The functionality of the proposed hybrid algorithm is investigated through the Lorenz chaotic system under noiseless and noisy conditions, respectively. The numerical results demonstrate that the method can estimate parameters efficiently and accurately under both conditions. Finally, the results are compared with the traditional cuckoo search algorithm, genetic algorithm, and particle swarm optimization algorithm. Simulation results demonstrate the effectiveness and superior performance of the proposed algorithm.

  40. Parameter estimation for chaotic systems using a hybrid adaptive cuckoo search with simulated annealing algorithm

    NASA Astrophysics Data System (ADS)

    Sheng, Zheng; Wang, Jun; Zhou, Shudao; Zhou, Bihua

    2014-03-01

    This paper introduces a novel hybrid optimization algorithm to estimate the parameters of chaotic systems. To address the weaknesses of the traditional cuckoo search algorithm, the proposed adaptive cuckoo search with simulated annealing algorithm is presented, which incorporates an adaptive parameter-adjusting operation and a simulated annealing operation into the cuckoo search algorithm. Normally, the parameters of the cuckoo search algorithm are kept constant, which may decrease the efficiency of the algorithm. To balance and enhance the accuracy and convergence rate of the cuckoo search algorithm, the adaptive operation is presented to tune the parameters properly. Moreover, the local search capability of the cuckoo search algorithm is relatively weak, which may decrease the quality of optimization. So the simulated annealing operation is merged into the cuckoo search algorithm to enhance the local search ability and improve the accuracy and reliability of the results. The functionality of the proposed hybrid algorithm is investigated through the Lorenz chaotic system under noiseless and noisy conditions, respectively. The numerical results demonstrate that the method can estimate parameters efficiently and accurately under both conditions. Finally, the results are compared with the traditional cuckoo search algorithm, genetic algorithm, and particle swarm optimization algorithm. Simulation results demonstrate the effectiveness and superior performance of the proposed algorithm.

  41. Efficient algorithms for distributed simulation and related problems

    SciTech Connect

    Kumar, D.

    1987-01-01

    This thesis presents efficient algorithms for distributed simulation, and for the related problems of termination detection and sequential simulation. Distributed simulation algorithms are presented that apply to special classes of systems and require almost no overhead messages. By contrast, previous distributed simulation algorithms, although applicable to the general class of any discrete-event system, usually require too many overhead messages. First, a simple distributed simulation algorithm with nearly zero overhead messages is defined for simulating feedforward systems. An approximate method is developed to predict its performance in simulating a class of feedforward queuing networks. Performance of the scheme is evaluated in simulating specific subclasses of these queuing networks, and it is shown that the scheme offers high performance for serial-parallel networks. Next, another distributed simulation scheme is defined for a class of distributed systems whose topologies may have cycles. One important problem in devising distributed simulation algorithms is that of efficient detection of termination. With this in mind, a class of termination-detection algorithms using markers is devised. Finally, a new sequential simulation algorithm is developed, based on a distributed one. This algorithm often reduces the event-list manipulations of traditional event-list-driven simulation.

  42. A spectral unaveraged algorithm for free electron laser simulations

    SciTech Connect

    Andriyash, I.A.; Lehe, R.; Malka, V.

    2015-02-01

    We propose and discuss a numerical method to model electromagnetic emission from oscillating relativistic charged particles and its coherent amplification. The developed technique is well suited for free electron laser simulations, but it may also be useful for a wider range of physical problems involving resonant field-particle interactions. The algorithm integrates the unaveraged coupled equations for the particles and the electromagnetic fields in a discrete spectral domain. Using this algorithm, it is possible to perform full three-dimensional or axisymmetric simulations of short-wavelength amplification. In this paper we describe the method and its implementation, and we present examples of free electron laser simulations, comparing the results with those provided by commonly known free electron laser codes.

  43. A parallel algorithm for implicit depletant simulations

    NASA Astrophysics Data System (ADS)

    Glaser, Jens; Karas, Andrew S.; Glotzer, Sharon C.

    2015-11-01

    We present an algorithm to simulate the many-body depletion interaction between anisotropic colloids in an implicit way, integrating out the degrees of freedom of the depletants, which we treat as an ideal gas. Because the depletant particles are statistically independent and the depletion interaction is short-ranged, depletants are randomly inserted in parallel into the excluded volume surrounding a single translated and/or rotated colloid. A configurational bias scheme is used to enhance the acceptance rate. The method is validated and benchmarked both on multi-core processors and graphics processing units for the case of hard spheres, hemispheres, and discoids. With depletants, we report novel cluster phases in which hemispheres first assemble into spheres, which then form ordered hcp/fcc lattices. The method is significantly faster than any method without cluster moves and that tracks depletants explicitly, for systems of colloid packing fraction ϕc < 0.50, and additionally enables simulation of the fluid-solid transition.

  44. Adaptively resizing populations: Algorithm, analysis, and first results

    NASA Technical Reports Server (NTRS)

    Smith, Robert E.; Smuda, Ellen

    1993-01-01

    Deciding on an appropriate population size for a given Genetic Algorithm (GA) application can often be critical to the algorithm's success. Too small, and the GA can fall victim to sampling error, affecting the efficacy of its search. Too large, and the GA wastes computational resources. Although advice exists for sizing GA populations, much of this advice involves theoretical aspects that are not accessible to the novice user. An algorithm for adaptively resizing GA populations is suggested. This algorithm is based on recent theoretical developments that relate population size to schema fitness variance. The suggested algorithm is developed theoretically, and simulated with expected-value equations. The algorithm is then tested on a problem where population sizing can mislead the GA. The work presented suggests that the population sizing algorithm may be a viable way to eliminate the population sizing decision from the application of GAs.

  45. Algorithm for Simulating Atmospheric Turbulence and Aeroelastic Effects on Simulator Motion Systems

    NASA Technical Reports Server (NTRS)

    Ercole, Anthony V.; Cardullo, Frank M.; Kelly, Lon C.; Houck, Jacob A.

    2012-01-01

    Atmospheric turbulence produces high frequency accelerations in aircraft, typically greater than the response to pilot input. Motion system equipped flight simulators must present cues representative of the aircraft response to turbulence in order to maintain the integrity of the simulation. Currently, turbulence motion cueing produced by flight simulator motion systems has been less than satisfactory because the turbulence profiles have been attenuated by the motion cueing algorithms. This report presents a new turbulence motion cueing algorithm, referred to as the augmented turbulence channel. Like the previous turbulence algorithms, the output of the channel only augments the vertical degree of freedom of motion. This algorithm employs a parallel aircraft model and an optional high bandwidth cueing filter. Simulation of aeroelastic effects is also an area where frequency content must be preserved by the cueing algorithm. The current aeroelastic implementation uses a similar secondary channel that supplements the primary motion cue. Two studies were conducted using the NASA Langley Visual Motion Simulator and Cockpit Motion Facility to evaluate the effect of the turbulence channel and aeroelastic model on pilot control input. Results indicate that the pilot is better correlated with the aircraft response, when the augmented channel is in place.

  46. The Aquarius Salinity Retrieval Algorithm: Early Results

    NASA Technical Reports Server (NTRS)

    Meissner, Thomas; Wentz, Frank J.; Lagerloef, Gary; LeVine, David

    2012-01-01

    The Aquarius L-band radiometer/scatterometer system is designed to provide monthly salinity maps at 150 km spatial scale to a 0.2 psu accuracy. The sensor was launched on June 10, 2011, aboard the Argentine CONAE SAC-D spacecraft. The L-band radiometers and the scatterometer have been taking science data observations since August 25, 2011. The first part of this presentation gives an overview of the Aquarius salinity retrieval algorithm. The instrument calibration converts Aquarius radiometer counts into antenna temperatures (TA). The salinity retrieval algorithm converts those TA into brightness temperatures (TB) at a flat ocean surface. As a first step, contributions arising from the intrusion of solar, lunar and galactic radiation are subtracted. The antenna pattern correction (APC) removes the effects of cross-polarization contamination and spillover. The Aquarius radiometer measures the 3rd Stokes parameter in addition to vertical (v) and horizontal (h) polarizations, which allows for an easy removal of ionospheric Faraday rotation. The atmospheric absorption at L-band is almost entirely due to O2, which can be calculated based on auxiliary input fields from numerical weather prediction models and then successively removed from the TB. The final step in the TA to TB conversion is the correction for the roughness of the sea surface due to wind, which is based on the radar backscatter measurements of the scatterometer. The TB of the flat ocean surface can then be matched to a salinity value using a surface emission model that is based on a model for the dielectric constant of sea water and an auxiliary field for the sea surface temperature. In the current processing (as of writing this abstract) only v-pol TB are used for this last step, and NCEP winds are used for the roughness correction. Before the salinity algorithm can be operationally implemented and its accuracy assessed by comparison against in situ measurements, an extensive calibration and validation
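
    The chain of corrections can be summarized in a short sketch. The function below is a one-layer, two-polarization caricature of the TA-to-TB conversion described above; every argument is an assumed auxiliary input, and the Faraday and atmospheric steps are deliberately simplified relative to the operational processing.

      import numpy as np

      def tb_flat_surface(ta_v, ta_h, t_celestial, apc_inv, faraday_angle,
                          t_up, t_down, trans, rough_v, rough_h):
          # 1. subtract solar / lunar / galactic intrusions
          v, h = ta_v - t_celestial, ta_h - t_celestial
          # 2. antenna pattern correction: invert cross-pol/spillover coupling
          v, h = apc_inv @ np.array([v, h])
          # 3. undo ionospheric Faraday rotation (angle inferred from the 3rd
          #    Stokes parameter; only the polarization difference is affected)
          q = (v - h) / np.cos(2.0 * faraday_angle)
          v, h = (v + h + q) / 2.0, (v + h - q) / 2.0
          # 4. remove O2 emission/absorption for a single-layer atmosphere
          v = (v - t_up) / trans - t_down
          h = (h - t_up) / trans - t_down
          # 5. subtract the scatterometer-derived wind-roughness excess
          return v - rough_v, h - rough_h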

  7. Large Eddy Simulations using Lattice Boltzmann algorithms. Final report

    SciTech Connect

    Serling, J.D.

    1993-09-28

    This report contains the results of a study performed to implement eddy-viscosity models for Large-Eddy-Simulations (LES) into Lattice Boltzmann (LB) algorithms for simulating fluid flows. This implementation requires modification of the LB method of simulating the incompressible Navier-Stokes equations to allow simulation of the filtered Navier-Stokes equations with some subgrid model for the Reynolds stress term. We demonstrate that the LB method can indeed be used for LES by simply locally adjusting the value of the BGK relaxation time to obtain the desired eddy-viscosity. Thus, many forms of eddy-viscosity models including the standard Smagorinsky model or the Dynamic model may be implemented using LB algorithms. Since underresolved LB simulations often lead to instability, the LES model actually serves to stabilize the method. An alternative method of ensuring stability is presented which requires that entropy increase during the collision step of the LB method. Thus, an alternative collision operator is locally applied if the entropy becomes too low. This stable LB method then acts as an LES scheme that effectively introduces its own eddy viscosity to damp short wavelength oscillations.
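
    The local adjustment of the relaxation time mentioned above follows directly from the lattice relation between viscosity and the BGK relaxation time. A minimal sketch in lattice units (prefactor conventions vary between LB papers):

      CS2 = 1.0 / 3.0        # lattice speed of sound squared (e.g. D2Q9, D3Q19)

      def effective_tau(nu0, strain_norm, c_smag=0.1, delta=1.0):
          # nu0         : molecular kinematic viscosity (lattice units)
          # strain_norm : resolved strain-rate magnitude |S| = sqrt(2 S_ij S_ij)
          nu_eddy = (c_smag * delta) ** 2 * strain_norm   # Smagorinsky closure
          return (nu0 + nu_eddy) / CS2 + 0.5              # tau = nu / c_s^2 + 1/2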

  8. Daylighting simulation: methods, algorithms, and resources

    SciTech Connect

    Carroll, William L.

    1999-12-01

    This document presents work conducted as part of Subtask C, ''Daylighting Design Tools'', Subgroup C2, ''New Daylight Algorithms'', of the IEA SHC Task 21 and the ECBCS Program Annex 29 ''Daylight in Buildings''. The search for and collection of daylighting analysis methods and algorithms led to two important observations. First, there is a wide range of needs for different types of methods to produce a complete analysis tool. These include: geometry; light modeling; characterization of the natural illumination resource; materials and components properties and representations; and usability issues (interfaces, interoperability, representation of analysis results, etc.). Second, very advantageously, there have been rapid advances in many basic methods in these areas, driven in part by other forces: the commercial computer graphics community (commerce, entertainment); the lighting industry; architectural rendering and visualization for projects; and academia (course materials, research). This has led to a very rich set of information resources with direct applicability to the small daylighting analysis community, and much of this information is in fact available online. Because much of the information about methods and algorithms is now online, an innovative reporting strategy was used: the core formats are electronic, and a printed form is produced only secondarily. The electronic forms include both online WWW pages and a downloadable .PDF file with the same appearance and content. Both electronic forms include live primary and indirect links to actual information sources on the WWW. In most cases, little additional commentary is provided regarding the information links or citations, which allows the report to be very concise. The links are expected to speak for themselves. The report consists of only about 10+ pages, with about 100+ primary links, but with potentially thousands of indirect links. For purposes of

  9. Atmospheric channel for bistatic optical communication: simulation algorithms

    NASA Astrophysics Data System (ADS)

    Belov, V. V.; Tarasenkov, M. V.

    2015-11-01

    Three algorithms for statistical simulation of the impulse response (IR) of the atmospheric optical communication channel are considered: the local estimate algorithm, the double local estimate algorithm, and an algorithm suggested by us. Using the example of a homogeneous molecular atmosphere, it is demonstrated that the double local estimate algorithm and the suggested algorithm are more efficient than the local estimate algorithm. For small optical path lengths the proposed algorithm is more efficient, while for large optical path lengths the double local estimate algorithm is more efficient. Using the proposed algorithm, the communication quality is estimated for a particular case of the atmospheric channel under conditions of intermediate turbidity. The communication quality is characterized by the maximum IR, the time of the maximum IR, the integral IR, and the bandwidth of the communication channel. Calculations of these criteria demonstrated that communication is most efficient when the point of intersection of the directions toward the source and the receiver is closest to the source point.

  10. Motion Cueing Algorithm Modification for Improved Turbulence Simulation

    NASA Technical Reports Server (NTRS)

    Ercole, Anthony V.; Cardullo, Frank M.; Zaychik, Kirill; Kelly, Lon C.; Houck, Jacob

    2009-01-01

    Atmospheric turbulence cueing produced by flight simulator motion systems has been less than satisfactory because the turbulence profiles have been attenuated by the motion cueing algorithms. Cardullo and Ellor initially addressed this problem by directly porting the turbulence model output to the motion system. Reid and Robinson addressed the problem by employing a parallel aircraft model, which is stimulated only by the turbulence inputs, and adding a filter specially designed to pass the higher turbulence frequencies. There have been advances in motion cueing algorithm development at the Man-Machine Systems Laboratory at SUNY Binghamton; in particular, the system used to generate turbulence cues has been studied. The Reid approach, implemented by Telban and Cardullo, was employed to augment the optimal motion cueing algorithm installed at the NASA LaRC Simulation Laboratory, driving the Visual Motion Simulator. In this implementation, the output of the primary flight channel was added to the output of the turbulence channel and then sent through a non-linear cueing filter. The cueing filter is an adaptive filter; therefore, it is not desirable for the output of the turbulence channel to be augmented by this type of filter. The likelihood of the signal becoming divergent was also an issue in this design. After testing on-site it became apparent that the architecture of the turbulence algorithm was generating unacceptable cues. As mentioned above, this cueing algorithm comprised a filter that was designed to operate at low bandwidth, so the turbulence was also filtered, attenuating the cues generated by the model. If any filtering is to be done to the turbulence, it must utilize a filter with a much higher bandwidth, above the frequencies produced by the aircraft response to turbulence. The authors have developed an implementation wherein only the signal from the primary flight channel passes through the nonlinear cueing filter. This paper discusses three

  11. A New Simulation Algorithm Combining Fluid and Kinetic Properties

    NASA Astrophysics Data System (ADS)

    Larson, David; Hewett, Dennis

    2007-11-01

    Complex Particle Kinetics (CPK) [1,2] uses particles with internal degrees of freedom in an effort to simulate the transition between continuum and kinetic dynamics. Recent work [3] has provided a new path towards extending the adaptive particle capabilities of CPK. The resulting algorithm bridges the gap between fluid and kinetic regimes. The method uses an ensemble of macro-particles with a Gaussian spatial profile and a Maxwellian velocity distribution to represent particle distributions in phase space. In addition to the standard PIC quantities of location, drift velocity, mass, and charge, the macro-particles also carry width, thermal velocity, and an internal velocity. The particle shape, internal velocity, and drift velocity respond to internal and external forces. The particles can contract, expand, rotate, and pass through one another. The algorithm allows arbitrary collisionality and functions effectively in the collision-dominated limit. We will present details of the algorithm as well as the results from several simulations. [1] D. W. Hewett, J. Comp. Phys. 189 (2003). [2] D. J. Larson, J. Comp. Phys. 188 (2003). [3] C. Gauger, et al., SIAM J. Numer. Anal. 37 (2000).

  12. Parametric Quantum Search Algorithm as Quantum Walk: A Quantum Simulation

    NASA Astrophysics Data System (ADS)

    Ellinas, Demosthenes; Konstandakis, Christos

    2016-02-01

    Parametric quantum search algorithm (PQSA) is a form of quantum search that results from relaxing the unitarity of the original algorithm. PQSA can naturally be cast in the form of a quantum walk (QW) by means of the formalism of oracle algebra. This is because the completely positive trace-preserving search map used by PQSA admits a unitarization (unitary dilation) a la quantum walk, at the expense of introducing an auxiliary quantum coin-qubit space. The ensuing QW describes a process of spiral motion, chosen to be driven by two unitary Kraus generators that generate planar rotations of the Bloch vector around an axis. The quadratic acceleration of quantum search translates into an equivalent quadratic saving in the number of coin qubits in the QW analogue. The Hamiltonian operator associated with the QW model is obtained and is shown to represent a multi-particle, long-range interacting quantum system that simulates parametric search. Finally, the relation of the PQSA-QW simulator to the QW search algorithm is elucidated.

  13. Direct simulation Monte Carlo method with a focal mechanism algorithm

    NASA Astrophysics Data System (ADS)

    Rachman, Asep Nur; Chung, Tae Woong; Yoshimoto, Kazuo; Yun, Sukyoung

    2015-01-01

    To simulate observations of the radiation pattern of an earthquake, the direct simulation Monte Carlo (DSMC) method is modified by implanting a focal mechanism algorithm. We compare the results of the modified DSMC method (DSMC-2) with those of the original DSMC method (DSMC-1). DSMC-2 yields results as reliable as, or more reliable than, those of DSMC-1 for events with 12 or more recording stations, when observations at hypocentral distances of less than 80 km are given double weight. Not only the number of stations but also other factors, such as rough topography, the magnitude of the event, and the analysis method, influence the reliability of DSMC-2. The most reliable DSMC-2 results are obtained with the best azimuthal coverage and the largest number of stations. The DSMC-2 method requires shorter time steps and a larger number of particles than DSMC-1 to capture a sufficient number of arriving particles in the small-sized receiver.

  14. Developments in Human Centered Cueing Algorithms for Control of Flight Simulator Motion Systems

    NASA Technical Reports Server (NTRS)

    Houck, Jacob A.; Telban, Robert J.; Cardullo, Frank M.

    1997-01-01

    The authors conducted further research with cueing algorithms for control of flight simulator motion systems. A variation of the so-called optimal algorithm was formulated using simulated aircraft angular velocity input as a basis. Models of the human vestibular sensation system, i.e. the semicircular canals and otoliths, are incorporated within the algorithm. Comparisons of angular velocity cueing responses showed a significant improvement over a formulation using angular acceleration input. Results also compared favorably with the coordinated adaptive washout algorithm, yielding similar results for angular velocity cues while eliminating false cues and reducing the tilt rate for longitudinal cues. These results were confirmed in piloted tests on the current motion system at NASA Langley, the Visual Motion Simulator (VMS). Future developments in cueing algorithms proposed by the authors are outlined, and the new motion system, the Cockpit Motion Facility (CMF), where the final evaluation of the cueing algorithms will be conducted, is also described.

  15. Simulating mesoscopic reaction-diffusion systems using the Gillespie algorithm

    SciTech Connect

    Bernstein, David

    2004-12-12

    We examine an application of the Gillespie algorithm to simulating spatially inhomogeneous reaction-diffusion systems in mesoscopic volumes such as cells and microchambers. The method involves discretizing the chamber into elements and modeling the diffusion of chemical species by the movement of molecules between neighboring elements. These transitions are expressed in the form of a set of reactions which are added to the chemical system. The derivation of the rates of these diffusion reactions is by comparison with a finite volume discretization of the heat equation on an unevenly spaced grid. The diffusion coefficient of each species is allowed to be inhomogeneous in space, including discontinuities. The resulting system is solved by the Gillespie algorithm using the fast direct method. We show that in an appropriate limit the method reproduces exact solutions of the heat equation for a purely diffusive system and the nonlinear reaction-rate equation describing the cubic autocatalytic reaction.
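
    The discretization described above is easy to reproduce for a pure diffusion problem. The sketch below runs the direct-method SSA on a 1-D chain of cells with jump "reactions" at rate D/h^2 per molecule between neighbors; it assumes a uniform grid and constant D for brevity, whereas the paper treats uneven grids and spatially varying diffusion coefficients.

      import numpy as np

      rng = np.random.default_rng(2)

      def ssa_diffusion_1d(n0, D, h, t_end):
          n = np.asarray(n0, dtype=float).copy()
          k = D / h**2                           # per-molecule jump propensity
          t = 0.0
          while True:
              a = np.concatenate([k * n[:-1],    # jumps to the right neighbor
                                  k * n[1:]])    # jumps to the left neighbor
              a0 = a.sum()
              if a0 == 0.0:
                  return n
              t += rng.exponential(1.0 / a0)     # time to the next event
              if t > t_end:
                  return n
              j = int(rng.choice(a.size, p=a / a0))
              if j < n.size - 1:                 # right jump out of cell j
                  n[j] -= 1; n[j + 1] += 1
              else:                              # left jump out of cell i+1
                  i = j - (n.size - 1)
                  n[i + 1] -= 1; n[i] += 1

      # 1000 molecules released in the center cell of a 50-cell chamber:
      # ssa_diffusion_1d(1000 * np.eye(1, 50, 25).ravel(), D=1.0, h=0.1, t_end=0.5)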

  16. Simulating mesoscopic reaction-diffusion systems using the Gillespie algorithm.

    PubMed

    Bernstein, David

    2005-04-01

    We examine an application of the Gillespie algorithm to simulating spatially inhomogeneous reaction-diffusion systems in mesoscopic volumes such as cells and microchambers. The method involves discretizing the chamber into elements and modeling the diffusion of chemical species by the movement of molecules between neighboring elements. These transitions are expressed in the form of a set of reactions which are added to the chemical system. The derivation of the rates of these diffusion reactions is by comparison with a finite volume discretization of the heat equation on an unevenly spaced grid. The diffusion coefficient of each species is allowed to be inhomogeneous in space, including discontinuities. The resulting system is solved by the Gillespie algorithm using the fast direct method. We show that in an appropriate limit the method reproduces exact solutions of the heat equation for a purely diffusive system and the nonlinear reaction-rate equation describing the cubic autocatalytic reaction. PMID:15903653

  17. Fast Particle Pair Detection Algorithms for Particle Simulations

    NASA Astrophysics Data System (ADS)

    Iwai, T.; Hong, C.-W.; Greil, P.

    New algorithms with O(N) complexity have been developed for fast particle-pair detection in particle simulations such as the discrete element method (DEM) and molecular dynamics (MD). They exhibit robustness against broad particle size distributions when compared with conventional boxing methods. Nearly the same calculation speed is maintained for particle size distributions ranging from mono-sized to 1:10, whereas the linked-cell method becomes more than 20 times slower. The basic algorithm, level-boxing, uses a variable search range for each particle. The advanced method, multi-level boxing, employs multiple cell layers to reduce the particle size discrepancy. Another method, indexed-level boxing, reduces the size of the cell arrays by introducing a hash procedure to access the cell array, and is effective for sparse particle systems with a large number of particles.
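
    For contrast with the level-boxing variants, the conventional boxing baseline can be written compactly: with a broad size distribution the cell edge must accommodate the largest particle, which is exactly the inefficiency the paper's methods remove. A minimal Python sketch of the broad-phase pair search for spheres:

      import numpy as np
      from collections import defaultdict

      def pair_candidates(pos, radii):
          # Cell edge must fit the largest particle, so many small particles
          # can crowd into one cell; this is the weakness level-boxing removes.
          cell = 2.0 * radii.max()
          grid = defaultdict(list)
          for i, p in enumerate(pos):
              grid[tuple((p // cell).astype(int))].append(i)
          pairs = []
          for (cx, cy, cz), members in grid.items():
              for dx in (-1, 0, 1):              # scan the 27 surrounding cells
                  for dy in (-1, 0, 1):
                      for dz in (-1, 0, 1):
                          for j in grid.get((cx + dx, cy + dy, cz + dz), ()):
                              for i in members:
                                  if i < j and np.linalg.norm(pos[i] - pos[j]) <= radii[i] + radii[j]:
                                      pairs.append((i, j))
          return pairs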

  18. D-leaping: Accelerating stochastic simulation algorithms for reactions with delays

    SciTech Connect

    Bayati, Basil; Chatelain, Philippe; Koumoutsakos, Petros

    2009-09-01

    We propose a novel, accelerated algorithm for the approximate stochastic simulation of biochemical systems with delays. The present work extends existing accelerated algorithms by distributing, in a time adaptive fashion, the delayed reactions so as to minimize the computational effort while preserving their accuracy. The accuracy of the present algorithm is assessed by comparing its results to those of the corresponding delay differential equations for a representative biochemical system. In addition, the fluctuations produced from the present algorithm are comparable to those from an exact stochastic simulation with delays. The algorithm is used to simulate biochemical systems that model oscillatory gene expression. The results indicate that the present algorithm is competitive with existing works for several benchmark problems while it is orders of magnitude faster for certain systems of biochemical reactions.

  19. Adaptive mesh and algorithm refinement using direct simulation Monte Carlo

    SciTech Connect

    Garcia, A.L.; Bell, J.B.; Crutchfield, W.Y.; Alder, B.J.

    1999-09-01

    Adaptive mesh and algorithm refinement (AMAR) embeds a particle method within a continuum method at the finest level of an adaptive mesh refinement (AMR) hierarchy. The coupling between the particle region and the overlaying continuum grid is algorithmically equivalent to that between the fine and coarse levels of AMR. Direct simulation Monte Carlo (DSMC) is used as the particle algorithm embedded within a Godunov-type compressible Navier-Stokes solver. Several examples are presented and compared with purely continuum calculations.

  20. Duality quantum algorithm efficiently simulates open quantum systems

    PubMed Central

    Wei, Shi-Jie; Ruan, Dong; Long, Gui-Lu

    2016-01-01

    Because of inevitable coupling with the environment, nearly all practical quantum systems are open systems, whose evolution is not necessarily unitary. In this paper, we propose a duality quantum algorithm for simulating the Hamiltonian evolution of an open quantum system. In contrast to unitary evolution in a usual quantum computer, the evolution operator in a duality quantum computer is a linear combination of unitary operators. In this duality quantum algorithm, the time evolution of the open quantum system is realized by using Kraus operators, which are naturally implemented in a duality quantum computer. This duality quantum algorithm has two distinct advantages compared to existing quantum simulation algorithms with unitary evolution operations. First, the query complexity of the algorithm is O(d^3), in contrast to O(d^4) for the existing unitary simulation algorithm, where d is the dimension of the open quantum system. Second, by using a truncated Taylor series of the evolution operators, this duality quantum algorithm provides an exponential improvement in precision compared with the previous unitary simulation algorithm. PMID:27464855
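
    The Kraus-operator evolution the algorithm implements has a compact classical form, rho -> sum_i K_i rho K_i^dag. The numpy check below illustrates it for a single-qubit amplitude-damping channel; it verifies the map on a density matrix and is of course not the duality quantum circuit itself.

      import numpy as np

      def kraus_step(rho, kraus_ops):
          # rho -> sum_i K_i rho K_i^dagger (completely positive, trace preserving)
          return sum(K @ rho @ K.conj().T for K in kraus_ops)

      g = 0.3                                        # decay probability per step
      K0 = np.array([[1.0, 0.0], [0.0, np.sqrt(1.0 - g)]])
      K1 = np.array([[0.0, np.sqrt(g)], [0.0, 0.0]])
      rho = np.array([[0.0, 0.0], [0.0, 1.0]])       # start in |1><1|
      rho = kraus_step(rho, [K0, K1])                # population relaxes toward |0>
      assert np.isclose(np.trace(rho), 1.0)          # trace is preserved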

  1. Duality quantum algorithm efficiently simulates open quantum systems

    NASA Astrophysics Data System (ADS)

    Wei, Shi-Jie; Ruan, Dong; Long, Gui-Lu

    2016-07-01

    Because of inevitable coupling with the environment, nearly all practical quantum systems are open systems, whose evolution is not necessarily unitary. In this paper, we propose a duality quantum algorithm for simulating the Hamiltonian evolution of an open quantum system. In contrast to unitary evolution in a usual quantum computer, the evolution operator in a duality quantum computer is a linear combination of unitary operators. In this duality quantum algorithm, the time evolution of the open quantum system is realized by using Kraus operators, which are naturally implemented in a duality quantum computer. This duality quantum algorithm has two distinct advantages compared to existing quantum simulation algorithms with unitary evolution operations. First, the query complexity of the algorithm is O(d^3), in contrast to O(d^4) for the existing unitary simulation algorithm, where d is the dimension of the open quantum system. Second, by using a truncated Taylor series of the evolution operators, this duality quantum algorithm provides an exponential improvement in precision compared with the previous unitary simulation algorithm.

  2. Duality quantum algorithm efficiently simulates open quantum systems.

    PubMed

    Wei, Shi-Jie; Ruan, Dong; Long, Gui-Lu

    2016-01-01

    Because of inevitable coupling with the environment, nearly all practical quantum systems are open systems, whose evolution is not necessarily unitary. In this paper, we propose a duality quantum algorithm for simulating the Hamiltonian evolution of an open quantum system. In contrast to unitary evolution in a usual quantum computer, the evolution operator in a duality quantum computer is a linear combination of unitary operators. In this duality quantum algorithm, the time evolution of the open quantum system is realized by using Kraus operators, which are naturally implemented in a duality quantum computer. This duality quantum algorithm has two distinct advantages compared to existing quantum simulation algorithms with unitary evolution operations. First, the query complexity of the algorithm is O(d^3), in contrast to O(d^4) for the existing unitary simulation algorithm, where d is the dimension of the open quantum system. Second, by using a truncated Taylor series of the evolution operators, this duality quantum algorithm provides an exponential improvement in precision compared with the previous unitary simulation algorithm. PMID:27464855

  3. Architecture and algorithm of a circuit simulator

    NASA Astrophysics Data System (ADS)

    Marranghello, Norian; Damiani, Furio

    1990-11-01

    Software-based circuit simulators have achieved a ten-fold speed improvement over the last 15 years. Despite this, they are not fast enough to cost-effectively deal with current VLSI circuits. In this paper we describe the current status of the ABACUS circuit simulator project, which takes advantage of both dedicated hardware to speed up circuit simulation and a new methodology, in which each parallel processor behaves like a circuit element.

  4. A fast recursive algorithm for molecular dynamics simulation

    NASA Technical Reports Server (NTRS)

    Jain, A.; Vaidehi, N.; Rodriguez, G.

    1993-01-01

    The present recursive algorithm for solving molecular systems' dynamical equations of motion employs internal variable models that reduce such simulations' computation time by an order of magnitude, relative to Cartesian models. Extensive use is made of spatial operator methods recently developed for analysis and simulation of the dynamics of multibody systems. A factor-of-450 speedup over the conventional O(N^3) algorithm is demonstrated for the case of a polypeptide molecule with 400 residues.

  5. Radar simulation program upgrade and algorithm development

    NASA Technical Reports Server (NTRS)

    Britt, Charles L.

    1991-01-01

    The NASA Radar Simulation Program is a comprehensive calculation of the expected output of an airborne coherent pulse Doppler radar system viewing a low level microburst along or near the approach path. Inputs to the program include the radar system parameters and data files that contain the characteristics of the microbursts to be simulated, the ground clutter map, and the discrete target data base which provides a simulation of the moving ground clutter. For each range bin, the simulation calculates the received signal amplitude level by integrating the product of the antenna gain pattern and the scattering source amplitude and phase of a spherical shell volume segment defined by the pulse width, radar range, and ground plane intersection. A series of in-phase and quadrature pulses are generated and stored for further processing if desired. In addition, various signal processing techniques are used to derive the simulated velocity and hazard measurements, and store them for use in plotting and display programs.

  6. An assessment of 'shuffle algorithm' collision mechanics for particle simulations

    NASA Technical Reports Server (NTRS)

    Feiereisen, William J.; Boyd, Iain D.

    1991-01-01

    Among the algorithms for collision mechanics in current use, the 'shuffle algorithm' of Baganoff (McDonald and Baganoff, 1988; Baganoff and McDonald, 1990) not only allows efficient vectorization but also discretizes the possible outcomes of a collision. To assess the applicability of the shuffle algorithm, simulations of flows in monatomic gases were performed, and the calculated characteristics of shock waves were compared with those obtained using a commonly employed isotropic scattering law. It is shown that, in general, the shuffle algorithm adequately represents the collision mechanics in cases where the goal of the calculation is mean profiles of density and temperature.

  7. Fully explicit algorithms for fluid simulation

    NASA Astrophysics Data System (ADS)

    Clausen, Jonathan

    2011-11-01

    Computing hardware is trending towards distributed, massively parallel architectures in order to achieve high computational throughput. For example, Intrepid at Argonne uses 163,840 cores, and next generation machines, such as Sequoia at Lawrence Livermore, will use over one million cores. Harnessing the increasingly parallel nature of computational resources will require algorithms that scale efficiently on these architectures. The advent of GPU-based computation will serve to accelerate this behavior, as a single GPU contains hundreds of processor ``cores.'' Explicit algorithms avoid the communication associated with a linear solve, thus parallel scalability of these algorithms is typically high. This work will explore the efficiency and accuracy of three explicit solution methodologies for the Navier-Stokes equations: traditional artificial compressibility schemes, the lattice-Boltzmann method, and the recently proposed kinetically reduced local Navier-Stokes equations [Borok, Ansumali, and Karlin (2007)]. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  8. Application of integration algorithms in a parallel processing environment for the simulation of jet engines

    NASA Technical Reports Server (NTRS)

    Krosel, S. M.; Milner, E. J.

    1982-01-01

    The application of predictor-corrector integration algorithms developed for the digital parallel processing environment is investigated. The algorithms are implemented and evaluated through the use of a software simulator which provides an approximate representation of the parallel processing hardware. Test cases which focus on the use of the algorithms are presented, and a specific application using a linear model of a turbofan engine is considered. Results are presented showing the effects of integration step size and the number of processors on simulation accuracy. Real-time performance, interprocessor communication, and algorithm startup are also discussed.
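
    As a concrete reference point, the serial form of a two-step predictor-corrector scheme of the kind investigated in the report is sketched below (a generic Adams-Bashforth-Moulton pair; the report's parallel-processor partitioning is not reproduced here).

      import numpy as np

      def abm2(f, y0, t0, t_end, h):
          # Predictor:  y* = y_n + h/2 (3 f_n - f_{n-1})        (Adams-Bashforth 2)
          # Corrector:  y_{n+1} = y_n + h/2 (f(t+h, y*) + f_n)  (trapezoidal)
          t, y = t0, np.asarray(y0, dtype=float)
          f_prev = f(t, y)
          y = y + h * f_prev                   # Euler start-up step
          t += h
          while t < t_end:
              f_n = f(t, y)
              y_pred = y + 0.5 * h * (3.0 * f_n - f_prev)
              y = y + 0.5 * h * (f(t + h, y_pred) + f_n)
              f_prev = f_n
              t += h
          return y

      # e.g. a linear engine-like state-space model dy/dt = A @ y + B * u:
      # abm2(lambda t, y: A @ y + B * 0.1, y0=np.zeros(2), t0=0.0, t_end=1.0, h=0.01)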

  9. Enhanced quasi-static PIC simulation with pipelining algorithm for e-cloud instability

    NASA Astrophysics Data System (ADS)

    Feng, Bing; Huang, Chengkun; Decyk, Viktor; Mori, Warren; Muggli, Patric; Katsouleas, Tom

    2008-11-01

    Simulating the electron cloud effect on a beam that circulates for thousands of turns in circular machines is highly computationally demanding. A novel algorithm, the pipelining algorithm, is applied to the fully parallelized quasi-static particle-in-cell code QuickPIC to overcome the limit on the maximum number of processors that can be used for each time step. The pipelining algorithm divides the processors into subgroups; each subgroup focuses on a different partition of the beam and performs the calculation in series. With this novel algorithm, the accuracy of the simulation is preserved, and the speed of the simulation is improved by one order of magnitude when more than 10^2 processors are used. Long-term simulation results for the CERN LHC and the Main Injector at FNAL from QuickPIC with the pipelining algorithm are presented. This work is supported by SciDAC and the US Department of Energy.

  10. An algorithm to build mock galaxy catalogues using MICE simulations

    NASA Astrophysics Data System (ADS)

    Carretero, J.; Castander, F. J.; Gaztañaga, E.; Crocce, M.; Fosalba, P.

    2015-02-01

    We present a method to build mock galaxy catalogues starting from a halo catalogue that uses halo occupation distribution (HOD) recipes as well as the subhalo abundance matching (SHAM) technique. Combining both prescriptions we are able to push the absolute magnitude of the resulting catalogue to fainter luminosities than using just the SHAM technique and can interpret our results in terms of the HOD modelling. We optimize the method by populating friends-of-friends dark matter haloes extracted from the Marenostrum Institut de Ciències de l'Espai dark matter simulations with galaxies and comparing them to observational constraints. Our resulting mock galaxy catalogues manage to reproduce the observed local galaxy luminosity function and the colour-magnitude distribution as observed by the Sloan Digital Sky Survey. They also reproduce the observed galaxy clustering properties as a function of luminosity and colour. In order to achieve that, the algorithm also includes scatter in the halo mass-galaxy luminosity relation derived from direct SHAM and a modified Navarro-Frenk-White mass density profile to place satellite galaxies in their host dark matter haloes. Improving on the general usage of the HOD, which fits the clustering for given magnitude-limited samples, our catalogues are constructed to fit observations at all luminosities considered, and therefore for any luminosity subsample. Overall, our algorithm is an economical procedure for obtaining galaxy mock catalogues down to the faint magnitudes that are necessary to understand and interpret galaxy surveys.
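
    The HOD half of such a recipe reduces to a few lines. The sketch below uses the standard smoothed-step central term and Poisson power-law satellite term; the parameter values are illustrative placeholders, not the values fitted to SDSS in the paper.

      import numpy as np
      from scipy.special import erf

      rng = np.random.default_rng(3)

      def populate_halos(m_halo, m_min=1e12, m1=2e13, alpha=1.0, sigma=0.2):
          # Centrals: smoothed step in log halo mass (Bernoulli draw).
          n_cen = 0.5 * (1.0 + erf(np.log10(m_halo / m_min) / sigma))
          has_cen = rng.random(m_halo.size) < n_cen
          # Satellites: Poisson power law, only in halos hosting a central.
          n_sat = rng.poisson(has_cen * (m_halo / m1) ** alpha)
          return has_cen.astype(int), n_sat

      m = 10.0 ** rng.uniform(11.5, 15.0, 100_000)   # toy halo mass sample
      cen, sat = populate_halos(m)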

  11. Model predictive driving simulator motion cueing algorithm with actuator-based constraints

    NASA Astrophysics Data System (ADS)

    Garrett, Nikhil J. I.; Best, Matthew C.

    2013-08-01

    The simulator motion cueing problem has been considered extensively in the literature; approaches based on linear filtering and optimal control have been presented and shown to perform reasonably well. More recently, model predictive control (MPC) has been considered as a variant of the optimal control approach; MPC is perhaps an obvious candidate for motion cueing due to its ability to deal with constraints, in this case the platform workspace boundary. This paper presents an MPC-based cueing algorithm that, unlike other algorithms, uses the actuator positions and velocities as the constraints. The result is a cueing algorithm that can make better use of the platform workspace whilst ensuring that its bounds are never exceeded. The algorithm is shown to perform well against the classical cueing algorithm and an algorithm previously proposed by the authors, both in simulation and in tests with human drivers.

  12. SARDA HITL Simulations: System Performance Results

    NASA Technical Reports Server (NTRS)

    Gupta, Gautam

    2012-01-01

    This presentation gives an overview of the 2012 SARDA human-in-the-loop simulation and presents a summary of system performance results from the simulation, including delay, throughput, and fuel consumption.

  13. Algorithm design for a gun simulator based on image processing

    NASA Astrophysics Data System (ADS)

    Liu, Yu; Wei, Ping; Ke, Jun

    2015-08-01

    In this paper, an algorithm is designed for shooting games under strong background light. Six LEDs are uniformly distributed on the edge of a game machine screen: they are located at the four corners and in the middle of the top and bottom edges. Three LEDs are lit in the odd frames, and the other three are lit in the even frames. A simulator is furnished with one camera, which is used to obtain images of the LEDs by applying an inter-frame difference between the even and odd frames. In the resulting images, the six LEDs appear as six bright spots. To obtain the LEDs' coordinates rapidly, we propose a method based on the area of the bright spots. After calibrating the camera based on a pinhole model, four equations can be found using the relationship between the image coordinate system and the world coordinate system under perspective transformation. The center point of the image of the LEDs is taken to be the virtual shooting point. The perspective transformation matrix is applied to the coordinates of the center point, yielding the virtual shooting point's coordinates in the world coordinate system. When a game player shoots at a target about two meters away, the coordinate error calculated with this method is less than ten mm. We can obtain 65 coordinate results per second, which meets the requirement of a real-time system. This demonstrates that the algorithm is reliable and effective.
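
    The image-processing front end described here (inter-frame difference, thresholding, area-based spot extraction) can be sketched compactly. The snippet below uses scipy's connected-component labeling; the threshold and minimum-area values are illustrative assumptions.

      import numpy as np
      from scipy import ndimage

      def led_centroids(frame_odd, frame_even, thresh=50, min_area=5):
          diff = np.abs(frame_odd.astype(int) - frame_even.astype(int))
          labels, n = ndimage.label(diff > thresh)   # connected bright spots
          spots = []
          for lab in range(1, n + 1):
              ys, xs = np.nonzero(labels == lab)
              if xs.size >= min_area:                # reject small noise blobs
                  spots.append((xs.mean(), ys.mean()))
          return spots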

  14. Correction and simulation of the intensity compensation algorithm used in curvature wavefront sensing

    NASA Astrophysics Data System (ADS)

    Wu, Zhi-Xu; Bai, Hua; Cui, Xiang-Qun

    2015-05-01

    The wavefront measuring range and recovery precision of a curvature sensor can be improved by an intensity compensation algorithm. However, in a focal system with a fast f-number, especially a telescope with a large field of view, the accuracy of this algorithm cannot meet the requirements. A theoretical analysis of the corrected intensity compensation algorithm in a focal system with a fast f-number is first introduced, and the mathematical equations used in this algorithm are then derived. The corrected result is verified through simulation, as follows. First, the curvature signal from a focal system with a fast f-number is simulated by Monte Carlo ray tracing; the wavefront is then recovered using an inner loop of the FFT wavefront recovery algorithm and an outer loop of the intensity compensation algorithm. Comparing the intensity compensation algorithm of an ideal system with the corrected intensity compensation algorithm, we show that the recovery precision of the curvature sensor can be greatly improved by the corrected algorithm. Supported by the National Natural Science Foundation of China.

  15. Improved delay-leaping simulation algorithm for biochemical reaction systems with delays

    NASA Astrophysics Data System (ADS)

    Yi, Na; Zhuang, Gang; Da, Liang; Wang, Yifei

    2012-04-01

    In biochemical reaction systems dominated by delays, the simulation speed of the stochastic simulation algorithm depends on the size of the wait queue. As a result, it is important to control the size of the wait queue to improve the efficiency of the simulation. An improved accelerated delay stochastic simulation algorithm for biochemical reaction systems with delays, termed the improved delay-leaping algorithm, is proposed in this paper. The update method for the wait queue is effective in reducing the size of the queue and in shortening storage and access times, thereby accelerating the simulation. Numerical simulations of two examples indicate that this method not only achieves significantly higher efficiency than existing methods but can also be widely applied to biochemical reaction systems with delays.

  16. An Optimization Algorithm for Multipath Parallel Allocation for Service Resource in the Simulation Task Workflow

    PubMed Central

    Zhang, Hongjun; Zhang, Rui; Li, Yong; Zhang, Xuliang

    2014-01-01

    Service-oriented modeling and simulation are hot issues in the field of modeling and simulation, and there is a need to call service resources when a simulation task workflow is running. How to optimize the allocation of service resources to ensure that the task completes effectively is an important issue in this area. In military modeling and simulation, it is important to improve the probability of success and the timeliness of the simulation task workflow. Therefore, this paper proposes an optimization algorithm for multipath parallel allocation of service resources, in which a multipath parallel allocation model is built and a multiple-chain coding scheme quantum optimization algorithm is used for optimization and solution. The multiple-chain coding scheme extends the parallel search space to improve search efficiency. Through simulation experiments, this paper investigates the effects of different optimization algorithms, service allocation strategies, and path numbers on the probability of success of the simulation task workflow. The simulation results show that the proposed multipath parallel allocation algorithm is an effective method to improve the probability of success and timeliness in the simulation task workflow. PMID:24963506

  17. Convergence Results on Iteration Algorithms to Linear Systems

    PubMed Central

    Wang, Zhuande; Yang, Chuansheng; Yuan, Yubo

    2014-01-01

    In order to solve large-scale linear systems, backward and Jacobi iteration algorithms are employed. Convergence is the most important issue. In this paper, a unified backward iterative matrix is proposed, and it is shown that several well-known iterative algorithms can be derived from it. The most important result is that the convergence properties have been proved. First, the spectral radius of the Jacobi iterative matrix is positive, and that of the backward iterative matrix is strongly positive (larger than a positive constant). Second, the two iterations have the same convergence behavior (they converge or diverge simultaneously). Finally, numerical experiments show that the proposed algorithms are correct and have the merit of backward methods. PMID:24991640
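
    The Jacobi half of the comparison is easy to reproduce, including the spectral-radius convergence test emphasized in the paper. A minimal numpy sketch:

      import numpy as np

      def jacobi(A, b, tol=1e-10, max_iter=10_000):
          d = np.diag(A)
          R = A - np.diag(d)
          M = -R / d[:, None]                    # Jacobi iteration matrix
          rho = np.abs(np.linalg.eigvals(M)).max()
          print(f"spectral radius {rho:.3f}:", "converges" if rho < 1 else "diverges")
          x = np.zeros_like(b, dtype=float)
          for _ in range(max_iter):
              x_new = (b - R @ x) / d            # x <- D^{-1} (b - (A - D) x)
              if np.linalg.norm(x_new - x) < tol:
                  break
              x = x_new
          return x_new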

  18. Stochastic simulation for imaging spatial uncertainty: Comparison and evaluation of available algorithms

    SciTech Connect

    Gotway, C.A.; Rutherford, B.M.

    1993-09-01

    Stochastic simulation has been suggested as a viable method for characterizing the uncertainty associated with the prediction of a nonlinear function of a spatially-varying parameter. Geostatistical simulation algorithms generate realizations of a random field with specified statistical and geostatistical properties. A nonlinear function is evaluated over each realization to obtain an uncertainty distribution of a system response that reflects the spatial variability and uncertainty in the parameter. Crucial management decisions, such as potential regulatory compliance of proposed nuclear waste facilities and optimal allocation of resources in environmental remediation, are based on the resulting system response uncertainty distribution. Many geostatistical simulation algorithms have been developed to generate the random fields, and each algorithm will produce fields with different statistical properties. These different properties will result in different distributions for system response, and potentially, different managerial decisions. The statistical properties of the resulting system response distributions are not completely understood, nor is the ability of the various algorithms to generate response distributions that adequately reflect the associated uncertainty. This paper reviews several of the algorithms available for generating random fields. Algorithms are compared in a designed experiment using seven exhaustive data sets with different statistical and geostatistical properties. For each exhaustive data set, a number of realizations are generated using each simulation algorithm. The realizations are used with each of several deterministic transfer functions to produce a cumulative uncertainty distribution function of a system response. The uncertainty distributions are then compared to the single value obtained from the corresponding exhaustive data set.
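
    One end-to-end instance of the workflow being compared, generating random-field realizations and pushing them through a nonlinear transfer function to obtain a response uncertainty distribution, can be sketched as follows. Cholesky factorization of an exponential covariance stands in for whichever geostatistical simulation algorithm is under study, and the transfer function is a toy example.

      import numpy as np

      rng = np.random.default_rng(4)

      def gaussian_field_realizations(x, n_real, sill=1.0, corr_len=10.0):
          d = np.abs(x[:, None] - x[None, :])
          C = sill * np.exp(-d / corr_len)            # exponential covariogram
          L = np.linalg.cholesky(C + 1e-10 * np.eye(x.size))
          return L @ rng.standard_normal((x.size, n_real))

      x = np.linspace(0.0, 100.0, 200)
      fields = gaussian_field_realizations(x, n_real=500)
      # Toy nonlinear transfer function (e.g. mean of a log-permeability field);
      # its values across realizations form the response uncertainty distribution.
      response = np.exp(fields).mean(axis=0)
      print(np.percentile(response, [5, 50, 95]))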

  19. List-Based Simulated Annealing Algorithm for Traveling Salesman Problem

    PubMed Central

    Zhan, Shi-hua; Lin, Juan; Zhang, Ze-jun

    2016-01-01

    Simulated annealing (SA) is a popular intelligent optimization algorithm which has been successfully applied in many fields. Parameter setting is a key factor in its performance, but it is also tedious work. To simplify parameter setting, we present a list-based simulated annealing (LBSA) algorithm to solve the traveling salesman problem (TSP). The LBSA algorithm uses a novel list-based cooling schedule to control the decrease of temperature. Specifically, a list of temperatures is created first, and the maximum temperature in the list is then used by the Metropolis acceptance criterion to decide whether to accept a candidate solution. The temperature list is adapted iteratively according to the topology of the solution space of the problem. The effectiveness and the parameter sensitivity of the list-based cooling schedule are illustrated through benchmark TSP problems. The LBSA algorithm, whose performance is robust over a wide range of parameter values, shows competitive performance compared with some other state-of-the-art algorithms. PMID:27034650
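
    The list-based cooling schedule can be condensed to a single Metropolis step. The sketch below is a simplified rendering of the idea (use the list maximum for acceptance; on accepting a worse solution, replace that maximum with the temperature that would have made the move marginal); it omits the per-iteration averaging used in the paper.

      import numpy as np

      rng = np.random.default_rng(5)

      def lbsa_step(state, energy, temp_list, neighbor, E):
          t_max = max(temp_list)
          cand = neighbor(state)
          dE = E(cand) - energy
          if dE <= 0:                       # downhill moves are always accepted
              return cand, energy + dE
          r = rng.random()
          if r < np.exp(-dE / t_max):       # Metropolis test at the list maximum
              temp_list.remove(t_max)
              temp_list.append(-dE / np.log(r))   # temperature making this move marginal
              return cand, energy + dE
          return state, energy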

  20. List-Based Simulated Annealing Algorithm for Traveling Salesman Problem.

    PubMed

    Zhan, Shi-hua; Lin, Juan; Zhang, Ze-jun; Zhong, Yi-wen

    2016-01-01

    Simulated annealing (SA) is a popular intelligent optimization algorithm which has been successfully applied in many fields. Parameter setting is a key factor in its performance, but it is also tedious work. To simplify parameter setting, we present a list-based simulated annealing (LBSA) algorithm to solve the traveling salesman problem (TSP). The LBSA algorithm uses a novel list-based cooling schedule to control the decrease of temperature. Specifically, a list of temperatures is created first, and the maximum temperature in the list is then used by the Metropolis acceptance criterion to decide whether to accept a candidate solution. The temperature list is adapted iteratively according to the topology of the solution space of the problem. The effectiveness and the parameter sensitivity of the list-based cooling schedule are illustrated through benchmark TSP problems. The LBSA algorithm, whose performance is robust over a wide range of parameter values, shows competitive performance compared with some other state-of-the-art algorithms. PMID:27034650

  1. Energy conserving continuum algorithms for kinetic & gyrokinetic simulations of plasmas

    NASA Astrophysics Data System (ADS)

    Hakim, A.; Hammett, G. W.; Shi, E.; Stoltzfus-Dueck, T.

    2015-11-01

    We present high-order, energy-conserving continuum algorithms for the solution of gyrokinetic equations for use in edge turbulence simulations. The distribution function is evolved with a discontinuous Galerkin scheme, while the fields are evolved with a continuous finite-element method. These algorithms work for a general, possibly non-canonical, Poisson bracket operator and conserve energy exactly. Benchmark simulations with ETG turbulence in 3X/2V are shown, as well as initial applications of the algorithms to turbulence in a simplified SOL geometry. Sheath boundary conditions with recycling and secondary electron emission are implemented, and a Lenard-Bernstein collision operator is included. The extension of these algorithms to the full Vlasov-Maxwell equations is presented; it is shown that with a particular choice of numerical fluxes the total (particle+field) energy is conserved. The algorithms are implemented in a flexible and open-source framework, Gkeyll, which also includes fluid models, allowing potential hybrid simulations of various plasma problems. Supported by the Max-Planck/Princeton Center for Plasma Physics, and DOE Contract DE-AC02-09CH11466.

  2. Stochastic search in structural optimization - Genetic algorithms and simulated annealing

    NASA Technical Reports Server (NTRS)

    Hajela, Prabhat

    1993-01-01

    An account is given of illustrative applications of genetic algorithms and simulated annealing methods in structural optimization. The advantages of such stochastic search methods over traditional mathematical programming strategies are emphasized; it is noted that these methods offer a significantly higher probability of locating the global optimum in a multimodal design space. Both genetic search and simulated annealing can be effectively used in problems with a mix of continuous, discrete, and integer design variables.

  3. Analysis of a simulation algorithm for direct brain drug delivery

    PubMed Central

    Rosenbluth, Kathryn Hammond; Eschermann, Jan Felix; Mittermeyer, Gabriele; Thomson, Rowena; Mittermeyer, Stephan; Bankiewicz, Krystof S.

    2011-01-01

    Convection enhanced delivery (CED) achieves targeted delivery of drugs with a pressure-driven infusion through a cannula placed stereotactically in the brain. This technique bypasses the blood brain barrier and gives precise distributions of drugs, minimizing off-target effects of compounds such as viral vectors for gene therapy or toxic chemotherapy agents. The exact distribution is affected by the cannula positioning, flow rate and underlying tissue structure. This study presents an analysis of a simulation algorithm for predicting the distribution using baseline MRI images acquired prior to inserting the cannula. The MRI images included diffusion tensor imaging (DTI) to estimate the tissue properties. The algorithm was adapted for the devices and protocols identified for upcoming trials and validated with direct MRI visualization of Gadolinium in 20 infusions in non-human primates. We found strong agreement between the size and location of the simulated and gadolinium volumes, demonstrating the clinical utility of this surgical planning algorithm. PMID:21945468

  4. Evaluation of registration, compression and classification algorithms. Volume 1: Results

    NASA Technical Reports Server (NTRS)

    Jayroe, R.; Atkinson, R.; Callas, L.; Hodges, J.; Gaggini, B.; Peterson, J.

    1979-01-01

    The registration, compression, and classification algorithms were selected on the basis that such a group would include most of the different and commonly used approaches. The results of the investigation indicate clearcut, cost effective choices for registering, compressing, and classifying multispectral imagery.

  5. Evolutionary algorithms, simulated annealing, and Tabu search: a comparative study

    NASA Astrophysics Data System (ADS)

    Youssef, Habib; Sait, Sadiq M.; Adiche, Hakim

    1998-10-01

    Evolutionary algorithms, simulated annealing (SA), and Tabu Search (TS) are general iterative algorithms for combinatorial optimization. The term evolutionary algorithm is used to refer to any probabilistic algorithm whose design is inspired by evolutionary mechanisms found in biological species. The most widely known algorithms of this category are Genetic Algorithms (GA). GA, SA, and TS have been found to be very effective and robust in solving numerous problems from a wide range of application domains. Furthermore, they are even suitable for ill-posed problems where some of the parameters are not known beforehand. These properties are lacking in all traditional optimization techniques. In this paper we perform a comparative study among GA, SA, and TS. These algorithms have many similarities, but they also possess distinctive features, mainly in their strategies for searching the solution state space. The three heuristics are applied to the same optimization problem and compared with respect to (1) the quality of the best solution identified by each heuristic, (2) the progress of the search from the initial solution(s) until the stopping criteria are met, (3) the progress of the cost of the best solution as a function of time, and (4) the number of solutions found at successive intervals of the cost function. The benchmark problem is the floorplanning of very large scale integrated circuits, a hard multi-criteria optimization problem. Fuzzy logic is used to combine all objective criteria into a single fuzzy evaluation function, which is then used to rate competing solutions.

  6. Recycling random numbers in the stochastic simulation algorithm.

    PubMed

    Yates, Christian A; Klingbeil, Guido

    2013-03-01

    The stochastic simulation algorithm (SSA) was introduced by Gillespie and, in a different form, by Kurtz. Since its original formulation there have been several attempts at improving the efficiency and hence the speed of the algorithm. We briefly discuss some of these methods before outlining our own simple improvement, the recycling direct method (RDM), and demonstrating that it is capable of increasing the speed of most stochastic simulations. The RDM involves the statistically acceptable recycling of random numbers in order to reduce the computational cost associated with their generation, and it is compatible with several of the pre-existing improvements on the original SSA. Our improvement is also sufficiently simple (one additional line of code) that we hope it will be adopted by both trained mathematical modelers and experimentalists wishing to simulate their model systems. PMID:23485273

  7. Understanding disordered systems through numerical simulation and algorithm development

    NASA Astrophysics Data System (ADS)

    Sweeney, Sean Michael

    Disordered systems arise in many physical contexts. Not all matter is uniform, and impurities or heterogeneities can be modeled by fixed random disorder. Numerous complex networks also possess fixed disorder, leading to applications in transportation systems, telecommunications, social networks, and epidemic modeling, to name a few. Due to their random nature and power law critical behavior, disordered systems are difficult to study analytically. Numerical simulation can help overcome this hurdle by allowing for the rapid computation of system states. In order to get precise statistics and extrapolate to the thermodynamic limit, large systems must be studied over many realizations. Thus, innovative algorithm development is essential in order to reduce the memory or running time requirements of simulations. This thesis presents a review of disordered systems, as well as a thorough study of two particular systems through numerical simulation, algorithm development and optimization, and careful statistical analysis of scaling properties. Chapter 1 provides a thorough overview of disordered systems, the history of their study in the physics community, and the development of techniques used to study them. Topics of quenched disorder, phase transitions, the renormalization group, criticality, and scale invariance are discussed. Several prominent models of disordered systems are also explained. Lastly, analysis techniques used in studying disordered systems are covered. In Chapter 2, minimal spanning trees on critical percolation clusters are studied, motivated in part by an analytic perturbation expansion by Jackson and Read that I check against numerical calculations. This system has a direct mapping to the ground state of the strongly disordered spin glass. We compute the path length fractal dimension of these trees in dimensions d = {2, 3, 4, 5} and find our results to be compatible with the analytic results suggested by Jackson and Read. In Chapter 3, the random bond Ising

  8. The VIIRS ocean data simulator enhancements and results

    NASA Astrophysics Data System (ADS)

    Robinson, Wayne D.; Patt, Frederick S.; Franz, Bryan A.; Turpie, Kevin R.; McClain, Charles R.

    2011-10-01

    The VIIRS Ocean Science Team (VOST) has been developing an Ocean Data Simulator to create realistic VIIRS SDR datasets based on MODIS water-leaving radiances. The simulator is helping to assess instrument performance and scientific processing algorithms. Several changes were made in the last two years to complete the simulator and broaden its usefulness. The simulator is now fully functional and includes all sensor characteristics measured during prelaunch testing, including electronic and optical crosstalk influences, polarization sensitivity, and relative spectral response. Also included is the simulation of cloud and land radiances to make more realistic data sets and to understand their important influence on nearby ocean color data. The atmospheric tables used in the processing, including aerosol and Rayleigh reflectance coefficients, have been modeled using VIIRS relative spectral responses. The capabilities of the simulator were expanded to work in an unaggregated sample mode and to produce scans with additional samples beyond the standard scan. These features improve the capability to realistically add artifacts which act upon individual instrument samples prior to aggregation and which may originate from beyond the actual scan boundaries. The simulator was expanded to simulate all 16 M-bands and the EDR processing was improved to use these bands to make an SST product. The simulator is being used to generate global VIIRS data from and in parallel with the MODIS Aqua data stream. Studies have been conducted using the simulator to investigate the impact of instrument artifacts. This paper discusses the simulator improvements and results from the artifact impact studies.

  9. The VIIRS Ocean Data Simulator Enhancements and Results

    NASA Technical Reports Server (NTRS)

    Robinson, Wayne D.; Patt, Fredrick S.; Franz, Bryan A.; Turpie, Kevin R.; McClain, Charles R.

    2011-01-01

    The VIIRS Ocean Science Team (VOST) has been developing an Ocean Data Simulator to create realistic VIIRS SDR datasets based on MODIS water-leaving radiances. The simulator is helping to assess instrument performance and scientific processing algorithms. Several changes were made in the last two years to complete the simulator and broaden its usefulness. The simulator is now fully functional and includes all sensor characteristics measured during prelaunch testing, including electronic and optical crosstalk influences, polarization sensitivity, and relative spectral response. Also included is the simulation of cloud and land radiances to make more realistic data sets and to understand their important influence on nearby ocean color data. The atmospheric tables used in the processing, including aerosol and Rayleigh reflectance coefficients, have been modeled using VIIRS relative spectral responses. The capabilities of the simulator were expanded to work in an unaggregated sample mode and to produce scans with additional samples beyond the standard scan. These features improve the capability to realistically add artifacts which act upon individual instrument samples prior to aggregation and which may originate from beyond the actual scan boundaries. The simulator was expanded to simulate all 16 M-bands and the EDR processing was improved to use these bands to make an SST product. The simulator is being used to generate global VIIRS data from and in parallel with the MODIS Aqua data stream. Studies have been conducted using the simulator to investigate the impact of instrument artifacts. This paper discusses the simulator improvements and results from the artifact impact studies.

  10. New human-centered linear and nonlinear motion cueing algorithms for control of simulator motion systems

    NASA Astrophysics Data System (ADS)

    Telban, Robert J.

    While the performance of flight simulator motion system hardware has advanced substantially, the development of the motion cueing algorithm, the software that transforms simulated aircraft dynamics into realizable motion commands, has not kept pace. To address this, new human-centered motion cueing algorithms were developed. A revised "optimal algorithm" uses time-invariant filters developed by optimal control, incorporating human vestibular system models. The "nonlinear algorithm" is a novel approach that is also formulated by optimal control, but can be updated in real time. It incorporates a new integrated visual-vestibular perception model that includes both visual and vestibular sensation and the interaction between the stimuli. A time-varying control law requires the matrix Riccati equation to be solved in real time by a neurocomputing approach. Preliminary pilot testing resulted in the optimal algorithm incorporating a new otolith model, producing improved motion cues. The nonlinear algorithm vertical mode produced a motion cue with a time-varying washout, sustaining small cues for longer durations and washing out large cues more quickly compared to the optimal algorithm. The inclusion of the integrated perception model improved the responses to longitudinal and lateral cues. False cues observed with the NASA adaptive algorithm were absent. As a result of unsatisfactory sensation, an augmented turbulence cue was added to the vertical mode for both the optimal and nonlinear algorithms. The relative effectiveness of the algorithms in simulating aircraft maneuvers was assessed with an eleven-subject piloted performance test conducted on the NASA Langley Visual Motion Simulator (VMS). Two methods, the quasi-objective NASA Task Load Index (TLX), and power spectral density analysis of pilot control, were used to assess pilot workload. TLX analysis reveals, in most cases, less workload and variation among pilots with the nonlinear algorithm. Control input

  11. Milestone M4900: Simulant Mixing Analytical Results

    SciTech Connect

    Kaplan, D.I.

    2001-07-26

    This report addresses Milestone M4900, ''Simulant Mixing Sample Analysis Results,'' and contains the data generated during the ''Mixing of Process Heels, Process Solutions, and Recycle Streams: Small-Scale Simulant'' task. The Task Technical and Quality Assurance Plan for this task is BNF-003-98-0079A. A report with a narrative description and discussion of the data will be issued separately.

  12. DKIST Adaptive Optics System: Simulation Results

    NASA Astrophysics Data System (ADS)

    Marino, Jose; Schmidt, Dirk

    2016-05-01

    The 4 m class Daniel K. Inouye Solar Telescope (DKIST), currently under construction, will be equipped with an ultra-high-order solar adaptive optics (AO) system. The requirements and capabilities of such a solar AO system are beyond those of any other solar AO system currently in operation. We must rely on solar AO simulations to estimate and quantify its performance. We present performance estimation results of the DKIST AO system obtained with a new solar AO simulation tool. This simulation tool is a flexible and fast end-to-end solar AO simulator which produces accurate solar AO simulations while taking advantage of current multi-core computer technology. It relies on full imaging simulations of the extended-field Shack-Hartmann wavefront sensor (WFS), which directly include important secondary effects such as field-dependent distortions and varying contrast of the WFS sub-aperture images.

  13. SCEC Earthquake Simulator Comparison Results for California

    NASA Astrophysics Data System (ADS)

    Tullis, T. E.; Richards-Dinger, K. B.; Barall, M.; Dieterich, J. H.; Field, E. H.; Heien, E. M.; Kellogg, L. H.; Pollitz, F. F.; Rundle, J. B.; Sachs, M. K.; Turcotte, D. L.; Ward, S. N.; Zielke, O.

    2011-12-01

    This is our first report on comparisons of earthquake simulator results with one another and with actual earthquake data for all of California, excluding Cascadia. Earthquake simulators are computer programs that simulate long sequences of earthquakes and therefore allow study of a much longer earthquake history than is possible from instrumental, historical and paleoseismic data. The usefulness of simulated histories for anticipating the probabilities of future earthquakes and for contributing to public policy decisions depends on whether simulated earthquake catalogs properly represent actual earthquakes. Thus, we compare simulated histories generated by five different earthquake simulators with one another and with what is known about actual earthquake history in order to evaluate the usefulness of the simulator results. Although sharing common features, our simulators differ from one another in their details in many important ways. All simulators use the same fault geometry and the same ~15,000, 3x3 km elements to represent the strike-slip and thrust faults in California. The set of faults and the input slip rates on them are essentially those of the UCERF2 fault and deformation model; we will switch to the UCERF3 model once it is available. All simulators use the boundary element method to compute stress transfer between elements. Differences between the simulators include how they represent fault friction and what assumptions they make to promote rupture propagation from one element to another. The behavior of the simulators is encouragingly similar and the results are similar to what is known about real earthquakes, although some refinements are being made to some of the simulators to improve these comparisons as a result of our initial results. The frequency magnitude distributions of simulated events from M6 to M7.5 for a 30,000 year simulated history agree well with instrumental observations for all of California. Scaling relations, as seen on plots of

  14. A Contextual Fire Detection Algorithm for Simulated HJ-1B Imagery

    PubMed Central

    Qian, Yonggang; Yan, Guangjian; Duan, Sibo; Kong, Xiangsheng

    2009-01-01

    The HJ-1B satellite, launched on September 6, 2008, is one of the small satellites placed in a constellation for disaster prediction and monitoring. In this paper, HJ-1B imagery containing fires of various sizes and temperatures in a wide range of terrestrial biomes and climates was simulated for the RED, NIR, MIR and TIR channels. Based on the MODIS version 4 contextual algorithm and the characteristics of the HJ-1B sensor, a contextual fire detection algorithm was proposed and tested using the simulated HJ-1B data. It was evaluated by the probability of fire detection and false alarm as functions of fire temperature and fire area. Results indicate that when the simulated fire area is larger than 45 m2 and the simulated fire temperature is larger than 800 K, the algorithm has a high probability of detection. If the simulated fire area is smaller than 10 m2, however, the fire can only be detected when the simulated fire temperature exceeds 900 K. For fire areas of about 100 m2, the proposed algorithm has a higher detection probability than the MODIS product. Finally, the omission and commission errors, which are important factors affecting the performance of the algorithm, were evaluated. It has been demonstrated that HJ-1B satellite data are more sensitive to smaller and cooler fires than MODIS or AVHRR data, and the improved capabilities of HJ-1B data will offer a fine opportunity for fire detection. PMID:22399950

  15. Coalescent simulation in continuous space: algorithms for large neighbourhood size.

    PubMed

    Kelleher, J; Etheridge, A M; Barton, N H

    2014-08-01

    Many species have an essentially continuous distribution in space, in which there are no natural divisions between randomly mating subpopulations. Yet, the standard approach to modelling these populations is to impose an arbitrary grid of demes, adjusting deme sizes and migration rates in an attempt to capture the important features of the population. Such indirect methods are required because of the failure of the classical models of isolation by distance, which have been shown to have major technical flaws. A recently introduced model of extinction and recolonisation in two dimensions solves these technical problems, and provides a rigorous technical foundation for the study of populations evolving in a spatial continuum. The coalescent process for this model is simply stated, but direct simulation is very inefficient for large neighbourhood sizes. We present efficient and exact algorithms to simulate this coalescent process for arbitrary sample sizes and numbers of loci, and analyse these algorithms in detail. PMID:24910324

  16. A simple algorithm for analyzing uncertainty of accident reconstruction results.

    PubMed

    Zou, Tiefang; Hu, Lin; Li, Pingfan; Wu, Hequan

    2015-12-01

    In order to analyze the uncertainty in accident reconstruction, the uncertainty analysis problem is turned into an extreme value problem based on extreme value theory and convex model theory. To calculate the range of the dependent variable, the extreme values in the interior of the definition domain and on its boundary are calculated independently; the upper and lower bounds of the dependent variable are then given by these extreme values. Based on this idea and through the analysis of five numerical cases, a simple algorithm for calculating the range of an accident reconstruction result is given; appropriate results were obtained with the proposed algorithm in all cases. Finally, a real-world vehicle-motorcycle accident is presented; the range of the reconstructed vehicle velocity, calculated with PC-Crash, the response surface methodology and the newly proposed algorithm, was [66.1, 67.3] km/h. This research provides another choice for uncertainty analysis in accident reconstruction. PMID:26386339
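    The bounding idea above can be illustrated with a small sketch: search for the extreme values of a reconstruction model over the uncertainty box of its inputs, and take the smallest and largest values found (whether in the interior or on the boundary) as the bounds of the result. The response function and intervals below are hypothetical stand-ins, not the paper's model:

```python
# Minimal sketch: bound a reconstructed quantity by searching for the
# extremes of the model over the uncertainty box of its inputs.
# f and bounds are hypothetical stand-ins, not the paper's model.
import numpy as np
from scipy.optimize import minimize

def f(x):
    # Hypothetical response surface: impact speed (km/h) from two
    # uncertain inputs, e.g. friction coefficient and skid length (m).
    mu, s = x
    return np.sqrt(2.0 * 9.81 * mu * s) * 3.6

bounds = [(0.65, 0.75), (20.0, 24.0)]  # assumed uncertainty intervals

def extreme(sign):
    # Multi-start bounded search; optima on the boundary are found
    # because L-BFGS-B is allowed to stop on the bounds.
    starts = [np.array([lo for lo, _ in bounds]),
              np.array([hi for _, hi in bounds]),
              np.array([(lo + hi) / 2.0 for lo, hi in bounds])]
    vals = [sign * minimize(lambda x: sign * f(x), x0, bounds=bounds).fun
            for x0 in starts]
    return min(vals) if sign > 0 else max(vals)

print(f"speed range: [{extreme(+1.0):.1f}, {extreme(-1.0):.1f}] km/h")
```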

  17. SMMR Simulator radiative transfer calibration model. 2: Algorithm development

    NASA Technical Reports Server (NTRS)

    Link, S.; Calhoon, C.; Krupp, B.

    1980-01-01

    Passive microwave measurements performed from Earth orbit can be used to provide global data on a wide range of geophysical and meteorological phenomena. A Scanning Multichannel Microwave Radiometer (SMMR) is being flown on the Nimbus-G satellite. The SMMR Simulator duplicates the frequency bands utilized in the spacecraft instruments through an amalgam of radiometer systems. The algorithm developed utilizes data from the fall 1978 NASA CV-990 Nimbus-G underflight test series and subsequent laboratory testing.

  18. Predicting patchy particle crystals: variable box shape simulations and evolutionary algorithms.

    PubMed

    Bianchi, Emanuela; Doppelbauer, Günther; Filion, Laura; Dijkstra, Marjolein; Kahl, Gerhard

    2012-06-01

    We consider several patchy particle models that have been proposed in literature and we investigate their candidate crystal structures in a systematic way. We compare two different algorithms for predicting crystal structures: (i) an approach based on Monte Carlo simulations in the isobaric-isothermal ensemble and (ii) an optimization technique based on ideas of evolutionary algorithms. We show that the two methods are equally successful and provide consistent results on crystalline phases of patchy particle systems. PMID:22697525

  19. Sampling of general correlators in worm-algorithm based simulations

    NASA Astrophysics Data System (ADS)

    Rindlisbacher, Tobias; Åkerlund, Oscar; de Forcrand, Philippe

    2016-08-01

    Using the complex ϕ⁴-model as a prototype for a system which is simulated by a worm algorithm, we show that not only the charged correlator ⟨ϕ*(x)ϕ(y)⟩, but also more general correlators such as ⟨|ϕ(x)||ϕ(y)|⟩ or ⟨arg(ϕ(x))arg(ϕ(y))⟩, as well as condensates like ⟨|ϕ|⟩, can be measured at every step of the Monte Carlo evolution of the worm instead of on closed-worm configurations only. The method generalizes straightforwardly to other systems simulated by worms, such as spin or sigma models.

  20. An improved sink particle algorithm for SPH simulations

    NASA Astrophysics Data System (ADS)

    Hubber, D. A.; Walch, S.; Whitworth, A. P.

    2013-04-01

    Numerical simulations of star formation frequently rely on the implementation of sink particles: (a) to avoid expending computational resource on the detailed internal physics of individual collapsing protostars, (b) to derive mass functions, binary statistics and clustering kinematics (and hence to make comparisons with observation), and (c) to model radiative and mechanical feedback; sink particles are also used in other contexts, for example to represent accreting black holes in galactic nuclei. We present a new algorithm for creating and evolving sink particles in smoothed particle hydrodynamic (SPH) simulations, which appears to represent a significant improvement over existing algorithms - particularly in situations where sinks are introduced after the gas has become optically thick to its own cooling radiation and started to heat up by adiabatic compression. (i) It avoids spurious creation of sinks. (ii) It regulates the accretion of matter on to a sink so as to mitigate non-physical perturbations in the vicinity of the sink. (iii) Sinks accrete matter, but the associated angular momentum is transferred back to the surrounding medium. With the new algorithm - and modulo the need to invoke sufficient resolution to capture the physics preceding sink formation - the properties of sinks formed in simulations are essentially independent of the user-defined parameters of sink creation, or the number of SPH particles used.

  1. The Effect of Pansharpening Algorithms on the Resulting Orthoimagery

    NASA Astrophysics Data System (ADS)

    Agrafiotis, P.; Georgopoulos, A.; Karantzalos, K.

    2016-06-01

    This paper evaluates the geometric effects of pansharpening algorithms on automatically generated DSMs, and thus on the resulting orthoimagery, through a quantitative assessment of the accuracy of the end products. The main motivation is the fact that, for automatically generated Digital Surface Models, an image correlation step is employed for extracting correspondences between the overlapping images. Their accuracy and reliability are therefore strictly related to image quality, while pansharpening may result in lower image quality, which may affect the DSM generation and the resulting orthoimage accuracy. To this end, an iterative methodology was applied in order to combine the process described by Agrafiotis and Georgopoulos (2015) with different pansharpening algorithms and check the accuracy of orthoimagery resulting from pansharpened data. Results are thoroughly examined and statistically analysed. The overall evaluation indicated that the pansharpening process did not affect the geometric accuracy of the resulting DSM with a 10 m interval, nor the resulting orthoimagery. Although some residuals in the orthoimages were observed, their magnitude does not adversely affect the accuracy of the final orthoimagery.

  2. A Parallel, Finite-Volume Algorithm for Large-Eddy Simulation of Turbulent Flows

    NASA Technical Reports Server (NTRS)

    Bui, Trong T.

    1999-01-01

    A parallel, finite-volume algorithm has been developed for large-eddy simulation (LES) of compressible turbulent flows. This algorithm includes piecewise linear least-square reconstruction, trilinear finite-element interpolation, Roe flux-difference splitting, and second-order MacCormack time marching. Parallel implementation is done using the message-passing programming model. In this paper, the numerical algorithm is described. To validate the numerical method for turbulence simulation, LES of fully developed turbulent flow in a square duct is performed for a Reynolds number of 320 based on the average friction velocity and the hydraulic diameter of the duct. Direct numerical simulation (DNS) results are available for this test case, and the accuracy of this algorithm for turbulence simulations can be ascertained by comparing the LES solutions with the DNS results. The effects of grid resolution, upwind numerical dissipation, and subgrid-scale dissipation on the accuracy of the LES are examined. Comparison with DNS results shows that the standard Roe flux-difference splitting dissipation adversely affects the accuracy of the turbulence simulation. For accurate turbulence simulations, only 3-5 percent of the standard Roe flux-difference splitting dissipation is needed.

  3. Box length search algorithm for molecular simulation of systems containing periodic structures.

    PubMed

    Schultz, A J; Hall, C K; Genzer, J

    2004-01-22

    We have developed a box length search algorithm to efficiently find the appropriate box dimensions for constant-volume molecular simulation of periodic structures. The algorithm works by finding the box lengths that equalize the pressure in each direction while maintaining constant total volume. Maintaining the volume at a fixed value ensures that quantitative comparisons can be made between simulation and experimental, theoretical or other simulation results for systems that are incompressible or nearly incompressible. We test the algorithm on a system of phase-separated block copolymers that has a preferred box length in one dimension. We also describe and test a Monte Carlo algorithm that allows the box lengths to change while maintaining constant volume. We find that the box length search algorithm converges at least two orders of magnitude more quickly than the variable box length Monte Carlo method. Although the box length search algorithm is not ergodic, it successfully finds the box length that minimizes the free energy of the system. We verify this by examining the free energy as determined by the Monte Carlo simulation. PMID:15268341
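    As a rough illustration of the idea (not the authors' code), the search can be viewed as a fixed-point iteration: expand the directions whose pressure exceeds the mean, shrink the others, and rescale to keep the total volume constant. The measure_pressures callback and the gain parameter below are assumptions:

```python
# Rough illustration (not the authors' code): adjust box lengths toward
# equal directional pressures while holding total volume fixed.
# measure_pressures is an assumed callback returning time-averaged
# (Px, Py, Pz) from a short simulation at the given box lengths.
import numpy as np

def box_length_search(L0, measure_pressures, gain=1e-3, tol=1e-3,
                      max_iter=200):
    L = np.asarray(L0, dtype=float)
    V = L.prod()                            # target volume, held constant
    for _ in range(max_iter):
        P = np.asarray(measure_pressures(L))
        dP = P - P.mean()
        if np.abs(dP).max() < tol:
            break                           # pressures equalized
        L = L * (1.0 + gain * dP)           # grow high-pressure directions
        L *= (V / L.prod()) ** (1.0 / 3.0)  # rescale to restore the volume
    return L
```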

  4. Massively parallel algorithms for trace-driven cache simulations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Greenberg, Albert G.; Lubachevsky, Boris D.

    1991-01-01

    Trace-driven cache simulation is central to computer design. A trace is a very long sequence of reference lines from main memory. At the t-th instant, reference x_t is hashed into a set of cache locations, the contents of which are then compared with x_t. If x_t is not present in the cache at the t-th instant, it is said to be a miss, and is loaded into the cache set, possibly forcing the replacement of some other memory line, and making x_t present for the (t+1)-st instant. The problem of parallel simulation of a subtrace of N references directed to a C-line cache set is considered, with the aim of determining which references are misses and related statistics. A simulation method is presented for the Least Recently Used (LRU) policy, which regardless of the set size C runs in time O(log N) using N processors on the exclusive-read, exclusive-write (EREW) parallel model. A simpler LRU simulation algorithm is given that runs in O(C log N) time using N/log N processors. Timings are presented of the second algorithm's implementation on the MasPar MP-1, a machine with 16384 processors. A broad class of reference-based line replacement policies is considered, which includes LRU as well as the Least Frequently Used and Random replacement policies. A simulation method is presented for any such policy that, on any trace of length N directed to a C-line set, runs in O(C log N) time with high probability using N processors on the EREW model. The algorithms are simple, have very little space overhead, and are well suited for SIMD implementation.
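    For orientation, a serial LRU baseline (my sketch, not the paper's parallel method) shows exactly the hit/miss classification that the O(log N) parallel algorithm must reproduce:

```python
# Serial baseline (a sketch, not the paper's parallel method): classify
# each reference to one C-line LRU set as hit or miss. The parallel
# algorithms in the paper reproduce exactly these classifications.
from collections import OrderedDict

def lru_misses(trace, C):
    """Return a hit/miss flag per reference for a C-line LRU set."""
    cache = OrderedDict()              # keys kept in LRU -> MRU order
    misses = []
    for x in trace:
        if x in cache:
            cache.move_to_end(x)       # hit: promote to most recently used
            misses.append(False)
        else:
            if len(cache) == C:
                cache.popitem(last=False)  # evict least recently used
            cache[x] = None
            misses.append(True)
    return misses

print(lru_misses(["a", "b", "a", "c", "b", "d", "a"], C=2))
```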

  5. Advanced time integration algorithms for dislocation dynamics simulations of work hardening

    DOE PAGESBeta

    Sills, Ryan B.; Aghaei, Amin; Cai, Wei

    2016-04-25

    Efficient time integration is a necessity for dislocation dynamics simulations of work hardening to achieve experimentally relevant strains. In this work, an efficient time integration scheme using a high order explicit method with time step subcycling and a newly-developed collision detection algorithm are evaluated. First, time integrator performance is examined for an annihilating Frank–Read source, showing the effects of dislocation line collision. The integrator with subcycling is found to significantly out-perform other integration schemes. The performance of the time integration and collision detection algorithms is then tested in a work hardening simulation. The new algorithms show a 100-fold speed-up relative to traditional schemes. As a result, subcycling is shown to improve efficiency significantly while maintaining an accurate solution, and the new collision algorithm allows an arbitrarily large time step size without missing collisions.
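    A minimal sketch of the subcycling idea (an illustration under assumed per-component substep counts, not the dislocation dynamics implementation): fast-evolving degrees of freedom take several small explicit substeps within one global step, while slow ones take a single step:

```python
# Illustration of time-step subcycling with assumed per-component
# substep counts (not the paper's dislocation dynamics code):
# component i takes substeps[i] forward-Euler substeps inside one
# global step of length dt.
import numpy as np

def subcycled_step(y, rhs, dt, substeps):
    y = np.array(y, dtype=float)
    for i, n in enumerate(substeps):
        h = dt / n
        for _ in range(n):       # several cheap substeps for fast nodes
            y[i] += h * rhs(y, i)
    return y

# Toy usage: two independent decays; the stiff one substeps 8 times.
rates = np.array([1.0, 50.0])
y = subcycled_step([1.0, 1.0], lambda y, i: -rates[i] * y[i],
                   dt=0.1, substeps=[1, 8])
print(y)   # the stiff component advances stably within the global step
```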

  6. An Event-Driven Hybrid Molecular Dynamics and Direct Simulation Monte Carlo Algorithm

    SciTech Connect

    Donev, A; Garcia, A L; Alder, B J

    2007-07-30

    A novel algorithm is developed for the simulation of polymer chains suspended in a solvent. The polymers are represented as chains of hard spheres tethered by square wells and interact with the solvent particles with hard core potentials. The algorithm uses event-driven molecular dynamics (MD) for the simulation of the polymer chain and the interactions between the chain beads and the surrounding solvent particles. The interactions between the solvent particles themselves are not treated deterministically as in event-driven algorithms; rather, the momentum and energy exchange in the solvent is determined stochastically using the Direct Simulation Monte Carlo (DSMC) method. The coupling between the solvent and the solute is consistently represented at the particle level; however, unlike full MD simulations of both the solvent and the solute, the spatial structure of the solvent is ignored. The algorithm is described in detail and applied to the study of the dynamics of a polymer chain tethered to a hard wall subjected to uniform shear. The algorithm closely reproduces full MD simulations with two orders of magnitude greater efficiency. Results do not confirm the existence of periodic (cycling) motion of the polymer chain.

  7. A parallel simulated annealing algorithm for standard cell placement on a hypercube computer

    NASA Technical Reports Server (NTRS)

    Jones, Mark Howard

    1987-01-01

    A parallel version of a simulated annealing algorithm is presented which is targeted to run on a hypercube computer. A strategy for mapping the cells in a two-dimensional area of a chip onto processors in an n-dimensional hypercube is proposed such that both small- and large-distance moves can be applied. Two types of moves are allowed: cell exchanges and cell displacements. The computation of the cost function in parallel among all the processors in the hypercube is described, along with a distributed data structure that needs to be stored in the hypercube to support parallel cost evaluation. A novel tree broadcasting strategy is used extensively in the algorithm for updating cell locations in the parallel environment. Studies on the performance of the algorithm on example industrial circuits show that it is faster and gives better final placement results than uniprocessor simulated annealing algorithms. An improved uniprocessor algorithm is proposed which is based on the improved results obtained from parallelization of the simulated annealing algorithm.
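    The two move types are easy to sketch in a serial setting; the paper's contribution is distributing such moves across hypercube processors. The cost model below is an assumed half-perimeter wirelength, not necessarily the one used in the paper:

```python
# Serial sketch of annealing-based placement with the two move types
# (cell exchange, cell displacement); the cost is an assumed
# half-perimeter wirelength.
import math
import random

def anneal(cells, nets, grid, T=10.0, cooling=0.999, steps=5000):
    pos = {c: (random.randrange(grid), random.randrange(grid))
           for c in cells}

    def cost():
        # Half-perimeter wirelength summed over all nets.
        total = 0
        for net in nets:
            xs = [pos[c][0] for c in net]
            ys = [pos[c][1] for c in net]
            total += (max(xs) - min(xs)) + (max(ys) - min(ys))
        return total

    cur = cost()
    for _ in range(steps):
        a = random.choice(cells)
        old = dict(pos)                          # snapshot for rejection
        if len(cells) > 1 and random.random() < 0.5:
            b = random.choice([c for c in cells if c != a])
            pos[a], pos[b] = pos[b], pos[a]      # exchange move
        else:                                    # displacement move
            pos[a] = (random.randrange(grid), random.randrange(grid))
        new = cost()
        if new <= cur or random.random() < math.exp((cur - new) / T):
            cur = new                            # accept (Metropolis)
        else:
            pos = old                            # reject: restore snapshot
        T *= cooling                             # geometric cooling
    return pos, cur

cells = list("abcd")
nets = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]
print(anneal(cells, nets, grid=8))
```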

  8. An Initial Examination for Verifying Separation Algorithms by Simulation

    NASA Technical Reports Server (NTRS)

    White, Allan L.; Neogi, Natasha; Herencia-Zapana, Heber

    2012-01-01

    An open question in algorithms for aircraft is what can be validated by simulation where the simulation shows that the probability of undesirable events is below some given level at some confidence level. The problem is including enough realism to be convincing while retaining enough efficiency to run the large number of trials needed for high confidence. The paper first proposes a goal based on the number of flights per year in several regions. The paper examines the probabilistic interpretation of this goal and computes the number of trials needed to establish it at an equivalent confidence level. Since any simulation is likely to consider the algorithms for only one type of event and there are several types of events, the paper examines under what conditions this separate consideration is valid. This paper is an initial effort, and as such, it considers separation maneuvers, which are elementary but include numerous aspects of aircraft behavior. The scenario includes decisions under uncertainty since the position of each aircraft is only known to the other by broadcasting where GPS believes each aircraft to be (ADS-B). Each aircraft operates under feedback control with perturbations. It is shown that a scenario three or four orders of magnitude more complex is feasible. The question of what can be validated by simulation remains open, but there is reason to be optimistic.
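    The trial-count computation the abstract refers to is commonly done with the zero-failure binomial bound: if no undesirable event occurs in N independent trials, the event probability is below p at confidence c whenever (1 - p)^N <= 1 - c, i.e. N >= ln(1 - c)/ln(1 - p). This is the standard rule, sketched below; the paper's exact computation may differ:

```python
# Standard zero-failure sample-size rule (an illustration; the paper's
# exact computation may differ): number of failure-free trials needed
# to claim failure probability < p at confidence level c.
import math

def trials_needed(p, c):
    return math.ceil(math.log(1.0 - c) / math.log(1.0 - p))

# e.g. demonstrating < 1e-7 per-encounter risk at 99% confidence:
print(trials_needed(1e-7, 0.99))   # about 4.6e7 simulated encounters
```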

  9. Cassini radar : system concept and simulation results

    NASA Astrophysics Data System (ADS)

    Melacci, P. T.; Orosei, R.; Picardi, G.; Seu, R.

    1998-10-01

    The Cassini mission is an international venture, involving NASA, the European Space Agency (ESA) and the Italian Space Agency (ASI), for the investigation of the Saturn system and, in particular, Titan. The Cassini radar will be able to see through Titan's thick, optically opaque atmosphere, allowing us to better understand the composition and the morphology of its surface, but the interpretation of the results, due to the complex interplay of many different factors determining the radar echo, will not be possible without extensive modeling of the radar system functioning and of the surface reflectivity. In this paper, a simulator of the multimode Cassini radar will be described, after a brief review of our current knowledge of Titan and a discussion of the contribution of the Cassini radar in answering currently open questions. Finally, the results of the simulator will be discussed. The simulator has been implemented on a RISC 6000 computer by considering only the active modes of operation, that is, altimeter and synthetic aperture radar. In the instrument simulation, strict reference has been made to the presently planned sequence of observations and to the radar settings, including burst and single pulse duration, pulse bandwidth, pulse repetition frequency and all other parameters which may be changed, and possibly optimized, according to the operative mode. The observed surfaces are simulated by a facet model, allowing the generation of surfaces with Gaussian or non-Gaussian roughness statistics, together with the possibility of assigning to the surface an average behaviour which can represent, for instance, a flat surface or a crater. The results of the simulation will be discussed, in order to check the analytical evaluations of the models of the average received echoes and of the attainable performances. In conclusion, the simulation results should allow the validation of the theoretical evaluations of the capabilities of microwave instruments, when

  10. Developing Subdomain Allocation Algorithms Based on Spatial and Communicational Constraints to Accelerate Dust Storm Simulation.

    PubMed

    Gui, Zhipeng; Yu, Manzhu; Yang, Chaowei; Jiang, Yunfeng; Chen, Songqing; Xia, Jizhe; Huang, Qunying; Liu, Kai; Li, Zhenlong; Hassan, Mohammed Anowarul; Jin, Baoxuan

    2016-01-01

    Dust storms have serious disastrous impacts on the environment, human health, and assets. The development and application of dust storm models have contributed significantly to better understanding and predicting the distribution, intensity and structure of dust storms. However, dust storm simulation is a data- and computing-intensive process. To improve the computing performance, high performance computing has been widely adopted by dividing the entire study area into multiple subdomains and allocating each subdomain to different computing nodes in a parallel fashion. Inappropriate allocation may introduce imbalanced task loads and unnecessary communications among computing nodes. Therefore, allocation is a key factor that may impact the efficiency of the parallel process. An allocation algorithm is expected to consider the computing cost and communication cost for each computing node to minimize total execution time and reduce overall communication cost for the entire simulation. This research introduces three algorithms to optimize the allocation by considering the spatial and communicational constraints: 1) an Integer Linear Programming (ILP) based algorithm from a combinatorial optimization perspective; 2) a K-Means and Kernighan-Lin combined heuristic algorithm (K&K) integrating geometric and coordinate-free methods by merging local and global partitioning; 3) an automatic seeded region growing based geometric and local partitioning algorithm (ASRG). The performance and effectiveness of the three algorithms are compared based on different factors. Further, we adopt the K&K algorithm as the demonstration algorithm for the experiment of dust model simulation with the non-hydrostatic mesoscale model (NMM-dust) and compare the performance with the MPI default sequential allocation. The results demonstrate that the K&K method significantly improves the simulation performance with better subdomain allocation. This method can also be adopted for other relevant atmospheric and numerical
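    The spatial-grouping idea behind the K&K heuristic can be illustrated (illustration only, not the authors' implementation) by its K-Means half: clustering subdomain centroids so each computing node receives a spatially compact set of subdomains, which limits cross-node halo communication:

```python
# Illustration of the spatial-grouping idea (not the authors' K&K code):
# cluster subdomain centroids with K-Means so each computing node gets a
# spatially compact set of subdomains, limiting cross-node communication.
import numpy as np

def kmeans_allocate(centroids, n_nodes, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = centroids[rng.choice(len(centroids), n_nodes, replace=False)]
    for _ in range(iters):
        # assign each subdomain to the nearest node center
        d = np.linalg.norm(centroids[:, None] - centers[None, :], axis=2)
        labels = d.argmin(axis=1)
        # move each center to the mean of its assigned subdomains
        for k in range(n_nodes):
            if np.any(labels == k):
                centers[k] = centroids[labels == k].mean(axis=0)
    return labels

# 8x8 grid of subdomains split across 4 nodes:
grid = np.array([(i, j) for i in range(8) for j in range(8)], float)
print(kmeans_allocate(grid, 4).reshape(8, 8))
```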

  11. Developing Subdomain Allocation Algorithms Based on Spatial and Communicational Constraints to Accelerate Dust Storm Simulation

    PubMed Central

    Gui, Zhipeng; Yu, Manzhu; Yang, Chaowei; Jiang, Yunfeng; Chen, Songqing; Xia, Jizhe; Huang, Qunying; Liu, Kai; Li, Zhenlong; Hassan, Mohammed Anowarul; Jin, Baoxuan

    2016-01-01

    Dust storms have serious disastrous impacts on the environment, human health, and assets. The development and application of dust storm models have contributed significantly to better understanding and predicting the distribution, intensity and structure of dust storms. However, dust storm simulation is a data- and computing-intensive process. To improve the computing performance, high performance computing has been widely adopted by dividing the entire study area into multiple subdomains and allocating each subdomain to different computing nodes in a parallel fashion. Inappropriate allocation may introduce imbalanced task loads and unnecessary communications among computing nodes. Therefore, allocation is a key factor that may impact the efficiency of the parallel process. An allocation algorithm is expected to consider the computing cost and communication cost for each computing node to minimize total execution time and reduce overall communication cost for the entire simulation. This research introduces three algorithms to optimize the allocation by considering the spatial and communicational constraints: 1) an Integer Linear Programming (ILP) based algorithm from a combinatorial optimization perspective; 2) a K-Means and Kernighan-Lin combined heuristic algorithm (K&K) integrating geometric and coordinate-free methods by merging local and global partitioning; 3) an automatic seeded region growing based geometric and local partitioning algorithm (ASRG). The performance and effectiveness of the three algorithms are compared based on different factors. Further, we adopt the K&K algorithm as the demonstration algorithm for the experiment of dust model simulation with the non-hydrostatic mesoscale model (NMM-dust) and compare the performance with the MPI default sequential allocation. The results demonstrate that the K&K method significantly improves the simulation performance with better subdomain allocation. This method can also be adopted for other relevant atmospheric and numerical

  12. Self-adaptive genetic algorithms with simulated binary crossover.

    PubMed

    Deb, K; Beyer, H G

    2001-01-01

    Self-adaptation is an essential feature of natural evolution. However, in the context of function optimization, self-adaptation features of evolutionary search algorithms have been explored mainly with evolution strategy (ES) and evolutionary programming (EP). In this paper, we demonstrate the self-adaptive feature of real-parameter genetic algorithms (GAs) using a simulated binary crossover (SBX) operator and without any mutation operator. The connection between the working of self-adaptive ESs and real-parameter GAs with the SBX operator is also discussed. Thereafter, the self-adaptive behavior of real-parameter GAs is demonstrated on a number of test problems commonly used in the ES literature. The remarkable similarity in the working principle of real-parameter GAs and self-adaptive ESs shown in this study suggests the need for emphasizing further studies on self-adaptive GAs. PMID:11382356
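    For reference, the SBX operator in its commonly published form (Deb and Agrawal) draws a spread factor from a polynomial distribution controlled by the distribution index eta; variable and parameter names below are mine:

```python
# Simulated binary crossover (SBX) in its commonly published form;
# variable names are mine. Larger eta keeps children closer to parents.
import random

def sbx(p1, p2, eta=2.0):
    """Return two children from real-valued parents p1, p2."""
    c1, c2 = [], []
    for x1, x2 in zip(p1, p2):
        u = random.random()
        # spread factor beta from the polynomial distribution
        if u <= 0.5:
            beta = (2.0 * u) ** (1.0 / (eta + 1.0))
        else:
            beta = (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta + 1.0))
        c1.append(0.5 * ((1 + beta) * x1 + (1 - beta) * x2))
        c2.append(0.5 * ((1 - beta) * x1 + (1 + beta) * x2))
    return c1, c2

print(sbx([0.0, 1.0], [1.0, 3.0]))
```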

  13. An algorithm for protein engineering: simulations of recursive ensemble mutagenesis.

    PubMed Central

    Arkin, A P; Youvan, D C

    1992-01-01

    An algorithm for protein engineering, termed recursive ensemble mutagenesis, has been developed to produce diverse populations of phenotypically related mutants whose members differ in amino acid sequence. This method uses a feedback mechanism to control successive rounds of combinatorial cassette mutagenesis. Starting from partially randomized "wild-type" DNA sequences, a highly parallel search of sequence space for peptides fitting an experimenter's criteria is performed. Each iteration uses information gained from the previous rounds to search the space more efficiently. Simulations of the technique indicate that, under a variety of conditions, the algorithm can rapidly produce a diverse population of proteins fitting specific criteria. In the experimental analog, genetic selection or screening applied during recursive ensemble mutagenesis should force the evolution of an ensemble of mutants to a targeted cluster of related phenotypes. PMID:1502200

  14. A conflict-free, path-level parallelization approach for sequential simulation algorithms

    NASA Astrophysics Data System (ADS)

    Rasera, Luiz Gustavo; Machado, Péricles Lopes; Costa, João Felipe C. L.

    2015-07-01

    Pixel-based simulation algorithms are the most widely used geostatistical technique for characterizing the spatial distribution of natural resources. However, sequential simulation does not scale well for stochastic simulation on very large grids, which are now commonly found in many petroleum, mining, and environmental studies. With the availability of multiple-processor computers, there is an opportunity to develop parallelization schemes for these algorithms to increase their performance and efficiency. Here we present a conflict-free, path-level parallelization strategy for sequential simulation. The method consists of partitioning the simulation grid into a set of groups of nodes and delegating all available processors for simulation of multiple groups of nodes concurrently. An automated classification procedure determines which groups are simulated in parallel according to their spatial arrangement in the simulation grid. The major advantage of this approach is that it does not require conflict resolution operations, and thus allows exact reproduction of results. Besides offering a large performance gain when compared to the traditional serial implementation, the method provides efficient use of computational resources and is generic enough to be adapted to several sequential algorithms.
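    A toy version of conflict-free grouping (my illustration, not the paper's automated classifier) is a four-color checkerboard over blocks of grid nodes: blocks of the same color are never adjacent, so they can be simulated concurrently without two processors writing into each other's search neighborhoods:

```python
# Toy illustration of conflict-free grouping (not the paper's automated
# classifier): a four-color checkerboard over blocks of grid nodes.
# Blocks with the same label are never adjacent (offsets are even), so
# one label at a time can be simulated in parallel without conflicts.
import numpy as np

def checkerboard_groups(nbx, nby):
    """Label each block (i, j) with one of four conflict-free groups."""
    groups = np.empty((nbx, nby), dtype=int)
    for i in range(nbx):
        for j in range(nby):
            groups[i, j] = 2 * (i % 2) + (j % 2)
    return groups

print(checkerboard_groups(4, 4))
```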

  15. Concurrent Algorithm For Particle-In-Cell Simulations

    NASA Technical Reports Server (NTRS)

    Liewer, Paulett C.; Decyk, Viktor K.

    1990-01-01

    Separate decompositions are used for the particle-motion and field calculations. The General Concurrent Particle-in-Cell (GCPIC) algorithm is used to implement particle-in-cell (PIC) computer codes on concurrent processors. PIC codes simulate the motions of individual plasma particles (ions and electrons) under the influence of the electromagnetic fields generated by the particles themselves. Such simulations are performed to study a variety of nonlinear problems in plasma physics, including magnetic and inertial fusion, plasmas in outer space, propagation of electron and ion beams, free-electron lasers, and particle accelerators.

  16. Titan's organic chemistry: Results of simulation experiments

    NASA Technical Reports Server (NTRS)

    Sagan, Carl; Thompson, W. Reid; Khare, Bishun N.

    1992-01-01

    Recent low pressure continuous low plasma discharge simulations of the auroral electron driven organic chemistry in Titan's mesosphere are reviewed. These simulations yielded results in good accord with Voyager observations of gas phase organic species. Optical constants of the brownish solid tholins produced in similar experiments are in good accord with Voyager observations of the Titan haze. Titan tholins are rich in prebiotic organic constituents; the Huygens entry probe may shed light on some of the processes that led to the origin of life on Earth.

  17. A permutation based simulated annealing algorithm to predict pseudoknotted RNA secondary structures.

    PubMed

    Tsang, Herbert H; Wiese, Kay C

    2015-01-01

    Pseudoknots are RNA tertiary structures which perform essential biological functions. This paper discusses SARNA-Predict-pk, an RNA pseudoknotted secondary structure prediction algorithm based on Simulated Annealing (SA). The research presented here extends previous work on SARNA-Predict and further examines the effect of the new algorithm in predicting RNA secondary structure with pseudoknots. An evaluation of the performance of SARNA-Predict-pk in terms of prediction accuracy is made via comparison with several state-of-the-art prediction algorithms using 20 individual known structures from seven RNA classes. We measured the sensitivity and specificity of nine prediction algorithms. Three of these are dynamic programming algorithms: Pseudoknot (pknotsRE), NUPACK, and pknotsRG-mfe; one uses a statistical clustering approach (Sfold); and the other five are heuristic algorithms: SARNA-Predict-pk, ILM, STAR, IPknot and HotKnots. The results presented in this paper demonstrate that SARNA-Predict-pk can outperform other state-of-the-art algorithms in terms of prediction accuracy. This supports the use of the proposed method for pseudoknotted RNA secondary structure prediction of other known structures. PMID:26558299

  18. An optimization method of relativistic backward wave oscillator using particle simulation and genetic algorithms

    SciTech Connect

    Chen, Zaigao; Wang, Jianguo; Wang, Yue; Qiao, Hailiang; Zhang, Dianhui; Guo, Weijie

    2013-11-15

    An optimal design method for high-power microwave sources using particle simulation and parallel genetic algorithms is presented in this paper. The output power of the high-power microwave device, simulated by the fully electromagnetic particle simulation code UNIPIC, is taken as the fitness function, and float-encoding genetic algorithms are used to optimize the device. Using this method, we encode the heights of the non-uniform slow wave structure in relativistic backward wave oscillators (RBWO) and optimize the parameters on massively parallel processors. Simulation results demonstrate that we can obtain the optimal parameters of the non-uniform slow wave structure in the RBWO, and that the output microwave power increases by 52.6% after the device is optimized.

  19. Constant-complexity stochastic simulation algorithm with optimal binning

    SciTech Connect

    Sanft, Kevin R.; Othmer, Hans G.

    2015-08-21

    At the molecular level, biochemical processes are governed by random interactions between reactant molecules, and the dynamics of such systems are inherently stochastic. When the copy numbers of reactants are large, a deterministic description is adequate, but when they are small, such systems are often modeled as continuous-time Markov jump processes that can be described by the chemical master equation. Gillespie’s Stochastic Simulation Algorithm (SSA) generates exact trajectories of these systems, but the amount of computational work required for each step of the original SSA is proportional to the number of reaction channels, leading to computational complexity that scales linearly with the problem size. The original SSA is therefore inefficient for large problems, which has prompted the development of several alternative formulations with improved scaling properties. We describe an exact SSA that uses a table data structure with event time binning to achieve constant computational complexity with respect to the number of reaction channels for weakly coupled reaction networks. We present a novel adaptive binning strategy and discuss optimal algorithm parameters. We compare the computational efficiency of the algorithm to existing methods and demonstrate excellent scaling for large problems. This method is well suited for generating exact trajectories of large weakly coupled models, including those that can be described by the reaction-diffusion master equation that arises from spatially discretized reaction-diffusion processes.
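    For orientation, the original direct-method SSA that the paper improves on can be sketched as follows; the linear scan over reaction channels marked below is the step the binned table replaces with a constant-time lookup (toy model and rates are mine):

```python
# Standard Gillespie direct method (for orientation; the paper replaces
# the linear-time channel selection below with a constant-time binned
# table lookup). Toy model and rates are mine.
import math
import random

def ssa_direct(x, stoich, propensity, t_end):
    """x: state list; stoich[j]: list of (species, change); propensity(x, j)."""
    t = 0.0
    while t < t_end:
        a = [propensity(x, j) for j in range(len(stoich))]
        a0 = sum(a)
        if a0 == 0.0:
            break                                   # no reaction possible
        t += -math.log(1.0 - random.random()) / a0  # exponential wait
        r, j, acc = random.random() * a0, 0, a[0]
        while acc < r:                              # linear scan: O(channels)
            j += 1
            acc += a[j]
        for i, d in stoich[j]:
            x[i] += d
    return x

# Toy birth-death model: 0 -> S (rate 1.0), S -> 0 (rate 0.1 * S)
stoich = [[(0, +1)], [(0, -1)]]
prop = lambda x, j: 1.0 if j == 0 else 0.1 * x[0]
print(ssa_direct([0], stoich, prop, t_end=100.0))
```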

  20. Upper cervical injuries: Clinical results using a new treatment algorithm

    PubMed Central

    Joaquim, Andrei F.; Ghizoni, Enrico; Tedeschi, Helder; Yacoub, Alexandre R. D.; Brodke, Darrel S.; Vaccaro, Alexander R.; Patel, Alpesh A.

    2015-01-01

    Introduction: Upper cervical injuries (UCI) have a wide range of radiological and clinical presentations due to the unique complex bony, ligamentous and vascular anatomy. We recently proposed a rational approach in an attempt to unify prior classification systems and guide treatment. In this paper, we evaluate the clinical results of our algorithm for UCI treatment. Materials and Methods: A prospective cohort series of patients with UCI was performed. The primary outcome was the American Spinal Injury Association (ASIA) Impairment Scale (AIS). Surgical treatment was proposed based on our protocol: ligamentous injuries (abnormal misalignment, perched or locked facets, increased atlanto-dens interval) were treated surgically. Bone fractures without ligamentous injuries were treated with a rigid cervical orthosis, with the exception of fractures at the dens base with risk factors for non-union. Results: Twenty-three patients treated initially conservatively had some follow-up (mean of 171 days, range from 60 to 436 days). All of them were neurologically intact. None of the patients developed a new neurological deficit. Fifteen patients were initially surgically treated (mean of 140 days of follow-up, ranging from 60 to 270 days). In the surgical group, preoperatively, 11 (73.3%) patients were AIS E, 2 (13.3%) AIS C and 2 (13.3%) AIS D. At the final follow-up, the AIS scores were: 13 (86.6%) AIS E and 2 (13.3%) AIS D. None of the patients had neurological worsening during the follow-up. Conclusions: This prospective cohort suggests that our UCI treatment algorithm can be safely used. Further prospective studies with longer follow-up are necessary to further establish its clinical validity and safety. PMID:25788816

  1. Sensitivity of CO2 Simulation in a GCM to the Convective Transport Algorithms

    NASA Technical Reports Server (NTRS)

    Zhu, Z.; Pawson, S.; Collatz, G. J.; Gregg, W. W.; Kawa, S. R.; Baker, D.; Ott, L.

    2014-01-01

    Convection plays an important role in the transport of heat, moisture and trace gases. In this study, we simulated CO2 concentrations with an atmospheric general circulation model (GCM). Three different convective transport algorithms were used. One is a modified Arakawa-Schubert scheme that was native to the GCM; the two others, used in two off-line chemical transport models (CTMs), were added to the GCM here for comparison purposes. Advanced CO2 surface fluxes were used for the simulations. The results were compared to a large quantity of CO2 observation data. We find that the simulation results are sensitive to the convective transport algorithms. Overall, the three simulations are quite realistic and similar to each other in the remote marine regions, but are significantly different in some land regions with strong fluxes, such as the Amazon and Siberia, during the convection seasons. Large biases against CO2 measurements are found in these regions in the control run, which uses the original GCM. The simulation with the simple diffusive algorithm is better. The difference between the two simulations is related to the very different convective transport speeds.

  2. Modifications to Axially Symmetric Simulations Using New DSMC (2007) Algorithms

    NASA Technical Reports Server (NTRS)

    Liechty, Derek S.

    2008-01-01

    Several modifications aimed at improving physical accuracy are proposed for solving axially symmetric problems building on the DSMC (2007) algorithms introduced by Bird. Originally developed to solve nonequilibrium, rarefied flows, the DSMC method is now regularly used to solve complex problems over a wide range of Knudsen numbers. These new algorithms include features such as nearest neighbor collisions excluding the previous collision partners, separate collision and sampling cells, automatically adaptive variable time steps, a modified no-time counter procedure for collisions, and discontinuous and event-driven physical processes. Axially symmetric solutions require radial weighting for the simulated molecules since the molecules near the axis represent fewer real molecules than those farther away from the axis due to the difference in volume of the cells. In the present methodology, these radial weighting factors are continuous, linear functions that vary with the radial position of each simulated molecule. It is shown that how one defines the number of tentative collisions greatly influences the mean collision time near the axis. The method by which the grid is treated for axially symmetric problems also plays an important role near the axis, especially for scalar pressure. A new method to treat how the molecules are traced through the grid is proposed to alleviate the decrease in scalar pressure at the axis near the surface. Also, a modification to the duplication buffer is proposed to vary the duplicated molecular velocities while retaining the molecular kinetic energy and axially symmetric nature of the problem.

  3. Verifying Algorithms for Autonomous Aircraft by Simulation Generalities and Example

    NASA Technical Reports Server (NTRS)

    White, Allan L.

    2010-01-01

    An open question in Air Traffic Management is what procedures can be validated by simulation, where the simulation shows that the probability of undesirable events is below the required level at some confidence level. The problem is including enough realism to be convincing while retaining enough efficiency to run the large number of trials needed for high confidence. The paper first examines the probabilistic interpretation of a typical requirement by a regulatory agency and computes the number of trials needed to establish the requirement at an equivalent confidence level. Since any simulation is likely to consider only one type of event and there are several types of events, the paper examines under what conditions this separate consideration is valid. The paper establishes a separation algorithm at the required confidence level where the aircraft operates under feedback control and is subject to perturbations. There is a discussion where it is shown that a scenario three or four orders of magnitude more complex is feasible. The question of what can be validated by simulation remains open, but there is reason to be optimistic.

  4. An adaptive multi-level simulation algorithm for stochastic biological systems

    NASA Astrophysics Data System (ADS)

    Lester, C.; Yates, C. A.; Giles, M. B.; Baker, R. E.

    2015-01-01

    Discrete-state, continuous-time Markov models are widely used in the modeling of biochemical reaction networks. Their complexity often precludes analytic solution, and we rely on stochastic simulation algorithms (SSA) to estimate system statistics. The Gillespie algorithm is exact, but computationally costly as it simulates every single reaction. As such, approximate stochastic simulation algorithms such as the tau-leap algorithm are often used. Potentially computationally more efficient, the system statistics generated suffer from significant bias unless tau is relatively small, in which case the computational time can be comparable to that of the Gillespie algorithm. The multi-level method [Anderson and Higham, "Multi-level Monte Carlo for continuous time Markov chains, with applications in biochemical kinetics," SIAM Multiscale Model. Simul. 10(1), 146-179 (2012)] tackles this problem. A base estimator is computed using many (cheap) sample paths at low accuracy. The bias inherent in this estimator is then reduced using a number of corrections. Each correction term is estimated using a collection of paired sample paths where one path of each pair is generated at a higher accuracy compared to the other (and so more expensive). By sharing random variables between these paired paths, the variance of each correction estimator can be reduced. This renders the multi-level method very efficient as only a relatively small number of paired paths are required to calculate each correction term. In the original multi-level method, each sample path is simulated using the tau-leap algorithm with a fixed value of τ. This approach can result in poor performance when the reaction activity of a system changes substantially over the timescale of interest. By introducing a novel adaptive time-stepping approach where τ is chosen according to the stochastic behaviour of each sample path, we extend the applicability of the multi-level method to such cases. We demonstrate the
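    The fixed-step tau-leap update that the multi-level method builds on can be sketched as follows (a plain illustration with a toy model; the paper's contribution is choosing τ adaptively per sample path and pairing paths across levels):

```python
# Plain fixed-step tau-leap (an illustration; the paper chooses tau
# adaptively per path and couples paths across levels). Channel j fires
# a Poisson number of times over each step of length tau.
import numpy as np

def tau_leap(x, stoich_matrix, propensity, tau, t_end, rng=None):
    """x: state vector; stoich_matrix[j]: state change of channel j."""
    rng = rng or np.random.default_rng()
    x = np.array(x, dtype=int)
    t = 0.0
    while t < t_end:
        a = propensity(x)                  # per-channel rates
        k = rng.poisson(a * tau)           # firing counts in [t, t + tau)
        x = np.maximum(x + k @ stoich_matrix, 0)  # crude negativity guard
        t += tau
    return x

# Same toy birth-death model as the SSA sketch: 0 -> S, S -> 0
S = np.array([[+1], [-1]])
a = lambda x: np.array([1.0, 0.1 * x[0]])
print(tau_leap([0], S, a, tau=0.1, t_end=100.0))
```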

  5. An adaptive multi-level simulation algorithm for stochastic biological systems

    SciTech Connect

    Lester, C. Giles, M. B.; Baker, R. E.; Yates, C. A.

    2015-01-14

    Discrete-state, continuous-time Markov models are widely used in the modeling of biochemical reaction networks. Their complexity often precludes analytic solution, and we rely on stochastic simulation algorithms (SSA) to estimate system statistics. The Gillespie algorithm is exact, but computationally costly as it simulates every single reaction. As such, approximate stochastic simulation algorithms such as the tau-leap algorithm are often used. Potentially computationally more efficient, the system statistics generated suffer from significant bias unless tau is relatively small, in which case the computational time can be comparable to that of the Gillespie algorithm. The multi-level method [Anderson and Higham, “Multi-level Monte Carlo for continuous time Markov chains, with applications in biochemical kinetics,” SIAM Multiscale Model. Simul. 10(1), 146–179 (2012)] tackles this problem. A base estimator is computed using many (cheap) sample paths at low accuracy. The bias inherent in this estimator is then reduced using a number of corrections. Each correction term is estimated using a collection of paired sample paths where one path of each pair is generated at a higher accuracy compared to the other (and so more expensive). By sharing random variables between these paired paths, the variance of each correction estimator can be reduced. This renders the multi-level method very efficient as only a relatively small number of paired paths are required to calculate each correction term. In the original multi-level method, each sample path is simulated using the tau-leap algorithm with a fixed value of τ. This approach can result in poor performance when the reaction activity of a system changes substantially over the timescale of interest. By introducing a novel adaptive time-stepping approach where τ is chosen according to the stochastic behaviour of each sample path, we extend the applicability of the multi-level method to such cases. We demonstrate the

  6. A piloted simulator evaluation of a ground-based 4D descent advisor algorithm

    NASA Technical Reports Server (NTRS)

    Green, Steven M.; Davis, Thomas J.; Erzberger, Heinz

    1987-01-01

    A ground-based, four-dimensional (4D) descent-advisor algorithm is under development at NASA Ames Research Center. The algorithm combines detailed aerodynamic, propulsive, and atmospheric models with an efficient numerical integration scheme to generate 4D descent advisories. This paper investigates the ability of the 4D descent advisor algorithm to provide adequate control of arrival time for aircraft not equipped with on-board 4D guidance systems. A piloted simulation was conducted to determine the precision with which the descent advisor could predict the 4D trajectories of typical straight-in descents flown by airline pilots under different wind conditions. The effects of errors in the estimation of wind and initial aircraft weight were also studied. A description of the descent advisor as well as the results of the simulation studies are presented.

  7. A piloted simulator evaluation of a ground-based 4-D descent advisor algorithm

    NASA Technical Reports Server (NTRS)

    Davis, Thomas J.; Green, Steven M.; Erzberger, Heinz

    1990-01-01

    A ground-based, four-dimensional (4D) descent-advisor algorithm is under development at NASA-Ames. The algorithm combines detailed aerodynamic, propulsive, and atmospheric models with an efficient numerical integration scheme to generate 4D descent advisories. The ability of the 4D descent advisor algorithm to provide adequate control of arrival time for aircraft not equipped with on-board 4D guidance systems is investigated. A piloted simulation was conducted to determine the precision with which the descent advisor could predict the 4D trajectories of typical straight-in descents flown by airline pilots under different wind conditions. The effects of errors in the estimation of wind and initial aircraft weight were also studied. A description of the descent advisor as well as the results of the simulation studies are presented.

  8. R-leaping: Accelerating the stochastic simulation algorithm by reaction leaps

    NASA Astrophysics Data System (ADS)

    Auger, Anne; Chatelain, Philippe; Koumoutsakos, Petros

    2006-08-01

    A novel algorithm is proposed for the acceleration of the exact stochastic simulation algorithm by a predefined number of reaction firings (R-leaping) that may occur across several reaction channels. In the present approach, the numbers of reaction firings are correlated binomial distributions and the sampling procedure is independent of any permutation of the reaction channels. This enables the algorithm to efficiently handle large systems with disparate rates, providing substantial computational savings in certain cases. Several mechanisms for controlling the accuracy and the appearance of negative species are described. The advantages and drawbacks of R-leaping are assessed by simulations on a number of benchmark problems and the results are discussed in comparison with established methods.

  9. Efficient parallel algorithm for statistical ion track simulations in crystalline materials

    NASA Astrophysics Data System (ADS)

    Jeon, Byoungseon; Grønbech-Jensen, Niels

    2009-02-01

    We present an efficient parallel algorithm for statistical Molecular Dynamics simulations of ion tracks in solids. The method is based on the Rare Event Enhanced Domain following Molecular Dynamics (REED-MD) algorithm, which has been successfully applied to studies of, e.g., ion implantation into crystalline semiconductor wafers. We discuss the strategies for parallelizing the method, and we settle on a host-client type polling scheme in which multiple asynchronous client processors continuously feed the host, which, in turn, distributes the resulting feedback information to the clients. This real-time feedback consists of, e.g., cumulative damage information or statistics updates necessary for the cloning in the rare-event algorithm. We finally demonstrate the algorithm for radiation effects in a nuclear oxide fuel, and we show that the balanced parallel approach achieves high parallel efficiency in multiple-processor configurations.

  10. Kinetic simulation of fiber amplifier based on parallelizable and bidirectional algorithm

    NASA Astrophysics Data System (ADS)

    Chen, Haihuan; Yang, Huanbi; Wu, Wenhan

    2015-10-01

    Simulating light waves that propagate in opposite directions in a fiber requires handling an extremely large volume of data when sequential, unidirectional methods are employed, in which the simulation is carried out in a coordinate system that moves along with the light waves. Therefore, an alternative simulation algorithm should be used when calculating counter-propagating light waves. The parallelizable and bidirectional (PB) algorithm matches the light waves in the time domain instead of the space domain, does not need iteration, and permits efficient parallelization on multiple processors. The PB method has been proposed to calculate the propagation of a dispersing Gaussian pulse and of a bit stream in fibers. However, the PB method also has apparent advantages when simulating pulses in fiber laser amplifiers, which has not yet been investigated in detail. In this paper, we perform the simulation of pulses in a rare-earth-ion-doped fiber amplifier. The influences of pump power, signal power, repetition rate, pulse width, and fiber length on the amplifier's output average power, peak power, pulse energy, and pulse shape are investigated. The results indicate that the PB method is effective when simulating high-power amplification of pulses in a fiber amplifier. Furthermore, nonlinear effects can be added to the simulation conveniently. The work in this paper provides a more economical and efficient method to simulate power amplification in fiber lasers.

  11. Monte Carlo algorithm for simulating fermions on Lefschetz thimbles

    NASA Astrophysics Data System (ADS)

    Alexandru, Andrei; Başar, Gökçe; Bedaque, Paulo

    2016-01-01

    A possible solution of the notorious sign problem preventing direct Monte Carlo calculations for systems with nonzero chemical potential is to deform the integration region in the complex plane to a Lefschetz thimble. We investigate this approach for a simple fermionic model. We introduce an easy to implement Monte Carlo algorithm to sample the dominant thimble. Our algorithm relies only on the integration of the gradient flow in the numerically stable direction, which gives it a distinct advantage over the other proposed algorithms. We demonstrate the stability and efficiency of the algorithm by applying it to an exactly solvable fermionic model and compare our results with the analytical ones. We report a very good agreement for a certain region in the parameter space where the dominant contribution comes from a single thimble, including a region where standard methods suffer from a severe sign problem. However, we find that there are also regions in the parameter space where the contribution from multiple thimbles is important, even in the continuum limit.

  12. The design and results of an algorithm for intelligent ground vehicles

    NASA Astrophysics Data System (ADS)

    Duncan, Matthew; Milam, Justin; Tote, Caleb; Riggins, Robert N.

    2010-01-01

    This paper addresses the design, design method, test platform, and test results of an algorithm used in autonomous navigation for intelligent vehicles. The Bluefield State College (BSC) team created this algorithm for its 2009 Intelligent Ground Vehicle Competition (IGVC) robot called Anassa V. The BSC robotics team comprises undergraduate computer science, engineering technology, and marketing students, and one robotics faculty advisor. The team has participated in IGVC since the year 2000. A major part of the design process that the BSC team uses each year for IGVC is a fully documented "Post-IGVC Analysis." Over the nine years since 2000, the lessons the students learned from these analyses have resulted in an ever-improving, highly successful autonomous algorithm. The algorithm employed in Anassa V is a culmination of past successes and new ideas, resulting in Anassa V earning several excellent IGVC 2009 performance awards, including third place overall. The paper discusses all aspects of the design of this autonomous robotic system, beginning with the design process and ending with test results for both simulation and real environments.

  13. On constructing optimistic simulation algorithms for the discrete event system specification

    SciTech Connect

    Nutaro, James J

    2008-01-01

    This article describes a Time Warp simulation algorithm for discrete event models that are described in terms of the Discrete Event System Specification (DEVS). The article shows how the total state transition and total output function of a DEVS atomic model can be transformed into an event processing procedure for a logical process. A specific Time Warp algorithm is constructed around this logical process, and it is shown that the algorithm correctly simulates a DEVS coupled model that consists entirely of interacting atomic models. The simulation algorithm is presented abstractly; it is intended to provide a basis for implementing efficient and scalable parallel algorithms that correctly simulate DEVS models.

  14. Numerical simulations of catastrophic disruption: Recent results

    NASA Technical Reports Server (NTRS)

    Benz, W.; Asphaug, E.; Ryan, E. V.

    1994-01-01

    Numerical simulations have been used to study high-velocity two-body impacts. In this paper, a two-dimensional Lagrangian finite difference hydro-code and a three-dimensional smooth particle hydro-code (SPH) are described and initial results reported. These codes can be, and have been, used to make specific predictions about particular objects in our solar system. But more significantly, they allow us to explore a broad range of collisional events. Certain parameters (size, time) can be studied only over a very restricted range within the laboratory; other parameters (initial spin, low gravity, exotic structure or composition) are difficult to study at all experimentally. The outcomes of numerical simulations lead to a more general and accurate understanding of impacts in their many forms.

  15. Humanoid robot gait optimization: Stretched simulated annealing and genetic algorithm a comparative study

    NASA Astrophysics Data System (ADS)

    Pereira, Ana I.; Lima, José; Costa, Paulo

    2013-10-01

    There are several approaches to humanoid robot gait planning. The problem presents a large number of unknown parameters that must be found to make the humanoid robot walk. Optimization in simulation models can be used to find the gait based on several criteria, such as energy minimization, acceleration, and step length, among others. This paper presents a comparison between two optimization methods, Stretched Simulated Annealing and the Genetic Algorithm, both running in an accurate and stable simulation model. Final results show the comparative study and demonstrate that optimization is a valid gait planning technique.

  16. Evaluation of effective-stress-function algorithm for nuclear fuel simulation

    SciTech Connect

    Kim, H. C.; Yang, Y. S.; Koo, Y. H.

    2013-07-01

    In a pressurized water reactor (PWR), the mechanical integrity of nuclear fuel is the most critical issue, as the fuel is an important barrier against the release of fission products into the environment. The integrity of the zirconium cladding that surrounds the uranium oxide can be threatened during off-normal operation owing to pellet-cladding mechanical interaction (PCMI). To analyze the fuel and cladding behavior during off-normal operation, the fuel performance code should perform an inelastic analysis in two- or three-dimensional calculations. In this paper, the effective stress function (ESF) algorithm, based on a two-dimensional FE module, has been implemented to simulate the inelastic behavior of the cladding with stability and accuracy. The ESF algorithm solves the governing equations of the inelastic constitutive behavior by finding the zero of the appropriate effective stress function. To verify the accuracy of the ESF algorithm for an inelastic analysis, a code-to-code benchmark was performed using the commercial FE code ANSYS 13.0. To demonstrate the stability and convergence of the implemented algorithm, the number of iterations in the ESF algorithm was compared with that in a sequential algorithm for an inelastic problem. The evaluation results demonstrate that the implemented ESF algorithm improves the efficiency of the computation without a loss of accuracy for an inelastic analysis. (authors)

  17. A real-time simulation evaluation of an advanced detection, isolation and accommodation algorithm for sensor failures in turbine engines

    NASA Technical Reports Server (NTRS)

    Merrill, W. C.; Delaat, J. C.

    1986-01-01

    An advanced sensor failure detection, isolation, and accommodation (ADIA) algorithm has been developed for use with an aircraft turbofan engine control system. In a previous paper the authors described the ADIA algorithm and its real-time implementation. Subsequent improvements made to the algorithm and implementation are discussed, and the results of an evaluation presented. The evaluation used a real-time, hybrid computer simulation of an F100 turbofan engine.

  18. A real-time simulation evaluation of an advanced detection, isolation and accommodation algorithm for sensor failures in turbine engines

    NASA Technical Reports Server (NTRS)

    Merrill, W. C.; Delaat, J. C.

    1986-01-01

    An advanced sensor failure detection, isolation, and accommodation (ADIA) algorithm has been developed for use with an aircraft turbofan engine control system. In a previous paper the authors described the ADIA algorithm and its real-time implementation. Subsequent improvements made to the algorithm and implementation are discussed, and the results of an evaluation presented. The evaluation used a real-time, hybrid computer simulation of an F100 turbofan engine.

  19. Fast Plasma Instrument for MMS: Simulation Results

    NASA Technical Reports Server (NTRS)

    Figueroa-Vinas, Adolfo; Adrian, Mark L.; Lobell, James V.; Simpson, David G.; Barrie, Alex; Winkert, George E.; Yeh, Pen-Shu; Moore, Thomas E.

    2008-01-01

    The Magnetospheric Multiscale (MMS) mission will study small-scale reconnection structures and their rapid motions from closely spaced platforms using instruments capable of high angular, energy, and time resolution measurements. The Dual Electron Spectrometer (DES) of the Fast Plasma Instrument (FPI) for MMS meets these demanding requirements by acquiring the electron velocity distribution functions (VDFs) for the full sky with high-resolution angular measurements every 30 ms. This will provide unprecedented access to electron-scale dynamics within the reconnection diffusion region. The DES consists of eight half-top-hat energy analyzers, each with a 6 deg. x 11.25 deg. field of view (FOV). Full-sky coverage is achieved by electrostatically stepping the FOV of each of the eight sensors through four discrete deflection look directions. Data compression and burst memory management will provide approximately 30 minutes of high time resolution data during each orbit of the four MMS spacecraft. Each spacecraft will intelligently downlink the data sequences that contain the greatest amount of temporal structure. Here we present the results of a simulation of the DES analyzer measurements, data compression and decompression, as well as ground-based analysis using re-processed Cluster/PEACE electron measurements as a seed. The Cluster/PEACE electron measurements have been reprocessed through virtual DES analyzers with their proper geometrical, energy, and timing scale factors and re-mapped via interpolation to the DES angular and energy phase-space sampling measurements. The results of the simulated DES measurements are analyzed, and the full moments of the simulated VDFs are compared with those obtained from the Cluster/PEACE spectrometer using a standard quadrature moment method, a newly implemented spectral spherical harmonic method, and a singular value decomposition method. Our preliminary moment calculations show a remarkable agreement within the uncertainties of the measurements.

  20. Application of Simulated Annealing and Related Algorithms to TWTA Design

    NASA Technical Reports Server (NTRS)

    Radke, Eric M.

    2004-01-01

    Simulated Annealing (SA) is a stochastic optimization algorithm used to search for global minima in complex design surfaces where exhaustive searches are not computationally feasible. The algorithm is derived by simulating the annealing process, whereby a solid is heated to a liquid state and then cooled slowly to reach thermodynamic equilibrium at each temperature. The idea is that atoms in the solid continually bond and re-bond at various quantum energy levels, and with sufficient cooling time they will rearrange at the minimum energy state to form a perfect crystal. The distribution of energy levels is given by the Boltzmann distribution: as temperature drops, the probability of the presence of high-energy bonds decreases. In searching for an optimal design, local minima and discontinuities are often present in a design surface. SA presents a distinct advantage over other optimization algorithms in its ability to escape from these local minima. Just as high-energy atomic configurations are visited in the actual annealing process in order to eventually reach the minimum energy state, in SA highly non-optimal configurations are visited in order to find otherwise inaccessible global minima. The SA algorithm produces a Markov chain of points in the design space at each temperature, with a monotonically decreasing temperature. The process starts at a random point, and the objective function is evaluated there. A stochastic perturbation is then made to the parameters of the point to arrive at a proposed new point in the design space, at which the objective function is evaluated as well. If the change in objective function values, ΔE, is negative, the proposed new point is accepted. If ΔE is positive, the proposed new point is accepted according to the Metropolis criterion: P(ΔE) = exp(-ΔE/T), where T is the temperature for the current Markov chain. The process then repeats for the remainder of the Markov chain, after which the temperature is reduced and a new Markov chain begins.
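
    The acceptance rule described above is easy to make concrete. Below is a minimal, self-contained Python sketch of this SA loop (not the TWTA design code from the paper): `objective` and `perturb` are user-supplied callables, and the geometric cooling schedule is an illustrative choice.

```python
import math
import random

def simulated_annealing(objective, x0, perturb, t0=1.0, cooling=0.95,
                        chain_len=100, t_min=1e-3):
    """Generic SA: one Markov chain per temperature, geometric cooling."""
    x, e = x0, objective(x0)
    best_x, best_e = x, e
    t = t0
    while t > t_min:
        for _ in range(chain_len):          # Markov chain at temperature t
            x_new = perturb(x)              # stochastic perturbation
            delta_e = objective(x_new) - e
            # Metropolis criterion: always accept downhill moves, accept
            # uphill moves with probability exp(-delta_e / t).
            if delta_e < 0 or random.random() < math.exp(-delta_e / t):
                x, e = x_new, e + delta_e
                if e < best_e:
                    best_x, best_e = x, e
        t *= cooling                        # lower t for the next chain
    return best_x, best_e
```

    For example, `simulated_annealing(lambda x: x * x + 3 * math.sin(5 * x), 2.0, lambda x: x + random.gauss(0, 0.3))` typically escapes the local minima created by the sine ripple and returns a point near the global minimum.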

  1. Simulating Future GPS Clock Scenarios with Two Composite Clock Algorithms

    NASA Technical Reports Server (NTRS)

    Suess, Matthias; Matsakis, Demetrios; Greenhall, Charles A.

    2010-01-01

    Using the GPS Toolkit, the GPS constellation is simulated using 31 satellites (SV) and a ground network of 17 monitor stations (MS). At every 15-minute measurement epoch, the monitor stations measure the time signals of all satellites above a parameterized elevation angle. Once a day, the station and satellite clocks are estimated. The first composite clock (B) is based on the Brown algorithm and is now used by GPS. The second one (G) is based on the Greenhall algorithm. The performance of the B and G composite clocks is investigated using three ground-clock models. Model C simulates the current GPS configuration, in which all stations are equipped with cesium clocks, except for masers at the USNO and Alternate Master Clock (AMC) sites. Model M is an improved situation in which every station is equipped with active hydrogen masers. Finally, Models F and O are future scenarios in which the USNO and AMC stations are equipped with fountain clocks instead of masers: Model F uses rubidium fountains, while Model O uses more precise but futuristic optical fountains. Each model is evaluated using three performance metrics. The first performance index (PI1) is the timing-related user range error when all satellites are available. The second performance index (PI2) relates to the stability of the broadcast GPS system time itself. The third performance index (PI3) evaluates the stability of the time scales computed by the two composite clocks. A distinction is made between the "Signal-in-Space" accuracy and that available through a GNSS receiver.

  2. Algorithm for loading shot noise microbunching in multi-dimensional, free-electron laser simulation codes

    SciTech Connect

    Fawley, William M.

    2002-03-25

    We discuss the underlying reasoning behind and the details of the numerical algorithm used in the GINGER free-electron laser(FEL) simulation code to load the initial shot noise microbunching on the electron beam. In particular, we point out that there are some additional subtleties which must be followed for multi-dimensional codes which are not necessary for one-dimensional formulations. Moreover, requiring that the higher harmonics of the microbunching also be properly initialized with the correct statistics leads to additional complexities. We present some numerical results including the predicted incoherent, spontaneous emission as tests of the shot noise algorithm's correctness.

  3. Algorithm for loading shot noise microbunching in multidimensional, free-electron laser simulation codes

    NASA Astrophysics Data System (ADS)

    Fawley, William M.

    2002-07-01

    We discuss the underlying reasoning behind and the details of the numerical algorithm used in the GINGER free-electron laser simulation code to load the initial shot noise microbunching on the electron beam. In particular, we point out that there are some additional subtleties which must be followed for multidimensional codes which are not necessary for one-dimensional formulations. Moreover, requiring that the higher harmonics of the microbunching also be properly initialized with the correct statistics leads to additional complexities. We present some numerical results including the predicted incoherent, spontaneous emission as tests of the shot noise algorithm's correctness.

  4. The fast simulated annealing algorithm applied to the search problem in LEED

    NASA Astrophysics Data System (ADS)

    Nascimento, V. B.; de Carvalho, V. E.; de Castilho, C. M. C.; Costa, B. V.; Soares, E. A.

    2001-07-01

    In this work we present new results obtained from the application of the fast simulated annealing (FSA) algorithm to the surface structure determination of the Ag(1 1 0) and CdTe(1 1 0) systems. The influence of a control parameter, the "initial temperature", on the FSA search process was investigated. A scaling behaviour, which measures the efficiency of a search method as a function of the number of parameters to be varied, was obtained for the FSA algorithm and indicated a favourable linear scaling (~N).

  5. Simulation System of Car Crash Test in C-NCAP Analysis Based on an Improved Apriori Algorithm

    NASA Astrophysics Data System (ADS)

    Xiang, LI

    In order to analyze car crash tests in C-NCAP, an improved algorithm based on the Apriori algorithm is given in this paper. The new algorithm is implemented with a vertical data layout, breadth-first searching, and intersecting. It takes advantage of the efficiency of the vertical data layout and of intersecting, and it prunes candidate frequent item sets as Apriori does. Finally, the new algorithm is applied in a simulation system for car crash test analysis. The results show that the discovered relations affect the C-NCAP test results and can provide a reference for automotive design.
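
    As a rough sketch of the vertical-layout idea (a generic Eclat-style miner, not the paper's exact algorithm), the Python below stores, for each item, the set of transaction ids (tids) containing it; the support of a candidate itemset is then obtained by intersecting tid-sets, and candidates below the support threshold are pruned as in Apriori.

```python
from itertools import combinations

def eclat(transactions, min_support):
    """Frequent itemset mining on a vertical layout: map each item to its
    tid-set, count candidate supports by intersecting tid-sets, and prune
    infrequent candidates as Apriori does."""
    tids = {}
    for tid, items in enumerate(transactions):
        for item in items:
            tids.setdefault(frozenset([item]), set()).add(tid)
    level = {s: t for s, t in tids.items() if len(t) >= min_support}
    frequent = dict(level)
    k = 1
    while level:
        nxt = {}
        for a, b in combinations(level, 2):
            cand = a | b
            if len(cand) == k + 1 and cand not in nxt:
                t = level[a] & level[b]          # tid-set intersection
                if len(t) >= min_support:        # Apriori-style pruning
                    nxt[cand] = t
        frequent.update(nxt)
        level = nxt
        k += 1
    return frequent
```

    For example, `eclat([{'A', 'B', 'C'}, {'A', 'B'}, {'A', 'C'}], min_support=2)` returns the frequent itemsets {A}, {B}, {C}, {A,B}, and {A,C} together with their tid-sets.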

  6. Decoding algorithms and spatial resolution Monte Carlo simulation of cross strip anode for UV astronomy

    NASA Astrophysics Data System (ADS)

    Deng, Guobao; Zhu, Xiangping

    2015-02-01

    The development of decoding algorithms for two-dimensional cross-strip anode image readouts for applications in UV astronomy is described. We present results from Monte Carlo simulations with the GEANT4 toolkit, which show that when the cross-strip anode period is 0.5 mm and the electrode width is 0.4 mm, the spatial resolution reaches better than 5 μm and the temporal resolution of event detection can be as low as 100 ps. The influences of the cross-strip detector parameters, such as the anode period, the width of the anode fingers (electrodes), the width of the charge footprint at the anode (determined by the distance and the field between the MCP and the anode), the gain of the MCP, and the equivalent noise charge (ENC), are also discussed. The decoding algorithms and simulation results can be useful for the design and performance improvement of future photon-counting imaging detectors for UV astronomy.

  7. Adaptive Sampling Algorithms for Probabilistic Risk Assessment of Nuclear Simulations

    SciTech Connect

    Diego Mandelli; Dan Maljovec; Bei Wang; Valerio Pascucci; Peer-Timo Bremer

    2013-09-01

    Nuclear simulations are often computationally expensive, time-consuming, and high-dimensional with respect to the number of input parameters. Thus exploring the space of all possible simulation outcomes is infeasible using finite computing resources. During simulation-based probabilistic risk analysis, it is important to discover the relationship between a potentially large number of input parameters and the output of a simulation using as few simulation trials as possible. This is a typical context for performing adaptive sampling, where a few observations are obtained from the simulation, a surrogate model is built to represent the simulation space, and new samples are selected based on the model constructed. The surrogate model is then updated based on the simulation results of the sampled points. In this way, we attempt to gain the most information possible with a small number of carefully selected sampled points, limiting the number of expensive trials needed to understand features of the simulation space. We analyze the specific use case of identifying the limit surface, i.e., the boundaries in the simulation space between system failure and system success. In this study, we explore several techniques for adaptively sampling the parameter space in order to reconstruct the limit surface. We focus on several adaptive sampling schemes. First, we seek to learn a global model of the entire simulation space using prediction models or neighborhood graphs and extract the limit surface as an iso-surface of the global model. Second, we estimate the limit surface by sampling in the neighborhood of the current estimate based on topological segmentations obtained locally. Our techniques draw inspiration from the topological structure known as the Morse-Smale complex. We highlight the advantages and disadvantages of using a global prediction model versus a local topological view of the simulation space, comparing several different strategies for adaptive sampling.
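
    A minimal sketch of the sample-model-refine loop described above (a generic surrogate-driven scheme, not the Morse-Smale-based method of the paper): a classifier is fit to pass/fail outcomes, and the next simulation is placed where the predicted failure probability is most ambiguous, i.e., nearest the limit surface. The `simulate` callable and the scikit-learn surrogate are illustrative assumptions.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def adaptive_limit_surface(simulate, bounds, n_init=20, n_rounds=30):
    """Adaptively sample `simulate` (returns 1 = failure, 0 = success)
    to concentrate expensive runs near the limit surface."""
    rng = np.random.default_rng(0)
    lo, hi = np.array(bounds, dtype=float).T
    X = rng.uniform(lo, hi, size=(n_init, len(lo)))
    y = np.array([simulate(x) for x in X])
    for _ in range(n_rounds):
        model = KNeighborsClassifier(n_neighbors=5).fit(X, y)
        cand = rng.uniform(lo, hi, size=(256, len(lo)))
        if len(np.unique(y)) > 1:
            p_fail = model.predict_proba(cand)[:, 1]
        else:                       # only one outcome seen so far: explore
            p_fail = rng.uniform(size=len(cand))
        x_new = cand[np.argmin(np.abs(p_fail - 0.5))]  # most ambiguous point
        X = np.vstack([X, x_new])
        y = np.append(y, simulate(x_new))
    return X, y                     # samples cluster around the boundary
```

    Calling it with, say, `simulate=lambda x: int(x.sum() > 1.0)` and `bounds=[(0, 1), (0, 1)]` drives most of the adaptive runs toward the boundary line where the two outcomes meet.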

  8. Plenoptic camera image simulation for reconstruction algorithm verification

    NASA Astrophysics Data System (ADS)

    Schwiegerling, Jim

    2014-09-01

    Plenoptic cameras have emerged in recent years as a technology for capturing light field data in a single snapshot. A conventional digital camera can be modified with the addition of a lenslet array to create a plenoptic camera. Two distinct camera forms have been proposed in the literature. The first has the camera image focused onto the lenslet array. The lenslet array is placed over the camera sensor such that each lenslet forms an image of the exit pupil onto the sensor. The second plenoptic form has the lenslet array relaying the image formed by the camera lens to the sensor. We have developed a raytracing package that can simulate images formed by a generalized version of the plenoptic camera. Several rays from each sensor pixel are traced backwards through the system to define a cone of rays emanating from the entrance pupil of the camera lens. Objects that lie within this cone are integrated to yield a color and exposure level for that pixel. To speed processing, three-dimensional objects are approximated as a series of planes at different depths. Repeating this process for each pixel in the sensor leads to a simulated plenoptic image on which different reconstruction algorithms can be tested.

  9. Experiences with serial and parallel algorithms for channel routing using simulated annealing

    NASA Technical Reports Server (NTRS)

    Brouwer, Randall Jay

    1988-01-01

    Two algorithms for channel routing using simulated annealing are presented. Simulated annealing is an optimization methodology which allows the solution process to back up out of local minima that may be encountered through inappropriate selections. By properly controlling the annealing process, it is very likely that the optimal solution to an NP-complete problem such as channel routing may be found. The algorithm presented places very relaxed restrictions on the types of allowable transformations, including overlapping nets. By relaxing that restriction and controlling overlap situations with an appropriate cost function, the algorithm becomes very flexible and can be applied to many extensions of channel routing. The selection of the transformation utilizes a number of heuristics, still retaining the pseudorandom nature of simulated annealing. The algorithm was implemented as a serial program for a workstation and as a parallel program designed for a hypercube computer. The details of the serial implementation are presented, including many of the heuristics used and some of the resulting solutions.

  10. A method for data handling numerical results in parallel OpenFOAM simulations

    NASA Astrophysics Data System (ADS)

    Anton, Alin; Muntean, Sebastian

    2015-12-01

    Parallel computational fluid dynamics simulations produce vast amounts of numerical result data. This paper introduces a method for reducing the size of the data by replaying the interprocessor traffic. The results are recovered only in certain regions of interest configured by the user. A known test case is used for several mesh partitioning scenarios using the OpenFOAM toolkit®[1]. The space savings obtained with classic algorithms remain constant for more than 60 Gb of floating point data. Our method is most efficient on large simulation meshes and is much better suited for compressing large-scale simulation results than the regular algorithms.

  11. A method for data handling numerical results in parallel OpenFOAM simulations

    SciTech Connect

    Anton, Alin; Muntean, Sebastian

    2015-12-31

    Parallel computational fluid dynamics simulations produce vast amounts of numerical result data. This paper introduces a method for reducing the size of the data by replaying the interprocessor traffic. The results are recovered only in certain regions of interest configured by the user. A known test case is used for several mesh partitioning scenarios using the OpenFOAM toolkit®[1]. The space savings obtained with classic algorithms remain constant for more than 60 Gb of floating point data. Our method is most efficient on large simulation meshes and is much better suited for compressing large-scale simulation results than the regular algorithms.

  12. Robotic space simulation integration of vision algorithms into an orbital operations simulation

    NASA Technical Reports Server (NTRS)

    Bochsler, Daniel C.

    1987-01-01

    In order to successfully plan and analyze future space activities, computer-based simulations of activities in low earth orbit will be required to model and integrate vision and robotic operations with vehicle dynamics and proximity operations procedures. The orbital operations simulation (OOS) is configured and enhanced as a testbed for robotic space operations. Vision integration algorithms are being developed in three areas: preprocessing, recognition, and attitude/attitude rates. The vision program (Rice University) was modified for use in the OOS. Systems integration testing is now in progress.

  13. Simulation and optimization of a pulsating heat pipe using artificial neural network and genetic algorithm

    NASA Astrophysics Data System (ADS)

    Jokar, Ali; Godarzi, Ali Abbasi; Saber, Mohammad; Shafii, Mohammad Behshad

    2016-01-01

    In this paper, a novel approach is presented to simulate and optimize pulsating heat pipes (PHPs). The pulsating heat pipe setup used was designed and constructed for this study. Due to the lack of a general mathematical model for exact analysis of PHPs, a method based on nature-inspired algorithms has been applied for simulation and optimization. In this approach, the simulator consists of a multilayer perceptron neural network, which is trained on experimental results obtained from our PHP setup. The results show that the complex behavior of PHPs can be successfully described by the non-linear structure of this simulator. The input variables of the neural network are the input heat flux to the evaporator (q″), the filling ratio (FR), and the inclination angle (IA), and its output is the thermal resistance of the PHP. Finally, based upon the simulation results and considering the heat pipe's operating constraints, the optimum operating point of the system is obtained by using a genetic algorithm (GA). The experimental results show that the optimum FR (38.25 %), input heat flux to the evaporator (39.93 W), and IA (55°) obtained from the GA are acceptable.
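
    The pattern (train a neural surrogate on measurements, then let a GA search the surrogate) can be sketched compactly. The Python below is a generic illustration, not the authors' code: `X_train`/`y_train` stand for the experimental (q″, FR, IA) to thermal-resistance records, scikit-learn's MLPRegressor plays the simulator, and a toy GA with tournament selection, blend crossover, and Gaussian mutation searches the operating envelope for minimum predicted resistance.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def fit_surrogate(X_train, y_train):
    # MLP surrogate: (q'', FR, IA) -> thermal resistance, trained on data.
    return MLPRegressor(hidden_layer_sizes=(10, 10), max_iter=5000,
                        random_state=0).fit(X_train, y_train)

def ga_minimize(model, lo, hi, pop=40, gens=60):
    """Toy GA over the operating envelope; fitness is the surrogate's
    predicted thermal resistance (lower is better)."""
    rng = np.random.default_rng(1)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    P = rng.uniform(lo, hi, size=(pop, len(lo)))
    for _ in range(gens):
        f = model.predict(P)
        # Binary tournaments select the parent pool.
        idx = [min(rng.integers(0, pop, 2), key=lambda i: f[i])
               for _ in range(pop)]
        parents = P[idx]
        w = rng.uniform(size=parents.shape)
        children = w * parents + (1.0 - w) * parents[::-1]   # blend crossover
        children += rng.normal(0.0, 0.02 * (hi - lo), children.shape)  # mutation
        P = np.clip(children, lo, hi)    # stay inside the operating envelope
    return P[np.argmin(model.predict(P))]   # best operating point found
```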

  14. An improved real-time endovascular guidewire position simulation using shortest path algorithm.

    PubMed

    Qiu, Jianpeng; Qu, Zhiyi; Qiu, Haiquan; Zhang, Xiaomin

    2016-09-01

    In this study, we propose a new graph-theoretical method to simulate guidewire paths inside the carotid artery. The minimum-energy guidewire path can be obtained by applying a shortest path algorithm, such as Dijkstra's algorithm for graphs, based on the principle of minimal total energy. Experiments on three phantoms were validated against previous results, revealing that for the first and second phantoms the simulated and real guidewires overlap completely. In addition, 95 % of the third phantom overlaps completely, and the remaining 5 % closely coincides. The results demonstrate that our method achieves 87 and 80 % improvements for the first and third phantoms under the same conditions, respectively. Furthermore, 91 % improvements were obtained for the second phantom under the condition of reduced graph construction complexity. PMID:26467345
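
    For reference, a minimal Dijkstra implementation is sketched below. The framing is hypothetical: vessel cross-sections are assumed to be discretized into graph nodes, with each edge weight w being the non-negative bending-energy cost of moving the simulated guidewire between the two nodes, so that the shortest path is the minimum-energy path.

```python
import heapq

def shortest_path(graph, source, target):
    """Dijkstra on an adjacency list: graph[u] = [(v, w), ...], where w is
    the non-negative bending-energy cost of moving from node u to node v.
    Assumes `target` is reachable from `source`."""
    dist = {source: 0.0}
    prev = {}
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == target:
            break
        if d > dist.get(u, float("inf")):
            continue                        # stale heap entry, skip it
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path = [target]
    while path[-1] != source:               # walk the predecessors back
        path.append(prev[path[-1]])
    return path[::-1], dist[target]
```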

  15. Medical Simulation Practices 2010 Survey Results

    NASA Technical Reports Server (NTRS)

    McCrindle, Jeffrey J.

    2011-01-01

    Medical Simulation Centers are an essential component of our learning infrastructure to prepare doctors and nurses for their careers. Unlike the military and aerospace simulation industry, very little has been published regarding the best practices currently in use within medical simulation centers. This survey attempts to provide insight into the current simulation practices at medical schools, hospitals, university nursing programs, and community college nursing programs. Students within the MBA program at Saint Joseph's University conducted a survey of medical simulation practices during the summer 2010 semester. A total of 115 institutions responded to the survey. The survey results discuss the overall effectiveness of current simulation centers as well as the tools and techniques used to conduct the simulation activity.

  16. The small-voxel tracking algorithm for simulating chemical reactions among diffusing molecules

    SciTech Connect

    Gillespie, Daniel T. Gillespie, Carol A.; Seitaridou, Effrosyni

    2014-12-21

    Simulating the evolution of a chemically reacting system using the bimolecular propensity function, as is done by the stochastic simulation algorithm and its reaction-diffusion extension, entails making statistically inspired guesses as to where the reactant molecules are at any given time. Those guesses will be physically justified if the system is dilute and well-mixed in the reactant molecules. Otherwise, an accurate simulation will require the extra effort and expense of keeping track of the positions of the reactant molecules as the system evolves. One molecule-tracking algorithm that pays careful attention to the physics of molecular diffusion is the enhanced Green's function reaction dynamics (eGFRD) of Takahashi, Tănase-Nicola, and ten Wolde [Proc. Natl. Acad. Sci. U.S.A. 107, 2473 (2010)]. We introduce here a molecule-tracking algorithm that has the same theoretical underpinnings and strategic aims as eGFRD, but a different implementation procedure. Called the small-voxel tracking algorithm (SVTA), it combines the well known voxel-hopping method for simulating molecular diffusion with a novel procedure for rectifying the unphysical predictions of the diffusion equation on the small spatiotemporal scale of molecular collisions. Indications are that the SVTA might be more computationally efficient than eGFRD for the problematic class of non-dilute systems. A widely applicable, user-friendly software implementation of the SVTA has yet to be developed, but we exhibit some simple examples which show that the algorithm is computationally feasible and gives plausible results.

  17. The small-voxel tracking algorithm for simulating chemical reactions among diffusing molecules

    NASA Astrophysics Data System (ADS)

    Gillespie, Daniel T.; Seitaridou, Effrosyni; Gillespie, Carol A.

    2014-12-01

    Simulating the evolution of a chemically reacting system using the bimolecular propensity function, as is done by the stochastic simulation algorithm and its reaction-diffusion extension, entails making statistically inspired guesses as to where the reactant molecules are at any given time. Those guesses will be physically justified if the system is dilute and well-mixed in the reactant molecules. Otherwise, an accurate simulation will require the extra effort and expense of keeping track of the positions of the reactant molecules as the system evolves. One molecule-tracking algorithm that pays careful attention to the physics of molecular diffusion is the enhanced Green's function reaction dynamics (eGFRD) of Takahashi, Tănase-Nicola, and ten Wolde [Proc. Natl. Acad. Sci. U.S.A. 107, 2473 (2010)]. We introduce here a molecule-tracking algorithm that has the same theoretical underpinnings and strategic aims as eGFRD, but a different implementation procedure. Called the small-voxel tracking algorithm (SVTA), it combines the well known voxel-hopping method for simulating molecular diffusion with a novel procedure for rectifying the unphysical predictions of the diffusion equation on the small spatiotemporal scale of molecular collisions. Indications are that the SVTA might be more computationally efficient than eGFRD for the problematic class of non-dilute systems. A widely applicable, user-friendly software implementation of the SVTA has yet to be developed, but we exhibit some simple examples which show that the algorithm is computationally feasible and gives plausible results.

  18. Full-scale engine demonstration of an advanced sensor failure detection, isolation and accommodation algorithm: Preliminary results

    NASA Technical Reports Server (NTRS)

    Merrill, Walter C.; Delaat, John C.; Kroszkewicz, Steven M.; Abdelwahab, Mahmood

    1987-01-01

    The objective of the advanced detection, isolation, and accommodation (ADIA) program is to improve the overall demonstrated reliability of digital electronic control systems for turbine engines. For this purpose, algorithms were developed which detect, isolate, and accommodate sensor failures using analytical redundancy. Preliminary results of a full scale engine demonstration of the ADIA algorithm are presented. Minimum detectable levels of sensor failures for an F100 turbofan engine control system are determined and compared to those obtained during a previous evaluation of this algorithm using a real-time hybrid computer simulation of the engine.

  19. Full-scale engine demonstration of an advanced sensor failure detection, isolation and accommodation algorithm: Preliminary results

    NASA Astrophysics Data System (ADS)

    Merrill, Walter C.; Delaat, John C.; Kroszkewicz, Steven M.; Abdelwahab, Mahmood

    The objective of the advanced detection, isolation, and accommodation (ADIA) program is to improve the overall demonstrated reliability of digital electronic control systems for turbine engines. For this purpose, algorithms were developed which detect, isolate, and accommodate sensor failures using analytical redundancy. Preliminary results of a full scale engine demonstration of the ADIA algorithm are presented. Minimum detectable levels of sensor failures for an F100 turbofan engine control system are determined and compared to those obtained during a previous evaluation of this algorithm using a real-time hybrid computer simulation of the engine.

  20. Full-scale engine demonstration of an advanced sensor failure detection isolation, and accommodation algorithm - Preliminary results

    NASA Technical Reports Server (NTRS)

    Merrill, Walter C.; Delaat, John C.; Kroszkewicz, Steven M.; Abdelwahab, Mahmood

    1987-01-01

    The objective of the advanced detection, isolation, and accommodation (ADIA) program is to improve the overall demonstrated reliability of digital electronic control systems for turbine engines. For this purpose, algorithms were developed which detect, isolate, and accommodate sensor failures using analytical redundancy. Preliminary results of a full scale engine demonstration of the ADIA algorithm are presented. Minimum detectable levels of sensor failures for an F100 turbofan engine control system are determined and compared to those obtained during a previous evaluation of this algorithm using a real-time hybrid computer simulation of the engine.

  1. Efficient photoheating algorithms in time-dependent photoionization simulations

    NASA Astrophysics Data System (ADS)

    Lee, Kai-Yan; Mellema, Garrelt; Lundqvist, Peter

    2016-02-01

    We present an extension to the time-dependent photoionization code C2-RAY to calculate photoheating in an efficient and accurate way. In C2-RAY, the thermal calculation demands relatively small time-steps for accurate results. We describe two novel methods to reduce the computational cost associated with small time-steps, namely, an adaptive time-step algorithm and an asynchronous evolution approach. The adaptive time-step algorithm determines an optimal time-step for the next computational step. It uses a fast ray-tracing scheme to quickly locate the relevant cells for this determination and uses only these cells for the calculation of the time-step. Asynchronous evolution allows different cells to evolve with different time-steps. The asynchronized clocks of the cells are synchronized at the times where outputs are produced. By evolving only the cells which may require short time-steps with these short time-steps, instead of imposing them on the whole grid, the computational cost of the calculation can be substantially reduced. We show that our methods work well for several cosmologically relevant test problems and validate our results by comparing to the results of another time-dependent photoionization code.
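
    The asynchronous-evolution idea can be sketched independently of the radiative-transfer details. In the Python below (an illustration, not C2-RAY itself), each cell carries its own clock in a priority queue; the most-lagging cell is advanced with its own locally chosen time-step, and all clocks meet exactly at the next output time. `local_dt` and `update` are hypothetical callables standing in for the time-step criterion and the ionization/thermal update.

```python
import heapq

def evolve_async(cells, t_end, local_dt, update):
    """Advance every cell to t_end asynchronously.  `local_dt(cell)` is a
    hypothetical per-cell optimal time-step criterion and `update(cell, dt)`
    a hypothetical state update; both stand in for the code's physics."""
    clocks = [(0.0, i) for i in range(len(cells))]
    heapq.heapify(clocks)                    # per-cell clocks, earliest first
    while clocks:
        t, i = heapq.heappop(clocks)         # the most-lagging cell
        dt = min(local_dt(cells[i]), t_end - t)
        update(cells[i], dt)
        if t + dt < t_end:
            heapq.heappush(clocks, (t + dt, i))
    # every clock now reads exactly t_end: synchronized, ready for output
```

    The design point is that only cells whose physics demands short steps pay for them; quiescent cells take a few long steps and drop out of the queue early.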

  2. Interhemispheric Field-Aligned Currents: Simulation Results

    NASA Astrophysics Data System (ADS)

    Lyatsky, Sonya

    2016-04-01

    We present simulation results for the 3-D magnetosphere-ionosphere current system including the Region 1, Region 2, and interhemispheric (IHC) field-aligned currents flowing between the Northern and Southern conjugate ionospheres in the case of asymmetry in ionospheric conductivities between the two hemispheres (observed, for instance, during the summer-winter seasons). We also computed maps of ionospheric and equivalent ionospheric currents in the two hemispheres. The IHCs are an important part of the global 3-D current system in high-latitude ionospheres. These currents are especially significant during summer and winter months. In the winter ionosphere, they may be comparable to and even exceed both Region 1 and Region 2 field-aligned currents. An important feature of these interhemispheric currents is that they link together processes in the two hemispheres, so that the currents observed in one hemisphere can provide us with information about the currents in the opposite hemisphere. Despite the significant role of these IHCs in the global 3-D current system, they have not been sufficiently studied yet. The main results of our research may be summarized as follows: (1) in the winter hemisphere, the IHCs may significantly exceed and substitute for the local Region 1 and Region 2 currents; (2) the IHCs may strongly affect the magnitude, location, and direction of the ionospheric and equivalent ionospheric currents (especially in the nightside winter auroral ionosphere); and (3) the IHCs in the winter hemisphere may in fact be an important (and sometimes even major) source of the Westward Auroral Electrojet, observed in both hemispheres during substorm activity. The study of the contribution from the IHCs to the total global 3-D current system allows us to improve the understanding and forecasting of geomagnetic, auroral, and ionospheric disturbances in the two hemispheres.

  3. SALTSTONE MATRIX CHARACTERIZATION AND STADIUM SIMULATION RESULTS

    SciTech Connect

    Langton, C.

    2009-07-30

    SIMCO Technologies, Inc. was contracted to evaluate the durability of the saltstone matrix material and to measure saltstone transport properties. This information will be used to: (1) parameterize the STADIUM® service life code, (2) predict the leach rate (degradation rate) for the saltstone matrix over 10,000 years using the STADIUM® concrete service life code, and (3) validate the modeled results by conducting leaching (water immersion) tests. Saltstone durability for this evaluation is limited to changes in the matrix itself and does not include changes in the chemical speciation of the contaminants in the saltstone. This report summarizes the results obtained to date, which include characterization data for saltstone cured up to 365 days and characterization of saltstone cured for 137 days and immersed in water for 31 days. Chemicals for preparing simulated non-radioactive salt solution were obtained from chemical suppliers. The saltstone slurry was mixed according to directions provided by SRNL. However, SIMCO Technologies, Inc. personnel made a mistake in the premix proportions. The formulation SIMCO personnel used to prepare the saltstone premix was not the reference mix proportions of 45 wt% slag, 45 wt% fly ash, and 10 wt% cement; they used 21 wt% slag, 65 wt% fly ash, and 14 wt% cement. The mistake was acknowledged, and new mixes have been prepared and are curing. The results presented in this report are assumed to be conservative, since excess fly ash was used in the SIMCO saltstone. The SIMCO mixes are low in slag, which is very reactive in the caustic salt solution. The impact is that the results presented in this report are expected to be conservative, since the samples prepared were deficient in slag and contained excess fly ash. The hydraulic reactivity of slag is about four times that of fly ash, so the amount of hydrated binder formed per unit volume in the SIMCO saltstone samples is lower than it would be in the reference formulation.

  4. A pencil beam algorithm for intensity modulated proton therapy derived from Monte Carlo simulations.

    PubMed

    Soukup, Martin; Fippel, Matthias; Alber, Markus

    2005-11-01

    A pencil beam algorithm as a component of an optimization algorithm for intensity modulated proton therapy (IMPT) is presented. The pencil beam algorithm is tuned to the special accuracy requirements of IMPT, where in heterogeneous geometries both the position and distortion of the Bragg peak and the lateral scatter pose problems which are amplified by the spot weight optimization. Heterogeneity corrections are implemented by a multiple raytracing approach using fluence-weighted sub-spots. In order to derive nuclear interaction corrections, Monte Carlo simulations were performed. The contribution of long ranged products of nuclear interactions is taken into account by a fit to the Monte Carlo results. Energy-dependent stopping power ratios are also implemented. Scatter in optional beam line accessories such as range shifters or ripple filters is taken into account. The collimator can also be included, but without additional scattering. Finally, dose distributions are benchmarked against Monte Carlo simulations, showing 3%/1 mm agreement for simple heterogeneous phantoms. In the case of more complicated phantoms, principal shortcomings of pencil beam algorithms are evident. The influence of these effects on IMPT dose distributions is shown in clinical examples. PMID:16237243

  5. Optimized simulations of Olami-Feder-Christensen systems using parallel algorithms

    NASA Astrophysics Data System (ADS)

    Dominguez, Rachele; Necaise, Rance; Montag, Eric

    The sequential nature of the Olami-Feder-Christensen (OFC) model for earthquake simulations limits the benefits of parallel computing approaches because of the frequent communication required between processors. We developed a parallel version of the OFC algorithm for multi-core processors. Our data, even for relatively small system sizes and low numbers of processors, indicate that increasing the number of processors provides significantly faster simulations, producing more efficient results than previous attempts that used network-based Beowulf clusters. Our algorithm optimizes performance by exploiting the multi-core processor architecture, minimizing communication time in contrast to the networked Beowulf-cluster approaches. Our multi-core algorithm is the basis for a new algorithm using GPUs that will drastically increase the number of processors available. Previous studies incorporating realistic structural features of faults into OFC models have revealed spatial and temporal patterns observed in real earthquake systems. The computational advances presented here will allow for studying interacting networks of faults, rather than individual faults, further enhancing our understanding of the relationship between the earth's structure and the triggering process. Support for this project comes from the Chenery Research Fund, the Rashkind Family Endowment, the Walter Williams Craigie Teaching Endowment, and the Schapiro Undergraduate Research Fellowship.

  6. Protein folding simulations of the hydrophobic-hydrophilic model by combining tabu search with genetic algorithms

    NASA Astrophysics Data System (ADS)

    Jiang, Tianzi; Cui, Qinghua; Shi, Guihua; Ma, Songde

    2003-08-01

    In this paper, a novel hybrid algorithm combining genetic algorithms and tabu search is presented. In the proposed hybrid algorithm, the idea of tabu search is applied to the crossover operator. We demonstrate that the hybrid algorithm can be applied successfully to the protein folding problem based on a hydrophobic-hydrophilic lattice model. The results show that in all cases the hybrid algorithm works better than a genetic algorithm alone. A comparison with other methods is also made.
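
    As a rough illustration of embedding tabu search in the crossover operator (a generic sketch, not the authors' exact operator), the Python below performs one-point recombination on HP-model move sequences but rejects and re-draws offspring whose conformations were recently visited, keeping a bounded tabu list. The move-string representation of a lattice conformation is an assumption made for the example.

```python
import random

def tabu_crossover(p1, p2, tabu, tabu_size=200, max_tries=20):
    """One-point crossover with a tabu twist: offspring whose conformation
    was recently visited are rejected and re-drawn, steering the search
    away from cycles around local optima."""
    for _ in range(max_tries):
        cut = random.randrange(1, len(p1))
        child = p1[:cut] + p2[cut:]         # recombine the move sequences
        key = tuple(child)
        if key not in tabu:                 # accept only non-tabu offspring
            tabu.append(key)
            if len(tabu) > tabu_size:
                tabu.pop(0)                 # forget the oldest entry
            return child
    return child                            # give up: accept a tabu child

# Usage sketch: parents are relative-move strings on the HP lattice,
# e.g. ['F', 'L', 'F', 'R', ...]; `tabu` is a list shared across the run.
```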

  7. Wastewater neutralization control based in fuzzy logic: Simulation results

    SciTech Connect

    Garrido, R.; Adroer, M.; Poch, M.

    1997-05-01

    Neutralization is a technique widely used as part of wastewater treatment processes. Due to the importance of this technique, extensive study has been devoted to its control. However, industrial wastewater neutralization control is a procedure with many problems--nonlinearity of the titration curve, variable buffering, changes in loading--and despite the efforts devoted to this subject, the problem has not been totally solved. In this paper, the authors present the development of a controller based on fuzzy logic (FLC). In order to study its effectiveness, it has been compared, by simulation, with other advanced controllers (using identification techniques and adaptive control algorithms with reference models) when faced with various types of wastewater with different buffer capacities or when changes in the concentration of the acid present in the wastewater take place. Results obtained show that the FLC can be considered a powerful alternative for wastewater neutralization processes.

  8. Molecular simulation workflows as parallel algorithms: the execution engine of Copernicus, a distributed high-performance computing platform.

    PubMed

    Pronk, Sander; Pouya, Iman; Lundborg, Magnus; Rotskoff, Grant; Wesén, Björn; Kasson, Peter M; Lindahl, Erik

    2015-06-01

    Computational chemistry and other simulation fields are critically dependent on computing resources, but few problems scale efficiently to the hundreds of thousands of processors available in current supercomputers, particularly for molecular dynamics. This has turned into a bottleneck as new hardware generations primarily provide more processing units rather than making individual units much faster, which simulation applications are addressing by increasingly focusing on sampling with algorithms such as free-energy perturbation, Markov state modeling, metadynamics, or milestoning. All of these rely on combining results from multiple simulations into a single observation. They are potentially powerful approaches that aim to predict experimental observables directly, but this comes at the expense of added complexity in selecting sampling strategies and keeping track of dozens to thousands of simulations and their dependencies. Here, we describe how the distributed execution framework Copernicus allows the expression of such algorithms in generic workflows: dataflow programs. Because dataflow algorithms explicitly state the dependencies of each constituent part, algorithms only need to be described on a conceptual level, after which the execution is maximally parallel. The fully automated execution facilitates the optimization of these algorithms with adaptive sampling, where undersampled regions are automatically detected and targeted without user intervention. We show how several such algorithms can be formulated for computational chemistry problems, and how they are executed efficiently with many loosely coupled simulations using either distributed or parallel resources with Copernicus. PMID:26575558

  9. Results of a new polarization simulation

    NASA Astrophysics Data System (ADS)

    Fetrow, Matthew P.; Wellems, David; Sposato, Stephanie H.; Bishop, Kenneth P.; Caudill, Thomas R.; Davis, Michael L.; Simrell, Elizabeth R.

    2002-01-01

    Including polarization signatures of material samples in passive sensing may enhance target detection capabilities. To obtain more information on this potential improvement, a simulation is being developed to aid in interpreting IR polarization measurements in a complex environment. The simulation accounts for the background, or incident illumination, and the scattering and emission from the target into the sensor. MODTRAN, in combination with a dipole approximation to singly scattered radiance, is used to polarimetrically model the background, or sky conditions. The scattering and emission from rough surfaces are calculated using an energy-conserving polarimetric Torrance and Sparrow BRDF model. The simulation can be used to examine the surface properties of materials in a laboratory environment, to investigate IR polarization signatures in the field, or a complex environment, and to predict trends in LWIR polarization data. In this paper we discuss the simulation architecture; the process for determining the index of refraction and surface roughness as a function of wavelength, which involves making polarization measurements of flat glass plates at various angles and temperatures in the laboratory at Kirtland AF Base; and the comparison of the simulation with field data taken at Eglin Air Force Base. The latter process entails using the extrapolated index of refraction and surface roughness, and a polarimetric incident sky dome generated by MODTRAN. We also present some parametric studies in which the sky condition, the sky temperature, and the sensor declination angle were all varied.

  10. MUlti-Dimensional Spline-Based Estimator (MUSE) for Motion Estimation: Algorithm Development and Initial Results

    PubMed Central

    Viola, Francesco; Coe, Ryan L.; Owen, Kevin; Guenther, Drake A.; Walker, William F.

    2008-01-01

    Image registration and motion estimation play central roles in many fields, including RADAR, SONAR, light microscopy, and medical imaging. Because of this central significance, estimator accuracy, precision, and computational cost are of critical importance. We have previously presented a highly accurate, spline-based time delay estimator that directly determines sub-sample time delay estimates from sampled data. The algorithm uses cubic splines to produce a continuous representation of a reference signal and then computes an analytical matching function between this reference and a delayed signal. The location of the minima of this function yields estimates of the time delay. In this paper we describe the MUlti-dimensional Spline-based Estimator (MUSE), which allows accurate and precise estimation of multidimensional displacement/strain components from multidimensional data sets. We describe the mathematical formulation for two- and three-dimensional motion/strain estimation and present simulation results to assess the intrinsic bias and standard deviation of this algorithm and compare it to currently available multi-dimensional estimators. In 1000 noise-free simulations of ultrasound data we found that 2D MUSE exhibits maximum bias of 2.6 × 10^-4 samples in range and 2.2 × 10^-3 samples in azimuth (corresponding to 4.8 and 297 nm, respectively). The maximum simulated standard deviation of estimates in both dimensions was comparable at roughly 2.8 × 10^-3 samples (corresponding to 54 nm axially and 378 nm laterally). These results are between two and three orders of magnitude better than currently used 2D tracking methods. Simulation of performance in 3D yielded similar results to those observed in 2D. We also present experimental results obtained using 2D MUSE on data acquired by an Ultrasonix Sonix RP imaging system with an L14-5/38 linear array transducer operating at 6.6 MHz. While our validation of the algorithm was performed using ultrasound data, MUSE is applicable to motion estimation in the other fields noted above.
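
    The core spline-matching idea is easy to illustrate in one dimension. The Python below (a 1-D toy, not MUSE itself) fits a cubic spline to a sampled reference signal and minimizes the sum of squared differences against a second signal over a continuous delay, yielding a sub-sample estimate; it assumes SciPy is available.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.optimize import minimize_scalar

def spline_delay(ref, sig, search=(-2.0, 2.0)):
    """Estimate the sub-sample shift between `ref` and `sig` by fitting a
    cubic spline to `ref` and minimizing the sum of squared differences
    over a continuous delay within `search` (in samples)."""
    n = np.arange(len(ref), dtype=float)
    spline = CubicSpline(n, ref)
    core = slice(3, -3)                     # stay inside the spline support

    def ssd(delay):
        return float(np.sum((spline(n[core] + delay) - sig[core]) ** 2))

    return minimize_scalar(ssd, bounds=search, method="bounded").x
```

    For instance, with `ref = np.sin(0.2 * np.arange(64))` and `sig = np.sin(0.2 * (np.arange(64) - 0.37))`, the returned estimate is close to -0.37: the spline must be shifted by -0.37 samples to align with a signal that lags the reference by 0.37 samples.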

  11. Blocking Moving Window algorithm: Conditioning multiple-point simulations to hydrogeological data

    NASA Astrophysics Data System (ADS)

    Alcolea, Andres; Renard, Philippe

    2010-08-01

    Connectivity constraints and measurements of state variables contain valuable information on aquifer architecture. Multiple-point (MP) geostatistics allow one to simulate aquifer architectures, presenting a predefined degree of global connectivity. In this context, connectivity data are often disregarded. The conditioning to state variables is usually carried out by minimizing a suitable objective function (i.e., solving an inverse problem). However, the discontinuous nature of lithofacies distributions and of the corresponding objective function discourages the use of traditional sensitivity-based inversion techniques. This work presents the Blocking Moving Window algorithm (BMW), aimed at overcoming these limitations by conditioning MP simulations to hydrogeological data such as connectivity and heads. The BMW evolves iteratively until convergence: (1) MP simulation of lithofacies from geological/geophysical data and connectivity constraints, where only a random portion of the domain is simulated at every iteration (i.e., the blocking moving window, whose size is user-defined); (2) population of hydraulic properties at the intrafacies; (3) simulation of state variables; and (4) acceptance or rejection of the MP simulation depending on the quality of the fit of measured state variables. The outcome is a stack of MP simulations that (1) resemble a prior geological model depicted by a training image, (2) honor lithological data and connectivity constraints, (3) correlate with geophysical data, and (4) fit available measurements of state variables well. We analyze the performance of the algorithm on a 2-D synthetic example. Results show that (1) the size of the blocking moving window controls the behavior of the BMW, (2) conditioning to state variable data enhances dramatically the initial simulation (which accounts for geological/geophysical data only), and (3) connectivity constraints speed up the convergence but do not enhance the stack if the number of iterations
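
    The iterate-and-test loop can be sketched compactly. In the Python below (an illustration under stated assumptions, not the authors' code), `simulate_mp`, `fill_properties`, and `solve_flow` are hypothetical stand-ins for the MP facies simulator, the intrafacies property model, and the groundwater flow solver; one iteration resimulates a random square window of the facies grid and keeps the proposal only if the fit to measured heads improves.

```python
import numpy as np

def bmw_iteration(facies, misfit_best, simulate_mp, fill_properties,
                  solve_flow, heads_obs, win=16, rng=None):
    """One Blocking Moving Window iteration (sketch).  The three callables
    are hypothetical stand-ins for the MP simulator, the intrafacies
    property model, and the flow solver."""
    rng = rng or np.random.default_rng()
    ny, nx = facies.shape
    i0, j0 = rng.integers(0, ny - win), rng.integers(0, nx - win)
    proposal = facies.copy()
    # (1) resimulate the window, conditioned on the rest of the grid
    proposal[i0:i0 + win, j0:j0 + win] = simulate_mp(proposal, (i0, j0, win))
    # (2) populate intrafacies hydraulic properties, (3) solve for heads
    heads = solve_flow(fill_properties(proposal))
    # (4) accept only if the fit to the measured heads improves
    misfit = float(np.sum((heads - heads_obs) ** 2))
    if misfit < misfit_best:
        return proposal, misfit
    return facies, misfit_best
```

    Repeated calls yield the accepted chain; the user-defined window size `win` plays the role the paper assigns to the blocking moving window.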

  12. Material growth in thermoelastic continua: Theory, algorithmics, and simulation

    NASA Astrophysics Data System (ADS)

    Vignes, Chet Monroe

    Within the medical community, there has been increasing interest in understanding material growth in biomaterials. Material growth is the capability of a biomaterial to gain or lose mass. This research interest is driven by the host of health implications and medical problems related to this unique biomaterial property. Health providers are keen to understand the role of growth in healing and recovery so that surgical techniques, medical procedures, and physical therapy may be designed and implemented to stimulate healing and minimize recovery time. With this motivation, research seeks to identify and model mechanisms of material growth as well as growth-inducing factors in biomaterials. To this end, a theoretical formulation of stress-induced volumetric material growth in thermoelastic continua is developed. The theory derives, without the classical continuum mechanics assumption of mass conservation, the balance laws governing the mechanics of solids capable of growth. Also, a proposed extension of classical thermodynamic theory provides a foundation for developing general constitutive relations. The theory is consistent in the sense that classical thermoelastic continuum theory is embedded as a special case. Two growth mechanisms, a kinematic and a constitutive contribution, coupled in the most general case of growth, are identified. This identification allows for the commonly employed special cases of density-preserving growth and volume-preserving growth to be easily recovered. In the theory, material growth is regulated by a three-surface activation criterion and corresponding flow rules. A simple model for rate-independent finite growth is proposed based on this formulation. The associated algorithmic implementation, including a method for solving the underlying differential/algebraic equations for growth, is examined in the context of an implicit finite element method. Selected numerical simulations are presented that showcase the predictive capacity of the

  13. MODA: a new algorithm to compute optical depths in multidimensional hydrodynamic simulations

    NASA Astrophysics Data System (ADS)

    Perego, Albino; Gafton, Emanuel; Cabezón, Rubén; Rosswog, Stephan; Liebendörfer, Matthias

    2014-08-01

    Aims: We introduce the multidimensional optical depth algorithm (MODA) for the calculation of optical depths in approximate multidimensional radiative transport schemes, equally applicable to neutrinos and photons. Motivated by (but not limited to) neutrino transport in three-dimensional simulations of core-collapse supernovae and neutron star mergers, our method makes no assumptions about the geometry of the matter distribution, apart from expecting optically transparent boundaries. Methods: Based on local information about opacities, the algorithm finds an escape route that tends to minimize the optical depth without assuming any predefined paths for radiation. Its adaptivity makes it suitable for a variety of astrophysical settings with complicated geometry (e.g., core-collapse supernovae, compact binary mergers, tidal disruptions, star formation). We implement the MODA algorithm in both an Eulerian hydrodynamics code with a fixed, uniform grid and an SPH code, where we use the tree structure that is otherwise used for neighbor searching and gravity calculation. Results: In a series of numerical experiments, we compare the MODA results with analytically known solutions. We also use snapshots from actual 3D simulations and compare the results of MODA with those obtained with other methods, such as the global and local ray-by-ray methods. It turns out that MODA achieves excellent accuracy at a moderate computational cost. In an appendix we also discuss implementation details and parallelization strategies.
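
    To make the escape-route idea concrete, the 2D sketch below steps greedily from cell to cell toward the neighbor of lowest opacity, accumulating optical depth until the (transparent) grid boundary is reached. This illustrates only the core idea; it is not the published implementation, which handles paths and parallelization far more carefully.

        # Minimal 2D sketch of a MODA-like escape route: from each cell, step
        # greedily to the neighbor of lowest opacity (off-grid = escape) while
        # accumulating optical depth along the way.
        import numpy as np

        def optical_depth(kappa, dx, start):
            """kappa: 2D opacity field [1/length]; start: (i, j) cell index."""
            ni, nj = kappa.shape
            i, j = start
            tau = 0.0
            while 0 <= i < ni and 0 <= j < nj:
                tau += kappa[i, j] * dx            # accumulate along the path

                def local(c):                      # off-grid is the cheapest "cell"
                    a, b = c
                    in_grid = 0 <= a < ni and 0 <= b < nj
                    return kappa[a, b] if in_grid else -np.inf

                i, j = min([(i+1, j), (i-1, j), (i, j+1), (i, j-1)], key=local)
            return tau

        # Centrally condensed opacity blob: central cells see the largest tau
        x = np.linspace(-1.0, 1.0, 51)
        X, Y = np.meshgrid(x, x)
        kappa = np.exp(-(X**2 + Y**2) / 0.1)
        print(optical_depth(kappa, dx=0.04, start=(25, 25)))  # deep interior
        print(optical_depth(kappa, dx=0.04, start=(2, 25)))   # near the edge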

  14. A parallel algorithm for switch-level timing simulation on a hypercube multiprocessor

    NASA Technical Reports Server (NTRS)

    Rao, Hariprasad Nannapaneni

    1989-01-01

    The parallel approach to speeding up simulation is studied, specifically the simulation of digital LSI MOS circuitry on the Intel iPSC/2 hypercube. The simulation algorithm is based on RSIM, an event-driven switch-level simulator that incorporates a linear transistor model for simulating digital MOS circuits. Parallel processing techniques based on the concepts of Virtual Time and rollback are utilized so that portions of the circuit may be simulated on separate processors in parallel, for as large an increase in speed as possible. A partitioning algorithm is also developed in order to subdivide the circuit for parallel processing.

  15. Modeling and Simulation of Water Allocation System Based on Simulated Annealing Hybrid Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Zhu, Jiulong; Wang, Shijun

    At present, water resources in most watersheds in China are distributed according to administrative instructions. This kind of allocation method has many disadvantages and hampers the guiding effect of market mechanisms on water allocation. The paper studies the South-to-North Water Transfer Project and discusses water allocation among the node lakes along the Project. Firstly, it advances four assumptions. Secondly, it analyzes the constraint conditions of water allocation in light of the present state of water allocation in China. Thirdly, it establishes a goal model of water allocation and sets up a systematic model from the perspective of the comprehensive profits of water utilization and the profits of the node lakes. Fourthly, it discusses the calculation method of the model by means of a Simulated Annealing Hybrid Genetic Algorithm (SHGA). Finally, it validates the rationality and validity of the model by simulation testing.

  16. Some Algorithms For Simulating Size-resolved Aerosol Dynamics Models

    NASA Astrophysics Data System (ADS)

    Debry, E.; Sportisse, B.

    [1] … Physics, Wiley-Interscience, 1998. [2] Binkowski, F.S. and Shankar, U., The regional particulate matter model: model description and preliminary results, Journal of Geophysical Research, 1995. [3] Whitby, E.R. and McMurry, P.H., Modal aerosol dynamics modeling, Aerosol Science and Technology, 1997. [4] Jacobson, M.Z., Turco, R.P., Jensen, E.J., and Toon, O.B., Modeling coagulation among particles of different composition and size, Atmospheric Environment, 1994. [5] Dhaniyala, S. and Wexler, A.S., Numerical schemes to model condensation and evaporation of aerosols, Atmospheric Environment, 1995. [6] Sandu, A., A spectral method for solving aerosol dynamics, submitted to Applied Numerical Mathematics, August 2001. [7] Debry, E., Jourdain, B., and Sportisse, B., Modelling aerosol dynamics: a stochastic algorithm, article in preparation, 2001.

  17. Design and simulation of imaging algorithm for Fresnel telescopy imaging system

    NASA Astrophysics Data System (ADS)

    Lv, Xiao-yu; Liu, Li-ren; Yan, Ai-min; Sun, Jian-feng; Dai, En-wen; Li, Bing

    2011-06-01

    Fresnel telescopy (short for Fresnel telescopy full-aperture synthesized imaging ladar) is a new high-resolution active laser imaging technique. The technique is a variant of Fourier telescopy and optical scanning holography that uses Fresnel zone plates to scan the target. Compared with synthetic aperture imaging ladar (SAIL), Fresnel telescopy avoids the problems of time and space synchronization, which decreases the technical difficulty. In the one-dimensional (1D) scanning mode for a moving target, the spatial distribution of the sampled data after the time-to-space transformation is non-uniform because of the relative motion between the target and the scanning beam. However, because the subsequent matched-filtering imaging algorithm uses the fast Fourier transform (FFT), the data must lie on a regular, uniform grid. We use resampling interpolation to transform the data into a uniform two-dimensional (2D) distribution, and the accuracy of the resampling interpolation mainly determines the reconstruction quality. Imaging algorithms with different resampling interpolation schemes are analyzed, and computer simulations are presented. We obtain good reconstructions of the target, which shows that the designed imaging algorithm for the Fresnel telescopy imaging system is effective. This work has substantial practical value and offers significant benefit for high-resolution Fresnel telescopy laser imaging ladar systems.
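
    As a hedged illustration of the resampling step described above (not the paper's own interpolation schemes), the sketch below uses scipy's griddata to move scattered, non-uniform samples onto a uniform 2D grid so that FFT-based matched filtering can be applied; the test field is a stand-in, not Fresnel-telescopy data.

        # Resample non-uniform samples onto a uniform grid before using the FFT.
        import numpy as np
        from scipy.interpolate import griddata

        rng = np.random.default_rng(0)
        pts = rng.uniform(0.0, 1.0, size=(4000, 2))       # non-uniform sample sites
        vals = np.sin(8 * np.pi * pts[:, 0]) * np.cos(6 * np.pi * pts[:, 1])

        gx, gy = np.mgrid[0:1:128j, 0:1:128j]             # uniform 128 x 128 grid
        uniform = griddata(pts, vals, (gx, gy), method="cubic")
        uniform = np.nan_to_num(uniform)                  # fill outside convex hull

        spectrum = np.fft.fft2(uniform)                   # now safe for FFT methods
        print(uniform.shape, np.abs(spectrum).max())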

  18. A fast algorithm for voxel-based deterministic simulation of X-ray imaging

    NASA Astrophysics Data System (ADS)

    Li, Ning; Zhao, Hua-Xia; Cho, Sang-Hyun; Choi, Jung-Gil; Kim, Myoung-Hee

    2008-04-01

    The deterministic method based on ray tracing is known as a powerful alternative to the Monte Carlo approach for virtual X-ray imaging. Algorithm speed is a critical issue when hundreds of images must be simulated, notably for tomographic acquisition or, even more demanding, X-ray radiographic video recordings. We present an algorithm for voxel-based deterministic simulation of X-ray imaging using voxel-driven forward and backward perspective projection operations and minimum bounding rectangles (MBRs). The algorithm is fast, easy to implement, and creates high-quality simulated radiographs; a simulated radiograph can typically be obtained in a split second on a simple personal computer.
    Program summary
    Program title: X-ray
    Catalogue identifier: AEAD_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAD_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 416 257
    No. of bytes in distributed program, including test data, etc.: 6 018 263
    Distribution format: tar.gz
    Programming language: C (Visual C++)
    Computer: Any PC. Tested on a DELL Precision 380 based on a Pentium D 3.20 GHz processor with 3.50 GB of RAM
    Operating system: Windows XP
    Classification: 14, 21.1
    Nature of problem: Radiographic simulation of voxelized objects based on ray tracing.
    Solution method: The core of the simulation is a fast routine for the calculation of ray-box intersections and minimum bounding rectangles, together with voxel-driven forward and backward perspective projection operations.
    Restrictions: Memory constraints. There are three programs in all. A. Program for test 3.1(1): Object and detector have axis-aligned orientation; B. Program for test 3.1(2): Object in arbitrary orientation; C. Program for test 3.2: Simulation of X-ray video
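
    The solution method above hinges on fast ray-box intersection tests. The snippet below is the standard "slab method" for a ray against an axis-aligned box, given as a generic sketch; it is not code from the distributed program.

        # Standard slab-method ray/axis-aligned-box intersection, the kind of
        # routine at the core of voxel-driven ray tracing. Generic sketch only.
        import numpy as np

        def ray_box(origin, direction, box_min, box_max):
            """Return (hit, t_near, t_far) for the ray origin + t*direction."""
            inv = 1.0 / direction              # assumes no exactly-zero component
            t1 = (box_min - origin) * inv
            t2 = (box_max - origin) * inv
            t_near = np.max(np.minimum(t1, t2))   # latest entry over the 3 slabs
            t_far = np.min(np.maximum(t1, t2))    # earliest exit
            return (t_near <= t_far and t_far >= 0.0), t_near, t_far

        o = np.array([-2.0, 0.5, 0.5])
        d = np.array([1.0, 1e-12, 1e-12])      # tiny components avoid div-by-zero
        print(ray_box(o, d, np.zeros(3), np.ones(3)))   # (True, ~2.0, ~3.0)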

  19. Simulation Results for Airborne Precision Spacing along Continuous Descent Arrivals

    NASA Technical Reports Server (NTRS)

    Barmore, Bryan E.; Abbott, Terence S.; Capron, William R.; Baxley, Brian T.

    2008-01-01

    This paper describes the results of a fast-time simulation experiment and a high-fidelity simulator validation with merging streams of aircraft flying Continuous Descent Arrivals through generic airspace to a runway at Dallas-Ft Worth. Aircraft made small speed adjustments based on an airborne-based spacing algorithm, so as to arrive at the threshold exactly at the assigned time interval behind their Traffic-To-Follow. The 40 aircraft were initialized at different altitudes and speeds on one of four different routes, and then merged at different points and altitudes while flying Continuous Descent Arrivals. This merging and spacing using flight deck equipment and procedures to augment or implement Air Traffic Management directives is called Flight Deck-based Merging and Spacing, an important subset of a larger Airborne Precision Spacing functionality. This research indicates that Flight Deck-based Merging and Spacing initiated while at cruise altitude and well prior to the Terminal Radar Approach Control entry can significantly contribute to the delivery of aircraft at a specified interval to the runway threshold with a high degree of accuracy and at a reduced pilot workload. Furthermore, previously documented work has shown that using a Continuous Descent Arrival instead of a traditional step-down descent can save fuel, reduce noise, and reduce emissions. Research into Flight Deck-based Merging and Spacing is a cooperative effort between government and industry partners.

  20. A fast and efficient algorithm for Slater determinant updates in quantum Monte Carlo simulations

    SciTech Connect

    Nukala, Phani K. V. V.; Kent, P. R. C.

    2009-05-28

    We present an efficient low-rank updating algorithm for updating the trial wave functions used in quantum Monte Carlo (QMC) simulations. The algorithm is based on low-rank updating of the Slater determinants. In particular, the computational complexity of the algorithm is O(kN) during the kth step compared to traditional algorithms that require O(N^2) computations, where N is the system size. For single determinant trial wave functions the new algorithm is faster than the traditional O(N^2) Sherman-Morrison algorithm for up to O(N) updates. For multideterminant configuration-interaction-type trial wave functions of M+1 determinants, the new algorithm is significantly more efficient, saving both O(MN^2) work and O(MN^2) storage. The algorithm enables more accurate and significantly more efficient QMC calculations using configuration-interaction-type wave functions.
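
    For context, the baseline this algorithm improves on is the O(N^2) Sherman-Morrison update: when one row of the Slater matrix changes, the determinant ratio and the updated inverse follow from a rank-1 identity. The sketch below shows that textbook update, not the paper's O(kN) delayed-update scheme.

        # Textbook Sherman-Morrison single-row update (the O(N^2) baseline the
        # paper improves on): replace row k of A by r, get det ratio + new inverse.
        import numpy as np

        def update_row(A, A_inv, k, r):
            u = r - A[k]                          # rank-1 change: A' = A + e_k u^T
            ratio = 1.0 + u @ A_inv[:, k]         # det(A') / det(A)
            new_inv = A_inv - np.outer(A_inv[:, k], u @ A_inv) / ratio
            return ratio, new_inv

        rng = np.random.default_rng(1)
        A = rng.standard_normal((6, 6))
        A_inv = np.linalg.inv(A)
        r = rng.standard_normal(6)
        ratio, new_inv = update_row(A, A_inv, 2, r)

        B = A.copy()
        B[2] = r
        print(np.isclose(ratio, np.linalg.det(B) / np.linalg.det(A)))  # True
        print(np.allclose(new_inv, np.linalg.inv(B)))                  # True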

  1. A fast and efficient algorithm for Slater determinant updates in quantum Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Nukala, Phani K. V. V.; Kent, P. R. C.

    2009-05-01

    We present an efficient low-rank updating algorithm for updating the trial wave functions used in quantum Monte Carlo (QMC) simulations. The algorithm is based on low-rank updating of the Slater determinants. In particular, the computational complexity of the algorithm is O(kN) during the kth step compared to traditional algorithms that require O(N^2) computations, where N is the system size. For single determinant trial wave functions the new algorithm is faster than the traditional O(N^2) Sherman-Morrison algorithm for up to O(N) updates. For multideterminant configuration-interaction-type trial wave functions of M+1 determinants, the new algorithm is significantly more efficient, saving both O(MN^2) work and O(MN^2) storage. The algorithm enables more accurate and significantly more efficient QMC calculations using configuration-interaction-type wave functions.

  2. A fast and efficient algorithm for Slater determinant updates in quantum Monte Carlo simulations.

    PubMed

    Nukala, Phani K V V; Kent, P R C

    2009-05-28

    We present an efficient low-rank updating algorithm for updating the trial wave functions used in quantum Monte Carlo (QMC) simulations. The algorithm is based on low-rank updating of the Slater determinants. In particular, the computational complexity of the algorithm is O(kN) during the kth step compared to traditional algorithms that require O(N^2) computations, where N is the system size. For single determinant trial wave functions the new algorithm is faster than the traditional O(N^2) Sherman-Morrison algorithm for up to O(N) updates. For multideterminant configuration-interaction-type trial wave functions of M+1 determinants, the new algorithm is significantly more efficient, saving both O(MN^2) work and O(MN^2) storage. The algorithm enables more accurate and significantly more efficient QMC calculations using configuration-interaction-type wave functions. PMID:19485435

  3. A Fast and efficient Algorithm for Slater Determinant Updates in Quantum Monte Carlo Simulations

    SciTech Connect

    Nukala, Phani K; Kent, Paul R

    2009-01-01

    We present an efficient low-rank updating algorithm for updating the trial wavefunctions used in Quantum Monte Carlo (QMC) simulations. The algorithm is based on low-rank updating of the Slater determinants. In particular, the computational complexity of the algorithm is O(kN) during the kth step compared with traditional algorithms that require O(N^2) computations, where N is the system size. For single determinant trial wavefunctions the new algorithm is faster than the traditional O(N^2) Sherman-Morrison algorithm for up to O(N) updates. For multideterminant configuration-interaction type trial wavefunctions of M+1 determinants, the new algorithm is significantly more efficient, saving both O(MN^2) work and O(MN^2) storage. The algorithm enables more accurate and significantly more efficient QMC calculations using configuration interaction type wavefunctions.

  4. An algorithm for three-dimensional Monte-Carlo simulation of charge distribution at biofunctionalized surfaces

    NASA Astrophysics Data System (ADS)

    Bulyha, Alena; Heitzinger, Clemens

    2011-04-01

    In this work, a Monte-Carlo algorithm in the constant-voltage ensemble for the calculation of 3D charge concentrations at charged surfaces functionalized with biomolecules is presented. The motivation for this work is the theoretical understanding of biofunctionalized surfaces in nanowire field-effect biosensors (BioFETs). This work provides the simulation capability for the boundary layer that is crucial in the detection mechanism of these sensors; slight changes in the charge concentration in the boundary layer upon binding of analyte molecules modulate the conductance of nanowire transducers. The simulation of biofunctionalized surfaces poses special requirements on the Monte-Carlo simulations, and these are addressed by the algorithm. The constant-voltage ensemble enables us to include the correct boundary conditions; the DNA strands can be rotated with respect to the surface; and several molecules can be placed in a single simulation box to achieve good statistics at the low ionic concentrations relevant in experiments. Simulation results are presented for the leading example of surfaces functionalized with PNA and with single- and double-stranded DNA in a sodium-chloride electrolyte. These results make it possible to quantify the screening of the biomolecule charge due to the counter-ions around the biomolecules and the electrical double layer. The resulting concentration profiles show a three-layer structure and non-trivial interactions between the electric double layer and the counter-ions. The numerical results are also important as a reference for the development of simpler screening models.

  5. Different genetic algorithms and the evolution of specialization: a study with groups of simulated neural robots.

    PubMed

    Ferrauto, Tomassino; Parisi, Domenico; Di Stefano, Gabriele; Baldassarre, Gianluca

    2013-01-01

    Organisms that live in groups, from microbial symbionts to social insects and schooling fish, exhibit a number of highly efficient cooperative behaviors, often based on role taking and specialization. These behaviors are relevant not only for the biologist but also for the engineer interested in decentralized collective robotics. We address these phenomena by carrying out experiments with groups of two simulated robots controlled by neural networks whose connection weights are evolved by using genetic algorithms. These algorithms and controllers are well suited to autonomously find solutions for decentralized collective robotic tasks based on principles of self-organization. The article first presents a taxonomy of role-taking and specialization mechanisms related to evolved neural network controllers. Then it introduces two cooperation tasks, which can be accomplished by either role taking or specialization, and uses these tasks to compare four different genetic algorithms to evaluate their capacity to evolve a suitable behavioral strategy, which depends on the task demands. Interestingly, only one of the four algorithms, which appears to have more biological plausibility, is capable of evolving role taking or specialization when they are needed. The results are relevant for both collective robotics and biology, as they can provide useful hints on the different processes that can lead to the emergence of specialization in robots and organisms. PMID:23514239

  6. Generalized SIMD algorithm for efficient EM-PIC simulations on modern CPUs

    NASA Astrophysics Data System (ADS)

    Fonseca, Ricardo; Decyk, Viktor; Mori, Warren; Silva, Luis

    2012-10-01

    There are several relevant plasma physics scenarios where highly nonlinear and kinetic processes dominate. Further understanding of these scenarios is generally explored through relativistic particle-in-cell codes such as OSIRIS [1], but this algorithm is computationally intensive, and efficient use of high-end parallel HPC systems, exploiting all levels of parallelism available, is required. In particular, most modern CPUs include a single-instruction-multiple-data (SIMD) vector unit that can significantly speed up the calculations. In this work we present a generalized PIC-SIMD algorithm that is shown to work efficiently with different CPUs (AMD, Intel, IBM) and vector unit types (2- to 8-way, single/double precision). Details of the algorithm will be given, including the vectorization strategy and memory access patterns. We will also present performance results for the various hardware variants analyzed, focusing on floating point efficiency. Finally, we will discuss the applicability of this type of algorithm for EM-PIC simulations on GPGPU architectures [2]. [1] R. A. Fonseca et al., LNCS 2331, 342 (2002). [2] V. K. Decyk, T. V. Singh, Comput. Phys. Commun. 182, 641-648 (2011).

  7. Synchronization Algorithms for Co-Simulation of Power Grid and Communication Networks

    SciTech Connect

    Ciraci, Selim; Daily, Jeffrey A.; Agarwal, Khushbu; Fuller, Jason C.; Marinovici, Laurentiu D.; Fisher, Andrew R.

    2014-09-11

    The ongoing modernization of power grids consists of integrating them with communication networks in order to achieve robust and resilient control of grid operations. To understand the operation of the new smart grid, one approach is to use simulation software. Unfortunately, current power grid simulators at best utilize inadequate approximations to simulate communication networks, if at all. Cooperative simulation of specialized power grid and communication network simulators promises to more accurately reproduce the interactions of real smart grid deployments. However, co-simulation is a challenging problem. A co-simulation must manage the exchange of information, including the synchronization of simulator clocks, between all simulators while maintaining adequate computational performance. This paper describes two new conservative algorithms for reducing the overhead of time synchronization, namely Active Set Conservative and Reactive Conservative. We provide a detailed analysis of their performance characteristics with respect to the current state of the art, including both conservative and optimistic synchronization algorithms. In addition, we provide guidelines for selecting the appropriate synchronization algorithm based on the requirements of the co-simulation. The newly proposed algorithms are shown to achieve as much as 14% and 63% improvement, respectively, over the existing conservative algorithm.

  8. New Algorithms for Computing the Time-to-Collision in Freeway Traffic Simulation Models

    PubMed Central

    Hou, Jia; List, George F.; Guo, Xiucheng

    2014-01-01

    Ways to estimate the time-to-collision are explored. In the context of traffic simulation models, classical lane-based notions of vehicle location are relaxed and new, fast, and efficient algorithms are examined. With trajectory conflicts being the main focus, computational procedures are explored which use a two-dimensional coordinate system to track the vehicle trajectories and assess conflicts. Vector-based kinematic variables are used to support the calculations. Algorithms based on boxes, circles, and ellipses are considered. Their performance is evaluated in terms of computational complexity and solution time. Results from these analyses suggest promise for effective and efficient computation. A combined computation process is found to be very effective. PMID:25628650
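
    As an illustration of the circle-based variant mentioned above, the time-to-collision of two disks follows from a quadratic in t: with relative position p, relative velocity v, and combined radius R, contact occurs when |p + v t| = R. This is a generic sketch with hypothetical numbers, not the paper's code.

        # Circle-based time-to-collision: solve |p + v*t| = r1 + r2 for t >= 0.
        import numpy as np

        def ttc_circles(p1, v1, r1, p2, v2, r2):
            p = np.asarray(p2, float) - np.asarray(p1, float)   # relative position
            v = np.asarray(v2, float) - np.asarray(v1, float)   # relative velocity
            R = r1 + r2
            a, b, c = v @ v, 2.0 * (p @ v), p @ p - R * R
            disc = b * b - 4.0 * a * c
            if a == 0.0 or disc < 0.0:
                return np.inf                  # no relative motion or no contact
            t = (-b - np.sqrt(disc)) / (2.0 * a)                # earlier root
            return t if t >= 0.0 else np.inf   # only future contacts count

        # Head-on closing at 20 m/s, 100 m apart, radii 2 m each: TTC = 4.8 s
        print(ttc_circles([0, 0], [10, 0], 2.0, [100, 0], [-10, 0], 2.0))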

  9. Efficient spectral and pseudospectral algorithms for 3D simulations of whistler-mode waves in a plasma

    NASA Astrophysics Data System (ADS)

    Gumerov, Nail A.; Karavaev, Alexey V.; Surjalal Sharma, A.; Shao, Xi; Papadopoulos, Konstantinos D.

    2011-04-01

    Efficient spectral and pseudospectral algorithms for the simulation of linear and nonlinear 3D whistler waves in a cold electron plasma are developed. These algorithms are applied to the simulation of whistler waves generated by loop antennas and of spheromak-like stationary waves of considerable amplitude. The algorithms are linearly stable and show good stability properties for computations of nonlinear waves over tens of thousands of time steps. Additional speedups by factors of 10-20 (comparing a single CPU core to one GPU) are achieved by using graphics processors (GPUs), which enables efficient numerical simulation of the wave propagation on relatively high resolution meshes (tens of millions of nodes) in a personal computing environment. Comparisons of the numerical results with analytical solutions and experiments show good agreement. The limitations of the codes and the performance of GPU computing are discussed.

  10. Parallel simulations of Grover's algorithm for closest match search in neutron monitor data

    NASA Astrophysics Data System (ADS)

    Kussainov, Arman; White, Yelena

    We are studying parallel implementations of Grover's closest-match search algorithm for neutron monitor data analysis. This includes data formatting and matching the quantum parameters to the conventional structures of a chosen programming language and the selected experimental data type. We have employed several workload distribution models based on the acquired data and search parameters. As a result of these simulations, we have an understanding of potential problems that may arise during the configuration of real quantum computational devices and of the way they could run tasks in parallel. The work was supported by the Science Committee of the Ministry of Science and Education of the Republic of Kazakhstan, Grant #2532/GF3.

  11. Foam flooding reservoir simulation algorithm improvement and application

    NASA Astrophysics Data System (ADS)

    Wang, Yining; Wu, Xiaodong; Wang, Ruihe; Lai, Fengpeng; Zhang, Hanhan

    2014-05-01

    As one of the important enhanced oil recovery (EOR) technologies, foam flooding is being used more and more widely in oil field development. In order to describe and predict foam flooding, researchers in China and abroad have established a number of mathematical models of foam flooding (mechanistic, empirical, and semi-empirical models). Empirical models require less data and are convenient to apply, but their accuracy is insufficient. The aggregate equilibrium model can describe foam generation, bursting, and coalescence mechanistically, but it is very difficult to characterize accurately. This study considers the effects of critical water saturation, critical foaming-agent concentration, and critical oil saturation on the sealing ability of the foam, and it considers the effect of oil saturation on the resistance factor used to obtain the gas-phase relative permeability; the results were calibrated against laboratory tests, so the accuracy is higher. Conceptual reservoir development simulations and practical field application show that the resulting calculations are more accurate.

  12. An advanced dispatch simulator with advanced dispatch algorithm

    SciTech Connect

    Kafka, R.J.; Crim, H.G. Jr.; Fink, L.H.; Balu, N.J.

    1989-10-01

    This article describes the development of an automatic generation control (AGC) algorithm, which is capable of using accurate real-time unit data and has control performance advantages over existing algorithms. Utilities use automatic generation control to match total generation and total load at minimum cost. Since it is impractical to measure total load directly, it is determined from the total generation for a control area, the total tie-line flow error, and a component proportional to the frequency error. One part of the AGC system then assigns this total generation requirement to all the generators in the control area in an economic manner by using an economic dispatch algorithm. Another part of the AGC system keeps track of each generator and attempts to correct individual unit errors and total system errors that can be caused by unit response problems, normal changes in system load, metering errors, and system disturbances.
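
    A minimal sketch of the control-area bookkeeping described above, in standard textbook form: the generation requirement combines the tie-line flow error with a frequency-bias term, and a dispatch stage splits it among units. The proportional dispatch and all numbers here are hypothetical simplifications, not the article's algorithm.

        # Minimal sketch (textbook form, hypothetical numbers): area control
        # error from tie-line error + frequency bias, then a toy dispatch split.
        def area_control_error(tie_flow, tie_sched, freq, bias_mw_per_hz, f0=60.0):
            return (tie_flow - tie_sched) + bias_mw_per_hz * (freq - f0)

        def dispatch(total_mw, capacities):
            """Toy proportional dispatch; real AGC uses an economic dispatch."""
            total_cap = sum(capacities)
            return [total_mw * c / total_cap for c in capacities]

        ace = area_control_error(tie_flow=305.0, tie_sched=300.0,
                                 freq=59.98, bias_mw_per_hz=-250.0)
        print(ace)                                    # 5 MW + 5 MW = 10 MW
        print(dispatch(1000.0 + ace, [200.0, 300.0, 500.0]))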

  13. Fokker-Planck-DSMC algorithm for simulations of rarefied gas flows

    NASA Astrophysics Data System (ADS)

    Gorji, M. Hossein; Jenny, Patrick

    2015-04-01

    A Fokker-Planck based particle Monte Carlo algorithm was recently devised by the authors for simulations of rarefied gas flows [1-3]. The main motivation behind the Fokker-Planck (FP) model is computational efficiency, which is gained from the fact that the resulting stochastic processes are continuous in velocity space. This property of the model leads to simulations whose computational cost becomes independent of the Knudsen number (Kn) [3]. However, the Fokker-Planck model, which can be seen as a diffusion approximation of the Boltzmann equation, becomes less accurate as Kn increases. In this study we propose a hybrid Fokker-Planck-Direct Simulation Monte Carlo (FP-DSMC) solution method, which is applicable over the whole range of Kn. The objective of this algorithm is to retain the efficiency of the FP scheme at low Kn (Kn ≪ 1) and to employ conventional DSMC at high Kn (Kn ≫ 1). Since the computational particles employed by the FP model represent the same data as in DSMC, the coupling between the two methods is straightforward. The new ingredient is a switching criterion which would ideally result in a hybrid scheme with the efficiency of the FP method and the accuracy of DSMC over the whole Kn range. Here, we adopt the number of collisions in a given computational cell for a given time step size as the decision criterion for switching between the FP model and DSMC. For assessment of the hybrid algorithm, different test cases including flow impingement and flow expansion through a slit were studied. Both the accuracy and the efficiency of the model are shown to be excellent for the presented test cases.

  14. The Research on Web-Based Testing Environment Using Simulated Annealing Algorithm

    PubMed Central

    2014-01-01

    Computerized evaluation is now one of the most important methods to diagnose learning; with the application of artificial intelligence techniques in the field of evaluation, computerized adaptive testing has gradually become one of the most important evaluation methods. In this kind of test, the computer dynamically updates the learner's ability level and selects tailored items from the item pool. Meeting the needs of the test requires that the system be implemented with relatively high efficiency. To address this problem, we propose a novel method for a web-based testing environment based on the simulated annealing algorithm. In the development of the system, through a series of experiments, we compared the efficiency and efficacy of the simulated annealing method with those of other methods. The experimental results show that this method ensures choosing nearly optimal items from the item bank for learners, meets a variety of assessment needs, is reliable, and judges the ability of learners validly. In addition, using the simulated annealing algorithm to manage the computational complexity of the system greatly improves the efficiency of selecting near-optimal items from the item bank. PMID:24959600

  15. An algorithm for fast DNS cavitating flows simulations using homogeneous mixture approach

    NASA Astrophysics Data System (ADS)

    Žnidarčič, A.; Coutier-Delgosha, O.; Marquillie, M.; Dular, M.

    2015-12-01

    A new algorithm for fast DNS simulations of cavitating flows is developed. The algorithm is based on the Kim and Moin form of the projection method. A homogeneous mixture approach with a transport equation for the vapour volume fraction is used to model cavitation, and various cavitation models can be used. An influence matrix and a matrix diagonalisation technique enable fast parallel computations.

  16. DESIGNING SUSTAINABLE PROCESSES WITH SIMULATION: THE WASTE REDUCTION (WAR) ALGORITHM

    EPA Science Inventory

    The WAR Algorithm, a methodology for determining the potential environmental impact (PEI) of a chemical process, is presented with modifications that account for the PEI of the energy consumed within that process. From this theory, four PEI indexes are used to evaluate the envir...

  17. Optical simulation of quantum algorithms using programmable liquid-crystal displays

    SciTech Connect

    Puentes, Graciana; La Mela, Cecilia; Ledesma, Silvia; Iemmi, Claudio; Paz, Juan Pablo; Saraceno, Marcos

    2004-04-01

    We present a scheme to perform an all optical simulation of quantum algorithms and maps. The main components are lenses to efficiently implement the Fourier transform and programmable liquid-crystal displays to introduce space dependent phase changes on a classical optical beam. We show how to simulate Deutsch-Jozsa and Grover's quantum algorithms using essentially the same optical array programmed in two different ways.

  18. Piloted simulation of an algorithm for onboard control of time-optimal intercept

    NASA Technical Reports Server (NTRS)

    Price, D. B.; Calise, A. J.; Moerder, D. D.

    1985-01-01

    A piloted simulation of algorithms for onboard computation of trajectories for time-optimal intercept of a moving target by an F-8 aircraft is described. The algorithms, which use singular perturbation techniques, generate commands displayed in the cockpit. By centering the horizontal and vertical needles, the pilot flies an approximation to a time-optimal intercept trajectory. Example simulations are shown, and statistical data on the pilot's performance when presented with different display and computation modes are described.

  19. Temporal Gillespie Algorithm: Fast Simulation of Contagion Processes on Time-Varying Networks

    PubMed Central

    Vestergaard, Christian L.; Génois, Mathieu

    2015-01-01

    Stochastic simulations are one of the cornerstones of the analysis of dynamical processes on complex networks, and are often the only accessible way to explore their behavior. The development of fast algorithms is paramount to allow large-scale simulations. The Gillespie algorithm can be used for fast simulation of stochastic processes, and variants of it have been applied to simulate dynamical processes on static networks. However, its adaptation to temporal networks remains non-trivial. We here present a temporal Gillespie algorithm that solves this problem. Our method is applicable to general Poisson (constant-rate) processes on temporal networks, stochastically exact, and up to multiple orders of magnitude faster than traditional simulation schemes based on rejection sampling. We also show how it can be extended to simulate non-Markovian processes. The algorithm is easily applicable in practice, and as an illustration we detail how to simulate both Poissonian and non-Markovian models of epidemic spreading. Namely, we provide pseudocode and its implementation in C++ for simulating the paradigmatic Susceptible-Infected-Susceptible and Susceptible-Infected-Recovered models and a Susceptible-Infected-Recovered model with non-constant recovery rates. For empirical networks, the temporal Gillespie algorithm is here typically from 10 to 100 times faster than rejection sampling. PMID:26517860
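
    The paper itself provides pseudocode and a C++ implementation; purely as orientation, the sketch below shows the basic Gillespie machinery for the SIS model on a static edge list, which the temporal algorithm generalizes to time-varying contact networks. Function and parameter names here are illustrative, not the authors' code.

        # Basic Gillespie machinery for SIS on a *static* edge list, shown only
        # to orient the reader; the temporal variant handles time-varying edges.
        import random

        def gillespie_sis(edges, beta, mu, infected0, t_max, seed=0):
            rng = random.Random(seed)
            infected, t = set(infected0), 0.0
            while t < t_max and infected:
                si = [(u, v) for u, v in edges
                      if (u in infected) != (v in infected)]   # S-I contacts
                rate = beta * len(si) + mu * len(infected)     # total event rate
                t += rng.expovariate(rate)                     # waiting time
                if rng.random() < beta * len(si) / rate:       # transmission
                    u, v = rng.choice(si)
                    infected.add(v if u in infected else u)
                else:                                          # recovery
                    infected.discard(rng.choice(sorted(infected)))
            return t, len(infected)

        ring = [(i, (i + 1) % 50) for i in range(50)]          # toy contact graph
        print(gillespie_sis(ring, beta=0.8, mu=0.3, infected0=[0], t_max=100.0))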

  20. Transient dynamics simulations: Parallel algorithms for contact detection and smoothed particle hydrodynamics

    SciTech Connect

    Hendrickson, B.; Plimpton, S.; Attaway, S.; Swegle, J.

    1996-09-01

    Transient dynamics simulations are commonly used to model phenomena such as car crashes, underwater explosions, and the response of shipping containers to high-speed impacts. Physical objects in such a simulation are typically represented by Lagrangian meshes because the meshes can move and deform with the objects as they undergo stress. Fluids (gasoline, water) or fluid-like materials (earth) in the simulation can be modeled using the techniques of smoothed particle hydrodynamics. Implementing a hybrid mesh/particle model on a massively parallel computer poses several difficult challenges. One challenge is to simultaneously parallelize and load-balance both the mesh and particle portions of the computation. A second challenge is to efficiently detect the contacts that occur within the deforming mesh and between mesh elements and particles as the simulation proceeds. These contacts impart forces to the mesh elements and particles which must be computed at each timestep to accurately capture the physics of interest. In this paper we describe new parallel algorithms for smoothed particle hydrodynamics and contact detection which turn out to have several key features in common. Additionally, we describe how to join the new algorithms with traditional parallel finite element techniques to create an integrated particle/mesh transient dynamics simulation. Our approach to this problem differs from previous work in that we use three different parallel decompositions, a static one for the finite element analysis and dynamic ones for particles and for contact detection. We have implemented our ideas in a parallel version of the transient dynamics code PRONTO-3D and present results for the code running on a large Intel Paragon.

  1. Numerical simulation of particle fluxes formation generated as a result of space objects breakups in orbit

    NASA Astrophysics Data System (ADS)

    Aleksandrova, A. G.; Galushina, T. Yu.

    2015-12-01

    The paper describes the software package developed for the numerical simulation of the breakups of natural and artificial objects, and the algorithms on which it is based. The new software, "Numerical model of breakups", includes models of spacecraft (SC) breakup as a result of explosion and collision, as well as two models of the explosion of an asteroid.

  2. Simulated annealing algorithm for solving chambering student-case assignment problem

    NASA Astrophysics Data System (ADS)

    Ghazali, Saadiah; Abdul-Rahman, Syariza

    2015-12-01

    The project assignment problem is a popular practical problem that arises in many settings nowadays. The challenge of solving it grows as the complexity of preferences, the number of real-world constraints, and the problem size increase. This study focuses on solving a chambering student-case assignment problem, which is classified as a project assignment problem, by using a simulated annealing algorithm. The project assignment problem is a hard combinatorial optimization problem, and solving it with a metaheuristic approach is advantageous because a good solution can be returned in a reasonable time. The problem of assigning chambering students to cases has never been addressed in the literature before. In the proposed problem, law graduates must read in chambers before they are qualified to become legal counsel. Thus, assigning the chambering students to cases is critically needed, especially when many preferences are involved. Hence, this study presents a preliminary study of the proposed project assignment problem. The objective is to minimize the total completion time for all students in solving the given cases. The study employs a minimum-cost greedy heuristic to construct a feasible initial solution. The search then proceeds with a simulated annealing algorithm for further improvement of solution quality. The analysis of the results shows that the proposed simulated annealing algorithm greatly improves the solution constructed by the minimum-cost greedy heuristic. Hence, this research demonstrates the advantages of solving the project assignment problem with metaheuristic techniques.
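
    A hedged sketch of the two-stage approach described above: a (here trivial) initial assignment improved by simulated annealing with random swap moves under the Metropolis rule. The cost matrix, cooling schedule, and initial-solution stand-in are hypothetical, not the study's data or parameters.

        # Simulated annealing over an assignment with swap moves (Metropolis
        # rule); cost matrix, schedule, and initial solution are hypothetical.
        import math, random

        def anneal(cost, assign, t0=10.0, cooling=0.995, iters=20000, seed=0):
            rng = random.Random(seed)
            total = lambda a: sum(cost[s][c] for s, c in enumerate(a))
            cur, cur_val = assign[:], total(assign)
            best, best_val, t = cur[:], cur_val, t0
            for _ in range(iters):
                i, j = rng.sample(range(len(cur)), 2)
                cand = cur[:]
                cand[i], cand[j] = cand[j], cand[i]      # swap two cases
                d = total(cand) - cur_val
                if d < 0 or rng.random() < math.exp(-d / t):
                    cur, cur_val = cand, cur_val + d
                    if cur_val < best_val:
                        best, best_val = cur[:], cur_val
                t *= cooling                             # geometric cooling
            return best, best_val

        rng = random.Random(1)
        cost = [[rng.randint(1, 20) for _ in range(8)] for _ in range(8)]
        initial = list(range(8))        # stand-in for the greedy construction
        print(anneal(cost, initial))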

  3. Suite of finite element algorithms for accurate computation of soft tissue deformation for surgical simulation

    PubMed Central

    Joldes, Grand Roman; Wittek, Adam; Miller, Karol

    2008-01-01

    Real-time computation of soft tissue deformation is important for the use of augmented reality devices and for providing haptic feedback during operation or surgeon training. This requires algorithms that are fast, accurate, and able to handle material nonlinearities and large deformations. A set of such algorithms is presented in this paper, starting with the finite element formulation and the integration scheme used, and addressing common problems such as hourglass control and locking. The computation examples presented show that these algorithms make real-time computation possible without sacrificing the accuracy of the results. For a brain model having more than 7000 degrees of freedom, we computed the reaction forces due to indentation at a frequency of around 1000 Hz using a standard dual-core PC. Similarly, we conducted a simulation of brain shift using a model with more than 50 000 degrees of freedom in less than a minute. The speed benefits of our models result from combining the total Lagrangian formulation with explicit time integration and low-order finite elements. PMID:19152791

  4. Superspreading: molecular dynamics simulations and experimental results

    NASA Astrophysics Data System (ADS)

    Theodorakis, Panagiotis; Kovalchuk, Nina; Starov, Victor; Muller, Erich; Craster, Richard; Matar, Omar

    2015-11-01

    The intriguing ability of certain surfactant molecules to drive the superspreading of liquids to complete wetting on hydrophobic substrates is central to numerous applications that range from coating flow technology to enhanced oil recovery. Recently, we have observed that for superspreading to occur, two key conditions must be satisfied simultaneously: the adsorption of surfactants from the liquid-vapor surface onto the three-phase contact line, augmented by local bilayer formation. Crucially, this must be coordinated with the rapid replenishment of the liquid-vapor and solid-liquid interfaces with surfactants from the interior of the droplet. Here, we present the structural characteristics and kinetics of droplet spreading during the different stages of this process, and we compare our results with experimental data for trisiloxane and polyoxyethylene surfactants. In this way, we highlight and explore the differences between surfactants, paving the way for the design of molecular architectures tailored specifically for applications that rely on the control of wetting. EPSRC Platform Grant MACIPh (EP/L020564/).

  5. A Linac Simulation Code for Macro-Particles Tracking and Steering Algorithm Implementation

    SciTech Connect

    Sun, Yipeng

    2012-05-03

    In this paper, a linac simulation code written in Fortran90 is presented, and several simulation examples are given. The code is optimized to implement linac alignment and steering algorithms and to evaluate accelerator errors such as RF phase and acceleration gradient errors and quadrupole and BPM misalignment. It can track a single particle or a bunch of particles through normal linear accelerator elements such as quadrupoles, RF cavities, dipole correctors, and drift spaces. A one-to-one steering algorithm and a global alignment (steering) algorithm are implemented in the code.

  6. Performance of Thorup's Shortest Path Algorithm for Large-Scale Network Simulation

    NASA Astrophysics Data System (ADS)

    Sakumoto, Yusuke; Ohsaki, Hiroyuki; Imase, Makoto

    In this paper, we investigate the performance of Thorup's algorithm by comparing it to Dijkstra's algorithm for large-scale network simulations. One of the challenges toward the realization of large-scale network simulations is the efficient execution of shortest-path computations in a graph with N vertices and M edges. The time complexity for solving a single-source shortest path (SSSP) problem with Dijkstra's algorithm with a binary heap (DIJKSTRA-BH) is O((M+N)log N). A sophisticated algorithm called Thorup's algorithm has been proposed. The original version of Thorup's algorithm (THORUP-FR) has a time complexity of O(M+N). A simplified version of Thorup's algorithm (THORUP-KL) has a time complexity of O(Mα(N)+N), where α(N) is the functional inverse of the Ackermann function. In this paper, we compare the performances (i.e., execution time and memory consumption) of THORUP-KL and DIJKSTRA-BH, since it is known that THORUP-FR is at least ten times slower than Dijkstra's algorithm with a Fibonacci heap. We find that (1) THORUP-KL is almost always faster than DIJKSTRA-BH for large-scale network simulations, and (2) the performances of THORUP-KL and DIJKSTRA-BH deviate from their time complexities due to the presence of the memory cache in the microprocessor.
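
    For reference, a compact version of the DIJKSTRA-BH baseline discussed above: Dijkstra's algorithm with a binary heap and lazy deletion, where the O((M+N)log N) behavior comes from at most one heap push per edge relaxation. A generic sketch, not the authors' simulator code.

        # Binary-heap Dijkstra (DIJKSTRA-BH style) with lazy deletion.
        import heapq

        def dijkstra_bh(adj, src):
            """adj: {u: [(v, w), ...]}; returns shortest distances from src."""
            dist = {src: 0.0}
            heap = [(0.0, src)]
            while heap:
                d, u = heapq.heappop(heap)
                if d > dist.get(u, float("inf")):
                    continue                        # stale entry: lazy deletion
                for v, w in adj.get(u, ()):
                    nd = d + w
                    if nd < dist.get(v, float("inf")):
                        dist[v] = nd
                        heapq.heappush(heap, (nd, v))
            return dist

        adj = {0: [(1, 7), (2, 9), (5, 14)], 1: [(2, 10), (3, 15)],
               2: [(3, 11), (5, 2)], 3: [(4, 6)], 4: [], 5: [(4, 9)]}
        print(dijkstra_bh(adj, 0))                  # node 4 at distance 20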

  7. Experimental Results in the Comparison of Search Algorithms Used with Room Temperature Detectors

    SciTech Connect

    Guss, P.; Yuan, D.; Cutler, M.; Beller, D.

    2010-11-01

    Analysis of time-sequence data was run for several higher-resolution scintillation detectors using a variety of search algorithms, and results were obtained for predicting the relative performance of these detectors, which included a slightly superior performance by CeBr3. Analysis of several search algorithms shows that inclusion of the RSPRT methodology can improve sensitivity.

  8. State-dependent doubly weighted stochastic simulation algorithm for automatic characterization of stochastic biochemical rare events

    NASA Astrophysics Data System (ADS)

    Roh, Min K.; Daigle, Bernie J.; Gillespie, Dan T.; Petzold, Linda R.

    2011-12-01

    In recent years there has been substantial growth in the development of algorithms for characterizing rare events in stochastic biochemical systems. Two such algorithms, the state-dependent weighted stochastic simulation algorithm (swSSA) and the doubly weighted SSA (dwSSA), are extensions of the weighted SSA (wSSA) by H. Kuwahara and I. Mura [J. Chem. Phys. 129, 165101 (2008)], 10.1063/1.2987701. The swSSA substantially reduces estimator variance by implementing system-state-dependent importance sampling (IS) parameters, but lacks an automatic parameter identification strategy. In contrast, the dwSSA provides for the automatic determination of state-independent IS parameters, so it is inefficient for systems whose states vary widely in time. We present a novel modification of the dwSSA, the state-dependent doubly weighted SSA (sdwSSA), that combines the strengths of the swSSA and the dwSSA without inheriting their weaknesses. The sdwSSA automatically computes state-dependent IS parameters via the multilevel cross-entropy method. We apply the method to three examples: a reversible isomerization process, a yeast polarization model, and a lac operon model. Our results demonstrate that the sdwSSA offers substantial improvements over previous methods in terms of both accuracy and efficiency.

  9. Developing a Moving-Solid Algorithm for Simulating Tsunamis Induced by Rock Sliding

    NASA Astrophysics Data System (ADS)

    Chuang, M.; Wu, T.; Huang, C.; Wang, C.; Chu, C.; Chen, M.

    2012-12-01

    The landslide-generated tsunami is one of the most devastating natural hazards. However, the involvement of a moving obstacle and dynamic free-surface movement makes the numerical simulation a difficult task. To describe the fluid motion, we use a modified two-step projection method to decouple the velocity and pressure fields, with a 3D LES turbulence model. The free-surface movement is tracked by the volume of fluid (VOF) method (Wu, 2004). To describe the effect of the moving obstacle on the fluid, a new moving-solid algorithm (MSA) is developed. We combine ideas from the immersed boundary method (IBM) and the partial-cell treatment (PCT) for specifying the contacting speed on the solid face and for representing the obstacle's blocking effect, respectively. By using the concept of the IBM, the cell-center and cell-face velocities can be specified arbitrarily. Because we move the solid obstacle on a fixed grid, the boundary of the solid seldom coincides with the cell faces, which makes it inappropriate to assign the solid boundary velocity to the cell faces. To overcome this problem, the PCT is adopted. Using this algorithm, the solid surface conceptually coincides with the cell faces, and the cell-face velocity can be specified as the obstacle velocity. The advantage of this algorithm is that it yields a stable pressure field, which is extremely important for coupling with a force-balancing model that describes the solid motion. The model is therefore able to simulate incompressible high-speed fluid motion. To describe the solid motion, the DEM (Discrete Element Method) is adopted. The solid movement at the new time step can be predicted and divided into translation and rotation based on Newton's equations and Euler's equations, respectively. The details of the moving-solid algorithm are presented in this paper. The model is then applied to studying the rock-slide-generated tsunami. The results are validated with the laboratory data (Liu and Wu, 2005

  10. On the rejection-based algorithm for simulation and analysis of large-scale reaction networks

    NASA Astrophysics Data System (ADS)

    Thanh, Vo Hong; Zunino, Roberto; Priami, Corrado

    2015-06-01

    Stochastic simulation for in silico studies of large biochemical networks requires a great amount of computational time. We recently proposed a new exact simulation algorithm, called the rejection-based stochastic simulation algorithm (RSSA) [Thanh et al., J. Chem. Phys. 141(13), 134116 (2014)], to improve simulation performance by postponing and collapsing as much as possible the propensity updates. In this paper, we analyze the performance of this algorithm in detail, and improve it for simulating large-scale biochemical reaction networks. We also present a new algorithm, called simultaneous RSSA (SRSSA), which generates many independent trajectories simultaneously for the analysis of the biochemical behavior. SRSSA improves simulation performance by utilizing a single data structure across simulations to select reaction firings and forming trajectories. The memory requirement for building and storing the data structure is thus independent of the number of trajectories. The updating of the data structure when needed is performed collectively in a single operation across the simulations. The trajectories generated by SRSSA are exact and independent of each other by exploiting the rejection-based mechanism. We test our new improvement on real biological systems with a wide range of reaction networks to demonstrate its applicability and efficiency.
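
    To make the rejection-based mechanism concrete, the sketch below draws a candidate reaction from fixed propensity upper bounds and accepts it with probability a_j / a_hi_j, so exact propensities are evaluated only at acceptance tests. Time advancement and the periodic refresh of the bounds are omitted; this is a generic illustration, not the RSSA/SRSSA code.

        # Core rejection step of an RSSA-style selection: candidate from upper
        # bounds, accepted with prob a_j / a_hi_j. Generic illustration only.
        import random

        def rejection_select(a_hi, exact_propensity, rng):
            """Return the index of the reaction to fire."""
            total_hi = sum(a_hi)
            while True:
                r, acc, j = rng.random() * total_hi, 0.0, 0
                for j, hi in enumerate(a_hi):        # candidate j ~ upper bounds
                    acc += hi
                    if r < acc:
                        break
                if rng.random() * a_hi[j] < exact_propensity(j):
                    return j                         # accepted: reaction j fires

        rng = random.Random(0)
        a_hi = [1.2, 0.6, 2.0]                       # bounds, refreshed only rarely
        a_true = [1.0, 0.5, 1.6]                     # exact propensities (<= bounds)
        counts = [0, 0, 0]
        for _ in range(30000):
            counts[rejection_select(a_hi, lambda j: a_true[j], rng)] += 1
        print([round(c / 30000, 3) for c in counts]) # ~ [0.323, 0.161, 0.516]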

  11. On the rejection-based algorithm for simulation and analysis of large-scale reaction networks

    SciTech Connect

    Thanh, Vo Hong; Zunino, Roberto; Priami, Corrado

    2015-06-28

    Stochastic simulation for in silico studies of large biochemical networks requires a great amount of computational time. We recently proposed a new exact simulation algorithm, called the rejection-based stochastic simulation algorithm (RSSA) [Thanh et al., J. Chem. Phys. 141(13), 134116 (2014)], to improve simulation performance by postponing and collapsing as much as possible the propensity updates. In this paper, we analyze the performance of this algorithm in detail, and improve it for simulating large-scale biochemical reaction networks. We also present a new algorithm, called simultaneous RSSA (SRSSA), which generates many independent trajectories simultaneously for the analysis of the biochemical behavior. SRSSA improves simulation performance by utilizing a single data structure across simulations to select reaction firings and forming trajectories. The memory requirement for building and storing the data structure is thus independent of the number of trajectories. The updating of the data structure when needed is performed collectively in a single operation across the simulations. The trajectories generated by SRSSA are exact and independent of each other by exploiting the rejection-based mechanism. We test our new improvement on real biological systems with a wide range of reaction networks to demonstrate its applicability and efficiency.

  12. Reconstruction of the vertical electron density profile based on vertical TEC using the simulated annealing algorithm

    NASA Astrophysics Data System (ADS)

    Jiang, Chunhua; Yang, Guobin; Zhu, Peng; Nishioka, Michi; Yokoyama, Tatsuhiro; Zhou, Chen; Song, Huan; Lan, Ting; Zhao, Zhengyu; Zhang, Yuannong

    2016-05-01

    This paper presents a new method to reconstruct the vertical electron density profile from vertical Total Electron Content (TEC) using the simulated annealing algorithm. The technique uses quasi-parabolic segments (QPS) to model the bottomside ionosphere. The initial parameters of the ionosphere model were determined from both the International Reference Ionosphere (IRI) (Bilitza et al., 2014) and vertical TEC (vTEC). Then, the simulated annealing algorithm was used to search for the best-fit parameters of the ionosphere model by comparison with the GPS-TEC. The performance and robustness of this technique were verified with ionosonde data. The critical frequency (foF2) and peak height (hmF2) of the F2 layer obtained from ionograms recorded at different locations and on different days were compared with those calculated by the proposed method. The analysis of the results shows that the present method is promising for obtaining foF2 from vTEC. However, the accuracy of hmF2 needs to be improved in future work.

  13. Numerical stability of relativistic beam multidimensional PIC simulations employing the Esirkepov algorithm

    SciTech Connect

    Godfrey, Brendan B.; Vay, Jean-Luc

    2013-09-01

    Rapidly growing numerical instabilities routinely occur in multidimensional particle-in-cell computer simulations of plasma-based particle accelerators, astrophysical phenomena, and relativistic charged particle beams. Reducing instability growth to acceptable levels has necessitated higher-resolution grids, high-order field solvers, current filtering, etc., except for certain ratios of the time step to the axial cell size, for which numerical growth rates and saturation levels are reduced substantially. This paper derives and solves the cold-beam dispersion relation for numerical instabilities in multidimensional, relativistic, electromagnetic particle-in-cell programs employing either the standard or the Cole-Karkkainen finite difference field solver on a staggered mesh and the common Esirkepov current-gathering algorithm. Good overall agreement is achieved with previously reported results of the WARP code. In particular, the existence of select time steps for which instabilities are minimized is explained. Additionally, an alternative field interpolation algorithm is proposed for which instabilities are almost completely eliminated for a particular time step in ultra-relativistic simulations.

  14. An efficient algorithm for the stochastic simulation of the hybridization of DNA to microarrays

    PubMed Central

    2009-01-01

    Background: Although oligonucleotide microarray technology is ubiquitous in genomic research, reproducibility and standardization of expression measurements still concern many researchers. Cross-hybridization between microarray probes and non-target ssDNA has been implicated as a primary factor in sensitivity and selectivity loss. Since hybridization is a chemical process, it may be modeled at a population level using a combination of material balance equations and thermodynamics. However, the hybridization reaction network may be exceptionally large for commercial arrays, which often possess at least one reporter per transcript. Quantification of the kinetics and equilibrium of exceptionally large chemical systems of this type is numerically infeasible with customary approaches. Results: In this paper, we present a robust and computationally efficient algorithm for the simulation of hybridization processes underlying microarray assays. Our method may be utilized to identify the extent to which nucleic acid targets (e.g. cDNA) will cross-hybridize with probes, and by extension, to characterize probe robustness using the information specified by MAGE-TAB. Using this algorithm, we characterize cross-hybridization in a modified commercial microarray assay. Conclusions: By integrating stochastic simulation with thermodynamic prediction tools for DNA hybridization, one may robustly and rapidly characterize the selectivity of a proposed microarray design at the probe and "system" levels. Our code is available at http://www.laurenzi.net. PMID:20003312

  15. Physics and Algorithm Enhancements for a Validated MCNP/X Monte Carlo Simulation Tool, Phase VII

    SciTech Connect

    McKinney, Gregg W

    2012-07-17

    Currently the US lacks an end-to-end (i.e., source-to-detector) radiation transport simulation code with predictive capability for the broad range of DHS nuclear material detection applications. For example, gaps in the physics, along with inadequate analysis algorithms, make it difficult for Monte Carlo simulations to provide a comprehensive evaluation, design, and optimization of proposed interrogation systems. With the development and implementation of several key physics and algorithm enhancements, along with needed improvements in evaluated data and benchmark measurements, the MCNP/X Monte Carlo codes will provide designers, operators, and systems analysts with a validated tool for developing state-of-the-art active and passive detection systems. This project is currently in its seventh year (Phase VII). This presentation will review thirty enhancements that have been implemented in MCNPX over the last 3 years and were included in the 2011 release of version 2.7.0. These improvements include 12 physics enhancements, 4 source enhancements, 8 tally enhancements, and 6 other enhancements. Examples and results will be provided for each of these features. The presentation will also discuss the eight enhancements that will be migrated into MCNP6 over the upcoming year.

  16. Blind decorrelation and deconvolution algorithm for multiple-input multiple-output system: II. Analysis and simulation

    NASA Astrophysics Data System (ADS)

    Chen, Da-Ching; Yu, Tommy; Yao, Kung; Pottie, Gregory J.

    1999-11-01

    For single-input multiple-output (SIMO) systems blind deconvolution based on second-order statistics has been shown promising given that the sources and channels meet certain assumptions. In our previous paper we extend the work to multiple-input multiple-output (MIMO) systems by introducing a blind deconvolution algorithm to remove all channel dispersion followed by a blind decorrelation algorithm to separate different sources from their instantaneous mixture. In this paper we first explore more details embedded in our algorithm. Then we present simulation results to show that our algorithm is applicable to MIMO systems excited by a broad class of signals such as speech, music and digitally modulated symbols.

  17. A fast algorithm for the simulation of arterial pulse waves

    NASA Astrophysics Data System (ADS)

    Du, Tao; Hu, Dan; Cai, David

    2016-06-01

    One-dimensional models have been widely used in studies of the propagation of blood pulse waves in large arterial trees. Under a periodic driving of the heartbeat, traditional numerical methods, such as the Lax-Wendroff method, are employed to obtain asymptotic periodic solutions at large times. However, these methods are severely constrained by the CFL condition due to large pulse wave speed. In this work, we develop a new numerical algorithm to overcome this constraint. First, we reformulate the model system of pulse wave propagation using a set of Riemann variables and derive a new form of boundary conditions at the inlet, the outlets, and the bifurcation points of the arterial tree. The new form of the boundary conditions enables us to design a convergent iterative method to enforce the boundary conditions. Then, after exchanging the spatial and temporal coordinates of the model system, we apply the Lax-Wendroff method in the exchanged coordinate system, which turns the large pulse wave speed from a liability to a benefit, to solve the wave equation in each artery of the model arterial system. Our numerical studies show that our new algorithm is stable and can perform ∼15 times faster than the traditional implementation of the Lax-Wendroff method under the requirement that the relative numerical error of blood pressure be smaller than one percent, which is much smaller than the modeling error.
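
    As a concrete reference for the scheme this record builds on, here is a minimal single-step Lax-Wendroff update for linear advection in Python. It is a generic textbook form under assumed periodic boundaries, not the paper's Riemann-variable reformulation; the |c| <= 1 restriction visible in the stencil is exactly the CFL constraint that large pulse wave speeds make costly.

      import numpy as np

      def lax_wendroff_step(u, c):
          # One Lax-Wendroff step for linear advection u_t + a u_x = 0 on a periodic
          # grid. c = a*dt/dx is the Courant number; stability requires |c| <= 1.
          up = np.roll(u, -1)   # u_{j+1}
          um = np.roll(u, +1)   # u_{j-1}
          return u - 0.5 * c * (up - um) + 0.5 * c**2 * (up - 2.0 * u + um)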

  18. Microwave holography of large reflector antennas - Simulation algorithms

    NASA Technical Reports Server (NTRS)

    Rahmat-Samii, Y.

    1985-01-01

    The performance of large reflector antennas can be improved by identifying the location and amount of their surface distortions and correcting them. To determine the accuracy of the constructed surface profiles, simulation studies are used to incorporate both the effects of systematic and random distortions, particularly the effects of the displaced surface panels. In this paper, different simulation models are investigated, emphasizing a model based on the vector diffraction analysis of a curved reflector with displaced panels. The simulated far-field patterns are then used to reconstruct the location and amount of displacement of the surface panels by employing a fast Fourier transform/iterative procedure. The sensitivity of the microwave holography technique based on the number of far-field sampled points, level of distortions, polarizations, illumination tapers, etc., is also examined.

  19. A super-resolution algorithm for enhancement of flash lidar data: flight test results

    NASA Astrophysics Data System (ADS)

    Bulyshev, Alexander; Amzajerdian, Farzin; Roback, Eric; Reisse, Robert

    2013-03-01

    This paper describes the results of a 3D super-resolution algorithm applied to the range data obtained from a recent Flash Lidar helicopter flight test. The flight test was conducted by NASA's Autonomous Landing and Hazard Avoidance Technology (ALHAT) project over a simulated lunar terrain facility at NASA Kennedy Space Center. ALHAT is developing the technology for safe autonomous landing on the surface of celestial bodies: the Moon, Mars, and asteroids. One of the test objectives was to verify the ability of the 3D super-resolution technique to generate high resolution digital elevation models (DEMs) and to determine time-resolved relative positions and orientations of the vehicle. The 3D super-resolution algorithm was developed earlier and tested in computational modeling, in laboratory experiments, and in a few dynamic experiments using a moving truck. Prior to the helicopter flight test campaign, a 100 m x 100 m hazard field was constructed containing most of the relevant extraterrestrial hazards: slopes, rocks, and craters of different sizes. Data were collected during the flight and then processed by the super-resolution code. A detailed DEM of the hazard field was constructed from independent measurements for use in comparisons. ALHAT navigation system data were used to verify the ability of the super-resolution method to provide accurate relative navigation information; namely, the six-degree-of-freedom state vector of the instrument as a function of time was recovered from the super-resolution data. The comparisons show that the super-resolution method can construct high quality DEMs and allows for identifying hazards such as rocks and craters in accordance with ALHAT requirements.

  20. A Super-Resolution Algorithm for Enhancement of FLASH LIDAR Data: Flight Test Results

    NASA Technical Reports Server (NTRS)

    Bulyshev, Alexander; Amzajerdian, Farzin; Roback, Eric; Reisse, Robert

    2014-01-01

    This paper describes the results of a 3D super-resolution algorithm applied to the range data obtained from a recent Flash Lidar helicopter flight test. The flight test was conducted by NASA's Autonomous Landing and Hazard Avoidance Technology (ALHAT) project over a simulated lunar terrain facility at NASA Kennedy Space Center. ALHAT is developing the technology for safe autonomous landing on the surface of celestial bodies: the Moon, Mars, and asteroids. One of the test objectives was to verify the ability of the 3D super-resolution technique to generate high resolution digital elevation models (DEMs) and to determine time-resolved relative positions and orientations of the vehicle. The 3D super-resolution algorithm was developed earlier and tested in computational modeling, in laboratory experiments, and in a few dynamic experiments using a moving truck. Prior to the helicopter flight test campaign, a 100 m x 100 m hazard field was constructed containing most of the relevant extraterrestrial hazards: slopes, rocks, and craters of different sizes. Data were collected during the flight and then processed by the super-resolution code. A detailed DEM of the hazard field was constructed from independent measurements for use in comparisons. ALHAT navigation system data were used to verify the ability of the super-resolution method to provide accurate relative navigation information; namely, the six-degree-of-freedom state vector of the instrument as a function of time was recovered from the super-resolution data. The comparisons show that the super-resolution method can construct high quality DEMs and allows for identifying hazards such as rocks and craters in accordance with ALHAT requirements.

  1. A process-based algorithm for simulating terraces in SWAT

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Terraces in crop fields are one of the most important soil and water conservation measures that affect runoff and erosion processes in a watershed. In large hydrological programs such as the Soil and Water Assessment Tool (SWAT), terrace effects are simulated by adjusting the slope length and the US...

  2. Simulating Multivariate Nonnormal Data Using an Iterative Algorithm

    ERIC Educational Resources Information Center

    Ruscio, John; Kaczetow, Walter

    2008-01-01

    Simulating multivariate nonnormal data with specified correlation matrices is difficult. One especially popular method is Vale and Maurelli's (1983) extension of Fleishman's (1978) polynomial transformation technique to multivariate applications. This requires the specification of distributional moments and the calculation of an intermediate…

  3. Multiobjective optimization with a modified simulated annealing algorithm for external beam radiotherapy treatment planning

    SciTech Connect

    Aubry, Jean-Francois; Beaulieu, Frederic; Sevigny, Caroline; Beaulieu, Luc; Tremblay, Daniel

    2006-12-15

    Inverse planning in external beam radiotherapy often requires a scalar objective function that incorporates importance factors to mimic the planner's preferences between conflicting objectives. Defining those importance factors is not straightforward, and frequently leads to an iterative process in which the importance factors become variables of the optimization problem. In order to avoid this drawback of inverse planning, optimization using algorithms more suited to multiobjective optimization, such as evolutionary algorithms, has been suggested. However, much inverse planning software, including a simulated annealing-based package developed at our institution, does not include multiobjective-oriented algorithms. This work investigates the performance of a modified simulated annealing algorithm used to drive aperture-based intensity-modulated radiotherapy inverse planning software in a multiobjective optimization framework. For a few test cases involving gastric cancer patients, the use of this new algorithm leads to an increase in optimization speed of a little more than a factor of 2 over a conventional simulated annealing algorithm, while giving a close approximation of the solutions produced by standard simulated annealing. A simple graphical user interface designed to facilitate the decision-making process that follows an optimization is also presented.

  4. An extended molecular statics algorithm simulating the electromechanical continuum response of ferroelectric materials

    NASA Astrophysics Data System (ADS)

    Endres, F.; Steinmann, P.

    2014-12-01

    Molecular dynamics (MD) simulations of ferroelectric materials have improved tremendously over the last few decades. Specifically, the core-shell model has been commonly used for the simulation of ferroelectric materials such as barium titanate. However, due to the computational costs of MD, the calculation of ferroelectric hysteresis behaviour, and especially the stress-strain relation, has been a computationally intense task. In this work a molecular statics algorithm, similar to a finite element method for nonlinear trusses, has been implemented. From this, an algorithm to calculate the stress dependent continuum deformation of a discrete particle system, such as a ferroelectric crystal, has been devised. Molecular statics algorithms for the atomistic simulation of ferroelectric materials have been previously described. However, in contrast to the prior literature the algorithm proposed in this work is also capable of effectively computing the macroscopic ferroelectric butterfly hysteresis behaviour. Therefore the advocated algorithm is able to calculate the piezoelectric effect as well as the converse piezoelectric effect simultaneously on atomistic and continuum length scales. Barium titanate has been simulated using the core-shell model to validate the developed algorithm.

  5. Fast Plasma Instrument for MMS: Data Compression Simulation Results

    NASA Technical Reports Server (NTRS)

    Barrie, A.; Adrian, Mark L.; Yeh, P.-S.; Winkert, G. E.; Lobell, J. V.; Vinas, A.F.; Simpson, D. J.; Moore, T. E.

    2008-01-01

    Magnetospheric Multiscale (MMS) mission will study small-scale reconnection structures and their rapid motions from closely spaced platforms using instruments capable of high angular, energy, and time resolution measurements. To meet these requirements, the Fast Plasma Instrument (FPI) consists of eight (8) identical half top-hat electron sensors, eight (8) identical ion sensors, and an Instrument Data Processing Unit (IDPU). The sensors (electron or ion) are grouped into pairs whose 6 deg x 180 deg fields-of-view (FOV) are set 90 deg apart. Each sensor is equipped with electrostatic aperture steering to allow the sensor to scan a 45 deg x 180 deg fan about its nominal viewing (0 deg deflection) direction. Each pair of sensors, known as the Dual Electron Spectrometer (DES) and the Dual Ion Spectrometer (DIS), occupies a quadrant on the MMS spacecraft, and the combination of the eight electron/ion sensors, employing aperture steering, images the full sky every 30-ms (electrons) and 150-ms (ions), respectively. To probe the diffusion regions of reconnection, the highest temporal/spatial resolution mode of FPI results in the DES complement of a given spacecraft generating 6.5-Mb s-1 of electron data while the DIS generates 1.1-Mb s-1 of ion data, yielding an FPI total data rate of 7.6-Mb s-1. The FPI electron/ion data is collected by the IDPU then transmitted to the Central Instrument Data Processor (CIDP) on the spacecraft for science interest ranking. Only data sequences that contain the greatest amount of temporal/spatial structure will be intelligently down-linked by the spacecraft. Currently, the FPI data rate allocation to the CIDP is 1.5-Mb s-1. Consequently, the FPI-IDPU must employ data/image compression to meet this CIDP telemetry allocation. Here, we present simulations of the CCSDS 122.0-B-1 algorithm-based compression of the FPI-DES electron data. Compression analysis is based upon a seed of re-processed Cluster/PEACE electron measurements. Topics to be discussed include: review of compression algorithm; data quality

  6. Fast Plasma Instrument for MMS: Data Compression Simulation Results

    NASA Astrophysics Data System (ADS)

    Barrie, A.; Adrian, M. L.; Yeh, P.; Winkert, G.; Lobell, J.; Vinas, A. F.; Simpson, D. G.

    2009-12-01

    Magnetospheric Multiscale (MMS) mission will study small-scale reconnection structures and their rapid motions from closely spaced platforms using instruments capable of high angular, energy, and time resolution measurements. To meet these requirements, the Fast Plasma Instrument (FPI) consists of eight (8) identical half top-hat electron sensors and eight (8) identical ion sensors and an Instrument Data Processing Unit (IDPU). The sensors (electron or ion) are grouped into pairs whose 6° x 180° fields-of-view (FOV) are set 90° apart. Each sensor is equipped with electrostatic aperture steering to allow the sensor to scan a 45° x 180° fan about its nominal viewing (0° deflection) direction. Each pair of sensors, known as the Dual Electron Spectrometer (DES) and the Dual Ion Spectrometer (DIS), occupies a quadrant on the MMS spacecraft and the combination of the eight electron/ion sensors, employing aperture steering, image the full-sky every 30-ms (electrons) and 150-ms (ions), respectively. To probe the diffusion regions of reconnection, the highest temporal/spatial resolution mode of FPI results in the DES complement of a given spacecraft generating 6.5-Mb s-1 of electron data while the DIS generates 1.1-Mb s-1 of ion data, yielding an FPI total data rate of 7.6-Mb s-1. The FPI electron/ion data is collected by the IDPU then transmitted to the Central Instrument Data Processor (CIDP) on the spacecraft for science interest ranking. Only data sequences that contain the greatest amount of temporal/spatial structure will be intelligently down-linked by the spacecraft. Currently, the FPI data rate allocation to the CIDP is 1.5-Mb s-1. Consequently, the FPI-IDPU must employ data/image compression to meet this CIDP telemetry allocation. Here, we present updated simulations of the CCSDS 122.0-B-1 algorithm-based compression of the FPI-DES electron data as well as the FPI-DIS ion data. Compression analysis is based upon a seed of re-processed Cluster

  7. Fast Plasma Instrument for MMS: Data Compression Simulation Results

    NASA Astrophysics Data System (ADS)

    Barrie, A. C.; Adrian, M. L.; Yeh, P.; Winkert, G. E.; Lobell, J. V.; Viñas, A. F.; Simpson, D. G.; Moore, T. E.

    2008-12-01

    Magnetospheric Multiscale (MMS) mission will study small-scale reconnection structures and their rapid motions from closely spaced platforms using instruments capable of high angular, energy, and time resolution measurements. To meet these requirements, the Fast Plasma Instrument (FPI) consists of eight (8) identical half top-hat electron sensors and eight (8) identical ion sensors and an Instrument Data Processing Unit (IDPU). The sensors (electron or ion) are grouped into pairs whose 6° × 180° fields-of-view (FOV) are set 90° apart. Each sensor is equipped with electrostatic aperture steering to allow the sensor to scan a 45° × 180° fan about its nominal viewing (0° deflection) direction. Each pair of sensors, known as the Dual Electron Spectrometer (DES) and the Dual Ion Spectrometer (DIS), occupies a quadrant on the MMS spacecraft and the combination of the eight electron/ion sensors, employing aperture steering, image the full-sky every 30-ms (electrons) and 150-ms (ions), respectively. To probe the diffusion regions of reconnection, the highest temporal/spatial resolution mode of FPI results in the DES complement of a given spacecraft generating 6.5-Mb s-1 of electron data while the DIS generates 1.1-Mb s-1 of ion data yielding an FPI total data rate of 7.6-Mb s-1. The FPI electron/ion data is collected by the IDPU then transmitted to the Central Instrument Data Processor (CIDP) on the spacecraft for science interest ranking. Only data sequences that contain the greatest amount of temporal/spatial structure will be intelligently down-linked by the spacecraft. Currently, the FPI data rate allocation to the CIDP is 1.5-Mb s-1. Consequently, the FPI-IDPU must employ data/image compression to meet this CIDP telemetry allocation. Here, we present simulations of the CCSDS 122.0-B-1 algorithm-based compression of the FPI-DES electron data. Compression analysis is based upon a seed of re-processed Cluster/PEACE electron measurements. Topics to be

  8. Parallel Algorithms for Monte Carlo Particle Transport Simulation on Exascale Computing Architectures

    NASA Astrophysics Data System (ADS)

    Romano, Paul Kollath

    Monte Carlo particle transport methods are being considered as a viable option for high-fidelity simulation of nuclear reactors. While Monte Carlo methods offer several potential advantages over deterministic methods, there are a number of algorithmic shortcomings that would prevent their immediate adoption for full-core analyses. In this thesis, algorithms are proposed both to ameliorate the degradation in parallel efficiency typically observed for large numbers of processors and to offer a means of decomposing large tally data that will be needed for reactor analysis. A nearest-neighbor fission bank algorithm was proposed and subsequently implemented in the OpenMC Monte Carlo code. A theoretical analysis of the communication pattern shows that the expected cost is O(sqrt(N)), whereas traditional fission bank algorithms are O(N) at best. The algorithm was tested on two supercomputers, the Intrepid Blue Gene/P and the Titan Cray XK7, and demonstrated nearly linear parallel scaling up to 163,840 processor cores on a full-core benchmark problem. An algorithm for reducing network communication arising from tally reduction was analyzed and implemented in OpenMC. The proposed algorithm groups the particle histories on a single processor into batches for tally purposes; in doing so, it prevents all network communication for tallies until the very end of the simulation. The algorithm was tested, again on a full-core benchmark, and shown to reduce network communication substantially. A model was developed to predict the impact of load imbalances on the performance of domain decomposed simulations. The analysis demonstrated that load imbalances in domain decomposed simulations arise from two distinct phenomena: non-uniform particle densities and non-uniform spatial leakage. The dominant performance penalty for domain decomposition was shown to come from these physical effects rather than insufficient network bandwidth or high latency. The model predictions were verified with

  9. Stochastic algorithms for the analysis of numerical flame simulations

    SciTech Connect

    Bell, John B.; Day, Marcus S.; Grcar, Joseph F.; Lijewski, Michael J.

    2004-04-26

    Recent progress in simulation methodologies and high-performance parallel computers have made it possible to perform detailed simulations of multidimensional reacting flow phenomena using comprehensive kinetics mechanisms. As simulations become larger and more complex, it becomes increasingly difficult to extract useful information from the numerical solution, particularly regarding the interactions of the chemical reaction and diffusion processes. In this paper we present a new diagnostic tool for analysis of numerical simulations of reacting flow. Our approach is based on recasting an Eulerian flow solution in a Lagrangian frame. Unlike a conventional Lagrangian view point that follows the evolution of a volume of the fluid, we instead follow specific chemical elements, e.g., carbon, nitrogen, etc., as they move through the system. From this perspective an ''atom'' is part of some molecule of a species that is transported through the domain by advection and diffusion. Reactions cause the atom to shift from one chemical host species to another and the subsequent transport of the atom is given by the movement of the new species. We represent these processes using a stochastic particle formulation that treats advection deterministically and models diffusion and chemistry as stochastic processes. In this paper, we discuss the numerical issues in detail and demonstrate that an ensemble of stochastic trajectories can accurately capture key features of the continuum solution. The capabilities of this diagnostic are then demonstrated by applications to study the modulation of carbon chemistry during a vortex-flame interaction, and the role of cyano chemistry in NO{sub x} production for a steady diffusion flame.

  10. Parallel algorithms for simulating continuous time Markov chains

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Heidelberger, Philip

    1992-01-01

    We have previously shown that the mathematical technique of uniformization can serve as the basis of synchronization for the parallel simulation of continuous-time Markov chains. This paper reviews the basic method and compares five different methods based on uniformization, evaluating their strengths and weaknesses as a function of problem characteristics. The methods vary in their use of optimism, logical aggregation, communication management, and adaptivity. Performance evaluation is conducted on the Intel Touchstone Delta multiprocessor, using up to 256 processors.
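
    For readers unfamiliar with uniformization, the construction is short: choose a rate Lam that dominates every exit rate of the chain, embed the chain in a Poisson process of rate Lam, and jump according to P = I + Q/Lam (self-loops allowed). The shared Poisson clock is what makes the technique attractive as a synchronization basis for parallel simulation. A serial sketch under these assumptions:

      import numpy as np

      rng = np.random.default_rng(0)

      def simulate_ctmc_uniformized(Q, state, t_end):
          # Simulate a CTMC with generator Q by uniformization. Assumes the chain
          # has at least one transition (so lam > 0).
          Q = np.asarray(Q, dtype=float)
          lam = max(-Q.diagonal())            # uniformization rate, lam >= |q_ii|
          P = np.eye(Q.shape[0]) + Q / lam    # one-step DTMC, rows sum to 1
          t = 0.0
          while True:
              t += rng.exponential(1.0 / lam)  # next event of the Poisson clock
              if t > t_end:
                  return state
              state = rng.choice(Q.shape[0], p=P[state])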

  11. Building a LiDAR point cloud simulator: Testing algorithms for high resolution topographic change

    NASA Astrophysics Data System (ADS)

    Carrea, Dario; Abellán, Antonio; Derron, Marc-Henri; Jaboyedoff, Michel

    2014-05-01

    (erosion, landslide monitoring, etc.) and we then tested the use of filtering techniques using 3D moving windows along space and time, which considerably reduces data scattering thanks to the benefits of data redundancy. In conclusion, the simulator allowed us to improve our different algorithms and to understand how instrumental error affects the final results. It also helped us improve the scan acquisition methodology, to find the best compromise between point density, positioning, and acquisition time while keeping the accuracy needed to characterize topographic change.

  12. Registration of range data using a hybrid simulated annealing and iterative closest point algorithm

    SciTech Connect

    Luck, Jason; Little, Charles Q.; Hoff, William

    2000-04-17

    The need to register data is abundant in applications such as: world modeling, part inspection and manufacturing, object recognition, pose estimation, robotic navigation, and reverse engineering. Registration occurs by aligning the regions that are common to multiple images. The largest difficulty in performing this registration is dealing with outliers and local minima while remaining efficient. A commonly used technique, iterative closest point, is efficient but is unable to deal with outliers or avoid local minima. Another commonly used optimization algorithm, simulated annealing, is effective at dealing with local minima but is very slow. Therefore, the algorithm developed in this paper is a hybrid algorithm that combines the speed of iterative closest point with the robustness of simulated annealing. Additionally, a robust error function is incorporated to deal with outliers. This algorithm is incorporated into a complete modeling system that inputs two sets of range data, registers the sets, and outputs a composite model.
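
    The iterative-closest-point half of such a hybrid is compact enough to sketch. Below is one generic ICP step in Python (nearest-neighbor matching plus a Kabsch rigid fit); the paper's algorithm would wrap steps like this in a simulated-annealing acceptance loop with a robust error function, which is omitted here.

      import numpy as np
      from scipy.spatial import cKDTree

      def icp_step(src, dst_tree, dst):
          # One ICP step: match each source point to its nearest destination point,
          # then solve the best rigid rotation/translation (Kabsch algorithm).
          _, idx = dst_tree.query(src)
          d = dst[idx]
          cs, cd = src.mean(0), d.mean(0)
          H = (src - cs).T @ (d - cd)
          U, _, Vt = np.linalg.svd(H)
          R = Vt.T @ U.T
          if np.linalg.det(R) < 0:          # guard against reflections
              Vt[-1] *= -1
              R = Vt.T @ U.T
          return (src - cs) @ R.T + cd      # transformed source points

      # Usage sketch: tree = cKDTree(dst); src = icp_step(src, tree, dst) iterated
      # until the residual stops decreasing.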

  13. Advanced time integration algorithms for dislocation dynamics simulations of work hardening

    NASA Astrophysics Data System (ADS)

    Sills, Ryan B.; Aghaei, Amin; Cai, Wei

    2016-05-01

    Efficient time integration is a necessity for dislocation dynamics simulations of work hardening to achieve experimentally relevant strains. In this work, an efficient time integration scheme using a high order explicit method with time step subcycling and a newly-developed collision detection algorithm are evaluated. First, time integrator performance is examined for an annihilating Frank–Read source, showing the effects of dislocation line collision. The integrator with subcycling is found to significantly outperform other integration schemes. The performance of the time integration and collision detection algorithms is then tested in a work hardening simulation. The new algorithms show a 100-fold speed-up relative to traditional schemes. Subcycling is shown to improve efficiency significantly while maintaining an accurate solution, and the new collision algorithm allows an arbitrarily large time step size without missing collisions.
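
    To make the subcycling idea concrete, here is a deliberately simplified Python sketch in which a flagged "fast" subset of degrees of freedom is advanced with m substeps per global step; the paper's high-order integrator and collision detection are not reproduced, and the force partitioning here is an illustrative assumption.

      import numpy as np

      def step_with_subcycling(x, v, force, dt, fast, m=10):
          # One explicit step: slow DOFs take a single step of size dt, while the
          # flagged fast DOFs (e.g. segments near a collision) take m steps of dt/m.
          a = force(x)
          slow = ~fast
          x[slow] += v[slow] * dt + 0.5 * a[slow] * dt**2
          v[slow] += a[slow] * dt
          h = dt / m
          for _ in range(m):                 # subcycle the fast subset
              a = force(x)
              x[fast] += v[fast] * h + 0.5 * a[fast] * h**2
              v[fast] += a[fast] * h
          return x, v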

  14. GENESIS: a hybrid-parallel and multi-scale molecular dynamics simulator with enhanced sampling algorithms for biomolecular and cellular simulations

    PubMed Central

    Jung, Jaewoon; Mori, Takaharu; Kobayashi, Chigusa; Matsunaga, Yasuhiro; Yoda, Takao; Feig, Michael; Sugita, Yuji

    2015-01-01

    GENESIS (Generalized-Ensemble Simulation System) is a new software package for molecular dynamics (MD) simulations of macromolecules. It has two MD simulators, called ATDYN and SPDYN. ATDYN is parallelized based on an atomic decomposition algorithm for the simulations of all-atom force-field models as well as coarse-grained Go-like models. SPDYN is highly parallelized based on a domain decomposition scheme, allowing large-scale MD simulations on supercomputers. Hybrid schemes combining OpenMP and MPI are used in both simulators to target modern multicore computer architectures. Key advantages of GENESIS are (1) the highly parallel performance of SPDYN for very large biological systems consisting of more than one million atoms and (2) the availability of various REMD algorithms (T-REMD, REUS, multi-dimensional REMD for both all-atom and Go-like models under the NVT, NPT, NPAT, and NPγT ensembles). The former is achieved by a combination of the midpoint cell method and the efficient three-dimensional Fast Fourier Transform algorithm, where the domain decomposition space is shared in real-space and reciprocal-space calculations. Other features in SPDYN, such as avoiding concurrent memory access, reducing communication times, and usage of parallel input/output files, also contribute to the performance. We show the REMD simulation results of a mixed (POPC/DMPC) lipid bilayer as a real application using GENESIS. GENESIS is released as free software under the GPLv2 licence and can be easily modified for the development of new algorithms and molecular models. WIREs Comput Mol Sci 2015, 5:310–323. doi: 10.1002/wcms.1220 PMID:26753008
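
    The acceptance test at the heart of the T-REMD capability mentioned above is one line of Metropolis logic: neighboring replicas at inverse temperatures beta_i and beta_j swap configurations with probability min(1, exp[(beta_i - beta_j)(E_i - E_j)]). A minimal sketch (generic REMD bookkeeping, not GENESIS's own API):

      import math
      import random

      def attempt_swap(beta_i, beta_j, E_i, E_j):
          # Metropolis criterion for exchanging neighboring temperature replicas.
          delta = (beta_i - beta_j) * (E_i - E_j)
          return delta >= 0 or random.random() < math.exp(delta)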

  15. Algorithm for calculating turbine cooling flow and the resulting decrease in turbine efficiency

    NASA Technical Reports Server (NTRS)

    Gauntner, J. W.

    1980-01-01

    An algorithm is presented for calculating both the quantity of compressor bleed flow required to cool the turbine and the decrease in turbine efficiency caused by the injection of cooling air into the gas stream. The algorithm, which is intended for an axial flow, air routine in a properly written thermodynamic cycle code. Ten different cooling configurations are available for each row of cooled airfoils in the turbine. Results from the algorithm are substantiated by comparison with flows predicted by major engine manufacturers for given bulk metal temperatures and given cooling configurations. A list of definitions for the terms in the subroutine is presented.

  16. Grover search algorithm with Rydberg-blockaded atoms: quantum Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Petrosyan, David; Saffman, Mark; Mølmer, Klaus

    2016-05-01

    We consider the Grover search algorithm implementation for a quantum register of size N = 2^k using k (or k+1) microwave- and laser-driven Rydberg-blockaded atoms, following the proposal by Mølmer et al (2011 J. Phys. B 44 184016). We suggest some simplifications for the microwave and laser couplings, and analyze the performance of the algorithm for up to k = 4 multilevel atoms under realistic experimental conditions using quantum stochastic (Monte Carlo) wavefunction simulations.
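
    Independently of the Rydberg-blockade implementation studied in the record, the algorithm itself is easy to simulate classically at this size; the sketch below runs Grover search on an N = 2^k statevector with the standard ~(pi/4)*sqrt(N) iterations.

      import numpy as np

      def grover(n_qubits, marked):
          # Statevector simulation of Grover search for one marked item.
          N = 2 ** n_qubits
          psi = np.full(N, 1.0 / np.sqrt(N))           # uniform superposition
          for _ in range(int(round(np.pi / 4 * np.sqrt(N)))):
              psi[marked] *= -1.0                      # oracle: flip marked amplitude
              psi = 2.0 * psi.mean() - psi             # diffusion: invert about mean
          return np.argmax(psi ** 2)                   # most probable measurement

      print(grover(4, marked=11))   # recovers index 11 with high probability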

  17. A syncopated leap-frog algorithm for orbit consistent plasma simulation of materials processing reactors

    SciTech Connect

    Cobb, J.W.; Leboeuf, J.N.

    1994-10-01

    The authors present a particle algorithm to extend simulation capabilities for plasma-based materials processing reactors. The orbit integrator uses a syncopated leap-frog algorithm in cylindrical coordinates, which maintains second order accuracy and minimizes computational complexity. Plasma source terms are accumulated orbit-consistently, directly in the frequency and azimuthal-mode domains. Finally, the authors discuss the numerical analysis of this algorithm. Orbit consistency greatly reduces the computational cost for a given level of precision, and the computational cost is independent of the degree of time scale separation.

  18. Shuttle Entry Air Data System (SEADS) - Optimization of preflight algorithms based on flight results

    NASA Technical Reports Server (NTRS)

    Wolf, H.; Henry, M. W.; Siemers, Paul M., III

    1988-01-01

    The SEADS pressure model algorithm results were tested against other sources of air data, in particular, the Shuttle Best Estimated Trajectory (BET). The algorithm basis was also tested through a comparison of flight-measured pressure distribution vs the wind tunnel database. It is concluded that the successful flight of SEADS and the subsequent analysis of the data shows good agreement between BET and SEADS air data.

  19. Simulation of Biochemical Pathway Adaptability Using Evolutionary Algorithms

    SciTech Connect

    Bosl, W J

    2005-01-26

    The systems approach to genomics seeks quantitative and predictive descriptions of cells and organisms. However, both the theoretical and experimental methods necessary for such studies still need to be developed. We are far from understanding even the simplest collective behavior of biomolecules, cells or organisms. A key aspect of all biological problems, including environmental microbiology, evolution of infectious diseases, and the adaptation of cancer cells, is the evolvability of genomes. This is particularly important for Genomes to Life missions, which tend to focus on the prospect of engineering microorganisms to achieve desired goals in environmental remediation, climate change mitigation, and energy production. All of these will require quantitative tools for understanding the evolvability of organisms. Laboratory biodefense goals will need quantitative tools for predicting complicated host-pathogen interactions and finding counter-measures. In this project, we seek to develop methods to simulate how external and internal signals cause the genetic apparatus to adapt and organize to produce complex biochemical systems to achieve survival. This project is specifically directed toward building a computational methodology for simulating the adaptability of genomes. This project investigated the feasibility of using a novel quantitative approach to studying the adaptability of genomes and biochemical pathways. This effort was intended to be the preliminary part of a larger, long-term effort between key leaders in computational and systems biology at Harvard University and LLNL, with Dr. Bosl as the lead PI. Scientific goals for the long-term project include the development and testing of new hypotheses to explain the observed adaptability of yeast biochemical pathways when the myosin-II gene is deleted and the development of a novel data-driven evolutionary computation as a way to connect exploratory computational simulation with hypothesis

  20. Monte Carlo simulation using the PENELOPE code with an ant colony algorithm to study MOSFET detectors.

    PubMed

    Carvajal, M A; García-Pareja, S; Guirado, D; Vilches, M; Anguiano, M; Palma, A J; Lallena, A M

    2009-10-21

    In this work we have developed a simulation tool, based on the PENELOPE code, to study the response of MOSFET devices to irradiation with high-energy photons. The energy deposited in the extremely thin silicon dioxide layer has been calculated. To reduce the statistical uncertainties, an ant colony algorithm has been implemented to drive the application of splitting and Russian roulette as variance reduction techniques. In this way, the uncertainty has been reduced by a factor of approximately 5, while the efficiency is increased by a factor of more than 20. As an application, we have studied the dependence of the response of the pMOS transistor 3N163, used as a dosimeter, on the incidence angle of the radiation for three common photon sources used in radiotherapy: a (60)Co Theratron-780 and the 6 and 18 MV beams produced by a Mevatron KDS LINAC. Experimental and simulated results have been obtained for gantry angles of 0 degrees, 15 degrees, 30 degrees, 45 degrees, 60 degrees and 75 degrees. The agreement obtained has permitted validation of the simulation tool. We have studied how to reduce the angular dependence of the MOSFET response by using an additional encapsulation made of brass in the case of the two LINAC qualities considered. PMID:19794247
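
    Splitting and Russian roulette, the two variance-reduction techniques the ant colony algorithm drives here, share one weight-preserving bookkeeping rule. A generic sketch follows, with illustrative thresholds and without the ant-colony importance map, which is the paper's contribution:

      import random

      def roulette_or_split(weight, importance, w_low=0.25, w_high=2.0):
          # Returns the list of surviving particle weights. Splitting and roulette
          # both preserve the expected total weight.
          w = weight * importance
          if w > w_high:                       # split into n copies of weight w/n
              n = int(w / w_high) + 1
              return [w / n] * n
          if w < w_low:                        # roulette: survive with prob w/w_low
              if random.random() < w / w_low:
                  return [w_low]
              return []                        # particle killed
          return [w]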

  1. A sweep algorithm for massively parallel simulation of circuit-switched networks

    NASA Technical Reports Server (NTRS)

    Gaujal, Bruno; Greenberg, Albert G.; Nicol, David M.

    1992-01-01

    A new massively parallel algorithm is presented for simulating large asymmetric circuit-switched networks, controlled by a randomized-routing policy that includes trunk-reservation. A single instruction multiple data (SIMD) implementation is described, and corresponding experiments on a 16384 processor MasPar parallel computer are reported. A multiple instruction multiple data (MIMD) implementation is also described, and corresponding experiments on an Intel IPSC/860 parallel computer, using 16 processors, are reported. By exploiting parallelism, our algorithm increases the possible execution rate of such complex simulations by as much as an order of magnitude.

  2. Multiscale stochastic simulation algorithm with stochastic partial equilibrium assumption for chemically reacting systems

    SciTech Connect

    Cao, Yang (e-mail: ycao@cs.ucsb.edu); Gillespie, Dan (e-mail: GillespieDT@mailaps.org); Petzold, Linda (e-mail: petzold@engineering.ucsb.edu)

    2005-07-01

    In this paper, we introduce a multiscale stochastic simulation algorithm (MSSA) which makes use of Gillespie's stochastic simulation algorithm (SSA) together with a new stochastic formulation of the partial equilibrium assumption (PEA). This method is much more efficient than SSA alone. It works even with a very small population of fast species. Implementation details are discussed, and an application to the modeling of the heat shock response of E. coli is presented which demonstrates the excellent efficiency and accuracy obtained with the new method.
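
    For context, Gillespie's direct-method SSA, which MSSA builds on, fits in a few lines: sample the time to the next reaction from an exponential with total propensity a0, then choose which reaction fires in proportion to its propensity. The sketch below shows plain SSA only; the stochastic partial-equilibrium treatment of fast reactions is the paper's contribution and is omitted.

      import numpy as np

      rng = np.random.default_rng(1)

      def ssa(x, stoich, propensity, t_end):
          # Gillespie direct method: x is the species count vector, stoich[j] the
          # state change of reaction j, propensity(x) the vector of reaction rates.
          t = 0.0
          while True:
              a = propensity(x)
              a0 = a.sum()
              if a0 == 0.0:
                  return x                       # no reactions can fire
              t += rng.exponential(1.0 / a0)     # time to next reaction
              if t > t_end:
                  return x
              j = rng.choice(len(a), p=a / a0)   # which reaction fires
              x = x + stoich[j]

      # Hypothetical example: isomerization A <-> B with rates 1.0 and 0.5.
      x_final = ssa(np.array([100, 0]),
                    stoich=np.array([[-1, 1], [1, -1]]),
                    propensity=lambda x: np.array([1.0 * x[0], 0.5 * x[1]]),
                    t_end=10.0)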

  3. The Moment Condensed History Algorithm for Monte Carlo Electron Transport Simulations

    SciTech Connect

    Tolar, D R; Larsen, E W

    2001-02-27

    We introduce a new Condensed History algorithm for the Monte Carlo simulation of electron transport. To obtain more accurate simulations, the new algorithm preserves the mean position and the variance in the mean position exactly for electrons that have traveled a given path length and are traveling in a given direction. This is accomplished by deriving the zeroth-, first-, and second-order spatial moments of the Spencer-Lewis equation and employing this information directly in the Condensed History process. Numerical calculations demonstrate the advantages of our method over standard Condensed History methods.

  4. Improving Efficiency in SMD Simulations Through a Hybrid Differential Relaxation Algorithm.

    PubMed

    Ramírez, Claudia L; Zeida, Ari; Jara, Gabriel E; Roitberg, Adrián E; Martí, Marcelo A

    2014-10-14

    The fundamental object for studying a (bio)chemical reaction obtained from simulations is the free energy profile, which can be directly related to experimentally determined properties. Although quite accurate hybrid quantum (DFT based)-classical methods are available, achieving statistically accurate and well converged results at a moderate computational cost is still an open challenge. Here, we present and thoroughly test a hybrid differential relaxation algorithm (HyDRA), which allows faster equilibration of the classical environment during the nonequilibrium steering of a (bio)chemical reaction. We show and discuss why (in the context of Jarzynski's Relationship) this method allows obtaining accurate free energy profiles with a smaller number of independent trajectories and/or faster pulling speeds, thus reducing the overall computational cost. Moreover, due to the availability and straightforward implementation of the method, we expect that it will foster theoretical studies of key enzymatic processes. PMID:26588154
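
    The estimator underlying this approach is Jarzynski's equality, DF = -kT ln <exp(-W/kT)>, averaged over nonequilibrium work samples W. A numerically safe sketch using the log-sum-exp trick (generic estimator, not the HyDRA machinery itself):

      import numpy as np

      def jarzynski_free_energy(work, kT):
          # DF = -kT * [log(sum(exp(-W/kT))) - log(n)]; the logaddexp reduction
          # avoids underflow when work values are large relative to kT.
          w = np.asarray(work) / kT
          return -kT * (np.logaddexp.reduce(-w) - np.log(len(w)))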

  5. Algorithmic Extensions of Low-Dispersion Scheme and Modeling Effects for Acoustic Wave Simulation. Revised

    NASA Technical Reports Server (NTRS)

    Kaushik, Dinesh K.; Baysal, Oktay

    1997-01-01

    Accurate computation of acoustic wave propagation may be more efficiently performed when their dispersion relations are considered. Consequently, computational algorithms which attempt to preserve these relations have been gaining popularity in recent years. In the present paper, the extensions to one such scheme are discussed. By solving the linearized, 2-D Euler and Navier-Stokes equations with such a method for the acoustic wave propagation, several issues were investigated. Among them were higher-order accuracy, choice of boundary conditions and differencing stencils, effects of viscosity, low-storage time integration, generalized curvilinear coordinates, periodic series, their reflections and interference patterns from a flat wall and scattering from a circular cylinder. The results were found to be promising en route to the aeroacoustic simulations of realistic engineering problems.

  6. Hybrid-PIC Algorithms for Simulation of Large-Scale Plasma Jet Accelerators

    NASA Astrophysics Data System (ADS)

    Thoma, Carsten; Welch, Dale

    2009-11-01

    Merging coaxial plasma jets are envisioned for use in magneto-inertial fusion schemes as the source of an imploding plasma liner. An experimental program at HyperV is considering the generation of large plasma jets (length scales on the order of centimeters) at high densities (10^16-10^17 cm-3) in long coaxial accelerators. We describe the Hybrid particle-in-cell (PIC) methods implemented in the code LSP for this parameter regime and present simulation results of the HyperV accelerator. A radiation transport algorithm has also been implemented into LSP so that the effect of radiation cooling on the jet mach number can be included self-consistently into the Hybrid PIC formalism.

  7. [The utility boiler low NOx combustion optimization based on ANN and simulated annealing algorithm].

    PubMed

    Zhou, Hao; Qian, Xinping; Zheng, Ligang; Weng, Anxin; Cen, Kefa

    2003-11-01

    With increasingly strict environmental protection requirements, more attention has been paid to low-NOx combustion optimization because it is inexpensive and easy to implement. In this work, field experiments on the NOx emission characteristics of a 600 MW coal-fired boiler were carried out. On the basis of artificial neural network (ANN) modeling, the simulated annealing (SA) algorithm was employed to optimize the boiler combustion to achieve a low NOx emission concentration, and the corresponding combustion scheme was obtained. Two sets of SA parameters were adopted to find a better SA scheme; the results show that the parameters T0 = 50 K, alpha = 0.6 lead to a better optimization process. This work lays the foundation for on-line control of low-NOx boiler combustion. PMID:14768567

  8. A comprehensive performance evaluation on the prediction results of existing cooperative transcription factors identification algorithms

    PubMed Central

    2014-01-01

    Background Eukaryotic transcriptional regulation is known to be highly connected through the networks of cooperative transcription factors (TFs). Measuring the cooperativity of TFs is helpful for understanding the biological relevance of these TFs in regulating genes. The recent advances in computational techniques led to various predictions of cooperative TF pairs in yeast. As each algorithm integrated different data resources and was developed based on different rationales, it possessed its own merit and claimed to outperform others. However, the claim was prone to subjectivity because each algorithm compared with only a few other algorithms and only used a small set of performance indices for comparison. This motivated us to propose a series of indices to objectively evaluate the prediction performance of existing algorithms. And based on the proposed performance indices, we conducted a comprehensive performance evaluation. Results We collected 14 sets of predicted cooperative TF pairs (PCTFPs) in yeast from 14 existing algorithms in the literature. Using the eight performance indices we adopted/proposed, the cooperativity of each PCTFP was measured and a ranking score according to the mean cooperativity of the set was given to each set of PCTFPs under evaluation for each performance index. It was seen that the ranking scores of a set of PCTFPs vary with different performance indices, implying that an algorithm used in predicting cooperative TF pairs has strengths in some respects but may have weaknesses in others. We finally made a comprehensive ranking for these 14 sets. The results showed that Wang J's study obtained the best performance evaluation on the prediction of cooperative TF pairs in yeast. Conclusions In this study, we adopted/proposed eight performance indices to make a comprehensive performance evaluation on the prediction results of 14 existing cooperative TFs identification algorithms. Most importantly, these proposed indices can be easily applied to

  9. Physical formulation and numerical algorithm for simulating N immiscible incompressible fluids involving general order parameters

    SciTech Connect

    Dong, S.

    2015-02-15

    We present a family of physical formulations, and a numerical algorithm, based on a class of general order parameters for simulating the motion of a mixture of N (N⩾2) immiscible incompressible fluids with given densities, dynamic viscosities, and pairwise surface tensions. The N-phase formulations stem from a phase field model we developed in a recent work based on the conservations of mass/momentum, and the second law of thermodynamics. The introduction of general order parameters leads to an extremely strongly-coupled system of (N−1) phase field equations. On the other hand, the general form enables one to compute the N-phase mixing energy density coefficients in an explicit fashion in terms of the pairwise surface tensions. We show that the increased complexity in the form of the phase field equations associated with general order parameters in actuality does not cause essential computational difficulties. Our numerical algorithm reformulates the (N−1) strongly-coupled phase field equations for general order parameters into 2(N−1) Helmholtz-type equations that are completely de-coupled from one another. This leads to a computational complexity comparable to that for the simplified phase field equations associated with certain special choice of the order parameters. We demonstrate the capabilities of the method developed herein using several test problems involving multiple fluid phases and large contrasts in densities and viscosities among the multitude of fluids. In particular, by comparing simulation results with the Langmuir–de Gennes theory of floating liquid lenses we show that the method using general order parameters produces physically accurate results for multiple fluid phases.

  10. A Comparison of Lung Nodule Segmentation Algorithms: Methods and Results from a Multi-institutional Study.

    PubMed

    Kalpathy-Cramer, Jayashree; Zhao, Binsheng; Goldgof, Dmitry; Gu, Yuhua; Wang, Xingwei; Yang, Hao; Tan, Yongqiang; Gillies, Robert; Napel, Sandy

    2016-08-01

    Tumor volume estimation, as well as accurate and reproducible segmentation of tumor borders in medical images, is important in the diagnosis, staging, and assessment of response to cancer therapy. The goal of this study was to demonstrate the feasibility of a multi-institutional effort to assess the repeatability and reproducibility of nodule borders and the volume estimate bias of computerized segmentation algorithms in CT images of lung cancer, and to provide results from such a study. The dataset used for this evaluation consisted of 52 tumors in 41 CT volumes (40 patient datasets and 1 dataset containing scans of 12 phantom nodules of known volume) from five collections available in The Cancer Imaging Archive. Three academic institutions developing lung nodule segmentation algorithms submitted results for three repeat runs for each of the nodules. We compared the performance of lung nodule segmentation algorithms by assessing several measurements of spatial overlap and volume measurement. Nodule sizes varied from 29 μl to 66 ml and demonstrated a diversity of shapes. Agreement in spatial overlap of segmentations was significantly higher for multiple runs of the same algorithm than between segmentations generated by different algorithms (p < 0.05) and was significantly higher on the phantom dataset compared to the other datasets (p < 0.05). Algorithms differed significantly in the bias of the measured volumes of the phantom nodules (p < 0.05), underscoring the need for assessing performance on clinical data in addition to phantoms. Algorithms that most accurately estimated nodule volumes were not the most repeatable, emphasizing the need to evaluate both their accuracy and precision. There were considerable differences between algorithms, especially in a subset of heterogeneous nodules, underscoring the recommendation that the same software be used at all time points in longitudinal studies. PMID:26847203
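
    A typical ingredient of the spatial-overlap comparisons reported above is the Dice coefficient; the study's exact metrics may differ, but a minimal version is:

      import numpy as np

      def dice(seg_a, seg_b):
          # Dice coefficient between two binary segmentation masks:
          # 1.0 means perfect spatial overlap, 0.0 means none.
          a, b = np.asarray(seg_a, bool), np.asarray(seg_b, bool)
          return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())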

  11. Clustering of tethered satellite system simulation data by an adaptive neuro-fuzzy algorithm

    NASA Technical Reports Server (NTRS)

    Mitra, Sunanda; Pemmaraju, Surya

    1992-01-01

    Recent developments in neuro-fuzzy systems indicate that the concepts of adaptive pattern recognition, when used to identify appropriate control actions corresponding to clusters of patterns representing system states in dynamic nonlinear control systems, may result in innovative designs. A modular, unsupervised neural network architecture, in which fuzzy learning rules have been embedded, is used for on-line identification of similar states. The architecture and control rules involved in Adaptive Fuzzy Leader Clustering (AFLC) allow this system to be incorporated in control systems for identification of system states corresponding to specific control actions. We have used this algorithm to cluster the simulation data of the Tethered Satellite System (TSS) to estimate the range of delta voltages necessary to maintain the desired length rate of the tether. The AFLC algorithm is capable of on-line estimation of the appropriate control voltages from the corresponding length error and length rate error without a priori knowledge of their membership functions and familiarity with the behavior of the Tethered Satellite System.

  12. A JFNK-based implicit moment algorithm for self-consistent, multi-scale, plasma simulation

    NASA Astrophysics Data System (ADS)

    Knoll, Dana; Taitano, William; Chacon, Luis

    2010-11-01

    The Jacobian-Free Newton-Krylov (JFNK) method is an advanced non-linear algorithm that allows solution of coupled systems of non-linear equations [1]. In [2] we have put forward a JFNK-based implicit, consistent time integration algorithm and demonstrated its ability to efficiently step over electron time scales while retaining electron kinetic effects on the ion time scale. Here we extend this work by investigating a JFNK-based implicit-moments approach for the purpose of consistent scale-bridging between the fluid description and the kinetic description in order to resolve the transition region. Our preliminary results, based on a reformulated Poisson's equation (RPE) [3], allow solution of the Vlasov-Poisson system for varying grid resolutions. In the limit of a locally coarse grid (grid spacing large compared to the Debye length), the RPE represents an electric field based on the moment system, while in the limit of the local grid spacing resolving the Debye length, the RPE represents an electric field based on the standard Poisson equation. The technique allows smooth transition between the two regimes, consistently, in one simulation. [1] D.A. Knoll and D.E. Keyes, J. Comput. Phys., vol. 193 (2004) [2] W.T. Taitano, Masters Thesis, Nuclear Engineering, University of Idaho (2010) [3] R. Belaouar, N. Crouseilles and P. Degond, J. Sci. Comput., vol. 41 (2009)
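
    The defining trick of JFNK is that the Krylov solver never needs the Jacobian matrix, only its action on a vector, which a finite difference supplies. A minimal Newton step under that approximation is sketched below (fixed perturbation eps for simplicity; production codes scale it with the norms of u and v):

      import numpy as np
      from scipy.sparse.linalg import LinearOperator, gmres

      def jfnk_step(F, u, eps=1e-7):
          # One Newton step: J(u) v is approximated by (F(u + eps*v) - F(u)) / eps,
          # so the Jacobian is never formed; GMRES solves J du = -F(u).
          Fu = F(u)
          def Jv(v):
              return (F(u + eps * v) - Fu) / eps
          J = LinearOperator((u.size, u.size), matvec=Jv)
          du, info = gmres(J, -Fu)   # info == 0 on successful convergence
          return u + du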

  13. A memory structure adapted simulated annealing algorithm for a green vehicle routing problem.

    PubMed

    Küçükoğlu, İlker; Ene, Seval; Aksoy, Aslı; Öztürk, Nursel

    2015-03-01

    Currently, reduction of carbon dioxide (CO2) emissions and fuel consumption has become a critical environmental problem and has attracted the attention of both academia and the industrial sector. Government regulations and customer demands are making environmental responsibility an increasingly important factor in overall supply chain operations. Within these operations, transportation has the most hazardous effects on the environment, i.e., CO2 emissions, fuel consumption, noise and toxic effects on the ecosystem. This study aims to construct vehicle routes with time windows that minimize the total fuel consumption and CO2 emissions. The green vehicle routing problem with time windows (G-VRPTW) is formulated using a mixed integer linear programming model. A memory structure adapted simulated annealing (MSA-SA) meta-heuristic algorithm is constructed due to the high complexity of the proposed problem and long solution times for practical applications. The proposed models are integrated with a fuel consumption and CO2 emissions calculation algorithm that considers the vehicle technical specifications, vehicle load, and transportation distance in a green supply chain environment. The proposed models are validated using well-known instances with different numbers of customers. The computational results indicate that the MSA-SA heuristic is capable of obtaining good G-VRPTW solutions within a reasonable amount of time by providing reductions in fuel consumption and CO2 emissions. PMID:25056743

  14. A novel coupling of noise reduction algorithms for particle flow simulations

    NASA Astrophysics Data System (ADS)

    Zimoń, M. J.; Reese, J. M.; Emerson, D. R.

    2016-09-01

    Proper orthogonal decomposition (POD) and its extension based on time-windows have been shown to greatly improve the effectiveness of recovering smooth ensemble solutions from noisy particle data. However, to successfully de-noise any molecular system, a large number of measurements still need to be provided. In order to achieve a better efficiency in processing time-dependent fields, we have combined POD with a well-established signal processing technique, wavelet-based thresholding. In this novel hybrid procedure, the wavelet filtering is applied within the POD domain and referred to as WAVinPOD. The algorithm exhibits promising results when applied to both synthetically generated signals and particle data. In this work, the simulations compare the performance of our new approach with standard POD or wavelet analysis in extracting smooth profiles from noisy velocity and density fields. Numerical examples include molecular dynamics and dissipative particle dynamics simulations of unsteady force- and shear-driven liquid flows, as well as a phase separation phenomenon. Simulation results confirm that WAVinPOD preserves the dimensionality reduction obtained using POD, while improving its filtering properties through the sparse representation of data in a wavelet basis. This paper shows that WAVinPOD outperforms the other estimators for both synthetically generated signals and particle-based measurements, achieving a higher signal-to-noise ratio from a smaller number of samples. The new filtering methodology offers significant computational savings, particularly for multi-scale applications seeking to couple continuum information with atomistic models. It is the first time that a rigorous analysis has compared de-noising techniques for particle-based fluid simulations.
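
    A rough sketch of the WAVinPOD idea, under stated assumptions: take the leading POD modes of the snapshot matrix via an SVD, soft-threshold the temporal coefficients in a wavelet basis (a universal threshold with a finest-scale noise estimate is assumed here), and reconstruct. The paper's exact filtering rules may differ.

      import numpy as np
      import pywt

      def wav_in_pod(snapshots, rank, wavelet="db4"):
          # snapshots: (space, time) matrix of noisy fields.
          U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
          coeffs = s[:rank, None] * Vt[:rank]              # POD temporal coefficients
          denoised = []
          for c in coeffs:
              cs = pywt.wavedec(c, wavelet)
              sigma = np.median(np.abs(cs[-1])) / 0.6745   # finest-scale noise estimate
              thr = sigma * np.sqrt(2.0 * np.log(len(c)))  # universal threshold
              cs = [cs[0]] + [pywt.threshold(d, thr, mode="soft") for d in cs[1:]]
              denoised.append(pywt.waverec(cs, wavelet)[: len(c)])
          return U[:, :rank] @ np.array(denoised)          # filtered (space, time) field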

  15. New Cirrus Retrieval Algorithms and Results from eMAS during SEAC4RS

    NASA Astrophysics Data System (ADS)

    Holz, R.; Platnick, S. E.; Meyer, K.; Wang, C.; Wind, G.; Arnold, T.; King, M. D.; Yorks, J. E.; McGill, M. J.

    2014-12-01

    The enhanced MODIS Airborne Simulator (eMAS) scanning imager was flown on the ER-2 during the SEAC4RS field campaign. The imager provides measurements in 38 spectral channels from the visible into the 13 μm CO2 absorption bands, at approximately 25 m nadir spatial resolution at cirrus altitudes and with a swath width of about 18 km, and provided substantial context and synergy for other ER-2 cirrus observations. The eMAS is an update to the original MAS scanner, having new midwave and IR spectrometers coupled with the previous VNIR/SWIR spectrometers. In addition to the standard MODIS-like cloud retrieval algorithm (MOD06/MYD06 for MODIS Terra/Aqua, respectively) that provides cirrus optical thickness (COT) and effective particle radius (CER) from several channel combinations, three new algorithms were developed to take advantage of unique aspects of eMAS and/or other ER-2 observations. The first uses a combination of two solar reflectance channels within the 1.88 μm water vapor absorption band, each with significantly different single scattering albedo, allowing for simultaneous COT and CER retrievals. The advantage of this algorithm is that the strong water vapor absorption can significantly reduce the sensitivity to lower level clouds and ocean/land surface properties, thus better isolating cirrus properties. A second algorithm uses a suite of infrared channels in an optimal estimation algorithm to simultaneously retrieve COT, CER, and cloud-top pressure/temperature. Finally, a window IR algorithm is used to retrieve COT in synergy with the ER-2 Cloud Physics Lidar (CPL) cloud top/base boundary measurements. Using a variety of quantifiable error sources, uncertainties for all eMAS retrievals will be shown along with comparisons with CPL COT retrievals.

  16. Quantum algorithms for spin models and simulable gate sets for quantum computation

    NASA Astrophysics Data System (ADS)

    van den Nest, M.; Dür, W.; Raussendorf, R.; Briegel, H. J.

    2009-11-01

    We present simple mappings between classical lattice models and quantum circuits, which provide a systematic formalism for obtaining quantum algorithms that approximate partition functions of lattice models in certain complex-parameter regimes. For example, we present an efficient quantum algorithm for the six-vertex model as well as a two-dimensional Ising-type model. We show that classically simulating these (complex-parameter) spin models is as hard as simulating universal quantum computation, i.e., BQP complete (BQP denotes bounded-error quantum polynomial time). Furthermore, our mappings provide a framework for obtaining efficiently simulable quantum gate sets from exactly solvable classical models. For example, we show that the simulability of Valiant's match gates can be recovered by using the solvability of the free-fermion eight-vertex model.

  17. Simulation of agronomic images for an automatic evaluation of crop/ weed discrimination algorithm accuracy

    NASA Astrophysics Data System (ADS)

    Jones, G.; Gée, Ch.; Truchetet, F.

    2007-01-01

    In the context of precision agriculture, we present a robust and automatic method, based on simulated images, for evaluating the efficiency of any crop/weed discrimination algorithm in terms of the inter-row weed infestation rate. Simulating these images requires two steps: 1) modeling a crop field from the spatial distribution of plants (crop and weed); 2) projecting the created field through an optical system to simulate photographing. An application is then proposed that investigates the accuracy and robustness of a crop/weed discrimination algorithm combining line detection (a Hough transform) with plant discrimination (crop versus weeds). The accuracy of the weed infestation rate estimate for each image is calculated by direct comparison with the initial weed infestation rate of the simulated images. It reveals a performance better than 85%.
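
    For illustration, the row-detection stage can be sketched with OpenCV's Hough transform on a binarized vegetation image; the excess-green index, thresholds, and file name below are common choices assumed here, not the paper's exact pipeline.

    ```python
    import cv2
    import numpy as np

    bgr = cv2.imread("simulated_field.png")             # hypothetical simulated image
    b, g, r = cv2.split(bgr.astype(np.float32))
    exg = 2.0 * g - r - b                               # excess-green vegetation index
    veg = (exg > exg.mean()).astype(np.uint8) * 255     # crude vegetation mask

    # Crop rows appear as straight lines; vegetation far from them counts as weed
    lines = cv2.HoughLines(veg, rho=1, theta=np.pi / 180, threshold=200)
    ```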

  18. Comparative Study of Algorithms for the Numerical Simulation of Lattice QCD

    SciTech Connect

    Luz, Fernando H. P.; Mendes, Tereza

    2010-11-12

    Large-scale numerical simulations are the prime method for a nonperturbative study of QCD from first principles. Although the lattice simulation of the pure-gauge (or quenched-QCD) case may be performed very efficiently on parallel machines, there are several additional difficulties in the simulation of the full-QCD case, i.e. when dynamical quark effects are taken into account. We discuss the main aspects of full-QCD simulations, describing the most common algorithms. We present a comparative analysis of performance for two versions of the hybrid Monte Carlo method (the so-called R and RHMC algorithms), as provided in the MILC software package. We consider two degenerate flavors of light quarks in the staggered formulation, having in mind the case of finite-temperature QCD.

  19. A fast sorting algorithm for a hypersonic rarefied flow particle simulation on the connection machine

    NASA Technical Reports Server (NTRS)

    Dagum, Leonardo

    1989-01-01

    The data parallel implementation of a particle simulation for hypersonic rarefied flow described by Dagum associates a single parallel data element with each particle in the simulation. The simulated space is divided into discrete regions called cells containing a variable and constantly changing number of particles. The implementation requires a global sort of the parallel data elements so as to arrange them in an order that allows immediate access to the information associated with cells in the simulation. Described here is a very fast algorithm for performing the necessary ranking of the parallel data elements. The performance of the new algorithm is compared with that of the microcoded instruction for ranking on the Connection Machine.
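
    A serial sketch of the ranking step, assuming each particle is keyed by its cell index: a stable sort groups particles by cell, producing each particle's global rank plus the first slot of every cell (the Connection Machine's data-parallel microcode details are beyond this illustration).

    ```python
    import numpy as np

    def rank_by_cell(cell_of_particle, n_cells):
        counts = np.bincount(cell_of_particle, minlength=n_cells)
        starts = np.concatenate(([0], np.cumsum(counts)[:-1]))  # first slot of each cell
        order = np.argsort(cell_of_particle, kind="stable")     # particles grouped by cell
        ranks = np.empty_like(order)
        ranks[order] = np.arange(order.size)                    # rank of each particle
        return ranks, starts

    # e.g. five particles distributed over three cells
    ranks, starts = rank_by_cell(np.array([2, 0, 2, 1, 0]), n_cells=3)
    ```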

  20. Tracking Microstructure of Crystalline Materials: A Post-Processing Algorithm for Atomistic Simulations

    NASA Astrophysics Data System (ADS)

    Panzarino, Jason F.; Rupert, Timothy J.

    2014-03-01

    Atomistic simulations have become a powerful tool in materials research due to the extremely fine spatial and temporal resolution provided by such techniques. To understand the fundamental principles that govern material behavior at the atomic scale and to connect directly with experimental work, it is necessary to quantify the microstructure of materials simulated with atomistics. Specifically, quantitative tools for identifying crystallites, their crystallographic orientation, and overall sample texture do not currently exist. Here, we develop a post-processing algorithm capable of characterizing such features, while also documenting their evolution during a simulation. In addition, the data are presented in a way that parallels the visualization methods used in traditional experimental techniques. The utility of this algorithm is illustrated by analyzing several types of simulation cells that are commonly found in the atomistic modeling literature; the method could also be applied to a variety of other atomistic studies that require precise identification and tracking of microstructure.

  1. A new second-order integration algorithm for simulating mechanical dynamic systems

    NASA Technical Reports Server (NTRS)

    Howe, R. M.

    1989-01-01

    A new integration algorithm which has the simplicity of Euler integration but exhibits second-order accuracy is described. In fixed-step numerical integration of differential equations for mechanical dynamic systems the method represents displacement and acceleration variables at integer step times and velocity variables at half-integer step times. Asymptotic accuracy of the algorithm is twice that of trapezoidal integration and ten times that of second-order Adams-Bashforth integration. The algorithm is also compatible with real-time inputs when used for a real-time simulation. It can be used to produce simulation outputs at double the integration frame rate, i.e., at both half-integer and integer frame times, even though it requires only one evaluation of state-variable derivatives per integration step. The new algorithm is shown to be especially effective in the simulation of lightly-damped structural modes. Both time-domain and frequency-domain accuracy comparisons with traditional integration methods are presented. Stability of the new algorithm is also examined.
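
    The staggered arrangement described above (displacement at integer steps, velocity at half-integer steps) can be sketched as follows for x'' = a(x); this is a minimal illustration of the scheme's structure, with one derivative evaluation per step, not the paper's code.

    ```python
    import numpy as np

    def staggered_integrate(a, x0, v0, dt, n_steps):
        x = x0
        v = v0 + 0.5 * dt * a(x0)      # advance velocity to the first half step
        xs = [x0]
        for _ in range(n_steps):
            x = x + dt * v             # displacement at integer step times
            v = v + dt * a(x)          # velocity at half-integer step times
            xs.append(x)
        return np.array(xs)

    # Undamped oscillator test: x'' = -x, exact period 2*pi
    xs = staggered_integrate(lambda x: -x, x0=1.0, v0=0.0, dt=0.05, n_steps=1000)
    ```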

  2. Hierarchical tree algorithm for collisional N-body simulations on GRAPE

    NASA Astrophysics Data System (ADS)

    Fukushige, Toshiyuki; Kawai, Atsushi

    2016-06-01

    We present an implementation of the hierarchical tree algorithm on the individual timestep algorithm (the Hermite scheme) for collisional N-body simulations, running on the GRAPE-9 system, a special-purpose hardware accelerator for gravitational many-body simulations. Such a combination of the tree algorithm and the individual timestep algorithm was not easy on the previous GRAPE system mainly because its memory addressing scheme was limited only to sequential access to a full set of particle data. The present GRAPE-9 system has an indirect memory addressing unit and a particle memory large enough to store all the particle data and also the tree node data. The indirect memory addressing unit stores interaction lists for the tree algorithm, which is constructed on the host computer, and, according to the interaction lists, force pipelines calculate only the interactions necessary. In our implementation, the interaction calculations are significantly reduced compared to direct N² summation in the original Hermite scheme. For example, we achieve a speedup of about a factor of 30 (equivalent to about 17 teraflops) over the Hermite scheme for a simulation of an N = 10⁶ system, using hardware with a peak speed of 0.6 teraflops for the Hermite scheme.

  4. Navier-Stokes simulation of wind-tunnel flow using LU-ADI factorization algorithm

    NASA Technical Reports Server (NTRS)

    Obayashi, Shigeru; Fujii, Kozo; Gavali, Sharad

    1988-01-01

    A three-dimensional Navier-Stokes solution code using the LU-ADI factorization algorithm was employed to simulate the workshop test cases of transonic flow past a wing model in a wind tunnel and in free air. The effect of the tunnel walls is well demonstrated by the present simulations. An Amdahl 1200 supercomputer with 128 Mbytes of main memory was used for these computations.

  5. Simulation of a navigator algorithm for a low-cost GPS receiver

    NASA Technical Reports Server (NTRS)

    Hodge, W. F.

    1980-01-01

    The analytical structure of an existing navigator algorithm for a low-cost Global Positioning System (GPS) receiver is described in detail to facilitate its implementation on in-house digital computers and real-time simulators. The material presented includes a simulation of GPS pseudorange measurements, based on a two-body representation of the NAVSTAR spacecraft orbits, and a four-component model of the receiver bias errors. A simpler test for loss of pseudorange measurements due to spacecraft shielding is also noted.
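
    A minimal sketch of one simulated pseudorange measurement as described: geometric range from the propagated satellite position plus a receiver clock-bias term and measurement noise (the speed of light is physical; the noise level is illustrative).

    ```python
    import numpy as np

    C = 299_792_458.0  # speed of light, m/s

    def pseudorange(sat_pos, rx_pos, clock_bias_s, sigma=2.0):
        rho = np.linalg.norm(np.asarray(sat_pos) - np.asarray(rx_pos))  # geometric range, m
        return rho + C * clock_bias_s + np.random.normal(0.0, sigma)    # biased, noisy range
    ```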

  6. Flight test results of failure detection and isolation algorithms for a redundant strapdown inertial measurement unit

    NASA Technical Reports Server (NTRS)

    Morrell, F. R.; Motyka, P. R.; Bailey, M. L.

    1990-01-01

    Flight test results for two sensor fault-tolerant algorithms developed for a redundant strapdown inertial measurement unit are presented. The inertial measurement unit (IMU) consists of four two-degrees-of-freedom gyros and accelerometers mounted on the faces of a semi-octahedron. Fault tolerance is provided by edge vector test and generalized likelihood test algorithms, each of which can provide dual fail-operational capability for the IMU. To detect the wide range of failure magnitudes in inertial sensors, which provide flight-crucial information for flight control and navigation, failure detection and isolation are developed in terms of a multilevel structure. Threshold compensation techniques, developed to enhance the sensitivity of the failure detection process to navigation-level failures, are presented. Four flight tests were conducted in a commercial transport-type environment to compare and determine the performance of the failure detection and isolation methods. Dual flight processors enabled concurrent tests of the algorithms. Failure signals, such as hard-over, null, or bias shift, were added to the sensor outputs as single or multiple failures during the flights. Both algorithms provided timely detection and isolation of flight-control-level failures. The generalized likelihood test algorithm provided more timely detection of low-level sensor failures, but it produced one false isolation. Both algorithms demonstrated the capability to provide dual fail-operational performance for the skewed array of inertial sensors.

  7. A Physics-based Algorithm for Real-time Simulation of Electrosurgery Procedures in Minimally Invasive Surgery

    PubMed Central

    Lu, Zhonghua; Arikatla, Venkata S; Han, Zhongqing; Allen, Brian F.; De, Suvranu

    2014-01-01

    Background: High-frequency electricity is used in the majority of surgical interventions. However, modern computer-based training and simulation systems rely on physically unrealistic models that fail to capture the interplay of the electrical, mechanical and thermal properties of biological tissue. Methods: We present a real-time and physically realistic simulation of electrosurgery, modeling the electrical, thermal and mechanical properties as three iteratively solved finite element models. To provide sub-finite-element graphical rendering of vaporized tissue, a dual-mesh dynamic triangulation algorithm based on isotherms is proposed. The block compressed row storage (BCRS) structure is shown to be critical in allowing computationally efficient changes in the tissue topology due to vaporization. Results: We have demonstrated our physics-based electrosurgery cutting algorithm through various examples. Our matrix manipulation algorithms designed for topology changes have shown low computational cost. Conclusions: Our simulator offers substantially greater physical fidelity compared to previous simulators that use simple geometry-based heat characterization. PMID:24357156

  8. Simulation of plasma turbulence in scrape-off layer conditions: the GBS code, simulation results and code validation

    NASA Astrophysics Data System (ADS)

    Ricci, P.; Halpern, F. D.; Jolliet, S.; Loizu, J.; Mosetto, A.; Fasoli, A.; Furno, I.; Theiler, C.

    2012-12-01

    Based on the drift-reduced Braginskii equations, the Global Braginskii Solver (GBS) is able to model scrape-off layer (SOL) plasma turbulence in terms of the interplay between the plasma outflow from the tokamak core, the turbulent transport, and the losses at the vessel. The model equations, the GBS numerical algorithm, and GBS simulation results are described. GBS was first developed to model turbulence in basic plasma physics devices, such as linear and simple magnetized toroidal devices, which contain some of the main elements of SOL turbulence in a simplified setting. In this paper we summarize the findings obtained from the simulations carried out in these configurations and report the first simulations of SOL turbulence. We also discuss the validation project that has been carried out together with the GBS development.

  9. A novel algorithm for non-bonded-list updating in molecular simulations.

    PubMed

    Maximova, Tatiana; Keasar, Chen

    2006-06-01

    Simulations of molecular systems typically handle interactions within non-bonded pairs. Generating and updating a list of these pairs can be the most time-consuming part of energy calculations for large systems. Thus, efficient non-bonded list processing can speed up the energy calculations significantly. While the asymptotic complexity of current algorithms (namely O(N), where N is the number of particles) is probably the lowest possible, a wide space for optimization is still left. This article offers a heuristic extension to previously suggested grid-based algorithms. We show that, when the average particle movements are slow, simulation time can be reduced considerably. The proposed algorithm has been implemented in the DistanceMatrix class of the molecular modeling package MESHI. MESHI is freely available online. PMID:16796550
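
    For orientation, here is a bare-bones one-dimensional version of the grid-based cell list this work extends, assuming at least three cells and a periodic box; the article's heuristic skipping of updates for slow-moving particles is omitted.

    ```python
    import numpy as np
    from collections import defaultdict

    def nonbonded_pairs(x, box, cutoff):
        n_cells = int(box // cutoff)              # cell edge >= cutoff; assume n_cells >= 3
        cell = (x / box * n_cells).astype(int) % n_cells
        members = defaultdict(list)
        for i, c in enumerate(cell):
            members[c].append(i)
        pairs = []
        for c in range(n_cells):
            for dc in (0, 1):                     # own cell and right neighbor only
                for i in members[c]:
                    for j in members[(c + dc) % n_cells]:
                        d = abs(x[i] - x[j])
                        d = min(d, box - d)       # minimum-image distance
                        if i < j and d < cutoff:
                            pairs.append((i, j))
        return pairs
    ```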

  10. Simulation of biochemical reactions with time-dependent rates by the rejection-based algorithm

    SciTech Connect

    Thanh, Vo Hong; Priami, Corrado

    2015-08-07

    We address the problem of simulating biochemical reaction networks with time-dependent rates and propose a new algorithm based on our rejection-based stochastic simulation algorithm (RSSA) [Thanh et al., J. Chem. Phys. 141(13), 134116 (2014)]. The computation for selecting next reaction firings by our time-dependent RSSA (tRSSA) is computationally efficient. Furthermore, the generated trajectory is exact by exploiting the rejection-based mechanism. We benchmark tRSSA on different biological systems with varying forms of reaction rates to demonstrate its applicability and efficiency. We reveal that for nontrivial cases, the selection of reaction firings in existing algorithms introduces approximations because the integration of reaction rates is very computationally demanding and simplifying assumptions are introduced. The selection of the next reaction firing by our approach is easier while preserving the exactness.
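
    The rejection mechanism for a time-dependent propensity a(t) can be sketched by thinning against an upper bound a_max, which sidesteps integrating a(t); this is a minimal illustration of the idea, not the tRSSA implementation.

    ```python
    import math, random

    def next_firing_time(a, a_max, t0):
        # Requires a(t) <= a_max over the interval being searched
        t = t0
        while True:
            t += -math.log(random.random()) / a_max  # candidate from the bounding process
            if random.random() * a_max <= a(t):      # accept with probability a(t)/a_max
                return t

    # e.g. a sinusoidally modulated rate, bounded above by 2.0
    t_fire = next_firing_time(lambda t: 1.0 + math.sin(t), a_max=2.0, t0=0.0)
    ```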

  11. Enhancing Plasma Wakefield and E-cloud Simulation Performance Using a Pipelining Algorithm

    NASA Astrophysics Data System (ADS)

    Feng, B.; Huang, C.; Decyk, V.; Mori, W. B.; Katsouleas, T.; Muggli, P.

    2006-11-01

    Modeling long-timescale propagation of beams in plasma wakefield accelerators at the energy frontier and in electron clouds in circular accelerators such as the CERN-LHC requires faster and more efficient simulation codes. Simply increasing the number of processors does not scale beyond one-fifth of the number of cells in the decomposition direction. A pipelining algorithm applied to the fully parallelized code QuickPIC is suggested to overcome this limit. The pipelining algorithm uses many groups of processors and optimizes the job allocation on the processors in parallel computing. With the new algorithm, it is possible to use on the order of 10² groups of processors, expanding the scale and speed of simulations with QuickPIC by a similar factor.

  12. Enhancing plasma wakefield and e-cloud simulation performance using a pipelining algorithm

    NASA Astrophysics Data System (ADS)

    Feng, Bing; Katsouleas, Tom; Huang, Chengkun; Decyk, Viktor; Mori, Warren B.

    2006-10-01

    Modeling long-timescale propagation of beams in plasma wakefield accelerators at the energy frontier and in electron clouds in circular accelerators such as the CERN-LHC requires a faster and more efficient simulation code. Simply increasing the number of processors does not scale beyond one-fifth of the number of cells in the decomposition direction. A pipelining algorithm applied to the fully parallelized code QuickPIC is suggested to overcome this limit. The pipelining algorithm uses many groups of processors and optimizes the job allocation on the processors in parallel computing. With the new algorithm, it is possible to use on the order of 100 groups of processors, expanding the scale and speed of simulations with QuickPIC by a similar factor.

  13. Simulating the time-dependent Schrödinger equation with a quantum lattice-gas algorithm

    NASA Astrophysics Data System (ADS)

    Prezkuta, Zachary; Coffey, Mark

    2007-03-01

    Quantum computing algorithms promise remarkable improvements in speed or memory for certain applications. Currently, the Type II (or hybrid) quantum computer is the most feasible to build. It consists of a large number of small Type I (pure) quantum computers that compute with quantum logic but communicate with nearest neighbors in a classical way. The arrangement thus formed is suitable for computations that execute a quantum lattice-gas algorithm (QLGA). We report QLGA simulations for both the linear and nonlinear time-dependent Schrödinger equation. These demonstrate the stable, efficient, and at least second-order convergent properties of the algorithm. The simulation capability provides a computational tool for applications in nonlinear optics, superconducting and superfluid materials, Bose-Einstein condensates, and elsewhere.

  14. Golfing with protons: using research grade simulation algorithms for online games

    NASA Astrophysics Data System (ADS)

    Harold, J.

    2004-12-01

    Scientists have long known the power of simulations. By modeling a system in a computer, researchers can experiment at will, developing an intuitive sense of how a system behaves. The rapid increase in the power of personal computers, combined with technologies such as Flash, Shockwave and Java, allows us to bring research simulations into the education world by creating exploratory environments for the public. This approach is illustrated by a project funded by a small grant from NSF's Informal Science Education program, through an opportunity that provides education supplements to existing research awards. Using techniques adapted from a magnetospheric research program, several Flash-based interactives have been developed that allow web site visitors to explore the motion of particles in the Earth's magnetosphere. These pieces were folded into a larger Space Weather Center web project at the Space Science Institute (www.spaceweathercenter.org). Rather than presenting these interactives as plasma simulations per se, the research algorithms were used to create games such as "Magneto Mini Golf", where the balls are protons moving in combined electric and magnetic fields. The "holes" increase in complexity, beginning with no fields and progressing towards a simple model of Earth's magnetosphere. The emphasis of the activity is gameplay, but because it is at its core a plasma simulation, users develop an intuitive sense of charged-particle motion as they progress. Meanwhile, the pieces contain embedded assessments that are measurable through a database-driven tracking system. Mining that database not only provides helpful usability information, but also allows us to examine whether users are meeting the learning goals of the activities. We will discuss the development and evaluation results of the project, as well as the potential for these types of activities to shift expectations of what a web site can and should provide educationally.
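
    As a flavor of the kind of research-grade particle stepper such games can wrap, here is the standard Boris push for a charged particle in combined E and B fields; this is a plausible sketch, not the project's actual code.

    ```python
    import numpy as np

    def boris_step(x, v, E, B, q_over_m, dt):
        v_minus = v + 0.5 * q_over_m * E * dt        # first half electric kick
        t = 0.5 * q_over_m * B * dt                  # magnetic rotation vector
        s = 2.0 * t / (1.0 + np.dot(t, t))
        v_prime = v_minus + np.cross(v_minus, t)
        v_plus = v_minus + np.cross(v_prime, s)      # rotation about B
        v_new = v_plus + 0.5 * q_over_m * E * dt     # second half electric kick
        return x + v_new * dt, v_new
    ```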

  15. SEREN - a new SPH code for star and planet formation simulations. Algorithms and tests

    NASA Astrophysics Data System (ADS)

    Hubber, D. A.; Batty, C. P.; McLeod, A.; Whitworth, A. P.

    2011-05-01

    We present SEREN, a new hybrid Smoothed Particle Hydrodynamics and N-body code designed to simulate astrophysical processes such as star and planet formation. It is written in Fortran 95/2003 and has been parallelised using OpenMP. SEREN is designed in a flexible, modular style, thereby allowing a large number of options to be selected or disabled easily and without compromising performance. SEREN uses the conservative "grad-h" formulation of SPH, but can easily be configured to use traditional SPH or Godunov SPH. Thermal physics is treated either with a barotropic equation of state, or by solving the energy equation and modelling the transport of cooling radiation. A Barnes-Hut tree is used to obtain neighbour lists and compute gravitational accelerations efficiently, and a hierarchical time-stepping scheme is used to reduce the number of computations per timestep. Dense gravitationally bound objects are replaced by sink particles, to allow the simulation to be evolved longer, and to facilitate the identification of protostars and the compilation of stellar and binary properties. At the termination of a hydrodynamical simulation, SEREN has the option of switching to a pure N-body simulation, using a 4th-order Hermite integrator, and following the ballistic evolution of the sink particles (e.g. to determine the final binary statistics once a star cluster has relaxed). We describe in detail all the algorithms implemented in SEREN and we present the results of a suite of tests designed to demonstrate the fidelity of SEREN and its performance and scalability. Further information and additional tests of SEREN can be found at the web-page http://www.astro.group.shef.ac.uk/seren.

  16. Computer simulation results of attitude estimation of earth orbiting satellites

    NASA Technical Reports Server (NTRS)

    Kou, S. R.

    1976-01-01

    Computer simulation results of attitude estimation of Earth-orbiting satellites (including the Space Telescope) subjected to environmental disturbances and noises are presented. A decomposed linear recursive filter and a Kalman filter were used as estimation tools. Six programs were developed for this simulation; all were written in the BASIC language and run on HP 9830A and HP 9866A computers. Simulation results show that the decomposed linear recursive filter is accurate in estimation and fast in response time. Furthermore, for higher-order systems, this filter has computational advantages (i.e., smaller integration and roundoff errors) over a Kalman filter.

  17. SIMULATION OF AEROSOL DYNAMICS: A COMPARATIVE REVIEW OF ALGORITHMS USED IN AIR QUALITY MODELS

    EPA Science Inventory

    A comparative review of algorithms currently used in air quality models to simulate aerosol dynamics is presented. This review addresses coagulation, condensational growth, nucleation, and gas/particle mass transfer. Two major approaches are used in air quality models to repres...

  18. A three-dimensional spectral algorithm for simulations of transition and turbulence

    NASA Technical Reports Server (NTRS)

    Zang, T. A.; Hussaini, M. Y.

    1985-01-01

    A spectral algorithm for simulating three-dimensional, incompressible, parallel shear flows is described. It applies to the channel, to the parallel boundary layer, and to other shear flows with one wall-bounded and two periodic directions. Representative applications to the channel and to the heated boundary layer are presented.

  19. An evaluation of the performance of the soil temperature simulation algorithms used in the PRZM model.

    PubMed

    Tsiros, I X; Dimopoulos, I F

    2007-04-01

    Soil temperature simulation is an important component in environmental modeling since it is involved in several aspects of pollutant transport and fate. This paper deals with the performance of the soil temperature simulation algorithms of the well-known environmental model PRZM. Model results are evaluated on the basis of the model's ability to predict in situ measured soil temperature profiles in an experimental plot during a 3-year monitoring study. The evaluation of the performance is based on linear regression statistics and typical statistical error measures such as the root mean square error (RMSE) and the normalized objective function (NOF). Results show that the model required minimal calibration to match the observed response of the system. Values of the determination coefficient R² were in all cases around 0.98, indicating very good agreement between measured and simulated data. Values of the RMSE were in the range of 1.2 to 1.4 degrees C, 1.1 to 1.4 degrees C, 0.9 to 1.1 degrees C, and 0.8 to 1.1 degrees C for the examined 2, 5, 10 and 20 cm soil depths, respectively. Sensitivity analyses were also performed to investigate the influence of various factors involved in the energy balance equation at the ground surface on the soil temperature profiles. The results showed that the model was able to represent important processes affecting the soil temperature regime, such as the combined effect of the heat transfer by convection between the ground surface and the atmosphere and the latent heat flux due to soil water evaporation. PMID:17454373
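
    The two error measures quoted above, written out; NOF definitions vary in the literature, so the RMSE-normalized-by-the-observed-mean form below is an assumption.

    ```python
    import numpy as np

    def rmse(obs, sim):
        obs, sim = np.asarray(obs), np.asarray(sim)
        return np.sqrt(np.mean((sim - obs) ** 2))

    def nof(obs, sim):
        return rmse(obs, sim) / np.mean(obs)    # assumed normalization by the observed mean

    # e.g. rmse([10.1, 12.3, 14.0], [10.8, 11.9, 14.9]) ~= 0.7 (degrees C)
    ```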

  20. Forward-Masked Frequency Selectivity Improvements in Simulated and Actual Cochlear Implant Users Using a Preprocessing Algorithm.

    PubMed

    Langner, Florian; Jürgens, Tim

    2016-01-01

    Frequency selectivity can be quantified using masking paradigms, such as psychophysical tuning curves (PTCs). Normal-hearing (NH) listeners show sharp PTCs that are level- and frequency-dependent, whereas frequency selectivity is strongly reduced in cochlear implant (CI) users. This study aims at (a) assessing individual shapes of PTCs in CI users, (b) comparing these shapes to those of simulated CI listeners (NH listeners hearing through a CI simulation), and (c) increasing the sharpness of PTCs using a biologically inspired dynamic compression algorithm, BioAid, which has been shown to sharpen the PTC shape in hearing-impaired listeners. A three-alternative-forced-choice forward-masking technique was used to assess PTCs in 8 CI users (with their own speech processor) and 11 NH listeners (with and without listening through a vocoder to simulate electric hearing). CI users showed flat PTCs with large interindividual variability in shape, whereas simulated CI listeners had PTCs of the same average flatness, but more homogeneous shapes across listeners. The algorithm BioAid was used to process the stimuli before entering the CI users' speech processor or the vocoder simulation. This algorithm was able to partially restore frequency selectivity in both groups, particularly in seven out of eight CI users, producing significantly sharper PTCs than in the unprocessed condition. The results indicate that algorithms can improve the large-scale sharpness of frequency selectivity in some CI users. This finding may be useful for the design of sound coding strategies, particularly for situations in which high frequency selectivity is desired, such as music perception. PMID:27604785

  1. A combined Event-Driven/Time-Driven molecular dynamics algorithm for the simulation of shock waves in rarefied gases

    SciTech Connect

    Valentini, Paolo; Schwartzentruber, Thomas E.

    2009-12-10

    A novel combined Event-Driven/Time-Driven (ED/TD) algorithm to speed-up the Molecular Dynamics simulation of rarefied gases using realistic spherically symmetric soft potentials is presented. Due to the low density regime, the proposed method correctly identifies the time that must elapse before the next interaction occurs, similarly to Event-Driven Molecular Dynamics. However, each interaction is treated using Time-Driven Molecular Dynamics, thereby integrating Newton's Second Law using the sufficiently small time step needed to correctly resolve the atomic motion. Although infrequent, many-body interactions are also accounted for with a small approximation. The combined ED/TD method is shown to correctly reproduce translational relaxation in argon, described using the Lennard-Jones potential. For densities between ρ = 10⁻⁴ kg/m³ and ρ = 10⁻¹ kg/m³, comparisons with kinetic theory, Direct Simulation Monte Carlo, and pure Time-Driven Molecular Dynamics demonstrate that the ED/TD algorithm correctly reproduces the proper collision rates and the evolution toward thermal equilibrium. Finally, the combined ED/TD algorithm is applied to the simulation of a Mach 9 shock wave in rarefied argon. Density and temperature profiles as well as molecular velocity distributions accurately match DSMC results, and the shock thickness is within the experimental uncertainty. For the problems considered, the ED/TD algorithm ranged from several hundred to several thousand times faster than conventional Time-Driven MD. Moreover, the force calculation to integrate the molecular trajectories is found to contribute a negligible amount to the overall ED/TD simulation time. Therefore, this method could pave the way for the application of much more refined and expensive interatomic potentials, either classical or first-principles, to Molecular Dynamics simulations of shock waves in rarefied gases, involving vibrational nonequilibrium and chemical reactivity.

  2. Respiratory rate detection algorithm based on RGB-D camera: theoretical background and experimental results.

    PubMed

    Benetazzo, Flavia; Freddi, Alessandro; Monteriù, Andrea; Longhi, Sauro

    2014-09-01

    Both the theoretical background and the experimental results of an algorithm developed to perform human respiratory rate measurements without any physical contact are presented. Based on depth image sensing techniques, the respiratory rate is derived by measuring morphological changes of the chest wall. The algorithm identifies the human chest, computes its distance from the camera and compares this value with the instantaneous distance, discerning if it is due to the respiratory act or due to a limited movement of the person being monitored. To experimentally validate the proposed algorithm, the respiratory rate measurements coming from a spirometer were taken as a benchmark and compared with those estimated by the algorithm. Five tests were performed, with five different persons seated in front of the camera. The first test aimed to choose the suitable sampling frequency. The second test was conducted to compare the performance of the proposed system with respect to the gold standard in ideal conditions of light, orientation and clothing. The third, fourth and fifth tests evaluated the algorithm performance under different operating conditions. The experimental results showed that the system can correctly measure the respiratory rate, and that it is a viable alternative for monitoring the respiratory activity of a person without using invasive sensors. PMID:26609383

  3. Structural optimization and segregation behavior of quaternary alloy nanoparticles based on simulated annealing algorithm

    NASA Astrophysics Data System (ADS)

    Xin-Ze, Lu; Gui-Fang, Shao; Liang-You, Xu; Tun-Dong, Liu; Yu-Hua, Wen

    2016-05-01

    Alloy nanoparticles exhibit higher catalytic activity than monometallic nanoparticles, and their stable structures are of importance to their applications. We employ the simulated annealing algorithm to systematically explore the stable structure and segregation behavior of tetrahexahedral Pt–Pd–Cu–Au quaternary alloy nanoparticles. Three alloy nanoparticles consisting of 443 atoms, 1417 atoms, and 3285 atoms are considered and compared. The preferred positions of atoms in the nanoparticles are analyzed. The simulation results reveal that Cu and Au atoms tend to occupy the surface, Pt atoms preferentially occupy the middle layers, and Pd atoms tend to segregate to the inner layers. Furthermore, Au atoms show stronger surface segregation than Cu atoms. This study provides a fundamental understanding of the structural features and segregation phenomena of multi-metallic nanoparticles. Project supported by the National Natural Science Foundation of China (Grant Nos. 51271156, 11474234, and 61403318) and the Natural Science Foundation of Fujian Province of China (Grant Nos. 2013J01255 and 2013J06002).
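
    A bare-bones version of the named search strategy, using atom-swap moves with Metropolis acceptance and geometric cooling; the energy function and schedule are placeholders (a real Pt-Pd-Cu-Au study would use an embedded-atom-type potential).

    ```python
    import math, random

    K_B = 8.617e-5  # Boltzmann constant, eV/K

    def anneal(config, energy, n_steps, T0=1500.0, cooling=0.999):
        T, E = T0, energy(config)
        for _ in range(n_steps):
            i, j = random.sample(range(len(config)), 2)
            config[i], config[j] = config[j], config[i]      # swap two atomic species
            dE = energy(config) - E
            if dE <= 0.0 or random.random() < math.exp(-dE / (K_B * T)):
                E += dE                                      # accept the swap
            else:
                config[i], config[j] = config[j], config[i]  # revert the swap
            T *= cooling                                     # geometric cooling schedule
        return config, E
    ```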

  4. Comparative Evaluation of Registration Algorithms in Different Brain Databases With Varying Difficulty: Results and Insights

    PubMed Central

    Akbari, Hamed; Bilello, Michel; Da, Xiao; Davatzikos, Christos

    2015-01-01

    Evaluating algorithms for the inter-subject registration of brain magnetic resonance images (MRI) is a necessary topic receiving growing attention. Existing studies have evaluated image registration algorithms in specific tasks or using specific databases (e.g., only for skull-stripped images, only for single-site images, etc.). Consequently, the choice of registration algorithm seems task- and usage/parameter-dependent. Nevertheless, recent large-scale, often multi-institutional imaging studies create the need and raise the question whether some registration algorithms can 1) apply generally to various tasks/databases posing various challenges; 2) perform consistently well; and, while doing so, 3) require minimal or ideally no parameter tuning. In seeking answers to this question, we evaluated 12 general-purpose registration algorithms for their generality, accuracy and robustness. We fixed their parameters at values suggested by the algorithm developers as reported in the literature. We tested them in 7 databases/tasks, which present one or more of 4 commonly encountered challenges: 1) inter-subject anatomical variability in skull-stripped images; 2) intensity inhomogeneity, noise and large structural differences in raw images; 3) imaging protocol and field-of-view (FOV) differences in multi-site data; and 4) missing correspondences in pathology-bearing images. In total, 7,562 registrations were performed. Registration accuracies were measured by (multi-)expert-annotated landmarks or regions of interest (ROIs). To ensure reproducibility, we used public software tools, public databases (whenever possible), and we fully disclose the parameter settings. We show evaluation results, and discuss the performances in light of the algorithms' similarity metrics, transformation models and optimization strategies. We also discuss future directions for algorithm development and evaluation. PMID:24951685

  5. Quantum algorithm for simulating the dynamics of an open quantum system

    SciTech Connect

    Wang Hefeng; Ashhab, S.; Nori, Franco

    2011-06-15

    In the study of open quantum systems, one typically obtains the decoherence dynamics by solving a master equation. The master equation is derived using knowledge of some basic properties of the system, the environment, and their interaction: One basically needs to know the operators through which the system couples to the environment and the spectral density of the environment. For a large system, it could become prohibitively difficult to even write down the appropriate master equation, let alone solve it on a classical computer. In this paper, we present a quantum algorithm for simulating the dynamics of an open quantum system. On a quantum computer, the environment can be simulated using ancilla qubits with properly chosen single-qubit frequencies and with properly designed coupling to the system qubits. The parameters used in the simulation are easily derived from the parameters of the system + environment Hamiltonian. The algorithm is designed to simulate Markovian dynamics, but it can also be used to simulate non-Markovian dynamics provided that this dynamics can be obtained by embedding the system of interest into a larger system that obeys Markovian dynamics. We estimate the resource requirements for the algorithm. In particular, we show that for sufficiently slow decoherence a single ancilla qubit could be sufficient to represent the entire environment, in principle.

  6. Algorithm for Building a Spectrum for NREL's One-Sun Multi-Source Simulator: Preprint

    SciTech Connect

    Moriarty, T.; Emery, K.; Jablonski, J.

    2012-06-01

    Historically, the tools used at NREL to compensate for the difference between a reference spectrum and a simulator spectrum have been well-matched reference cells and the application of a calculated spectral mismatch correction factor, M. This paper describes the algorithm for adjusting the spectrum of a 9-channel fiber-optic-based solar simulator with a uniform beam size of 9 cm square at 1-sun. The combination of this algorithm and the One-Sun Multi-Source Simulator (OSMSS) hardware reduces NREL's current vs. voltage measurement time for a typical three-junction device from man-days to man-minutes. These time savings may be significantly greater for devices with more junctions.
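
    For reference, the classical spectral mismatch factor M mentioned above can be computed as below (the standard IEC 60904-7-style expression); this illustrates the quantity itself, not NREL's OSMSS channel-adjustment algorithm.

    ```python
    import numpy as np

    def mismatch(lam, E_ref, E_sim, S_ref, S_dut):
        # lam: wavelength grid; E_*: spectral irradiances; S_*: spectral responses
        num = np.trapz(E_ref * S_dut, lam) * np.trapz(E_sim * S_ref, lam)
        den = np.trapz(E_ref * S_ref, lam) * np.trapz(E_sim * S_dut, lam)
        return num / den   # M -> 1 as the simulator spectrum approaches the reference
    ```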

  7. Space-based Doppler lidar sampling strategies: Algorithm development and simulated observation experiments

    NASA Technical Reports Server (NTRS)

    Emmitt, G. D.; Wood, S. A.; Morris, M.

    1990-01-01

    Lidar Atmospheric Wind Sounder (LAWS) Simulation Models (LSM) were developed to evaluate the potential impact of global wind observations on the basic understanding of the Earth's atmosphere and on the predictive skills of current forecast models (GCM and regional scale). Fully integrated top to bottom LAWS Simulation Models for global and regional scale simulations were developed. The algorithm development incorporated the effects of aerosols, water vapor, clouds, terrain, and atmospheric turbulence into the models. Other additions include a new satellite orbiter, signal processor, line of sight uncertainty model, new Multi-Paired Algorithm and wind error analysis code. An atmospheric wind field library containing control fields, meteorological fields, phenomena fields, and new European Center for Medium Range Weather Forecasting (ECMWF) data was also added. The LSM was used to address some key LAWS issues and trades such as accuracy and interpretation of LAWS information, data density, signal strength, cloud obscuration, and temporal data resolution.

  8. Aerosol kinetic code "AERFORM": Model, validation and simulation results

    NASA Astrophysics Data System (ADS)

    Gainullin, K. G.; Golubev, A. I.; Petrov, A. M.; Piskunov, V. N.

    2016-06-01

    The aerosol kinetic code "AERFORM" is modified to simulate droplet and ice particle formation in mixed clouds. The splitting method is used to calculate condensation and coagulation simultaneously. The method is calibrated against analytic solutions of the kinetic equations. The condensation kinetic model is based on the cloud particle growth equation and the mass and heat balance equations. The coagulation kinetic model includes Brownian, turbulent and precipitation effects. Realistic parameter values are used for the condensation and coagulation growth of water droplets and ice particles. The model and the simulation results for two full-scale cloud experiments are presented. The simulation model and code may be used autonomously or as an element of another code.

  9. Experimental and simulation results on multipacting in the 112 MHz QWR injector

    SciTech Connect

    Xin, T.; Ben-Zvi, I.; Belomestnykh, S.; Brutus, J. C.; Skaritka, J.; Wu, Q.; Xiao, B.

    2015-05-03

    The first RF commissioning of the 112 MHz QWR superconducting electron gun was done in late 2014. The coaxial Fundamental Power Coupler (FPC) and Cathode Stalk (stalk) were installed and tested for the first time. During this experiment, we observed several multipacting barriers at different gun voltage levels. The simulation work was done over the same range. The comparison between the experimental observations and the simulation results is presented in this paper. The observations during the test are consistent with the simulation predictions. We were able to overcome most of the multipacting barriers and reach 1.8 MV gun voltage in pulsed mode after several rounds of conditioning.

  10. Algorithms for detecting antibodies to HIV-1: results from a rural Ugandan cohort.

    PubMed

    Nunn, A J; Biryahwaho, B; Downing, R G; van der Groen, G; Ojwiya, A; Mulder, D W

    1993-08-01

    Although the Western blot test is widely used to confirm HIV-1 serostatus, concerns over its additional cost have prompted review of the need for supplementary testing and the evaluation of alternative test algorithms. Serostatus tends to be confirmed with this additional test especially when tested individuals will be informed of their serostatus or when results will be used for research purposes. The confirmation procedure has been adopted as a means of securing suitably high levels of specificity and sensitivity. With the goal of exploring potential alternatives to Western blot confirmation, the authors describe the use of parallel testing with a competitive and an indirect enzyme immunoassay, with and without supplementary Western blots. Sera were obtained from 7895 people in the rural population survey and tested with an algorithm based on the Recombigen HIV-1 EIA and Wellcozyme HIV-1 Recombinant; alternative algorithms were assessed on negative or confirmed positive sera. None of the 227 sera classified as negative by the 2 assays were positive by Western blot. Of the 192 identified as positive by both assays, 4 were found to be seronegative by Western blot. The possibility of technical error does, however, exist for 3 of these latter cases. One of the alternative algorithms assessed classified all borderline or discordant assay results as negative with 100% specificity and 98.4% sensitivity. This particular algorithm costs only one-third the price of the conventional algorithm. These results therefore suggest that high specificity and sensitivity may be obtained without using Western blot and at a considerable reduction in cost. PMID:8397940

  11. A general algorithm for magnetic resonance imaging simulation: a versatile tool to collect information about imaging artefacts and new acquisition techniques.

    PubMed

    Placidi, Giuseppe; Alecci, Marcello; Sotgiu, Antonello

    2002-01-01

    An innovative algorithm for Magnetic Resonance Imaging (MRI) capable of demonstrating the source of various artefacts and driving the hardware and software acquisition process is presented. The algorithm is based on the application of the Bloch equations to the magnetization vector of each point of the simulated object, as requested by the instructions of the MRI pulse sequence. The collected raw data are then used to reconstruct the image of the object. The general structure of the algorithm makes it possible to simulate a great range of imaging situations in order to explain the nature of unwanted artefacts and to study new acquisition techniques. The way the algorithm structures the sequence has also allowed the easy implementation of MRI data acquisition on a commercial general-purpose DSP-based data acquisition board, thus facilitating the comparison between simulated and experimental results. PMID:15460653
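
    The core per-point operation such a simulator applies can be sketched for one voxel: free precession about z at the local off-resonance frequency plus T1/T2 relaxation (pulse-sequence bookkeeping omitted; symbols are standard, values illustrative).

    ```python
    import numpy as np

    def bloch_step(M, dt, omega, T1, T2, M0=1.0):
        # M = [Mx, My, Mz]; omega = local off-resonance angular frequency (rad/s)
        c, s = np.cos(omega * dt), np.sin(omega * dt)
        e1, e2 = np.exp(-dt / T1), np.exp(-dt / T2)
        Mx = (M[0] * c - M[1] * s) * e2          # transverse precession + T2 decay
        My = (M[0] * s + M[1] * c) * e2
        Mz = M0 + (M[2] - M0) * e1               # longitudinal T1 recovery
        return np.array([Mx, My, Mz])
    ```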

  12. A Monolithic Algorithm for High Reynolds Number Fluid-Structure Interaction Simulations

    NASA Astrophysics Data System (ADS)

    Lieberknecht, Erika; Sheldon, Jason; Pitt, Jonathan

    2013-11-01

    Simulations of fluid-structure interaction problems with high Reynolds number flows are typically approached with partitioned algorithms that leverage the robustness of traditional finite volume method based CFD techniques for flows of this nature. However, such partitioned algorithms are subject to many sub-iterations per simulation time-step, which substantially increases the computational cost when a tightly coupled solution is desired. To address this issue, we present a finite element method based monolithic algorithm for fluid-structure interaction problems with high Reynolds number flow. The use of a monolithic algorithm will potentially reduce the computational cost during each time-step, but requires that all of the governing equations be simultaneously cast in a single Arbitrary Lagrangian-Eulerian (ALE) frame of reference and subjected to the same discretization strategy. The formulation for the fluid solution is stabilized by implementing a Streamline-Upwind/Petrov-Galerkin (SUPG) method and a projection method for equal-order interpolation of all of the solution unknowns; numerical and programming details are discussed. Preliminary convergence studies and numerical investigations are presented to demonstrate the algorithm's robustness and performance. The authors acknowledge support for this project from the Applied Research Laboratory Eric Walker Graduate Fellowship Program.

  13. Simulation and experimental design of a new advanced variable step size Incremental Conductance MPPT algorithm for PV systems.

    PubMed

    Loukriz, Abdelhamid; Haddadi, Mourad; Messalti, Sabir

    2016-05-01

    Improving the efficiency of photovoltaic systems through new maximum power point tracking (MPPT) algorithms is the most promising solution due to its low cost and easy implementation without equipment updates. Many MPPT methods with a fixed step size have been developed. However, when atmospheric conditions change rapidly, the performance of conventional algorithms is reduced. In this paper, a new variable step size Incremental Conductance (IC) MPPT algorithm is proposed. Modeling and simulation of the conventional IC method and the proposed method under different operating conditions are presented. The proposed method was developed and tested successfully on a photovoltaic system based on a Flyback converter and a control circuit using a dsPIC30F4011. Both the simulation and the experimental design are described in several respects. A comparative study between the proposed variable step size and the fixed step size IC MPPT method under similar operating conditions is presented. The obtained results demonstrate the efficiency of the proposed MPPT algorithm in terms of MPP tracking speed and accuracy. PMID:26337741
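
    A sketch of the variable-step incremental conductance idea: the duty-cycle step is scaled by |dP/dV|, so tracking is coarse far from the maximum power point and fine near it. The scaling constant, limits, and sign convention below are tuning assumptions, not the paper's values.

    ```python
    import numpy as np

    def inc_cond_step(V, I, V_prev, I_prev, duty, N=0.05, step_max=0.02):
        dV, dI = V - V_prev, I - I_prev
        dP_dV = I + V * dI / dV if dV != 0 else 0.0    # d(V*I)/dV via incremental conductance
        step = np.clip(N * abs(dP_dV), 0.0, step_max)  # variable step size
        if dP_dV > 0:        # left of the MPP: move the operating point right
            duty -= step
        elif dP_dV < 0:      # right of the MPP: move the operating point left
            duty += step
        return float(np.clip(duty, 0.0, 1.0))
    ```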

  14. A Systematic Evaluation of Feature Selection and Classification Algorithms Using Simulated and Real miRNA Sequencing Data.

    PubMed

    Yang, Sheng; Guo, Li; Shao, Fang; Zhao, Yang; Chen, Feng

    2015-01-01

    Sequencing is widely used to discover associations between microRNAs (miRNAs) and diseases. However, the negative binomial distribution (NB) and high dimensionality of data obtained using sequencing can lead to low-power results and low reproducibility. Several statistical learning algorithms have been proposed to address sequencing data, and although evaluation of these methods is essential, such studies are relatively rare. The performance of seven feature selection (FS) algorithms, including baySeq, DESeq, edgeR, the rank sum test, lasso, particle swarm optimistic decision tree, and random forest (RF), was compared by simulation under different conditions based on the difference of the mean, the dispersion parameter of the NB, and the signal to noise ratio. Real data were used to evaluate the performance of RF, logistic regression, and support vector machine. Based on the simulation and real data, we discuss the behaviour of the FS and classification algorithms. The Apriori algorithm identified frequent item sets (mir-133a, mir-133b, mir-183, mir-937, and mir-96) from among the deregulated miRNAs of six datasets from The Cancer Genome Atlas. Taking these findings together and considering computational memory requirements, we propose a strategy that combines edgeR and DESeq for large sample sizes. PMID:26508990

  16. Preliminary Results from SCEC Earthquake Simulator Comparison Project

    NASA Astrophysics Data System (ADS)

    Tullis, T. E.; Barall, M.; Richards-Dinger, K. B.; Ward, S. N.; Heien, E.; Zielke, O.; Pollitz, F. F.; Dieterich, J. H.; Rundle, J. B.; Yikilmaz, M. B.; Turcotte, D. L.; Kellogg, L. H.; Field, E. H.

    2010-12-01

    Earthquake simulators are computer programs that simulate long sequences of earthquakes. If such simulators could be shown to produce synthetic earthquake histories that are good approximations to actual earthquake histories, they could be of great value in helping to anticipate the probabilities of future earthquakes and so could play an important role in helping to make public policy decisions. Consequently it is important to discover how realistic the earthquake histories produced by these simulators are. One way to do this is to compare their behavior with the limited knowledge we have from the instrumental, historic, and paleoseismic records of past earthquakes. Another way, though a slow process for large events, is to use them to make predictions about future earthquake occurrence and to evaluate how well the predictions match what occurs. A final approach is to compare the results of many varied earthquake simulators to determine the extent to which the results depend on the details of the approaches and assumptions made by each simulator. Five independently developed simulators, capable of running simulations on complicated geometries containing multiple faults, are in use by some of the authors of this abstract. Although similar in their overall purpose and design, these simulators differ widely from one another in many important details. They require as input for each fault element a value for the average slip rate as well as a value for friction parameters or stress reduction due to slip. They share the use of the boundary element method to compute stress transfer between elements. None use dynamic stress transfer by seismic waves. A notable difference is the assumption different simulators make about the constitutive properties of the faults. The earthquake simulator comparison project is designed to allow comparisons among the simulators and between the simulators and past earthquake history. The project uses sets of increasingly detailed

  17. Image Artifacts Resulting from Gamma-Ray Tracking Algorithms Used with Compton Imagers

    SciTech Connect

    Seifert, Carolyn E.; He, Zhong

    2005-10-01

    For Compton imaging it is necessary to determine the sequence of gamma-ray interactions in a single detector or array of detectors. This can be done by time-of-flight measurements if the interactions are sufficiently far apart. However, in small detectors the time between interactions can be too small to measure, and other means of gamma-ray sequencing must be used. In this work, several popular sequencing algorithms are reviewed for sequences with two observed events and three or more observed events in the detector. These algorithms can result in poor imaging resolution and introduce artifacts in the backprojection images. The effects of gamma-ray tracking algorithms on Compton imaging are explored in the context of the 4π Compton imager built by the University of Michigan.

  18. A Computational Algorithm to Produce Virtual X-ray and Electron Diffraction Patterns from Atomistic Simulations

    NASA Astrophysics Data System (ADS)

    Coleman, Shawn P.; Sichani, Mehrdad M.; Spearot, Douglas E.

    2014-03-01

    Electron and x-ray diffraction are well-established experimental methods used to explore the atomic scale structure of materials. In this work, a computational algorithm is developed to produce virtual electron and x-ray diffraction patterns directly from atomistic simulations. This algorithm advances beyond previous virtual diffraction methods by using a high-resolution mesh of reciprocal space that eliminates the need for a priori knowledge of the crystal structure being modeled or other assumptions concerning the diffraction conditions. At each point on the reciprocal space mesh, the diffraction intensity is computed via explicit computation of the structure factor equation. To construct virtual selected-area electron diffraction patterns, a hemispherical slice of the reciprocal lattice mesh lying on the surface of the Ewald sphere is isolated and viewed along a specified zone axis. X-ray diffraction line profiles are created by binning the intensity of each reciprocal lattice point by its associated scattering angle, effectively mimicking powder diffraction conditions. The virtual diffraction algorithm is sufficiently generic to be applied to atomistic simulations of any atomic species. In this article, the capability and versatility of the virtual diffraction algorithm is exhibited by presenting findings from atomistic simulations of <100> symmetric tilt Ni grain boundaries, nanocrystalline Cu models, and a heterogeneous interface formed between α-Al2O3 (0001) and γ-Al2O3 (111).
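
    The kernel of such an algorithm is the explicit structure-factor evaluation over a reciprocal-space mesh, sketched below with unit atomic scattering factors (a real implementation weights each atom's term by its species-dependent scattering factor).

    ```python
    import numpy as np

    def diffraction_intensity(positions, k_points):
        # positions: (N, 3) atom coordinates; k_points: (M, 3) reciprocal-space mesh
        phases = 2.0 * np.pi * k_points @ positions.T   # (M, N) phase angles k . r
        F = np.exp(1j * phases).sum(axis=1)             # structure factor F(k)
        return (F * F.conj()).real                      # diffracted intensity |F(k)|^2
    ```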

  19. Comparison of Reconstruction and Control algorithms on the ESO end-to-end simulator OCTOPUS

    NASA Astrophysics Data System (ADS)

    Montilla, I.; Béchet, C.; Lelouarn, M.; Correia, C.; Tallon, M.; Reyes, M.; Thiébaut, É.

    Extremely Large Telescopes are very challenging concerning their Adaptive Optics requirements. Their diameters, the specifications demanded by the science cases for which they are being designed, and the planned use of Extreme Adaptive Optics systems imply a huge increase in the number of degrees of freedom of the deformable mirrors. New reconstruction algorithms must be studied to implement real-time Adaptive Optics control at the required speed. We have studied the performance, applied to the case of the European ELT, of three different algorithms: the matrix-vector multiplication (MVM) algorithm, considered as a reference; the Fractal Iterative Method (FrIM); and the Fourier Transform Reconstructor (FTR). The algorithms have been tested on ESO's OCTOPUS software, which simulates the atmosphere, the deformable mirror, the sensor and the closed-loop control. The MVM is the default reconstruction and control method implemented in OCTOPUS, but it scales as O(N^2) operations per loop, so it is not considered a fast algorithm for wave-front reconstruction and control on an Extremely Large Telescope. The two other methods are the fast algorithms studied in the E-ELT Design Study. The performance, as well as the response in the presence of noise and under various atmospheric conditions, has been compared using a Single Conjugate Adaptive Optics configuration for a 42 m diameter ELT with a total of 5402 actuators. These comparisons, made on a common simulator, highlight the pros and cons of the various methods and give us a better understanding of the type of reconstruction algorithm that an ELT demands.
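
    For reference, the MVM baseline amounts to applying one precomputed matrix to the slope vector every loop, which is where the O(N^2) per-cycle cost comes from. A minimal numpy sketch with placeholder data (the sizes and matrix values here are random for illustration, not an E-ELT calibration):

        import numpy as np

        def mvm_reconstruct(reconstructor, slopes):
            # DM commands = R @ s; with N actuators and ~2N slopes this
            # product costs O(N^2) operations per control cycle.
            return reconstructor @ slopes

        rng = np.random.default_rng(0)
        n_act = 512                                   # the study above uses 5402
        R = rng.standard_normal((n_act, 2 * n_act))   # placeholder reconstructor
        s = rng.standard_normal(2 * n_act)            # placeholder WFS slopes
        commands = mvm_reconstruct(R, s)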

  20. Circuit model of the ITER-like antenna for JET and simulation of its control algorithms

    SciTech Connect

    Durodié, Frédéric; Křivská, Alena; Helou, Walid; Collaboration: EUROfusion Consortium

    2015-12-10

    The ITER-like Antenna (ILA) for JET [1] is a 2 toroidal by 2 poloidal array of Resonant Double Loops (RDL) featuring in-vessel matching capacitors feeding RF current straps in a conjugate-T manner, a low-impedance quarter-wave impedance transformer, a service stub allowing hydraulic actuator and water cooling services to reach the aforementioned capacitors, and a 2nd stage phase-shifter-stub matching circuit allowing the conjugate-T working impedance to be corrected/chosen. Toroidally adjacent RDLs are fed from a 3dB hybrid splitter. It has been operated at 33, 42 and 47 MHz on plasma (2008-2009), while its presently estimated frequency range is from 29 to 49 MHz. At the time of the design (2001-2004) as well as the experiments, the circuit models of the ILA were quite basic. The Topica model of the ILA front face and strap array was relatively crude and failed to correctly represent the poloidal central septum, the Faraday Screen attachment, and the segmented antenna central septum limiter. The ILA matching capacitors, T-junction, Vacuum Transmission Line (VTL) and Service Stubs were represented by lumped circuit elements and simple transmission line models. The assessment of the ILA results carried out to decide on the repair of the ILA identified that achieving routine full array operation requires a better understanding of the RF circuit, a feedback control algorithm for the 2nd stage matching, and tighter calibrations of RF measurements. The paper presents the progress in modelling of the ILA, comprising a more detailed Topica model of the front face for various plasma Scrape Off Layer profiles, a comprehensive HFSS model of the matching capacitors including internal bellows and electrode cylinders, 3D-EM models of the VTL including the vacuum ceramic window and Service Stub, a transmission line model of the 2nd stage matching circuit, and main transmission lines including the 3dB hybrid splitters. A time evolving simulation using the improved circuit model allowed us to design and

  1. Circuit model of the ITER-like antenna for JET and simulation of its control algorithms

    NASA Astrophysics Data System (ADS)

    Durodié, Frédéric; Dumortier, Pierre; Helou, Walid; Křivská, Alena; Lerche, Ernesto

    2015-12-01

    The ITER-like Antenna (ILA) for JET [1] is a 2 toroidal by 2 poloidal array of Resonant Double Loops (RDL) featuring in-vessel matching capacitors feeding RF current straps in a conjugate-T manner, a low-impedance quarter-wave impedance transformer, a service stub allowing hydraulic actuator and water cooling services to reach the aforementioned capacitors, and a 2nd stage phase-shifter-stub matching circuit allowing the conjugate-T working impedance to be corrected/chosen. Toroidally adjacent RDLs are fed from a 3dB hybrid splitter. It has been operated at 33, 42 and 47 MHz on plasma (2008-2009), while its presently estimated frequency range is from 29 to 49 MHz. At the time of the design (2001-2004) as well as the experiments, the circuit models of the ILA were quite basic. The Topica model of the ILA front face and strap array was relatively crude and failed to correctly represent the poloidal central septum, the Faraday Screen attachment, and the segmented antenna central septum limiter. The ILA matching capacitors, T-junction, Vacuum Transmission Line (VTL) and Service Stubs were represented by lumped circuit elements and simple transmission line models. The assessment of the ILA results carried out to decide on the repair of the ILA identified that achieving routine full array operation requires a better understanding of the RF circuit, a feedback control algorithm for the 2nd stage matching, and tighter calibrations of RF measurements. The paper presents the progress in modelling of the ILA, comprising a more detailed Topica model of the front face for various plasma Scrape Off Layer profiles, a comprehensive HFSS model of the matching capacitors including internal bellows and electrode cylinders, 3D-EM models of the VTL including the vacuum ceramic window and Service Stub, a transmission line model of the 2nd stage matching circuit, and main transmission lines including the 3dB hybrid splitters. A time evolving simulation using the improved circuit model allowed us to design and

  2. Simulation and experimental tests on active mass damper control system based on Model Reference Adaptive Control algorithm

    NASA Astrophysics Data System (ADS)

    Tu, Jianwei; Lin, Xiaofeng; Tu, Bo; Xu, Jiayun; Tan, Dongmei

    2014-09-01

    During sudden natural disasters (such as earthquakes or typhoons), the active mass damper (AMD) system can optimally reduce the structural vibration response; it is a frequently applied but less mature vibration-reducing technology for the wind and earthquake resistance of high-rise buildings. As the core of this technology, the selection of the control algorithm is extremely challenging due to the uncertainty of structural parameters and the randomness of external loads. Model Reference Adaptive Control (MRAC) based on the Minimal Controller Synthesis (MCS) algorithm does not need to know the structural parameters in advance, which gives it special advantages under real-time changes of system parameters, uncertain external disturbances, and nonlinear dynamic systems. This paper studies the application of MRAC to the AMD active control system. The principle of the MRAC algorithm is introduced, and the dynamic model and the motion differential equation of the AMD system based on MRAC are established under seismic excitation. Simulation analyses of linear structures and of nonlinear structures with degenerated stiffness are performed with the AMD system controlled by the MRAC algorithm. To verify the validity of MRAC over the AMD system, experimental tests are carried out on a linear structure and a structure with variable stiffness, with the AMD system under seismic excitation on a shake table, and the experimental results are compared with those of the traditional pole assignment control algorithm.
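
    To make the adaptive principle concrete, the sketch below implements the classic first-order MRAC textbook example with the MIT adaptation rule: a controller gain is adjusted online from the model-following error, with no prior knowledge of the plant gain. This is a generic illustration of MRAC, not the MCS law or the AMD structural model used in the paper.

        def simulate_mrac_gain(a=1.0, k=2.0, k0=1.0, gamma=5.0,
                               dt=1e-3, t_end=20.0):
            # Plant dy/dt = -a*y + k*u with unknown gain k; reference model
            # dym/dt = -a*ym + k0*r; control u = theta*r with adaptive theta.
            y = ym = theta = 0.0
            for i in range(int(t_end / dt)):
                r = 1.0 if (i * dt) % 4.0 < 2.0 else -1.0  # square-wave command
                u = theta * r
                y += dt * (-a * y + k * u)        # plant (parameters unknown
                ym += dt * (-a * ym + k0 * r)     # to the controller)
                e = y - ym                        # model-following error
                theta += dt * (-gamma * e * ym)   # MIT adaptation rule
            return theta                          # approaches k0 / k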

  3. DSMC moving-boundary algorithms for simulating MEMS geometries with opening and closing gaps.

    SciTech Connect

    Gallis, Michail A.; Rader, Daniel John; Torczynski, John Robert

    2010-06-01

    Moving-boundary algorithms for the Direct Simulation Monte Carlo (DSMC) method are investigated for a microbeam that moves toward and away from a parallel substrate. The simpler but analogous one-dimensional situation of a piston moving between two parallel walls is investigated using two moving-boundary algorithms. In the first, molecules are reflected rigorously from the moving piston by performing the reflections in the piston frame of reference. In the second, molecules are reflected approximately from the moving piston by first moving the piston, then moving all molecules, and reflecting them from the piston at its new or old position.
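
    A one-dimensional sketch of the rigorous variant: find the molecule-piston impact time within the step, reflect specularly in the piston frame (which gives the lab-frame post-collision velocity 2u - v), and complete the step with the new velocity. This assumes a piston at the low-x end moving at constant speed u during the step; it is an illustration, not the authors' production code.

        def advance_with_piston(x, v, u, x_piston, dt):
            # Free flight first; check whether the molecule ends up
            # behind the piston's end-of-step position.
            x_new = x + v * dt
            if x_new < x_piston + u * dt:
                # Impact time from x + v*t = x_piston + u*t.
                t_hit = (x - x_piston) / (u - v)
                v = 2.0 * u - v             # specular reflection, piston frame
                x_new = x_piston + u * t_hit + v * (dt - t_hit)
            return x_new, v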

  4. Hyper-X Stage Separation: Simulation Development and Results

    NASA Technical Reports Server (NTRS)

    Reubush, David E.; Martin, John G.; Robinson, Jeffrey S.; Bose, David M.; Strovers, Brian K.

    2001-01-01

    This paper provides an overview of stage separation simulation development and results for NASA's Hyper-X program, a focused hypersonic technology effort designed to move hypersonic, airbreathing vehicle technology from the laboratory environment to the flight environment. This paper presents an account of the development of the current 14-degree-of-freedom stage separation simulation tool (SepSim) and results from use of the tool in a Monte Carlo analysis to evaluate the risk of failure for the separation event. The Monte Carlo analysis shows that there is only a very small risk of failure in the separation event.

  5. Advancements of in-flight mass moment of inertia and structural deflection algorithms for satellite attitude simulators

    NASA Astrophysics Data System (ADS)

    Wright, Jonathan W.

    Experimental satellite attitude simulators have long been used to test and analyze control algorithms in order to drive down risk before implementation on an operational satellite. Ideally, the dynamic response of a terrestrial-based experimental satellite attitude simulator would be similar to that of an on-orbit satellite. Unfortunately, gravitational disturbance torques and poorly characterized moments of inertia introduce uncertainty into the system dynamics, leading to questionable attitude control algorithm experimental results. This research consists of three distinct but related contributions to the field of developing robust satellite attitude simulators. In the first part of this research, existing approaches to estimating mass moments and products of inertia are evaluated, followed by the proposal and evaluation of a new approach that increases both the accuracy and precision of these estimates using typical on-board satellite sensors. Next, in order to better simulate the micro-torque environment of space, a new approach to mass balancing a satellite attitude simulator is presented, experimentally evaluated, and verified. Finally, in the third area of research, we capitalize on the platform improvements to analyze a control moment gyroscope (CMG) singularity avoidance steering law. Several successful experiments were conducted with the CMG array at near-singular configurations. An evaluation process was implemented to verify that the platform remained near the desired test momentum, showing that the first two components of this research were effective in allowing us to conduct singularity avoidance experiments in a representative space-like test environment.

  6. Automated Algorithms for Quantum-Level Accuracy in Atomistic Simulations: LDRD Final Report.

    SciTech Connect

    Thompson, Aidan P.; Schultz, Peter A.; Crozier, Paul; Moore, Stan Gerald; Swiler, Laura Painton; Stephens, John Adam; Trott, Christian Robert; Foiles, Stephen M.; Tucker, Garritt J.

    2014-09-01

    This report summarizes the results of LDRD project 12-0395, titled "Automated Algorithms for Quantum-level Accuracy in Atomistic Simulations." During the course of this LDRD, we have developed an interatomic potential for solids and liquids called the Spectral Neighbor Analysis Potential (SNAP). The SNAP potential has a very general form and uses machine-learning techniques to reproduce the energies, forces, and stress tensors of a large set of small configurations of atoms, which are obtained using high-accuracy quantum electronic structure (QM) calculations. The local environment of each atom is characterized by a set of bispectrum components of the local neighbor density projected onto a basis of hyperspherical harmonics in four dimensions. The SNAP coefficients are determined using weighted least-squares linear regression against the full QM training set. This allows the SNAP potential to be fit in a robust, automated manner to large QM data sets using many bispectrum components. The calculation of the bispectrum components and the SNAP potential are implemented in the LAMMPS parallel molecular dynamics code. Global optimization methods in the DAKOTA software package are used to seek out good choices of hyperparameters that define the overall structure of the SNAP potential. FitSnap.py, a Python-based software package interfacing to both LAMMPS and DAKOTA, is used to formulate the linear regression problem, solve it, and analyze the accuracy of the resultant SNAP potential. We describe a SNAP potential for tantalum that accurately reproduces a variety of solid and liquid properties. Most significantly, in contrast to existing tantalum potentials, SNAP correctly predicts the Peierls barrier for screw dislocation motion. We also present results from SNAP potentials generated for indium phosphide (InP) and silica (SiO2). We describe efficient algorithms for calculating SNAP forces and energies in molecular dynamics simulations using massively parallel
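
    The regression step at the heart of this fitting procedure is standard weighted linear least squares over the bispectrum descriptors. A minimal numpy sketch (the array names are ours; FitSnap.py's actual interface is not reproduced here):

        import numpy as np

        def fit_snap_coefficients(A, y, weights):
            # Solve beta = argmin || W^(1/2) (A @ beta - y) ||^2, where each
            # row of A holds bispectrum components for one training quantity
            # and y holds the corresponding QM energies/forces/stresses.
            w = np.sqrt(np.asarray(weights, dtype=float))
            beta, *_ = np.linalg.lstsq(A * w[:, None], y * w, rcond=None)
            return beta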

  7. Algorithm for simulation of quantum many-body dynamics using dynamical coarse-graining

    NASA Astrophysics Data System (ADS)

    Khasin, M.; Kosloff, R.

    2010-04-01

    An algorithm for simulation of quantum many-body dynamics having su(2) spectrum-generating algebra is developed. The algorithm is based on the idea of dynamical coarse-graining. The original unitary dynamics of the target observables—the elements of the spectrum-generating algebra—is simulated by a surrogate open-system dynamics, which can be interpreted as weak measurement of the target observables, performed on the evolving system. The open-system state can be represented by a mixture of pure states, localized in the phase space. The localization reduces the scaling of the computational resources with the Hilbert-space dimension n by a factor of n^(3/2)(ln n)^(-1) compared to conventional sparse-matrix methods. The guidelines for the choice of parameters for the simulation are presented and the scaling of the computational resources with the Hilbert-space dimension of the system is estimated. The algorithm is applied to the simulation of the dynamics of systems of 2×10^4 and 2×10^6 cold atoms in a double-well trap, described by the two-site Bose-Hubbard model.

  8. Binomial tau-leap spatial stochastic simulation algorithm for applications in chemical kinetics.

    PubMed

    Marquez-Lago, Tatiana T; Burrage, Kevin

    2007-09-14

    In cell biology, cell signaling pathway problems are often tackled with deterministic temporal models, well-mixed stochastic simulators, and/or hybrid methods. But, in fact, three-dimensional stochastic spatial modeling of reactions happening inside the cell is needed in order to fully understand these cell signaling pathways, because noise effects, low molecular concentrations, and spatial heterogeneity can all affect the cellular dynamics. However, there are ways in which important effects can be accounted for without going to the extent of using highly resolved spatial simulators (such as single-particle software), hence reducing the overall computation time significantly. We present a new coarse-grained modified version of the next subvolume method that allows the user to consider both diffusion and reaction events over relatively long simulation time spans compared with the original method and other commonly used fully stochastic computational methods. Benchmarking of the simulation algorithm was performed through comparison with the next subvolume method and well-mixed models (MATLAB), as well as stochastic particle reaction and transport simulations (CHEMCELL, Sandia National Laboratories). Additionally, we construct a model based on a set of chemical reactions in the epidermal growth factor receptor pathway. For this particular application and a bistable chemical system example, we analyze and outline the advantages of the presented binomial tau-leap spatial stochastic simulation algorithm, in terms of efficiency and accuracy, in scenarios of both molecular homogeneity and heterogeneity. PMID:17867731
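
    The defining ingredient of the binomial tau-leap is that a channel's firing count within a leap is drawn from a binomial distribution whose number of trials is capped by the available reactant copy numbers, so populations can never go negative. A minimal single-channel sketch (the full method additionally handles channel coupling and the spatial subvolumes):

        import numpy as np

        rng = np.random.default_rng()

        def binomial_leap_firings(a_j, tau, k_max):
            # k_max = largest number of firings the reactants allow,
            # e.g. min(#A, #B) for A + B -> C. The mean is ~ a_j * tau,
            # as in the Poisson leap, but the support is bounded.
            if k_max <= 0:
                return 0
            p = min(1.0, a_j * tau / k_max)
            return rng.binomial(k_max, p)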

  9. Evaluation and optimization of lidar temperature analysis algorithms using simulated data

    NASA Technical Reports Server (NTRS)

    Leblanc, Thierry; McDermid, I. Stuart; Hauchecorne, Alain; Keckhut, Philippe

    1998-01-01

    The middle atmosphere (20 to 90 km altitude) has received increasing interest from the scientific community during the last decades, especially since such problems as polar ozone depletion and climatic change have become so important. Temperature profiles have been obtained in this region using a variety of satellite-, rocket-, and balloon-borne instruments as well as some ground-based systems. One of the more promising of these instruments, especially for long-term high-resolution measurements, is the lidar. Measurements of laser radiation Rayleigh-backscattered, or Raman-scattered, by atmospheric air molecules can be used to determine the relative air density profile and subsequently the temperature profile, if it is assumed that the atmosphere is in hydrostatic equilibrium and follows the ideal gas law. The high vertical and spatial resolution makes the lidar a well-adapted instrument for the study of many middle atmospheric processes and phenomena as well as for the evaluation and validation of temperature measurements from satellites, such as the Upper Atmosphere Research Satellite (UARS). In the Network for Detection of Stratospheric Change (NDSC), lidar is the core instrument for measuring middle atmosphere temperature profiles. Using the best lidar analysis algorithm possible is therefore of crucial importance. In this work, the JPL and CNRS/SA lidar analysis software packages were evaluated. The results of this evaluation allowed the programs to be corrected and optimized, and new production software versions were produced. First, a brief description of the lidar technique and the method used to simulate lidar raw-data profiles from a given temperature profile is presented. The evaluation and optimization of the JPL and CNRS/SA algorithms are then discussed.
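
    The density-to-temperature step that such analysis codes implement can be written compactly: seed a temperature at the top of the profile, then integrate the hydrostatic equation downward through the relative-density profile, with the ideal gas law converting pressure to temperature; the unknown density scale factor cancels. A minimal sketch assuming constant gravity:

        import numpy as np

        R_GAS = 8.314462   # J mol^-1 K^-1
        M_AIR = 0.0289644  # kg mol^-1
        G0 = 9.80665       # m s^-2 (altitude dependence ignored here)

        def temperature_from_density(z, rho_rel, t_top):
            # z ascending [m]; rho_rel: relative density (any scale);
            # t_top: seed temperature [K] at z[-1], e.g. from a model.
            rho = np.asarray(rho_rel, dtype=float)
            t = np.empty_like(rho)
            t[-1] = t_top
            for i in range(len(z) - 2, -1, -1):
                dz = z[i + 1] - z[i]
                # trapezoidal integral of rho*g over the layer
                layer = 0.5 * (rho[i] + rho[i + 1]) * G0 * dz
                t[i] = (rho[i + 1] * t[i + 1]
                        + (M_AIR / R_GAS) * layer) / rho[i]
            return t

    Because the seed value's weight decays with the density ratio as the integration proceeds downward, the retrieval is typically initialized well above the altitudes of interest.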

  10. Use of a simulated annealing algorithm to fit compartmental models with an application to fractal pharmacokinetics.

    PubMed

    Marsh, Rebeccah E; Riauka, Terence A; McQuarrie, Steve A

    2007-01-01

    Increasingly, fractals are being incorporated into pharmacokinetic models to describe transport and chemical kinetic processes occurring in confined and heterogeneous spaces. However, fractal compartmental models lead to differential equations with power-law time-dependent kinetic rate coefficients that currently are not accommodated by common commercial software programs. This paper describes a parameter optimization method for fitting individual pharmacokinetic curves based on a simulated annealing (SA) algorithm, which always converged towards the global minimum and was independent of the initial parameter values and parameter bounds. In a comparison using a classical compartmental model, similar fits by the Gauss-Newton and Nelder-Mead simplex algorithms required stringent initial estimates and ranges for the model parameters. The SA algorithm is ideal for fitting a wide variety of pharmacokinetic models to clinical data, especially those for which there is weak prior knowledge of the parameter values, such as the fractal models. PMID:17706176
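
    A minimal sketch of such an SA fitting loop: random perturbation of the parameter vector, Metropolis acceptance of uphill moves with probability exp(-dE/T), and geometric cooling. The schedule and step size are illustrative, not the authors' settings.

        import numpy as np

        rng = np.random.default_rng(1)

        def fit_sa(cost, theta0, n_iter=20000, t0=1.0,
                   cooling=0.9995, step=0.05):
            # cost: e.g. sum of squared residuals between the model
            # curve and the measured concentration-time data.
            theta = np.asarray(theta0, dtype=float)
            e = cost(theta)
            best, e_best, t = theta.copy(), e, t0
            for _ in range(n_iter):
                cand = theta + step * rng.standard_normal(theta.shape)
                e_cand = cost(cand)
                if e_cand < e or rng.random() < np.exp(-(e_cand - e) / t):
                    theta, e = cand, e_cand
                    if e < e_best:
                        best, e_best = theta.copy(), e
                t *= cooling            # geometric cooling schedule
            return best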

  11. Advanced Thermal Simulator Testing: Thermal Analysis and Test Results

    SciTech Connect

    Bragg-Sitton, Shannon M.; Dickens, Ricky; Dixon, David; Reid, Robert; Adams, Mike; Davis, Joe

    2008-01-21

    Work at the NASA Marshall Space Flight Center seeks to develop high fidelity, electrically heated thermal simulators that represent fuel elements in a nuclear reactor design to support non-nuclear testing applicable to the potential development of a space nuclear power or propulsion system. Comparison between the fuel pins and thermal simulators is made at the outer fuel clad surface, which corresponds to the outer sheath surface in the thermal simulator. The thermal simulators that are currently being tested correspond to a liquid metal cooled reactor design that could be applied for Lunar surface power. These simulators are designed to meet the geometric and power requirements of a proposed surface power reactor design, accommodate testing of various axial power profiles, and incorporate imbedded instrumentation. This paper reports the results of thermal simulator analysis and testing in a bare element configuration, which does not incorporate active heat removal, and testing in a water-cooled calorimeter designed to mimic the heat removal that would be experienced in a reactor core.

  12. Advanced Thermal Simulator Testing: Thermal Analysis and Test Results

    NASA Technical Reports Server (NTRS)

    Bragg-Sitton, Shannon M.; Dickens, Ricky; Dixon, David; Reid, Robert; Adams, Mike; Davis, Joe

    2008-01-01

    Work at the NASA Marshall Space Flight Center seeks to develop high fidelity, electrically heated thermal simulators that represent fuel elements in a nuclear reactor design to support non-nuclear testing applicable to the development of a space nuclear power or propulsion system. Comparison between the fuel pins and thermal simulators is made at the outer fuel clad surface, which corresponds to the outer sheath surface in the thermal simulator. The thermal simulators that are currently being tested correspond to a SNAP derivative reactor design that could be applied for Lunar surface power. These simulators are designed to meet the geometric and power requirements of a proposed surface power reactor design, accommodate testing of various axial power profiles, and incorporate imbedded instrumentation. This paper reports the results of thermal simulator analysis and testing in a bare element configuration, which does not incorporate active heat removal, and testing in a water-cooled calorimeter designed to mimic the heat removal that would be experienced in a reactor core.

  13. A comparison of various algorithms to extract Magic Formula tyre model coefficients for vehicle dynamics simulations

    NASA Astrophysics Data System (ADS)

    Vijay Alagappan, A.; Narasimha Rao, K. V.; Krishna Kumar, R.

    2015-02-01

    Tyre models are a prerequisite for any vehicle dynamics simulation. Tyre models range from the simplest mathematical models that consider only the cornering stiffness to a complex set of formulae. Among all the steady-state tyre models that are in use today, the Magic Formula tyre model is unique and the most popular. Though the Magic Formula tyre model is widely used, obtaining the model coefficients from either experimental or simulation data is not straightforward, due to its nonlinear nature and the presence of a large number of coefficients. A common procedure used for this extraction is least-squares minimisation, which requires considerable experience for the initial guesses. Various researchers have tried different algorithms, namely gradient and Newton-based methods, differential evolution, artificial neural networks, etc. The issues involved in all these algorithms are the setting of bounds or constraints, the sensitivity of the parameters, the features of the input data (such as the number of points and noisy data), the experimental procedure used (such as a slip angle sweep or the tyre measurement (TIME) procedure), etc. The extracted Magic Formula coefficients are affected by these variants. This paper highlights the issues that are commonly encountered in obtaining these coefficients with different algorithms, namely least-squares minimisation using trust region algorithms, Nelder-Mead simplex, pattern search, differential evolution, particle swarm optimisation, cuckoo search, etc. A key observation is that not all the algorithms give the same Magic Formula coefficients for a given data set. The nature of the input data and the type of algorithm decide the set of Magic Formula tyre model coefficients.
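
    As an illustration of the extraction problem, the sketch below fits the basic Magic Formula y = D sin(C arctan(Bx - E(Bx - arctan(Bx)))) to synthetic lateral-force data by least squares; as the text notes, the outcome can hinge on the initial guess p0. All numbers here are illustrative.

        import numpy as np
        from scipy.optimize import curve_fit

        def magic_formula(x, B, C, D, E):
            bx = B * x
            return D * np.sin(C * np.arctan(bx - E * (bx - np.arctan(bx))))

        slip = np.linspace(-0.3, 0.3, 200)                    # slip angle [rad]
        force = magic_formula(slip, 10.0, 1.6, 4000.0, 0.9)   # synthetic "data"
        force += np.random.default_rng(0).normal(0.0, 40.0, slip.size)

        popt, _ = curve_fit(magic_formula, slip, force,
                            p0=[8.0, 1.4, 3500.0, 0.5])       # initial guess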

  14. Initial Evaluations of LoC Prediction Algorithms Using the NASA Vertical Motion Simulator

    NASA Technical Reports Server (NTRS)

    Krishnakumar, Kalmanje; Stepanyan, Vahram; Barlow, Jonathan; Hardy, Gordon; Dorais, Greg; Poolla, Chaitanya; Reardon, Scott; Soloway, Donald

    2014-01-01

    Flying near the edge of the safe operating envelope is an inherently unsafe proposition. Edge of the envelope here implies that small changes or disturbances in system state or system dynamics can take the system out of the safe envelope in a short time and could result in loss-of-control events. This study evaluated approaches to predicting loss-of-control safety margins as the aircraft gets closer to the edge of the safe operating envelope. The goal of the approach is to provide the pilot aural, visual, and tactile cues focused on maintaining the pilot's control action within predicted loss-of-control boundaries. Our predictive architecture combines quantitative loss-of-control boundaries, an adaptive prediction method to estimate in real-time Markov model parameters and associated stability margins, and a real-time data-based predictive control margins estimation algorithm. The combined architecture is applied to a nonlinear transport class aircraft. Evaluations of various feedback cues using both test and commercial pilots in the NASA Ames Vertical Motion-base Simulator (VMS) were conducted in the summer of 2013. The paper presents results of this evaluation focused on effectiveness of these approaches and the cues in preventing the pilots from entering a loss-of-control event.

  15. Results from Binary Black Hole Simulations in Astrophysics Applications

    NASA Technical Reports Server (NTRS)

    Baker, John G.

    2007-01-01

    Present and planned gravitational wave observatories are opening a new astronomical window on the sky. A key source of gravitational waves is the merger of two black holes. The Laser Interferometer Space Antenna (LISA), in particular, is expected to observe these events with signal-to-noise ratios in the thousands. To fully reap the scientific benefits of these observations requires a detailed understanding, based on numerical simulations, of the predictions of General Relativity for the waveform signals. New techniques for simulating binary black hole mergers, introduced two years ago, have led to dramatic advances in applied numerical simulation work. Over the last two years, numerical relativity researchers have made tremendous strides in understanding the late stages of binary black hole mergers. Simulations have been applied to test much of the basic physics of binary black hole interactions, showing robust results for merger waveform predictions, and illuminating such phenomena as spin precession. Calculations have shown that merging systems can be kicked at up to 2500 km/s by the thrust from asymmetric emission. Recently, long-lasting simulations of ten or more orbits allow tests of post-Newtonian (PN) approximation results for radiation from the last orbits of the binary's inspiral. Already, analytic waveform models based on PN techniques with incorporated information from numerical simulations may be adequate for observations with current ground-based observatories. As new advances in simulations continue to rapidly improve our theoretical understanding of these systems, it seems certain that high-precision predictions will be available in time for LISA and other advanced ground-based instruments, with which future gravitational wave observatories are expected to make precision measurements.

  16. Results from the New IGS Time Scale Algorithm (version 2.0)

    NASA Astrophysics Data System (ADS)

    Senior, K.; Ray, J.

    2009-12-01

    Since 2004 the IGS Rapid and Final clock products have been aligned to a highly stable time scale derived from a weighted ensemble of clocks in the IGS network. The time scale is driven mostly by Hydrogen Maser ground clocks, though the GPS satellite clocks also carry non-negligible weight, resulting in a time scale having a one-day frequency stability of about 1E-15. However, because of the relatively simple weighting scheme used in the time scale algorithm, and because the scale is aligned to UTC by steering it to GPS Time, the resulting stability beyond several days suffers. The authors present results from the new 2.0 version of the IGS time scale, highlighting the improvements to the algorithm and new modeling considerations, as well as the improved time scale stability.

  17. Simulation of diurnal thermal energy storage systems: Preliminary results

    NASA Astrophysics Data System (ADS)

    Katipamula, S.; Somasundaram, S.; Williams, H. R.

    1994-12-01

    This report describes the results of a simulation of thermal energy storage (TES) integrated with a simple-cycle gas turbine cogeneration system. Integrating TES with cogeneration can serve the electrical and thermal loads independently while firing all fuel in the gas turbine. The detailed engineering and economic feasibility of diurnal TES systems integrated with cogeneration systems has been described in two previous PNL reports. The objective of this study was to lay the groundwork for optimization of TES system designs using a simulation tool called TRNSYS (TRaNsient SYstem Simulation). TRNSYS is a transient simulation program with a sequential-modular structure developed at the Solar Energy Laboratory, University of Wisconsin-Madison. The two TES systems selected for the base-case simulations were: (1) a one-tank storage model to represent the oil/rock TES system; and (2) a two-tank storage model to represent the molten nitrate salt TES system. Results of the study clearly indicate that an engineering optimization of the TES system using TRNSYS is possible. The one-tank stratified oil/rock storage model described here is a good starting point for parametric studies of a TES system. Further developments to the TRNSYS library of available models (economizer, evaporator, gas turbine, etc.) are recommended so that the phase-change processes are accurately treated.

  18. Simulating lightning into the RAMS model: implementation and preliminary results

    NASA Astrophysics Data System (ADS)

    Federico, S.; Avolio, E.; Petracca, M.; Panegrossi, G.; Sanò, P.; Casella, D.; Dietrich, S.

    2014-05-01

    This paper shows the results of a tailored version of a previously published methodology, designed to simulate lightning activity, implemented in the Regional Atmospheric Modeling System (RAMS). The method gives the flash density at the resolution of the RAMS grid scale, allowing for a detailed analysis of the evolution of simulated lightning activity. The system is applied in detail to two case studies that occurred over the Lazio Region, in Central Italy. Simulations are compared with the lightning activity detected by the LINET network. The cases refer to two thunderstorms of different intensity. Results show that the model predicts both cases reasonably well and that the lightning activity is well reproduced, especially for the most intense case. However, there are errors in the timing and positioning of the convection, whose magnitude depends on the case study, and these errors are mirrored in the timing and positioning of the lightning distribution. To assess the performance of the methodology objectively, standard scores are presented for four additional case studies. Scores show the ability of the methodology to simulate the daily lightning activity for different spatial scales and for two different minimum thresholds of flash number density. The performance decreases at finer spatial scales and for higher thresholds. The comparison of simulated and observed lightning activity is an immediate and powerful tool to assess the model's ability to reproduce the intensity and the evolution of the convection. This shows the importance of the use of computationally efficient lightning schemes, such as the one described in this paper, in forecast models.

  19. A Simulated Annealing Algorithm for the Optimization of Multistage Depressed Collector Efficiency

    NASA Technical Reports Server (NTRS)

    Vaden, Karl R.; Wilson, Jeffrey D.; Bulson, Brian A.

    2002-01-01

    The microwave traveling wave tube amplifier (TWTA) is widely used as a high-power transmitting source for space and airborne communications. One critical factor in designing a TWTA is the overall efficiency. However, overall efficiency is highly dependent upon collector efficiency; so collector design is critical to the performance of a TWTA. Therefore, NASA Glenn Research Center has developed an optimization algorithm based on Simulated Annealing to quickly design highly efficient multi-stage depressed collectors (MDC).

  20. A Fourier analysis for a fast simulation algorithm. [for switching converters

    NASA Technical Reports Server (NTRS)

    King, Roger J.

    1988-01-01

    This paper presents a derivation of compact expressions for the Fourier series analysis of the steady-state solution of a typical switching converter. The modeling procedure for the simulation and the steady-state solution is described, and some desirable traits for its matrix exponential subroutine are discussed. The Fourier analysis algorithm was tested on a phase-controlled parallel-loaded resonant converter, providing an experimental confirmation.

  1. A Comprehensive Study of Three Delay Compensation Algorithms for Flight Simulators

    NASA Technical Reports Server (NTRS)

    Guo, Liwen; Cardullo, Frank M.; Houck, Jacob A.; Kelly, Lon C.; Wolters, Thomas E.

    2005-01-01

    This paper summarizes a comprehensive study of three predictors used for compensating the transport delay in a flight simulator: the McFarland, Adaptive and State Space predictors. The paper presents proof that the stochastic approximation algorithm can achieve the best compensation among all four adaptive predictors, and investigates in depth the relationship between the state space predictor's compensation quality and its reference model. Piloted simulation tests show that the adaptive predictor and the state space predictor achieve better compensation of transport delay than the McFarland predictor.

  2. Effective algorithm for ray-tracing simulations of lobster eye and similar reflective optical systems

    NASA Astrophysics Data System (ADS)

    Tichý, Vladimír; Hudec, René; Němcová, Šárka

    2016-06-01

    The algorithm presented is intended mainly for lobster eye optics. This type of optics (and some similar types) allows for a simplification of the classical ray-tracing procedure, which requires a great many rays to be simulated. The method presented simulates only a few rays; therefore it is extremely efficient. Moreover, to simplify the equations, a specific mathematical formalism is used. Only a few simple equations are needed; therefore the program code can be simple as well. The paper also outlines how to apply the method to some other reflective optical systems.

  3. Broadband diffusion metasurface based on a single anisotropic element and optimized by the Simulated Annealing algorithm.

    PubMed

    Zhao, Yi; Cao, Xiangyu; Gao, Jun; Sun, Yu; Yang, Huanhuan; Liu, Xiao; Zhou, Yulong; Han, Tong; Chen, Wei

    2016-01-01

    We propose a new strategy to design broadband and wide-angle diffusion metasurfaces. An anisotropic structure which has opposite phases under x- and y-polarized incidence is employed as the "0" and "1" elements, based on the concept of coding metamaterials. To obtain a uniform backward scattering under normal incidence, a Simulated Annealing algorithm is utilized in this paper to calculate the optimal layout. The proposed method provides an efficient way to design a diffusion metasurface with a simple structure, which has been proved by both simulations and measurements. PMID:27034110

  4. Broadband diffusion metasurface based on a single anisotropic element and optimized by the Simulated Annealing algorithm

    NASA Astrophysics Data System (ADS)

    Zhao, Yi; Cao, Xiangyu; Gao, Jun; Sun, Yu; Yang, Huanhuan; Liu, Xiao; Zhou, Yulong; Han, Tong; Chen, Wei

    2016-04-01

    We propose a new strategy to design broadband and wide-angle diffusion metasurfaces. An anisotropic structure which has opposite phases under x- and y-polarized incidence is employed as the “0” and “1” elements, based on the concept of coding metamaterials. To obtain a uniform backward scattering under normal incidence, a Simulated Annealing algorithm is utilized in this paper to calculate the optimal layout. The proposed method provides an efficient way to design a diffusion metasurface with a simple structure, which has been proved by both simulations and measurements.

  5. Broadband diffusion metasurface based on a single anisotropic element and optimized by the Simulated Annealing algorithm

    PubMed Central

    Zhao, Yi; Cao, Xiangyu; Gao, Jun; Sun, Yu; Yang, Huanhuan; Liu, Xiao; Zhou, Yulong; Han, Tong; Chen, Wei

    2016-01-01

    We propose a new strategy to design broadband and wide-angle diffusion metasurfaces. An anisotropic structure which has opposite phases under x- and y-polarized incidence is employed as the “0” and “1” elements, based on the concept of coding metamaterials. To obtain a uniform backward scattering under normal incidence, a Simulated Annealing algorithm is utilized in this paper to calculate the optimal layout. The proposed method provides an efficient way to design a diffusion metasurface with a simple structure, which has been proved by both simulations and measurements. PMID:27034110

  6. Effective algorithm for ray-tracing simulations of lobster eye and similar reflective optical systems

    NASA Astrophysics Data System (ADS)

    Tichý, Vladimír; Hudec, René; Němcová, Šárka

    2016-03-01

    The algorithm presented is intended mainly for lobster eye optics. This type of optics (and some similar types) allows for a simplification of the classical ray-tracing procedure, which requires a great many rays to be simulated. The method presented simulates only a few rays; therefore it is extremely efficient. Moreover, to simplify the equations, a specific mathematical formalism is used. Only a few simple equations are needed; therefore the program code can be simple as well. The paper also outlines how to apply the method to some other reflective optical systems.

  7. An algorithm for generating nonuniformly space correlated samples for simulating a nonselective Rayleigh fading channel

    NASA Astrophysics Data System (ADS)

    Shein, Norman P.

    A nonselective Rayleigh fading channel model using a time-variant complex multiplier z(t) is considered. Performing a Monte Carlo simulation of this channel requires samples of z(t) with the appropriate correlation (fading power spectrum). For the important f^(-4) spectrum, there is a simple digital implementation that generates uniformly spaced samples. However, many communications systems have faded signals which appear only intermittently at the receiver. Nonuniformly spaced samples are better suited to a simulation of this situation. The author presents an algorithm for efficiently generating nonuniformly spaced correlated samples which have a specified f^(-4) power spectrum.
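
    The brute-force baseline for this task is instructive: evaluate the autocorrelation at every pair of (nonuniform) sample instants, factor the resulting covariance matrix, and colour complex white Gaussian noise with the factor. The sketch below shows that baseline; its O(n^3) factorisation is precisely the cost an efficient generator avoids, and the autocorrelation function matching the f^(-4) spectrum must be supplied by the caller.

        import numpy as np

        def correlated_samples(times, acf, rng=None):
            # times: (n,) array of arbitrary sample instants; acf: callable
            # tau -> autocorrelation of the desired fading process.
            rng = rng or np.random.default_rng()
            n = len(times)
            tau = times[:, None] - times[None, :]
            cov = acf(tau)
            L = np.linalg.cholesky(cov + 1e-12 * np.eye(n))  # jitter for safety
            w = rng.standard_normal(n) + 1j * rng.standard_normal(n)
            return L @ (w / np.sqrt(2.0))   # unit-variance complex Gaussian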

  8. Experiment vs simulation RT WFNDEC 2014 benchmark: CIVA results

    SciTech Connect

    Tisseur, D.; Costin, M.; Rattoni, B.; Vienne, C.; Vabre, A.; Cattiaux, G.; Sollier, T.

    2015-03-31

    The French Alternative Energies and Atomic Energy Commission (CEA) has for years been developing the CIVA software dedicated to the simulation of NDE techniques such as Radiographic Testing (RT). RT modelling is achieved in CIVA using a combination of a deterministic approach based on ray tracing for transmission beam simulation and a Monte Carlo model for the scattered beam computation. Furthermore, CIVA includes various detector models, in particular common X-ray films and photostimulable phosphor plates. This communication presents the results obtained with the configurations proposed in the World Federation of NDE Centers (WFNDEC) 2014 RT modelling benchmark with the RT models implemented in the CIVA software.

  9. Experiment vs simulation RT WFNDEC 2014 benchmark: CIVA results

    NASA Astrophysics Data System (ADS)

    Tisseur, D.; Costin, M.; Rattoni, B.; Vienne, C.; Vabre, A.; Cattiaux, G.; Sollier, T.

    2015-03-01

    The French Alternative Energies and Atomic Energy Commission (CEA) has for years been developing the CIVA software dedicated to the simulation of NDE techniques such as Radiographic Testing (RT). RT modelling is achieved in CIVA using a combination of a deterministic approach based on ray tracing for transmission beam simulation and a Monte Carlo model for the scattered beam computation. Furthermore, CIVA includes various detector models, in particular common X-ray films and photostimulable phosphor plates. This communication presents the results obtained with the configurations proposed in the World Federation of NDE Centers (WFNDEC) 2014 RT modelling benchmark with the RT models implemented in the CIVA software.

  10. Synthetic line-of-sight algorithms for hardware-in-the-loop simulations

    NASA Astrophysics Data System (ADS)

    Richard, Henri; Lowman, Alan; Ballard, Gary

    2005-05-01

    During the flight of guided submunitions, translation of the missile with respect to the designated aimpoint causes a rotation of the Line-of-Sight (LOS) in inertial space. Large transmit arrays or 5-axis CARCO tables are used to perform True LOS (TLOS) for in-band simulations. Both of these TLOS approaches have cost or fidelity issues for RF seekers. Typically, RF Hardware-in-the-Loop (HWIL) simulations of these guided submunitions are mounted on a Three Axes Rotational Flight Simulator (TARFS), which is not capable of translation, and utilize a 2 to 3 seeker-beamwidth transmit array. This necessitates using a Synthetic Line-of-Sight (SLOS) algorithm with the TARFS in order to maintain the proper line-of-sight orientation during all phases of flight, which typically include largely varying LOS motion. This paper presents a simple explanation depicting TLOS and SLOS (TARFS) geometry and the seamless boresight/target SLOS algorithm utilized in AMRDEC's RF4 facility for a test article flight profile. The paper concludes by summarizing the current state of SLOS algorithms utilized at AMRDEC and the challenges and possible solutions envisioned for the near future.

  11. Recent results in analysis and simulation of beam halo

    SciTech Connect

    Ryne, Robert D.; Wangler, Thomas P.

    1995-09-15

    Understanding and predicting beam halo is a major issue for accelerator driven transmutation technologies. If strict beam loss requirements are not met, the resulting radioactivation can reduce the availability of the accelerator facility and may lead to the necessity for time-consuming remote maintenance. Recently there has been much activity related to the core-halo model of halo evolution [1-5]. In this paper we will discuss the core-halo model in the context of constant focusing channels and periodic focusing channels. We will present numerical results based on this model and we will show comparisons with results from large scale particle simulations run on a massively parallel computer. We will also present results from direct Vlasov simulations.

  12. Recent results in analysis and simulation of beam halo

    SciTech Connect

    Ryne, R.D.; Wangler, T.P.

    1994-09-01

    Understanding and predicting beam halo is a major issue for accelerator driven transmutation technologies. If strict beam loss requirements are not met, the resulting radioactivation can reduce the availability of the accelerator facility and may lead to the necessity for time-consuming remote maintenance. Recently there has been much activity related to the core-halo model of halo evolution. In this paper the authors will discuss the core-halo model in the context of constant focusing channels and periodic focusing channels. They will present numerical results based on this model and they will show comparisons with results from large scale particle simulations run on a massively parallel computer. They will also present results from direct Vlasov simulations.

  13. Symmetric tensor networks and practical simulation algorithms to sharply identify classes of quantum phases distinguishable by short-range physics

    NASA Astrophysics Data System (ADS)

    Ran, Ying; Jiang, Shenghan

    Phases of matter are sharply defined in the thermodynamic limit. One major challenge in accurately simulating quantum phase diagrams of interacting quantum systems is that numerical simulations usually deal with the energy density, a local property of quantum wavefunctions, while identifying different quantum phases generally relies on long-range physics. In this paper we construct generic fully symmetric quantum wavefunctions under certain assumptions using a type of tensor network: projected entangled pair states, and provide practical simulation algorithms based on them. We find that quantum phases can be organized into crude classes distinguished by short-range physics, which is related to the fractionalization of both on-site symmetries and space-group symmetries. Consequently, our simulation algorithms, which are useful for studying long-range physics as well, are expected to be able to sharply determine crude classes in interacting quantum systems efficiently. Examples of these crude classes are demonstrated in half-integer quantum spin systems on the kagome lattice. Limitations and generalizations of our results are discussed. This work was supported by the Alfred P. Sloan fellowship and the National Science Foundation under Grant No. DMR-1151440.

  14. LENS: μLENS Simulations, Analysis, and Results

    NASA Astrophysics Data System (ADS)

    Rasco, Charles

    2013-04-01

    Simulations of the Low-Energy Neutrino Spectrometer prototype, μLENS, have been performed in order to benchmark the first measurements of the μLENS detector at the Kimballton Underground Research Facility (KURF). μLENS is a 6x6x6-celled scintillation lattice filled with a Linear Alkylbenzene based scintillator. We have performed simulations of μLENS using the GEANT4 toolkit. We have measured various radioactive sources, LEDs, and the environmental background radiation at KURF using up to 96 PMTs with a simplified data acquisition system of QDCs and TDCs. In this talk we will demonstrate our understanding of the light propagation and compare simulation results with μLENS detector measurements of various radioactive sources, LEDs, and the environmental background radiation.

  15. Ca-Pri a Cellular Automata Phenomenological Research Investigation: Simulation Results

    NASA Astrophysics Data System (ADS)

    Iannone, G.; Troisi, A.

    2013-05-01

    Following the introduction of a phenomenological cellular automata (CA) model capable of reproducing city growth and urban sprawl, we develop a toy model simulation in a realistic framework. The main characteristic of our approach is an evolution algorithm based on inhabitants' preferences. The control of grown cells is obtained by means of suitable functions which depend on the initial conditions of the simulation. New-born urban settlements are generated by means of a logistic evolution of the urban pattern, while urban sprawl is controlled by means of the population evolution function. In order to compare model results with a realistic urban framework we have considered, as the area of study, the island of Capri (Italy) in the Mediterranean Sea. Two different phases of the urban evolution of the island have been taken into account: an initial new-born growth induced by geographic suitability, and the simulation of urban spread after 1943 induced by the population evolution after this date.

  16. Automatic tuning of liver tissue model using simulated annealing and genetic algorithm heuristic approaches

    NASA Astrophysics Data System (ADS)

    Sulaiman, Salina; Bade, Abdullah; Lee, Rechard; Tanalol, Siti Hasnah

    2014-07-01

    The Mass Spring Model (MSM) is a highly efficient model in terms of calculation and ease of implementation. Mass, spring stiffness coefficient and damping constant are the three major components of an MSM. This paper focuses on identifying the spring stiffness coefficients and damping constants using an automated tuning method based on optimization, in order to generate a human liver model capable of responding quickly. To achieve this objective two heuristic approaches are used, namely Simulated Annealing (SA) and Genetic Algorithm (GA), on the human liver model data set. The mechanical properties taken into consideration are anisotropy and viscoelasticity. Optimization results from SA and GA are then implemented in the MSM to model two human livers, each with its SA or GA construction parameters. These techniques are implemented with FEM construction parameters as the benchmark. Step responses of both models are obtained after the MSMs are solved using fourth-order Runge-Kutta (RK4) to compare the elasticity responses of the two models, as sketched below. Remodelling time using manual calculation methods is compared against the heuristic optimization methods SA and GA, showing that the automatically constructed model is more realistic in terms of real-time interaction response. Liver models generated using the SA and GA optimization techniques are compared with the liver model from manual calculation. The comparison shows that the reconstruction time required for 1000 repetitions of SA and GA is shorter than with the manual method. Meanwhile, comparison between the construction times of the SA and GA models indicates that the SA model is faster than the GA model by 0.110635 seconds per 1000 repetitions. Real-time interaction of mechanical properties depends on the rate of time and the speed of the remodelling process. Thus, SA and GA have proven suitable for enhancing the realism of simulated real-time interaction in liver remodelling.
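
    A single mass-spring-damper element advanced with RK4 shows the quantities being tuned: the SA/GA search adjusts the stiffness k and damping c that enter exactly this kind of update. A minimal one-element, one-dimensional sketch; the actual liver model is a three-dimensional lattice of such elements.

        import numpy as np

        def msm_derivative(state, m, k, c, rest_len):
            # state = (position, velocity) of one mass node.
            x, v = state
            force = -k * (x - rest_len) - c * v
            return np.array([v, force / m])

        def rk4_step(state, dt, *params):
            # Classical fourth-order Runge-Kutta step (the RK4 solver
            # referred to in the text).
            k1 = msm_derivative(state, *params)
            k2 = msm_derivative(state + 0.5 * dt * k1, *params)
            k3 = msm_derivative(state + 0.5 * dt * k2, *params)
            k4 = msm_derivative(state + dt * k3, *params)
            return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)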

  17. Combined Simulated Annealing and Genetic Algorithm Approach to Bus Network Design

    NASA Astrophysics Data System (ADS)

    Liu, Li; Olszewski, Piotr; Goh, Pong-Chai

    A new method, a combined simulated annealing (SA) and genetic algorithm (GA) approach, is proposed to solve the problem of bus route design and frequency setting for a given road network with fixed bus stop locations and fixed travel demand. The method involves two steps: a set of candidate routes is generated first, and then the best subset of these routes is selected by the combined SA and GA procedure, sketched below. SA is the main process searching for a better solution that minimizes the total system cost, comprising user and operator costs. GA is used as a sub-process to generate new solutions. Bus demand assignment on two alternative paths is performed at the solution evaluation stage. The method was implemented on four theoretical grid networks of different sizes and a benchmark network. Several GA operators (crossover and mutation) were utilized and tested for their effectiveness. The results show that the proposed method can efficiently converge to the optimal solution on a small network, but computation time increases significantly with network size. The method can also be used for other transport operation management problems.
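
    Structurally, the combined procedure can be sketched as an SA outer loop whose candidate route subsets are produced by GA operators. The skeleton below is illustrative; the operator details, candidate pooling, and cooling schedule are assumptions, not the authors' exact settings.

        import math
        import random

        def sa_with_ga_proposals(initial, pool, evaluate,
                                 crossover, mutate,
                                 t0=1.0, cooling=0.999, n_iter=10000):
            # evaluate: route subset -> total system cost (user + operator).
            current, cost = initial, evaluate(initial)
            t = t0
            for _ in range(n_iter):
                # GA sub-process proposes a neighbour...
                candidate = mutate(crossover(current, random.choice(pool)))
                c_cost = evaluate(candidate)
                # ...and the SA criterion decides on acceptance.
                if (c_cost < cost
                        or random.random() < math.exp(-(c_cost - cost) / t)):
                    current, cost = candidate, c_cost
                t *= cooling
            return current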

  18. Speed-up hyperspheres homotopic path tracking algorithm for PWL circuits simulations.

    PubMed

    Ramirez-Pinero, A; Vazquez-Leal, H; Jimenez-Fernandez, V M; Sedighi, H M; Rashidi, M M; Filobello-Nino, U; Castaneda-Sheissa, R; Huerta-Chua, J; Sarmiento-Reyes, L A; Laguna-Camacho, J R; Castro-Gonzalez, F

    2016-01-01

    In the present work, we introduce an improved version of the hyperspheres path tracking method adapted for piecewise linear (PWL) circuits. This enhanced version takes advantage of the PWL characteristics of the homotopic curve, achieving faster path tracking and improving the performance of the homotopy continuation method (HCM). Faster computing time allows the study of circuits of higher complexity; the proposed method also significantly decreases the probability of divergence when using the Newton-Raphson method, because it is applied just twice per linear region on the homotopic path. Equilibrium equations of the studied circuits are obtained by applying modified nodal analysis, which allows an algorithm for nonlinear circuit analysis to be proposed. In addition, a starting-point criterion is proposed to obtain better performance of the HCM, and a technique for avoiding the reversion phenomenon is also proposed. To prove the efficiency of the path tracking method, several case studies with bipolar (BJT) and CMOS transistors are provided. Simulation results show that the proposed approach can be up to twelve times faster than the original path tracking method and also helps to avoid several reversion cases that appear when the original hyperspheres path tracking scheme is employed. PMID:27386338

  19. Primary simulation and experimental results of a coaxial plasma accelerator

    NASA Astrophysics Data System (ADS)

    Chen, Z.; Huang, J.; Han, J.; Zhang, Z.; Quan, R.; Wang, L.; Yang, X.; Feng, C.

    A coaxial plasma accelerator with a compressing coil is developed to simulate the impacting and erosion effects of space debris on the exposed materials of spacecraft. During its adjustment operation, some measurements are conducted, including the discharge current by Rogowski coil, the average plasma speed in the coaxial gun by magnetic coils, and the ejected particle speed by piezoelectric sensor. In concert with the experiment, a primary physical model is constructed in which only the coaxial gun is taken into account, the compressor coil not being considered because of its unimportant contribution to the plasma ejection speed. The calculation results of the model agree well with the diagnostic results, allowing for some simplifying assumptions. Based on the simulation results, some important suggestions for the optimum design and adjustment of the accelerator are obtained for its later operation.

  20. ANOVA parameters influence in LCF experimental data and simulation results

    NASA Astrophysics Data System (ADS)

    Delprete, C.; Sesanaa, R.; Vercelli, A.

    2010-06-01

    The virtual design of components undergoing thermo-mechanical fatigue (TMF) and plastic strains is usually run in many phases. The numerical finite element method gives a useful instrument which becomes increasingly effective as the geometrical and numerical modelling gets more accurate. The constitutive model definition plays an important role in the effectiveness of the numerical simulation [1, 2], as shown, for example, in Figure 1, which illustrates how a good cyclic plasticity constitutive model can simulate a cyclic load experiment. The component life estimation is the subsequent phase, and it needs complex damage and life estimation models [3-5] which take into account several parameters and phenomena contributing to damage and life duration. The calibration of these constitutive and damage models requires an accurate testing activity. In the present paper the main topic of the research activity is to investigate whether the parameters that prove influential in the experimental activity also influence the numerical simulations, thus defining the effectiveness of the models in accounting for all the phenomena actually influencing the life of the component. To this aim, a procedure to tune the parameters needed to estimate the life of mechanical components undergoing TMF and plastic strains is presented for a commercial steel. This procedure aims to be easy and to allow calibrating both the material constitutive model (for the numerical structural simulation) and the damage and life model (for life assessment). The procedure has been applied to specimens. The experimental activity has been developed on three sets of tests run at several temperatures: static tests, high cycle fatigue (HCF) tests, and low cycle fatigue (LCF) tests. The numerical structural FEM simulations have been run on a commercial nonlinear solver, ABAQUS® 6.8. The simulations replicated the experimental tests. The stress, strain, thermal results from the thermo structural FEM

  1. Modal characterization of the ASCIE segmented optics testbed: New algorithms and experimental results

    NASA Technical Reports Server (NTRS)

    Carrier, Alain C.; Aubrun, Jean-Noel

    1993-01-01

    New frequency response measurement procedures, on-line modal tuning techniques, and off-line modal identification algorithms are developed and applied to the modal identification of the Advanced Structures/Controls Integrated Experiment (ASCIE), a generic segmented optics telescope test-bed representative of future complex space structures. The frequency response measurement procedure uses all the actuators simultaneously to excite the structure and all the sensors to measure the structural response, so that all the transfer functions are measured simultaneously. Structural responses to sinusoidal excitations are measured and analyzed to calculate spectral responses. The spectral responses in turn are analyzed as the spectral data become available and, which is new, the results are used to maintain high quality measurements. Data acquisition, processing, and checking procedures are fully automated. As the acquisition of the frequency response progresses, an on-line algorithm keeps track of the actuator force distribution that maximizes the structural response, to automatically tune to a structural mode when approaching a resonant frequency. This tuning is insensitive to delays, ill-conditioning, and nonproportional damping. Experimental results show that it is useful for modal surveys even in high modal density regions. For thorough modeling, a constructive procedure is proposed to identify the dynamics of a complex system from its frequency response, with the minimization of a least-squares cost function as a desirable objective. This procedure relies on off-line modal separation algorithms to extract modal information and on least-squares parameter subset optimization to combine the modal results and globally fit the modal parameters to the measured data. The modal separation algorithms resolved a modal density of 5 modes/Hz in the ASCIE experiment. They promise to be useful in many challenging applications.

  2. Real-time dynamics simulation of the Cassini spacecraft using DARTS. Part 1: Functional capabilities and the spatial algebra algorithm

    NASA Technical Reports Server (NTRS)

    Jain, A.; Man, G. K.

    1993-01-01

    This paper describes the Dynamics Algorithms for Real-Time Simulation (DARTS) real-time hardware-in-the-loop dynamics simulator for the National Aeronautics and Space Administration's Cassini spacecraft. The spacecraft model consists of a central flexible body with a number of articulated rigid-body appendages. The demanding performance requirements from the spacecraft control system require the use of a high fidelity simulator for control system design and testing. The DARTS algorithm provides a new algorithmic and hardware approach to the solution of this hardware-in-the-loop simulation problem. It is based upon the efficient spatial algebra dynamics for flexible multibody systems. A parallel and vectorized version of this algorithm is implemented on a low-cost, multiprocessor computer to meet the simulation timing requirements.

  3. Evaluation of observation-driven evaporation algorithms: results of the WACMOS-ET project

    NASA Astrophysics Data System (ADS)

    Miralles, Diego G.; Jimenez, Carlos; Ershadi, Ali; McCabe, Matthew F.; Michel, Dominik; Hirschi, Martin; Seneviratne, Sonia I.; Jung, Martin; Wood, Eric F.; (Bob) Su, Z.; Timmermans, Joris; Chen, Xuelong; Fisher, Joshua B.; Mu, Qiaozhen; Fernandez, Diego

    2015-04-01

    Terrestrial evaporation (ET) links the continental water, energy and carbon cycles. Understanding the magnitude and variability of ET at the global scale is an essential step towards reducing uncertainties in our projections of climatic conditions and water availability for the future. However, the requirement of global observational data of ET can neither be satisfied with our sparse global in-situ networks, nor with the existing satellite sensors (which cannot measure evaporation directly from space). This situation has led to the recent rise of several algorithms dedicated to deriving ET fields from satellite data indirectly, based on the combination of ET-drivers that can be observed from space (e.g. radiation, temperature, phenological variability, water content, etc.). These algorithms can either be based on physics (e.g. Priestley and Taylor or Penman-Monteith approaches) or be purely statistical (e.g., machine learning). However, and despite the efforts from different initiatives like GEWEX LandFlux (Jimenez et al., 2011; Mueller et al., 2013), the uncertainties inherent in the resulting global ET datasets remain largely unexplored, partly due to a lack of inter-product consistency in forcing data. In response to this need, the ESA WACMOS-ET project started in 2012 with the main objectives of (a) developing a Reference Input Data Set to derive and validate ET estimates, and (b) performing a cross-comparison, error characterization and validation exercise of a group of selected ET algorithms driven by this Reference Input Data Set and by in-situ forcing data. The algorithms tested are SEBS (Su et al., 2002), the Penman- Monteith approach from MODIS (Mu et al., 2011), the Priestley and Taylor JPL model (Fisher et al., 2008), the MPI-MTE model (Jung et al., 2010) and GLEAM (Miralles et al., 2011). In this presentation we will show the first results from the ESA WACMOS-ET project. The performance of the different algorithms at multiple spatial and temporal

  4. Efficient algorithms for mixed aleatory-epistemic uncertainty quantification with application to radiation-hardened electronics. Part I, algorithms and benchmark results.

    SciTech Connect

    Swiler, Laura Painton; Eldred, Michael Scott

    2009-09-01

    This report documents the results of an FY09 ASC V&V Methods level 2 milestone demonstrating new algorithmic capabilities for mixed aleatory-epistemic uncertainty quantification. Through the combination of stochastic expansions for computing aleatory statistics and interval optimization for computing epistemic bounds, mixed uncertainty analysis studies are shown to be more accurate and efficient than previously achievable. Part I of the report describes the algorithms and presents benchmark performance results. Part II applies these new algorithms to UQ analysis of radiation effects in electronic devices and circuits for the QASPR program.
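
    The separation of the two uncertainty types can be illustrated with a brute-force nested loop: an outer sweep over an interval-valued epistemic variable bounds a statistic that an inner Monte Carlo loop computes over the aleatory variable. The sketch below is only a generic stand-in for the report's stochastic-expansion and interval-optimization machinery; the model function, distributions and interval are hypothetical.

      # Generic nested-loop mixed aleatory-epistemic UQ illustration.
      # The model, the Gaussian aleatory input and the epistemic interval
      # are all hypothetical placeholders, not the report's machinery.
      import numpy as np

      rng = np.random.default_rng(0)

      def model(a, e):
          # a: aleatory (random) input, e: epistemic (interval-valued) input
          return np.exp(-e * a) + 0.5 * a

      def aleatory_mean(e, n=20000):
          a = rng.normal(1.0, 0.2, size=n)    # inner loop: Monte Carlo stats
          return model(a, e).mean()

      # Epistemic variable known only to lie in [0.5, 2.0]: bound the mean.
      es = np.linspace(0.5, 2.0, 31)
      means = [aleatory_mean(e) for e in es]
      print(f"mean response bounds: [{min(means):.4f}, {max(means):.4f}]")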

  5. Algorithm and simulation development in support of response strategies for contamination events in air and water systems.

    SciTech Connect

    Waanders, Bart Van Bloemen

    2006-01-01

    Chemical/Biological/Radiological (CBR) contamination events pose a considerable threat to our nation's infrastructure, especially in large internal facilities, external flows, and water distribution systems. Because physical security can only be enforced to a limited degree, deployment of early warning systems is being considered. However, to achieve reliable and efficient functionality, several complex questions must be answered: (1) where should sensors be placed, (2) how can sparse sensor information be efficiently used to determine the location of the original intrusion, (3) what are the model and data uncertainties, (4) how should these uncertainties be handled, and (5) how can our algorithms and forward simulations be sufficiently improved to achieve real-time performance? This report presents the results of a three-year algorithmic and application development effort to support the identification, mitigation, and risk assessment of CBR contamination events. The main thrust of this investigation was to develop (1) computationally efficient algorithms for strategically placing sensors, (2) an identification process for contamination events based on sparse observations, (3) a characterization of uncertainty through accurate demand forecasts and investigation of uncertain simulation model parameters, (4) risk assessment capabilities, and (5) reduced-order modeling methods. The development effort was focused on water distribution systems, large internal facilities, and outdoor areas.
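
    The first of these development items, strategically placing sensors, is commonly attacked with greedy coverage heuristics, and a generic sketch of that idea is given below. The detection matrix, detection probability and sensor budget are synthetic placeholders; the report's actual formulations, which couple placement to network and facility simulations, are not reproduced.

      # Greedy sensor placement maximizing scenario coverage (generic
      # set-cover heuristic, not the report's formulation). detect[s, i]
      # is True if sensor location s would detect scenario i (synthetic).
      import numpy as np

      rng = np.random.default_rng(1)
      n_locations, n_scenarios = 50, 200
      detect = rng.random((n_locations, n_scenarios)) < 0.08

      def greedy_placement(detect, budget):
          covered = np.zeros(detect.shape[1], dtype=bool)
          chosen = []
          for _ in range(budget):
              gains = (detect & ~covered).sum(axis=1)  # new scenarios covered
              best = int(np.argmax(gains))
              if gains[best] == 0:
                  break
              chosen.append(best)
              covered |= detect[best]
          return chosen, covered.mean()

      sensors, frac = greedy_placement(detect, budget=5)
      print(f"sensors {sensors} cover {frac:.1%} of scenarios")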

  6. A novel algorithm for solving the true coincident counting issues in Monte Carlo simulations for radiation spectroscopy.

    PubMed

    Guan, Fada; Johns, Jesse M; Vasudevan, Latha; Zhang, Guoqing; Tang, Xiaobin; Poston, John W; Braby, Leslie A

    2015-06-01

    Coincident counts can be observed in experimental radiation spectroscopy. Accurate quantification of the radiation source requires the detection efficiency of the spectrometer, which is often experimentally determined. However, Monte Carlo analysis can be used to supplement experimental approaches to determine the detection efficiency a priori. The traditional Monte Carlo method overestimates the detection efficiency as a result of omitting coincident counts caused mainly by multiple cascade source particles. In this study, a novel "multi-primary coincident counting" algorithm was developed using the Geant4 Monte Carlo simulation toolkit. A high-purity Germanium detector for ⁶⁰Co gamma-ray spectroscopy problems was accurately modeled to validate the developed algorithm. The simulated pulse height spectrum agreed well qualitatively with the measured spectrum obtained using the high-purity Germanium detector. The developed algorithm can be extended to other applications, with a particular emphasis on challenging radiation fields, such as counting multiple types of coincident radiations released from nuclear fission or used nuclear fuel. PMID:25905518

  7. A treatment algorithm for patients with large skull bone defects and first results.

    PubMed

    Lethaus, Bernd; Ter Laak, Marielle Poort; Laeven, Paul; Beerens, Maikel; Koper, David; Poukens, Jules; Kessler, Peter

    2011-09-01

    Large skull bone defects resulting from craniotomies due to cerebral insults, trauma or tumours create functional and aesthetic disturbances for the patient. The reconstruction of large osseous defects is still challenging. A treatment algorithm is presented based on the close interaction of radiologists, computer engineers and cranio-maxillofacial surgeons. From 2004 until today, twelve consecutive patients have been operated on successfully according to this treatment plan. Titanium and polyetheretherketone (PEEK) were used to manufacture the implants. The treatment algorithm proved to be reliable. No corrections had to be performed either to the skull bone or to the implant. Short operation and hospitalization periods are essential prerequisites for treatment success and justify the high expenses. PMID:21055960

  8. Knowledge-Aided Multichannel Adaptive SAR/GMTI Processing: Algorithm and Experimental Results

    NASA Astrophysics Data System (ADS)

    Wu, Di; Zhu, Daiyin; Zhu, Zhaoda

    2010-12-01

    The multichannel synthetic aperture radar ground moving target indication (SAR/GMTI) technique is a simplified implementation of space-time adaptive processing (STAP), which has been proven feasible in the past decades. However, its detection performance is degraded in heterogeneous environments due to rapidly varying clutter characteristics. Knowledge-aided (KA) STAP provides an effective way to deal with the nonstationary problem in real-world clutter environments. Based on KA STAP methods, this paper proposes a KA algorithm for adaptive SAR/GMTI processing in heterogeneous environments. It reduces the required sample support through its fast convergence properties and is robust to nonstationary clutter distributions relative to the traditional adaptive SAR/GMTI scheme. Experimental clutter suppression results are employed to verify the merits of this algorithm.

  9. Performance analysis results of a battery fuel gauge algorithm at multiple temperatures

    NASA Astrophysics Data System (ADS)

    Balasingam, B.; Avvari, G. V.; Pattipati, K. R.; Bar-Shalom, Y.

    2015-01-01

    Evaluating a battery fuel gauge (BFG) algorithm is a challenging problem because there are no reliable mathematical models to represent the complex features of a Li-ion battery, such as hysteresis and relaxation effects, temperature effects on parameters, aging, power fade (PF), and capacity fade (CF) with respect to the chemical composition of the battery. The existing literature is largely focused on developing different BFG strategies, and BFG validation has received little attention. In this paper, using hardware-in-the-loop (HIL) data collected from three Li-ion batteries at nine different temperatures ranging from -20 °C to 40 °C, we demonstrate detailed validation results for a BFG algorithm. The validation is based on three different BFG metrics; we provide implementation details of these metrics and propose three different BFG validation load profiles that satisfy varying levels of user requirements.

  10. Optimal groundwater remediation design of pump and treat systems via a simulation-optimization approach and firefly algorithm

    NASA Astrophysics Data System (ADS)

    Javad Kazemzadeh-Parsi, Mohammad; Daneshmand, Farhang; Ahmadfard, Mohammad Amin; Adamowski, Jan; Martel, Richard

    2015-01-01

    In the present study, an optimization approach based on the firefly algorithm (FA) is combined with a finite element simulation method (FEM) to determine the optimum design of pump and treat remediation systems. Three multi-objective functions in which pumping rate and clean-up time are design variables are considered and the proposed FA-FEM model is used to minimize operating costs, total pumping volumes and total pumping rates in three scenarios while meeting water quality requirements. The groundwater lift and contaminant concentration are also minimized through the optimization process. The obtained results show the applicability of the FA in conjunction with the FEM for the optimal design of groundwater remediation systems. The performance of the FA is also compared with the genetic algorithm (GA) and the FA is found to have a better convergence rate than the GA.
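
    For orientation, a minimal sketch of the firefly algorithm's core update rule follows: each firefly moves toward brighter (lower-cost) ones with a distance-attenuated attractiveness plus a decaying random walk. The toy sphere objective stands in for the study's finite element groundwater model, which is not reproduced here.

      # Minimal firefly algorithm (FA) on a toy objective; parameters and
      # the sphere cost function are illustrative, not the study's setup.
      import numpy as np

      rng = np.random.default_rng(2)

      def objective(x):
          return np.sum(x**2, axis=-1)          # toy cost: sphere function

      def firefly(n=25, dim=2, iters=200, beta0=1.0, gamma=1.0, alpha=0.1):
          x = rng.uniform(-5, 5, (n, dim))
          for _ in range(iters):
              f = objective(x)
              for i in range(n):
                  for j in range(n):
                      if f[j] < f[i]:           # move i toward brighter j
                          r2 = np.sum((x[i] - x[j])**2)
                          beta = beta0 * np.exp(-gamma * r2)
                          x[i] += (beta * (x[j] - x[i])
                                   + alpha * rng.normal(size=dim))
                  f[i] = objective(x[i])        # refresh brightness of i
              alpha *= 0.98                     # anneal the random-walk term
          best = np.argmin(objective(x))
          return x[best], objective(x[best])

      print(firefly())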

  11. Comparison between simulated annealing algorithms and rapid chain delineation in the construction of genetic maps.

    PubMed

    Nascimento, Moysés; Cruz, Cosme Damião; Peternelli, Luiz Alexandre; Campana, Ana Carolina Mota

    2010-04-01

    The efficiency of simulated annealing algorithms and rapid chain delineation in establishing the best linkage order, when constructing genetic maps, was evaluated. Linkage refers to the phenomenon by which two or more genes, or molecular markers, can be present in the same chromosome or linkage group. In order to evaluate the capacity of the algorithms, four F(2) co-dominant populations, 50, 100, 200 and 1000 in size, were simulated. For each population, a genome with four linkage groups (100 cM) was generated. The linkage groups possessed 51, 21, 11 and 6 marks, respectively, with corresponding distances of 2, 5, 10 and 20 cM between adjacent marks, thereby causing various degrees of saturation. For very saturated groups, with an adjacent distance between marks of 2 cM and a greater number of marks, i.e., 51, the method based upon stochastic simulation by simulated annealing presented orders with distances equivalent to or lower than rapid chain delineation. Otherwise, the two methods were comparable, presenting the same SARF distance. PMID:21637501

  12. Comparison between simulated annealing algorithms and rapid chain delineation in the construction of genetic maps

    PubMed Central

    2010-01-01

    The efficiency of simulated annealing algorithms and rapid chain delineation in establishing the best linkage order, when constructing genetic maps, was evaluated. Linkage refers to the phenomenon by which two or more genes, or molecular markers, can be present in the same chromosome or linkage group. In order to evaluate the capacity of the algorithms, four F2 co-dominant populations, 50, 100, 200 and 1000 in size, were simulated. For each population, a genome with four linkage groups (100 cM) was generated. The linkage groups possessed 51, 21, 11 and 6 marks, respectively, with corresponding distances of 2, 5, 10 and 20 cM between adjacent marks, thereby causing various degrees of saturation. For very saturated groups, with an adjacent distance between marks of 2 cM and a greater number of marks, i.e., 51, the method based upon stochastic simulation by simulated annealing presented orders with distances equivalent to or lower than rapid chain delineation. Otherwise, the two methods were comparable, presenting the same SARF distance. PMID:21637501
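
    A compact sketch of the simulated annealing side of this comparison is given below: marker orders are perturbed by segment reversals and accepted by the Metropolis rule so as to minimize the sum of adjacent recombination fractions (SARF). The recombination-fraction matrix here is synthetic, derived from a hypothetical true map through the Haldane mapping function, rather than estimated from a genotyped population.

      # Simulated annealing for marker ordering by SARF minimization.
      # The pairwise recombination fractions are synthetic (Haldane map
      # from a hypothetical 21-marker, 100 cM linkage group).
      import numpy as np

      rng = np.random.default_rng(3)
      true_pos = np.sort(rng.uniform(0, 100, 21))          # positions in cM
      d = np.abs(true_pos[:, None] - true_pos[None, :]) / 100.0  # Morgans
      rf = 0.5 * (1 - np.exp(-2 * d))                      # Haldane function

      def sarf(order):
          return rf[order[:-1], order[1:]].sum()           # adjacent pairs

      order = rng.permutation(len(true_pos))
      cost, T = sarf(order), 1.0
      for step in range(20000):
          i, j = sorted(rng.integers(0, len(order), 2))
          cand = order.copy()
          cand[i:j+1] = cand[i:j+1][::-1]                  # reverse segment
          c = sarf(cand)
          if c < cost or rng.random() < np.exp((cost - c) / T):
              order, cost = cand, c                        # Metropolis accept
          T *= 0.9995                                      # cooling schedule
      print(cost, order)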

  13. Multiple Frequency Contrast Source Inversion Method for Vertical Electromagnetic Profiling: 2D Simulation Results and Analyses

    NASA Astrophysics Data System (ADS)

    Li, Jinghe; Song, Linping; Liu, Qing Huo

    2016-02-01

    A simultaneous multiple frequency contrast source inversion (CSI) method is applied to reconstructing hydrocarbon reservoir targets in a complex multilayered medium in two dimensions. It simulates the effects of a salt dome sedimentary formation in the context of reservoir monitoring. In this method, the stabilized biconjugate-gradient fast Fourier transform (BCGS-FFT) algorithm is applied as a fast solver of the 2D volume integral equation for the forward computation. The inversion technique combines the efficient FFT algorithm, to speed up the matrix-vector multiplication, with the stable convergence of the simultaneous multiple frequency CSI in the iteration process. As a result, this method is capable of effective quantitative conductivity image reconstruction for large-scale electromagnetic oil exploration problems, including the vertical electromagnetic profiling (VEP) survey investigated here. A number of numerical examples are presented to validate the effectiveness and capacity of the simultaneous multiple frequency CSI method for a limited array view in VEP.

  14. Preliminary Simulation Results of the 23 June, 2001 Peruvian Tsunami

    NASA Astrophysics Data System (ADS)

    Titov, V. V.; Koshimura, S.; Ortiz, M.; Borrero, J.

    2001-12-01

    The tsunami generated by the June 23, 2001 Peruvian earthquake devastated a 50-km section of coast near the earthquake epicenter and was recorded on tide gages throughout the Pacific. The coastal town of Camana sustained the most damage, with tsunami waves penetrating up to 1 km inland and runup exceeding 5 m. The extreme local effects and widespread impact motivated modeling efforts to produce a realistic tsunami simulation of this event. Preliminary results were produced by the TIME center using two resident numerical models, TUNAMI-2 and MOST. Both models were used to produce preliminary simulations shortly after the earthquake, and first results were posted on the Internet a day after the event (http://www.pmel.noaa.gov/tsunami/peru_pmel.html). These numerical results aimed to quantify the magnitude of the tsunami and, to a certain extent, to guide the post-tsunami survey. The first simulations have been revised using new data about the seismic source and the results of the post-tsunami survey. Measured inundation distances, flow depths, and runup along topographic transects are used to constrain the inundation model. A preliminary numerical analysis of tsunami inundation will be presented.

  15. Algorithms for personalized therapy of type 2 diabetes: results of a web-based international survey

    PubMed Central

    Gallo, Marco; Mannucci, Edoardo; De Cosmo, Salvatore; Gentile, Sandro; Candido, Riccardo; De Micheli, Alberto; Di Benedetto, Antonino; Esposito, Katherine; Genovese, Stefano; Medea, Gerardo; Ceriello, Antonio

    2015-01-01

    Objective In recent years, increasing interest in the issue of treatment personalization for type 2 diabetes (T2DM) has emerged. This international web-based survey aimed to evaluate the opinions of physicians about tailored therapeutic algorithms developed by the Italian Association of Diabetologists (AMD) and available online, and to collect suggestions for future developments. Another aim of this initiative was to assess whether the online advertising and the survey would increase the global visibility of the AMD algorithms. Research design and methods The web-based survey, which comprised five questions, was available from the homepage of the web version of the journal Diabetes Care throughout the month of December 2013, and on the AMD website between December 2013 and September 2014. Participation was totally free and responders were anonymous. Results Overall, 452 physicians (M=58.4%) participated in the survey. Diabetologists accounted for 76.8% of responders. The results of the survey show wide agreement (>90%) by participants on the utility of the algorithms proposed, even if they do not cover all possible needs of patients with T2DM for a personalized therapeutic approach. In the online survey period and in the months after its conclusion, a relevant and durable increase in the number of unique users who visited the websites was registered, compared with the period preceding the survey. Conclusions Patients with T2DM are heterogeneous, and there is interest in accessible and easy-to-use personalized therapeutic algorithms. Responders' opinions probably reflect the peculiar organization of diabetes care in each country. PMID:26301097

  16. Simulating lightning into the RAMS model: implementation and preliminary results

    NASA Astrophysics Data System (ADS)

    Federico, S.; Avolio, E.; Petracca, M.; Panegrossi, G.; Sanò, P.; Casella, D.; Dietrich, S.

    2014-11-01

    This paper shows the results of a tailored version of a previously published methodology, designed to simulate lightning activity, implemented in the Regional Atmospheric Modeling System (RAMS). The method gives the flash density at the resolution of the RAMS grid scale, allowing for a detailed analysis of the evolution of simulated lightning activity. The system is applied in detail to two case studies that occurred over the Lazio region, in Central Italy. Simulations are compared with the lightning activity detected by the LINET network. The cases refer to two thunderstorms of different intensity which occurred, respectively, on 20 October 2011 and on 15 October 2012. The number of flashes simulated (observed) over Lazio is 19435 (16231) for the first case and 7012 (4820) for the second case, and the model correctly reproduces the larger number of flashes that characterized the 20 October 2011 event compared to the 15 October 2012 event. There are, however, errors in the timing and positioning of the convection, whose magnitude depends on the case study, and these are mirrored in timing and positioning errors of the lightning distribution. For the 20 October 2011 case study, spatial errors are of the order of a few tens of kilometres and the timing of the event is correctly simulated. For the 15 October 2012 case study, the spatial error in the positioning of the convection is of the order of 100 km and the event has a longer duration in the simulation than in reality. To assess the performance of the methodology objectively, standard scores are presented for four additional case studies. Scores show the ability of the methodology to simulate the daily lightning activity for different spatial scales and for two different minimum thresholds of flash number density. The performance decreases at finer spatial scales and for higher thresholds. The comparison of simulated and observed lightning activity is an immediate and powerful tool to assess the model's ability to reproduce the

  17. Enhanced vision systems: results of simulation and operational tests

    NASA Astrophysics Data System (ADS)

    Hecker, Peter; Doehler, Hans-Ullrich

    1998-07-01

    Today's aircrews have to handle more and more complex situations. The most critical tasks in the field of civil aviation are landing approaches and taxiing. Especially under bad weather conditions, the crew has to handle a tremendous workload. Therefore, DLR's Institute of Flight Guidance has developed a concept for an enhanced vision system (EVS), which increases the performance and safety of the aircrew and provides comprehensive situational awareness. Some elements of this concept have been presented in previous contributions, e.g. the 'Simulation of Imaging Radar for Obstacle Detection and Enhanced Vision' by Doehler and Bollmeyer, 1996. The present paper gives an overview of DLR's enhanced vision concept and research approach, which consists of two main components: simulation and experimental evaluation. In a first step, the simulation environment for enhanced vision research with a pilot in the loop is introduced. An existing fixed-base flight simulator is supplemented by real-time simulations of imaging sensors, i.e. imaging radar and infrared. By applying methods of data fusion, an enhanced vision display is generated combining different levels of information, such as terrain model data, processed images acquired by sensors, aircraft state vectors and data transmitted via datalink. The second part of this contribution presents some experimental results. In cooperation with Daimler Benz Aerospace Sensorsystems Ulm, a test van and a test aircraft were equipped with a prototype of an imaging millimeter wave radar. This sophisticated HiVision radar is so far one of the most promising sensors for all-weather operations. Images acquired by this sensor are shown, as well as results of data fusion processes based on digital terrain models. The contribution is concluded by a short video presentation.

  18. Key results from SB8 simulant flowsheet studies

    SciTech Connect

    Koopman, D. C.

    2013-04-26

    Key technically reviewed results are presented here in support of the Defense Waste Processing Facility (DWPF) acceptance of Sludge Batch 8 (SB8). This report summarizes results from simulant flowsheet studies of the DWPF Chemical Process Cell (CPC). Results include: Hydrogen generation rate for the Sludge Receipt and Adjustment Tank (SRAT) and Slurry Mix Evaporator (SME) cycles of the CPC on a 6,000 gallon basis; Volume percent of nitrous oxide, N2O, produced during the SRAT cycle; Ammonium ion concentrations recovered from the SRAT and SME off-gas; and, Dried weight percent solids (insoluble, soluble, and total) measurements and density.

  19. An assessment of coupling algorithms for nuclear reactor core physics simulations

    NASA Astrophysics Data System (ADS)

    Hamilton, Steven; Berrill, Mark; Clarno, Kevin; Pawlowski, Roger; Toth, Alex; Kelley, C. T.; Evans, Thomas; Philip, Bobby

    2016-04-01

    This paper evaluates the performance of multiphysics coupling algorithms applied to a light water nuclear reactor core simulation. The simulation couples the k-eigenvalue form of the neutron transport equation with heat conduction and subchannel flow equations. We compare Picard iteration (block Gauss-Seidel) to Anderson acceleration and multiple variants of preconditioned Jacobian-free Newton-Krylov (JFNK). The performance of the methods is evaluated over a range of energy group structures and core power levels. A novel physics-based approximation to a Jacobian-vector product has been developed to mitigate the impact of expensive on-line cross section processing steps. Numerical simulations demonstrating the efficiency of JFNK and Anderson acceleration relative to standard Picard iteration are performed on a 3D model of a nuclear fuel assembly. Both criticality (k-eigenvalue) and critical boron search problems are considered.
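
    The contrast between plain Picard iteration and Anderson acceleration can be demonstrated on any fixed-point problem x = G(x). The sketch below implements a standard windowed Anderson update via least squares on residual differences; the toy two-dimensional map G merely stands in for the coupled neutronics/thermal-hydraulics residual of the paper.

      # Picard iteration vs. windowed Anderson acceleration on a toy
      # fixed-point map G; G is a hypothetical stand-in for the paper's
      # coupled-physics residual.
      import numpy as np

      def G(x):
          return np.array([0.8 * np.cos(x[1]), 0.8 * np.sin(x[0]) + 0.1])

      def picard(x0, iters=20):
          x = x0.copy()
          for _ in range(iters):
              x = G(x)
          return x

      def anderson(x0, m=3, iters=20):
          x = x0.copy()
          gs, fs = [], []                  # histories of G(x) and residuals
          for _ in range(iters):
              g = G(x)
              f = g - x
              gs.append(g); fs.append(f)
              if len(fs) > m + 1:          # keep a window of m differences
                  gs.pop(0); fs.pop(0)
              if len(fs) == 1:
                  x = g                    # plain Picard on the first step
              else:
                  dF = np.column_stack([fs[i+1] - fs[i]
                                        for i in range(len(fs) - 1)])
                  dG = np.column_stack([gs[i+1] - gs[i]
                                        for i in range(len(gs) - 1)])
                  gamma, *_ = np.linalg.lstsq(dF, f, rcond=None)
                  x = g - dG @ gamma       # Anderson-accelerated update
          return x

      x0 = np.zeros(2)
      print("picard  :", picard(x0))
      print("anderson:", anderson(x0))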

  20. An adaptive algorithm for simulation of stochastic reaction-diffusion processes

    SciTech Connect

    Ferm, Lars; Hellander, Andreas; Loetstedt, Per

    2010-01-20

    We propose an adaptive hybrid method suitable for stochastic simulation of diffusion dominated reaction-diffusion processes. For such systems, simulation of the diffusion requires the predominant part of the computing time. In order to reduce the computational work, the diffusion in parts of the domain is treated macroscopically, in other parts with the tau-leap method and in the remaining parts with Gillespie's stochastic simulation algorithm (SSA) as implemented in the next subvolume method (NSM). The chemical reactions are handled by SSA everywhere in the computational domain. A trajectory of the process is advanced in time by an operator splitting technique and the timesteps are chosen adaptively. The spatial adaptation is based on estimates of the errors in the tau-leap method and the macroscopic diffusion. The accuracy and efficiency of the method are demonstrated in examples from molecular biology where the domain is discretized by unstructured meshes.
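
    As a reference point for the SSA building block of the hybrid scheme, the sketch below implements Gillespie's direct method for a single reversible reaction A <-> B with hypothetical rate constants; the paper's adaptive spatial splitting between macroscopic, tau-leap and SSA subdomains is not reproduced.

      # Gillespie's direct SSA for A <-> B; species counts and rate
      # constants are illustrative placeholders.
      import numpy as np

      rng = np.random.default_rng(4)

      def ssa(a0=100, b0=0, k1=1.0, k2=0.5, t_end=10.0):
          t, a, b = 0.0, a0, b0
          times, states = [t], [(a, b)]
          while t < t_end:
              rates = np.array([k1 * a, k2 * b])   # propensities A->B, B->A
              total = rates.sum()
              if total == 0:
                  break
              t += rng.exponential(1.0 / total)    # time to next reaction
              if rng.random() < rates[0] / total:  # pick the firing reaction
                  a, b = a - 1, b + 1
              else:
                  a, b = a + 1, b - 1
              times.append(t); states.append((a, b))
          return times, states

      times, states = ssa()
      print(f"{len(times) - 1} reaction events; final A,B = {states[-1]}")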

  1. Parallel Simulation Algorithms for the Three Dimensional Strong-Strong Beam-Beam Interaction

    SciTech Connect

    Kabel, A.C.; /SLAC

    2008-03-17

    The strong-strong beam-beam effect is one of the most important effects limiting the luminosity of ring colliders. Little is known about it analytically, so most studies utilize numerical simulations. The two-dimensional realm is readily accessible to workstation-class computers (cf., e.g., [1, 2]), while three dimensions, which add effects such as phase averaging and the hourglass effect, require vastly higher amounts of CPU time. Thus, parallelization of three-dimensional simulation techniques is imperative; in the following we discuss parallelization strategies and describe the algorithms used in our simulation code, which will reach almost linear scaling of performance vs. number of CPUs for typical setups.

  2. An assessment of coupling algorithms for nuclear reactor core physics simulations

    DOE PAGESBeta

    Hamilton, Steven; Berrill, Mark; Clarno, Kevin; Pawlowski, Roger; Toth, Alex; Kelley, C. T.; Evans, Thomas; Philip, Bobby

    2016-04-01

    Here we evaluate the performance of multiphysics coupling algorithms applied to a light water nuclear reactor core simulation. The simulation couples the k-eigenvalue form of the neutron transport equation with heat conduction and subchannel flow equations. We compare Picard iteration (block Gauss-Seidel) to Anderson acceleration and multiple variants of preconditioned Jacobian-free Newton-Krylov (JFNK). The performance of the methods is evaluated over a range of energy group structures and core power levels. A novel physics-based approximation to a Jacobian-vector product was developed to mitigate the impact of expensive on-line cross section processing steps. Numerical simulations demonstrating the efficiency of JFNK and Anderson acceleration relative to standard Picard iteration are performed on a 3D model of a nuclear fuel assembly. Finally, both criticality (k-eigenvalue) and critical boron search problems are considered.

  3. Object-Oriented/Data-Oriented Design of a Direct Simulation Monte Carlo Algorithm

    NASA Technical Reports Server (NTRS)

    Liechty, Derek S.

    2014-01-01

    Over the past decade, there has been much progress towards improved phenomenological modeling and algorithmic updates for the direct simulation Monte Carlo (DSMC) method, which provides a probabilistic physical simulation of gas flows. These improvements have largely been based on the work of the originator of the DSMC method, Graeme Bird. Of primary importance are improved chemistry, internal energy, and physics modeling and a reduction in time to solution. These allow for an expanded range of possible solutions in altitude and velocity space. NASA's current production code, the DSMC Analysis Code (DAC), is well established, is based on Bird's 1994 algorithms written in Fortran 77, and has proven difficult to upgrade. A new DSMC code is being developed in the C++ programming language using object-oriented and data-oriented design paradigms to facilitate the inclusion of the recent improvements and future development activities. The development efforts on the new code, the Multiphysics Algorithm with Particles (MAP), are described, and performance comparisons are made with DAC.

  4. Simulation-Based Evaluation of the Performances of an Algorithm for Detecting Abnormal Disease-Related Features in Cattle Mortality Records

    PubMed Central

    Perrin, Jean-Baptiste; Durand, Benoît; Gay, Emilie; Ducrot, Christian; Hendrikx, Pascal; Calavas, Didier; Hénaux, Viviane

    2015-01-01

    We performed a simulation study to evaluate the performance of an anomaly detection algorithm considered in the framework of an automated surveillance system for cattle mortality. The method consists of a combination of temporal regression and spatial cluster detection which allows identifying, for a given week, clusters of spatial units showing an excess of deaths in comparison with their own historical fluctuations. First, we simulated 1,000 outbreaks of a disease causing extra deaths in the French cattle population (about 200,000 herds and 20 million cattle) according to a model mimicking the spreading patterns of an infectious disease, and injected these disease-related extra deaths into an authentic mortality dataset spanning January 2005 to January 2010. Second, we applied our algorithm to each of the 1,000 semi-synthetic datasets to identify clusters of spatial units showing an excess of deaths considering their own historical fluctuations. Third, we verified whether the clusters identified by the algorithm contained simulated extra deaths, in order to evaluate the ability of the algorithm to identify unusual mortality clusters caused by an outbreak. Among the 1,000 simulations, the median duration of simulated outbreaks was 8 weeks, with a median of 5,627 simulated deaths and 441 infected herds. Within the 12-week trial period, 73% of the simulated outbreaks were detected, with a median timeliness of 1 week and a mean of 1.4 weeks. The proportion of outbreak weeks flagged by an alarm was 61% (i.e. sensitivity), whereas one in three alarms was a true alarm (i.e. positive predictive value). The performance of the detection algorithm was also evaluated for alternative combinations of epidemiologic parameters. The results of our study confirm that, under certain conditions, automated algorithms could help identify abnormal cattle mortality increases possibly related to unidentified health events. PMID:26536596
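
    The temporal-regression half of such a detector can be illustrated compactly: fit a seasonal (harmonic) regression to a unit's historical weekly death counts and flag weeks exceeding a prediction-based threshold. The sketch below uses synthetic Poisson mortality data with an injected outbreak and a simple three-sigma-style threshold; the spatial cluster-detection step and the parameters of the actual system are not reproduced.

      # Seasonal-regression anomaly flagging on synthetic weekly mortality;
      # data, outbreak injection and threshold are all hypothetical.
      import numpy as np

      rng = np.random.default_rng(5)
      weeks = np.arange(260)                            # five years of weeks
      baseline = 50 + 10 * np.sin(2 * np.pi * weeks / 52)
      deaths = rng.poisson(baseline)
      deaths[200:208] += rng.poisson(25, 8)             # injected outbreak

      # Harmonic regression fitted by least squares on the history only.
      X = np.column_stack([np.ones_like(weeks),
                           np.sin(2 * np.pi * weeks / 52),
                           np.cos(2 * np.pi * weeks / 52)]).astype(float)
      coef, *_ = np.linalg.lstsq(X[:150], deaths[:150], rcond=None)
      expected = X @ coef
      threshold = expected + 3 * np.sqrt(expected)      # ~Poisson 3-sigma
      alarms = np.where(deaths > threshold)[0]
      print("alarm weeks:", alarms)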

  5. The updated algorithm of the Energy Consumption Program (ECP): A computer model simulating heating and cooling energy loads in buildings

    NASA Technical Reports Server (NTRS)

    Lansing, F. L.; Strain, D. M.; Chai, V. W.; Higgins, S.

    1979-01-01

    The Energy Consumption Program (ECP) was developed to simulate building heating and cooling loads and to compute thermal and electric energy consumption and cost. This article reports on the new algorithms and modifications made in an effort to widen the areas of application. The program structure was rewritten accordingly to refine and advance the building model and to further reduce the processing time and cost. The program is noted for its very low cost and ease of use compared to other available codes. The accuracy of the computations is not sacrificed, however, since the results are expected to lie within ±10% of actual energy meter readings.

  6. Orion Guidance and Control Ascent Abort Algorithm Design and Performance Results

    NASA Technical Reports Server (NTRS)

    Proud, Ryan W.; Bendle, John R.; Tedesco, Mark B.; Hart, Jeremy J.

    2009-01-01

    During the ascent flight phase of NASA's Constellation Program, the Ares launch vehicle propels the Orion crew vehicle to an agreed-upon insertion target. If a failure occurs at any point in time during ascent, then a system must be in place to abort the mission and return the crew to a safe landing with a high probability of success. To achieve continuous abort coverage, one of two sets of effectors is used: either the Launch Abort System (LAS), consisting of the Attitude Control Motor (ACM) and the Abort Motor (AM), or the Service Module (SM), consisting of the SM Orion Main Engine (OME), Auxiliary (Aux) jets, and Reaction Control System (RCS) jets. The LAS effectors are used for aborts from liftoff through the first 30 seconds of second stage flight. The SM effectors are used from that point through Main Engine Cutoff (MECO). There are two distinct sets of Guidance and Control (G&C) algorithms that are designed to maximize the performance of these abort effectors. This paper will outline the necessary inputs to the G&C subsystem, the preliminary design of the G&C algorithms, the ability of the algorithms to predict which abort modes are achievable, and the resulting success of the abort system. Abort success will be measured against the Preliminary Design Review (PDR) abort performance metrics and overall performance will be reported. Finally, potential improvements to the G&C design will be discussed.

  7. Haplotyping algorithms

    SciTech Connect

    Sobel, E.; Lange, K.; O'Connell, J.R.

    1996-12-31

    Haplotyping is the logical process of inferring gene flow in a pedigree based on phenotyping results at a small number of genetic loci. This paper formalizes the haplotyping problem and suggests four algorithms for haplotype reconstruction. These algorithms range from exhaustive enumeration of all haplotype vectors to combinatorial optimization by simulated annealing. Application of the algorithms to published genetic analyses shows that manual haplotyping is often erroneous. Haplotyping is employed in screening pedigrees for phenotyping errors and in positional cloning of disease genes from conserved haplotypes in population isolates. 26 refs., 6 figs., 3 tabs.

  8. Preliminary Results of Laboratory Simulation of Magnetic Reconnection

    NASA Astrophysics Data System (ADS)

    Zhang, Shou-Biao; Xie, Jin-Lin; Hu, Guang-Hai; Li, Hong; Huang, Guang-Li; Liu, Wan-Dong

    2011-10-01

    In the Linear Magnetized Plasma (LMP) device of the University of Science and Technology of China, by exerting parallel currents on two parallel copper plates, we have realized magnetic reconnection in a laboratory plasma. With emissive probes, we have measured the parallel (along the axial direction) electric field in the process of reconnection and verified the dependence of the reconnection current on passing particles. Using a magnetic probe, we have measured the time evolution of the magnetic flux; the measured result shows no pileup of magnetic flux, consistent with the result of numerical simulation.

  9. Fast and robust segmentation of solar EUV images: algorithm and results for solar cycle 23

    NASA Astrophysics Data System (ADS)

    Barra, V.; Delouille, V.; Kretzschmar, M.; Hochedez, J.-F.

    2009-10-01

    Context: The study of the variability of the solar corona and the monitoring of coronal holes, quiet sun and active regions are of great importance in astrophysics as well as for space weather and space climate applications. Aims: In a previous work, we presented the spatial possibilistic clustering algorithm (SPoCA). This is a multi-channel unsupervised spatially-constrained fuzzy clustering method that automatically segments solar extreme ultraviolet (EUV) images into regions of interest. The results we reported on SoHO-EIT images taken from February 1997 to May 2005 were consistent with previous knowledge in terms of both areas and intensity estimations. However, they presented some artifacts due to the method itself. Methods: Herein, we propose a new algorithm, based on SPoCA, that removes these artifacts. We focus on two points: the definition of an optimal clustering with respect to the regions of interest, and the accurate definition of the cluster edges. We moreover propose methodological extensions to this method, and we illustrate these extensions with the automatic tracking of active regions. Results: The much improved algorithm can decompose the whole set of EIT solar images over the 23rd solar cycle into regions that can clearly be identified as quiet sun, coronal hole and active region. The variations of the parameters resulting from the segmentation, i.e. the area, mean intensity, and relative contribution to the solar irradiance, are consistent with previous results and thus validate the decomposition. Furthermore, we find indications for a small variation of the mean intensity of each region in correlation with the solar cycle. Conclusions: The method is generic enough to allow the introduction of other channels or data. New applications are now expected, e.g. related to SDO-AIA data.
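
    The unconstrained core of this family of methods is plain fuzzy c-means, which alternates membership and center updates; a minimal sketch on synthetic 1D intensities (loosely mimicking coronal hole, quiet sun and active region levels) follows. SPoCA's spatial constraints, multi-channel input and edge refinements are not included.

      # Plain fuzzy c-means on synthetic 1D pixel intensities; the three
      # intensity populations are hypothetical, not EIT data.
      import numpy as np

      rng = np.random.default_rng(9)
      x = np.concatenate([rng.normal(m, 5, 500) for m in (20, 90, 200)])

      def fuzzy_cmeans(x, c=3, m=2.0, iters=100):
          centers = rng.choice(x, size=c, replace=False)
          for _ in range(iters):
              d = np.abs(x[:, None] - centers[None, :]) + 1e-12
              u = 1.0 / (d ** (2.0 / (m - 1.0)))
              u /= u.sum(axis=1, keepdims=True)       # fuzzy memberships
              centers = (u**m * x[:, None]).sum(0) / (u**m).sum(0)
          return centers, u

      centers, u = fuzzy_cmeans(x)
      print("cluster centers:", np.sort(centers))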

  10. A novel simulation algorithm on ultrasonic image based on triangular planar transducers

    NASA Astrophysics Data System (ADS)

    Li, Yaqin; Wang, Xuan; Li, Shigao; Zhang, Cong; Sun, Kaiqiong

    2015-12-01

    Calculation of the ultrasonic field of medical transducers is often done by applying acoustics and using the Tupholme-Stepanishen method of calculation. The calculation is based on the spatial impulse response; the spatial impulse response has only been determined analytically for a few geometries, and using apodization over the transducer surface generally makes it impossible to find the response analytically. A popular approach to find the general field is thus to split the aperture into small rectangles, and then sum the weighted responses from each of these. The problem with rectangles is their poor fit to apertures which do not have straight edges, such as circular and oval shapes. In order to solve this problem, a novel algorithm based on triangles is proposed in this paper; the simulation of the ultrasonic field based on this algorithm can be improved considerably.

  11. Parallel two-level domain decomposition based Jacobi-Davidson algorithms for pyramidal quantum dot simulation

    NASA Astrophysics Data System (ADS)

    Zhao, Tao; Hwang, Feng-Nan; Cai, Xiao-Chuan

    2016-07-01

    We consider a quintic polynomial eigenvalue problem arising from the finite volume discretization of a quantum dot simulation problem. The problem is solved by the Jacobi-Davidson (JD) algorithm. Our focus is on how to achieve the quadratic convergence of JD in a way that is not only efficient but also scalable when the number of processor cores is large. For this purpose, we develop a projected two-level Schwarz preconditioned JD algorithm that exploits multilevel domain decomposition techniques. The pyramidal quantum dot calculation is carefully studied to illustrate the efficiency of the proposed method. Numerical experiments confirm that the proposed method has a good scalability for problems with hundreds of millions of unknowns on a parallel computer with more than 10,000 processor cores.

  12. The Local Minima Problem in Hierarchical Classes Analysis: An Evaluation of a Simulated Annealing Algorithm and Various Multistart Procedures

    ERIC Educational Resources Information Center

    Ceulemans, Eva; Van Mechelen, Iven; Leenen, Iwin

    2007-01-01

    Hierarchical classes models are quasi-order retaining Boolean decomposition models for N-way N-mode binary data. To fit these models to data, rationally started alternating least squares (or, equivalently, alternating least absolute deviations) algorithms have been proposed. Extensive simulation studies showed that these algorithms succeed quite…

  13. Airflow Hazard Visualization for Helicopter Pilots: Flight Simulation Study Results

    NASA Technical Reports Server (NTRS)

    Aragon, Cecilia R.; Long, Kurtis R.

    2005-01-01

    Airflow hazards such as vortices or low level wind shear have been identified as a primary contributing factor in many helicopter accidents. US Navy ships generate airwakes over their decks, creating potentially hazardous conditions for shipboard rotorcraft launch and recovery. Recent sensor developments may enable the delivery of airwake data to the cockpit, where visualizing the hazard data may improve safety and possibly extend ship/helicopter operational envelopes. A prototype flight-deck airflow hazard visualization system was implemented on a high-fidelity rotorcraft flight dynamics simulator. Experienced helicopter pilots, including pilots from all five branches of the military, participated in a usability study of the system. Data was collected both objectively from the simulator and subjectively from post-test questionnaires. Results of the data analysis are presented, demonstrating a reduction in crash rate and other trends that illustrate the potential of airflow hazard visualization to improve flight safety.

  14. BWR Full Integral Simulation Test (FIST). Phase I test results

    SciTech Connect

    Hwang, W S; Alamgir, M; Sutherland, W A

    1984-09-01

    A new full-height BWR system simulator has been built under the Full-Integral-Simulation-Test (FIST) program to investigate the system responses to various transients. The test program consists of two test phases. This report provides a summary, discussions, highlights and conclusions of the FIST Phase I tests. Eight matrix tests were conducted in FIST Phase I. These tests investigated large break, small break and steamline break LOCAs, as well as natural circulation and power transients. The results and governing phenomena of each test have been evaluated and are discussed in detail in this report. One of the FIST program objectives is to assess the TRAC code by comparisons with test data. Two pretest predictions made with TRACB02 are presented and compared with test data in this report.

  15. Simulation of flow in the microcirculation using a hybrid Lattice-Boltzmann and Finite Element algorithm

    NASA Astrophysics Data System (ADS)

    Gonzalez-Mancera, Andres; Gonzalez Cardenas, Diego

    2014-11-01

    Flow in the microcirculation is highly dependent on the mechanical properties of the cells suspended in the plasma. Red blood cells have to deform in order to pass through the smaller sections of the microcirculation. Certain diseases change the mechanical properties of red blood cells, affecting their ability to deform and the rheological behaviour of blood. We developed a hybrid algorithm based on the Lattice-Boltzmann and Finite Element methods to simulate blood flow in small capillaries. Plasma was modeled as a Newtonian fluid and the red blood cells' membrane as a hyperelastic solid. The fluid-structure interaction was handled using the immersed boundary method. We simulated the flow of plasma with suspended red blood cells through cylindrical capillaries and measured the pressure drop as a function of the membrane's rigidity. We also simulated the flow through capillaries with a restriction and identified critical properties for which the suspended particles are unable to flow. The algorithm output was verified by reproducing certain common features of flow in the microcirculation, such as the Fahraeus-Lindqvist effect.

  16. A parallel algorithm for transient solid dynamics simulations with contact detection

    SciTech Connect

    Attaway, S.; Hendrickson, B.; Plimpton, S.; Gardner, D.; Vaughan, C.; Heinstein, M.; Peery, J.

    1996-06-01

    Solid dynamics simulations with Lagrangian finite elements are used to model a wide variety of problems, such as the calculation of impact damage to shipping containers for nuclear waste and the analysis of vehicular crashes. Using parallel computers for these simulations has been hindered by the difficulty of searching efficiently for material surface contacts in parallel. A new parallel algorithm for calculation of arbitrary material contacts in finite element simulations has been developed and implemented in the PRONTO3D transient solid dynamics code. This paper will explore some of the issues involved in developing efficient, portable, parallel finite element models for nonlinear transient solid dynamics simulations. The contact-detection problem poses interesting challenges for efficient implementation of a solid dynamics simulation on a parallel computer. The finite element mesh is typically partitioned so that each processor owns a localized region of the finite element mesh. This mesh partitioning is optimal for the finite element portion of the calculation since each processor must communicate only with the few connected neighboring processors that share boundaries with the decomposed mesh. However, contacts can occur between surfaces that may be owned by any two arbitrary processors. Hence, a global search across all processors is required at every time step to search for these contacts. Load-imbalance can become a problem since the finite element decomposition divides the volumetric mesh evenly across processors but typically leaves the surface elements unevenly distributed. In practice, these complications have been limiting factors in the performance and scalability of transient solid dynamics on massively parallel computers. In this paper the authors present a new parallel algorithm for contact detection that overcomes many of these limitations.

  17. Resilient algorithms for reconstructing and simulating gappy flow fields in CFD

    NASA Astrophysics Data System (ADS)

    Lee, Seungjoon; Kevrekidis, Ioannis G.; Karniadakis, George Em

    2015-10-01

    It is anticipated that in future generations of massively parallel computer systems a significant portion of processors may suffer from hardware or software faults rendering large-scale computations useless. In this work we address this problem from the algorithmic side, proposing resilient algorithms that can recover from such faults irrespective of their fault origin. In particular, we set the foundations of a new class of algorithms that will combine numerical approximations with machine learning methods. To this end, we consider three types of fault scenarios: (1) a gappy region but with no previous gaps and no contamination of surrounding simulation data, (2) a space-time gappy region but with full spatiotemporal information and no contamination, and (3) previous gaps with contamination of surrounding data. To recover from such faults we employ different reconstruction and simulation methods, namely the projective integration, the co-Kriging interpolation, and the resimulation method. In order to compare the effectiveness of these methods for the different processor faults and to quantify the error propagation in each case, we perform simulations of two benchmark flows, flow in a cavity and flow past a circular cylinder. In general, the projective integration seems to be the most effective method when the time gaps are small, and the resimulation method is the best when the time gaps are big while the co-Kriging method is independent of time gaps. Furthermore, the projective integration method and the co-Kriging method are found to be good estimation methods for the initial and boundary conditions of the resimulation method in scenario (3).

  18. Modeling results for a linear simulator of a divertor

    SciTech Connect

    Hooper, E.B.; Brown, M.D.; Byers, J.A.; Casper, T.A.; Cohen, B.I.; Cohen, R.H.; Jackson, M.C.; Kaiser, T.B.; Molvik, A.W.; Nevins, W.M.; Nilson, D.G.; Pearlstein, L.D.; Rognlien, T.D.

    1993-06-23

    A divertor simulator, IDEAL, has been proposed by S. Cohen to study the difficult power-handling requirements of the tokamak program in general and the ITER program in particular. Projections of the power density in the ITER divertor reach ~1 GW/m² along the magnetic field lines and >10 MW/m² on a surface inclined at a shallow angle to the field lines. These power densities are substantially greater than can be handled reliably on the surface, so new techniques are required to reduce the power density to a reasonable level. Although the divertor physics must be demonstrated in tokamaks, a linear device could contribute to the development because of its flexibility, the easy access to the plasma and to tested components, and long pulse operation (essentially cw). However, a decision to build a simulator requires not just the recognition of its programmatic value, but also confidence that it can meet the required parameters at an affordable cost. Accordingly, as reported here, it was decided to examine the physics of the proposed device, including kinetic effects resulting from the intense heating required to reach the plasma parameters, and to conduct an independent cost estimate. The detailed role of the simulator in a divertor program is not explored in this report.

  19. Ultrafast vectorized multispin coding algorithm for the Monte Carlo simulation of the 3D Ising model

    NASA Astrophysics Data System (ADS)

    Wansleben, Stephan

    1987-02-01

    A new Monte Carlo algorithm for the 3D Ising model and its implementation on a CDC CYBER 205 is presented. This approach is applicable to lattices with sizes between 3·3·3 and 192·192·192 with periodic boundary conditions, and is adjustable to various kinetic models. It simulates a canonical ensemble at a given temperature, generating a new random number for each spin flip. For the Metropolis transition probability, the speed is 27 ns per update on a two-pipe CDC CYBER 205 with 2 million words of physical memory, i.e. 1.35 times the cycle time per update, or 38 million updates per second.
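
    For contrast with the vectorized multispin-coding implementation, the sketch below spells out the underlying single-spin-flip Metropolis update for the 3D Ising model with periodic boundaries; the bit-packing and vectorization tricks that produce the quoted speed are deliberately omitted, and the lattice size and temperature are arbitrary.

      # Single-spin-flip Metropolis for the 3D Ising model (J = 1, k_B = 1),
      # periodic boundaries; a plain reference version, not multispin coded.
      import numpy as np

      rng = np.random.default_rng(6)
      L, T = 8, 4.0                             # lattice size, temperature
      spins = rng.choice([-1, 1], size=(L, L, L))

      def sweep(spins, beta):
          L = spins.shape[0]
          for _ in range(spins.size):
              i, j, k = rng.integers(0, L, 3)
              nn = (spins[(i+1) % L, j, k] + spins[(i-1) % L, j, k] +
                    spins[i, (j+1) % L, k] + spins[i, (j-1) % L, k] +
                    spins[i, j, (k+1) % L] + spins[i, j, (k-1) % L])
              dE = 2.0 * spins[i, j, k] * nn    # energy cost of flipping
              if dE <= 0 or rng.random() < np.exp(-beta * dE):
                  spins[i, j, k] *= -1

      for _ in range(100):
          sweep(spins, beta=1.0 / T)
      print("magnetization per spin:", spins.mean())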

  20. AVR microcontroller simulator for software implemented hardware fault tolerance algorithms research

    NASA Astrophysics Data System (ADS)

    Piotrowski, Adam; Tarnowski, Szymon; Napieralski, Andrzej

    2008-01-01

    The reliability of new, advanced electronic systems becomes a serious problem especially in places like accelerators and synchrotrons, where sophisticated digital devices operate close to radiation sources. One of the possible solutions to harden a microprocessor-based system is a strict programming approach known as Software Implemented Hardware Fault Tolerance. Unfortunately, in real environments it is not possible to perform precise and accurate tests of the new algorithms due to hardware limitations. This paper highlights the AVR-family microcontroller simulator project, equipped with appropriate monitoring and SEU injection systems.

  1. Simulation of 3D MRI brain images for quantitative evaluation of image segmentation algorithms

    NASA Astrophysics Data System (ADS)

    Wagenknecht, Gudrun; Kaiser, Hans-Juergen; Obladen, Thorsten; Sabri, Osama; Buell, Udalrich

    2000-06-01

    To model the true shape of MRI brain images, automatically classified T1-weighted 3D MRI images (gray matter, white matter, cerebrospinal fluid, scalp/bone and background) are utilized for simulation of grayscale data and imaging artifacts. For each class, Gaussian distribution of grayscale values is assumed, and mean and variance are computed from grayscale images. A random generator fills up the class images with Gauss-distributed grayscale values. Since grayscale values of neighboring voxels are not correlated, a Gaussian low-pass filtering is done, preserving class region borders. To simulate anatomical variability, a Gaussian distribution in space with user-defined mean and variance can be added at any user-defined position. Several imaging artifacts can be added: (1) to simulate partial volume effects, every voxel is averaged with neighboring voxels if they have a different class label; (2) a linear or quadratic bias field can be added with user-defined strength and orientation; (3) additional background noise can be added; and (4) artifacts left over after spoiling can be simulated by adding a band with increasing/decreasing grayscale values. With this method, realistic-looking simulated MRI images can be produced to test classification and segmentation algorithms regarding accuracy and robustness even in the presence of artifacts.
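
    A compact version of the described pipeline (class-wise Gaussian intensities, low-pass filtering, a linear bias field, additive noise) is sketched below on a small synthetic 2D label image. All class labels, means and variances are placeholders, and the plain Gaussian blur used here does not preserve class borders the way the authors' filtering does.

      # Toy 2D version of the grayscale-simulation pipeline; labels, class
      # statistics and artifact strengths are hypothetical. Requires SciPy.
      import numpy as np
      from scipy.ndimage import gaussian_filter

      rng = np.random.default_rng(7)
      labels = np.zeros((64, 64), dtype=int)    # 0=background, 1=WM, 2=GM
      labels[16:48, 16:48] = 1
      labels[24:40, 24:40] = 2
      mean = {0: 10.0, 1: 120.0, 2: 80.0}
      std = {0: 2.0, 1: 8.0, 2: 6.0}

      img = np.zeros(labels.shape)
      for c in mean:                            # Gauss-distributed grayscale
          m = labels == c
          img[m] = rng.normal(mean[c], std[c], m.sum())
      img = gaussian_filter(img, sigma=0.8)     # smoothing (not border-safe)
      yy, xx = np.mgrid[0:64, 0:64]
      img *= 1.0 + 0.002 * xx                   # linear bias field
      img += rng.normal(0, 1.5, img.shape)      # background noise
      print(img.min(), img.max())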

  2. A simple protocol for the probability weights of the simulated tempering algorithm: Applications to first-order phase transitions

    NASA Astrophysics Data System (ADS)

    Fiore, Carlos E.; da Luz, M. G. E.

    2010-12-01

    The simulated tempering (ST) is an important method to deal with systems whose phase spaces are hard to sample ergodically. However, it uses acceptance probability weights, which often demand involved and time-consuming calculations. Here it is shown that such weights are quite accurately obtained from the largest eigenvalue of the transfer matrix, a quantity straightforward to compute from direct Monte Carlo simulations, thus simplifying the algorithm implementation. As tests, different systems are considered, namely the Ising, Blume-Capel, Blume-Emery-Griffiths, and Bell-Lavis liquid water models. In particular, we address first-order phase transitions at low temperatures, a regime notoriously difficult to simulate because of the large free-energy barriers. The good results found (when compared with other well-established approaches) suggest that ST can be a valuable tool to address strong first-order phase transitions, a possibility still not well explored in the literature.
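
    For orientation, the sketch below shows the temperature-move acceptance rule of simulated tempering; the weights g_k, ideally close to the dimensionless free energy at each inverse temperature, are exactly what the paper proposes to estimate from the transfer matrix's largest eigenvalue, and here they are left as user-supplied inputs.

      # Simulated tempering temperature move: the configuration is fixed
      # (energy E) and a neighboring inverse temperature is proposed.
      # The weights g are user inputs (placeholder zeros below).
      import numpy as np

      rng = np.random.default_rng(8)

      def attempt_temperature_move(E, k, betas, g):
          """Propose moving from temperature index k to a neighbor k2."""
          k2 = k + rng.choice([-1, 1])
          if not 0 <= k2 < len(betas):
              return k
          # ST acceptance: min(1, exp(-(beta2 - beta1)*E + g2 - g1))
          if rng.random() < min(1.0, np.exp(-(betas[k2] - betas[k]) * E
                                            + g[k2] - g[k])):
              return k2
          return k

      betas = np.linspace(0.2, 1.0, 5)     # inverse temperatures
      g = np.zeros(5)                      # weights (ideally ~ beta_k*f_k)
      print(attempt_temperature_move(E=-10.0, k=2, betas=betas, g=g))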

  3. Numerical Simulation of Turbulent MHD Flows Using an Iterative PNS Algorithm

    NASA Technical Reports Server (NTRS)

    Kato, Hiromasa; Tannehill, John C.; Mehta, Unmeel B.

    2003-01-01

    A new parabolized Navier-Stokes (PNS) algorithm has been developed to efficiently compute magnetohydrodynamic (MHD) flows in the low magnetic Reynolds number regime. In this regime, the electrical conductivity is low and the induced magnetic field is negligible compared to the applied magnetic field. The MHD effects are modeled by introducing source terms into the PNS equations, which can then be solved in a very efficient manner. To account for upstream (elliptic) effects, the flowfields are computed using multiple streamwise sweeps with an iterated PNS algorithm. Turbulence has been included by modifying the Baldwin-Lomax turbulence model to account for MHD effects. The new algorithm has been used to compute both laminar and turbulent, supersonic, MHD flows over flat plates and supersonic viscous flows in a rectangular MHD accelerator. The present results are in excellent agreement with previous complete Navier-Stokes calculations.

  4. First results from the COST-HOME monthly benchmark dataset with temperature and precipitation data for testing homogenisation algorithms

    NASA Astrophysics Data System (ADS)

    Venema, Victor; Mestre, Olivier

    2010-05-01

    As part of the COST Action HOME (Advances in homogenisation methods of climate series: an integrated approach) a dataset was generated that serves as a benchmark for homogenisation algorithms. Members of the Action and third parties have been invited to homogenise this dataset. The results of this exercise are analysed by the HOME Working Groups (WG) on detection (WG2) and correction (WG3) algorithms to obtain recommendations for a standard homogenisation procedure for climate data. This talk will briefly describe this benchmark dataset and present first results comparing the quality of the roughly 25 contributions. Based upon a survey among homogenisation experts, we chose to work with monthly values for temperature and precipitation. Temperature and precipitation were selected because most participants consider these elements the most relevant for their studies. Furthermore, they represent two important types of statistics (additive and multiplicative). The benchmark has three different types of datasets: real data, surrogate data and synthetic data. The real datasets allow comparing the different homogenisation methods with the most realistic type of data and inhomogeneities. Thus this part of the benchmark is important for a faithful comparison of algorithms with each other. However, as in this case the truth is not known, it is not possible to quantify the improvements due to homogenisation. Therefore, the benchmark also has two datasets with artificial data to which we inserted known inhomogeneities: surrogate and synthetic data. The aim of surrogate data is to reproduce the structure of measured data accurately enough that it can be used as a substitute for measurements. The surrogate climate networks have the spatial and temporal auto- and cross-correlation functions of real homogenised networks as well as the exact (non-Gaussian) distribution for each station. The idealised synthetic data is based on the surrogate networks. The change is that the difference
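
    A toy illustration of the synthetic-data idea described above: a homogeneous monthly series (seasonal cycle plus red noise) into which break-type inhomogeneities with known dates and sizes are inserted. The series length, noise model, and break statistics are invented, not the benchmark's actual specification.

        import numpy as np

        rng = np.random.default_rng(7)
        n_years, months = 100, 12
        t = np.arange(n_years * months)

        # Idealised homogeneous series: seasonal cycle plus AR(1) red noise
        season = 10.0 * np.sin(2 * np.pi * t / 12.0)
        noise = np.zeros(t.size)
        for i in range(1, t.size):
            noise[i] = 0.6 * noise[i - 1] + rng.normal(0.0, 0.8)
        homogeneous = season + noise

        # Insert known break-type inhomogeneities at random dates
        inhomogeneous = homogeneous.copy()
        breaks = sorted(rng.choice(t.size, size=5, replace=False))
        for b in breaks:
            inhomogeneous[b:] += rng.normal(0.0, 0.8)
        print(breaks)   # the "truth" a homogenisation algorithm must recover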

  5. Results from CrIS/ATMS Obtained Using an AIRS "Version-6 like" Retrieval Algorithm

    NASA Technical Reports Server (NTRS)

    Susskind, Joel; Kouvaris, Louis; Iredell, Lena

    2015-01-01

    We tested and evaluated Version-6.22 AIRS and Version-6.22 CrIS products on a single day, December 4, 2013, and compared results to those derived using AIRS Version-6. AIRS and CrIS Version-6.22 O3(p) and q(p) products are both superior to those of AIRS Version-6. All AIRS and CrIS products agree reasonably well with each other. CrIS Version-6.22 T(p) and q(p) results are slightly poorer than AIRS over land, especially under very cloudy conditions. Both AIRS and CrIS Version-6.22 now run at JPL. Our short-term plans are to analyze many common months at JPL in the near future using Version-6.22 or a further improved algorithm to assess the compatibility of AIRS and CrIS monthly mean products and their interannual differences. Updates to the calibration of both CrIS and ATMS are still being finalized. JPL plans, in collaboration with the Goddard DISC, to reprocess all AIRS data using a still-to-be-finalized Version-7 retrieval algorithm, and to reprocess all recalibrated CrIS/ATMS data using Version-7 as well.

  6. Results from CrIS/ATMS Obtained Using an AIRS "Version-6 Like" Retrieval Algorithm

    NASA Technical Reports Server (NTRS)

    Susskind, Joel; Kouvaris, Louis; Iredell, Lena

    2015-01-01

    We have tested and evaluated Version-6.22 AIRS and Version-6.22 CrIS products on a single day, December 4, 2013, and compared results to those derived using AIRS Version-6. AIRS and CrIS Version-6.22 O3(p) and q(p) products are both superior to those of AIRS Version-6. All AIRS and CrIS products agree reasonably well with each other. CrIS Version-6.22 T(p) and q(p) results are slightly poorer than AIRS under very cloudy conditions. Both AIRS and CrIS Version-6.22 now run at JPL. Our short-term plans are to analyze many common months at JPL in the near future using Version-6.22 or a further improved algorithm to assess the compatibility of AIRS and CrIS monthly mean products and their interannual differences. Updates to the calibration of both CrIS and ATMS are still being finalized. JPL plans, in collaboration with the Goddard DISC, to reprocess all AIRS data using a still-to-be-finalized Version-7 retrieval algorithm, and to reprocess all recalibrated CrIS/ATMS data using Version-7 as well.

  7. A Formal Algorithm for Verifying the Validity of Clustering Results Based on Model Checking

    PubMed Central

    Huang, Shaobin; Cheng, Yuan; Lang, Dapeng; Chi, Ronghua; Liu, Guofeng

    2014-01-01

    The limitations of general methods for evaluating clustering will remain difficult to overcome as long as verification of clustering validity continues to be based on clustering results and evaluation index values. This study focuses on the clustering process to analyze crisp clustering validity. First, we define the properties that must be satisfied by valid clustering processes and model clustering processes based on program graphs and transition systems. We then recast the analysis of clustering validity as the problem of verifying whether the model of clustering processes satisfies the specified properties with model checking. That is, we try to build a bridge between clustering and model checking. Experiments on several datasets indicate the effectiveness and suitability of our algorithms. Compared with traditional evaluation indices, our formal method can not only indicate whether the clustering results are valid but, when the results are invalid, can also detect the objects that have led to the invalidity. PMID:24608823

  8. Statistically significant performance results of a mine detector and fusion algorithm from an x-band high-resolution SAR

    NASA Astrophysics Data System (ADS)

    Williams, Arnold C.; Pachowicz, Peter W.

    2004-09-01

    Current mine detection research indicates that no single sensor or single look from a sensor will detect mines/minefields in real time at a performance level suitable for a forward maneuver unit. Hence, the integrated development of detectors and fusion algorithms is of primary importance. A problem in this development process has been the evaluation of these algorithms with relatively small data sets, leading to anecdotal and frequently overtrained results. These anecdotal results are often unreliable and conflicting among various sensors and algorithms. Consequently, the physical phenomena that ought to be exploited and the performance benefits of this exploitation are often ambiguous. The Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate has collected large amounts of multisensor data such that statistically significant evaluations of detection and fusion algorithms can be obtained. Even with these large data sets, care must be taken in algorithm design and data processing to achieve statistically significant performance results for combined detectors and fusion algorithms. This paper discusses statistically significant detection and combined multilook fusion results for the Ellipse Detector (ED) and the Piecewise Level Fusion Algorithm (PLFA). These statistically significant performance results are characterized by ROC curves that have been obtained by processing this multilook, high-resolution SAR data from the Veridian X-band radar. We discuss the implications of these results for mine detection and the importance of statistical significance, sample size, ground truth, and algorithm design in performance evaluation.
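
    A compact sketch of how such a ROC curve, and a bootstrap confidence interval for its area under the curve, can be computed from detector scores. All data below are synthetic stand-ins, not the ED/PLFA results.

        import numpy as np

        def roc_points(scores, labels):
            """Empirical ROC: labels 1 = target, 0 = clutter."""
            order = np.argsort(-scores)            # descending threshold sweep
            hits = labels[order]
            tpr = np.cumsum(hits) / max(hits.sum(), 1)
            fpr = np.cumsum(1 - hits) / max((1 - hits).sum(), 1)
            return np.concatenate(([0.0], fpr)), np.concatenate(([0.0], tpr))

        def auc(fpr, tpr):
            # trapezoidal area under the ROC curve
            return float(np.sum(np.diff(fpr) * 0.5 * (tpr[1:] + tpr[:-1])))

        rng = np.random.default_rng(0)
        labels = rng.integers(0, 2, 2000)
        scores = rng.normal(labels.astype(float), 1.0)   # synthetic detector
        fpr, tpr = roc_points(scores, labels)
        print(f"AUC = {auc(fpr, tpr):.3f}")

        # Bootstrap resampling makes the dependence of statistical
        # significance on sample size explicit.
        boots = []
        for _ in range(200):
            idx = rng.integers(0, labels.size, labels.size)
            boots.append(auc(*roc_points(scores[idx], labels[idx])))
        print("95% CI:", np.percentile(boots, [2.5, 97.5]))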

  9. Simulation of rice plant temperatures using the UC Davis Advanced Canopy-Atmosphere-Soil Algorithm (ACASA)

    NASA Astrophysics Data System (ADS)

    Maruyama, A.; Pyles, D.; Paw U, K.

    2009-12-01

    The thermal environment in the plant canopy affects plant growth processes such as flowering and ripening. High temperatures often cause grain sterility and poor filling in cereal crops and reduce production in tropical and temperate regions. With global warming predicted, these effects have become a major concern worldwide. In this study, we observed plant body temperature profiles in a rice canopy and simulated them using a higher-order closure micrometeorological model to understand the relationship between plant temperatures and atmospheric conditions. Experiments were conducted in a rice paddy during the 2007 summer season under a warm temperate climate in Japan. Leaf temperatures at three different heights (0.3, 0.5, 0.7 m) and panicle temperatures at 0.9 m were measured using fine thermocouples. The UC Davis Advanced Canopy-Atmosphere-Soil Algorithm (ACASA) was used to calculate plant body temperature profiles in the canopy. ACASA is based on radiation transfer, higher-order closure of the turbulence equations for mass and heat exchange, and detailed plant physiological parameterization for the canopy-atmosphere-soil system. Water temperature was almost constant at 21-23 C throughout the summer because of continuous irrigation; therefore, a larger difference between the air temperature at 2 m and the water temperature was found during daytime. Observed leaf/panicle temperatures were lower near the water surface and higher in the upper layers of the canopy. The temperature difference between 0.3 m and 0.9 m was around 3-4 C during daytime and around 1-2 C at nighttime. The ACASA calculations reproduced these trends in the plant temperature profile well. However, the simulated relationship between plant and air temperature in the canopy differed somewhat from the observations: observed leaf/panicle temperatures were almost the same as the air temperature, whereas the simulated air temperature was 0.5-1.5 C higher than the plant temperatures for both daytime and nighttime.

  10. Quantum mechanical NMR simulation algorithm for protein-size spin systems

    NASA Astrophysics Data System (ADS)

    Edwards, Luke J.; Savostyanov, D. V.; Welderufael, Z. T.; Lee, Donghan; Kuprov, Ilya

    2014-06-01

    Nuclear magnetic resonance spectroscopy is one of the few remaining areas of physical chemistry for which polynomially scaling quantum mechanical simulation methods have not so far been available. In this communication we adapt the restricted state space approximation to protein NMR spectroscopy and illustrate its performance by simulating common 2D and 3D liquid state NMR experiments (including accurate description of relaxation processes using Bloch-Redfield-Wangsness theory) on isotopically enriched human ubiquitin - a protein containing over a thousand nuclear spins forming an irregular polycyclic three-dimensional coupling lattice. The algorithm uses careful tailoring of the density operator space to only include nuclear spin states that are populated to a significant extent. The reduced state space is generated by analysing spin connectivity and decoherence properties: rapidly relaxing states as well as correlations between topologically remote spins are dropped from the basis set.

  11. Simulation of Long Lived Tracers Using an Improved Empirically Based Two-Dimensional Model Transport Algorithm

    NASA Technical Reports Server (NTRS)

    Fleming, E. L.; Jackman, C. H.; Stolarski, R. S.; Considine, D. B.

    1998-01-01

    We have developed a new empirically-based transport algorithm for use in our GSFC two-dimensional transport and chemistry model. The new algorithm contains planetary wave statistics, and parameterizations to account for the effects due to gravity waves and equatorial Kelvin waves. As such, this scheme utilizes significantly more information compared to our previous algorithm which was based only on zonal mean temperatures and heating rates. The new model transport captures much of the qualitative structure and seasonal variability observed in long lived tracers, such as: isolation of the tropics and the southern hemisphere winter polar vortex; the well mixed surf-zone region of the winter sub-tropics and mid-latitudes; the latitudinal and seasonal variations of total ozone; and the seasonal variations of mesospheric H2O. The model also indicates a double peaked structure in methane associated with the semiannual oscillation in the tropical upper stratosphere. This feature is similar in phase but is significantly weaker in amplitude compared to the observations. The model simulations of carbon-14 and strontium-90 are in good agreement with observations, both in simulating the peak in mixing ratio at 20-25 km, and the decrease with altitude in mixing ratio above 25 km. We also find mostly good agreement between modeled and observed age of air determined from SF6 outside of the northern hemisphere polar vortex. However, observations inside the vortex reveal significantly older air compared to the model. This is consistent with the model deficiencies in simulating CH4 in the northern hemisphere winter high latitudes and illustrates the limitations of the current climatological zonal mean model formulation. The propagation of seasonal signals in water vapor and CO2 in the lower stratosphere showed general agreement in phase, and the model qualitatively captured the observed amplitude decrease in CO2 from the tropics to midlatitudes. However, the simulated seasonal

  12. A Multirate Variable-timestep Algorithm for N-body Solar System Simulations with Collisions

    NASA Astrophysics Data System (ADS)

    Sharp, P. W.; Newman, W. I.

    2016-03-01

    We present and analyze the performance of a new algorithm for performing accurate simulations of the solar system when collisions between massive bodies and test particles are permitted. The orbital motion of all bodies at all times is integrated using a high-order variable-timestep explicit Runge-Kutta Nyström (ERKN) method. The variation in the timestep ensures that the orbital motion of test particles on eccentric orbits or close to the Sun is calculated accurately. The test particles are divided into groups and each group is integrated using a different sequence of timesteps, giving a multirate algorithm. The ERKN method uses a high-order continuous approximation to the position and velocity when checking for collisions across a step. We give a summary of the extensive testing of our algorithm. In our largest simulation—that of the Sun, the planets Earth to Neptune and 100,000 test particles over 100 million years—the relative error in the energy after 100 million years was of the order of 10^-11.
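
    A toy sketch of the multirate grouping idea: test particles around a central mass are binned into power-of-two substep groups by orbital period, and each group advances through a shared macro step with its own substep size. A plain kick-drift-kick leapfrog stands in for the paper's ERKN integrator, and all counts and tolerances are invented.

        import numpy as np

        GM = 4 * np.pi**2                  # Sun, units of AU and yr (GM = 4*pi^2)
        rng = np.random.default_rng(3)

        a = rng.uniform(0.3, 5.0, 1000)    # test-particle semi-major axes (AU)
        period = np.sqrt(a**3)             # orbital periods (yr), Kepler III
        base = 1.0 / 64.0                  # global macro step (yr)
        # power-of-two substep counts (targeting ~256 steps per orbit)
        nsub = 2 ** np.ceil(np.log2(np.maximum(1.0, 256.0 * base / period))).astype(int)

        def accel(pos):
            r = np.linalg.norm(pos, axis=1, keepdims=True)
            return -GM * pos / r**3

        pos = np.stack([a, np.zeros_like(a)], axis=1)                 # on x axis
        vel = np.stack([np.zeros_like(a), np.sqrt(GM / a)], axis=1)   # circular

        for step in range(64):             # one year of macro steps
            for n in np.unique(nsub):
                group = nsub == n
                h = base / n
                for _ in range(n):         # kick-drift-kick leapfrog substeps
                    vel[group] += 0.5 * h * accel(pos[group])
                    pos[group] += h * vel[group]
                    vel[group] += 0.5 * h * accel(pos[group])
        print({int(n): int((nsub == n).sum()) for n in np.unique(nsub)})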

  13. Exploring Scheduling Algorithms and Analysis Tools for the LSST Operations Simulations

    NASA Astrophysics Data System (ADS)

    Petry, Catherine E.; Miller, M.; Cook, K. H.; Ridgway, S.; Chandrasekharan, S.; Jones, R. L.; Krughoff, K. S.; Ivezic, Z.; Krabbendam, V.

    2012-01-01

    The LSST Operations Simulator models the telescope's design-specific opto-mechanical system performance and site-specific conditions to simulate how observations may be obtained during a 10-year survey. We have found that a remarkable range of science programs is compatible with a single feasible cadence. The current version, OpSim v2.5, incorporates detailed models of the telescope and dome, the camera, weather, and a more realistic model for scheduled and unscheduled downtime, as well as a scheduling strategy based on ranking requests for observations from a small number of observing modes attempting to optimize the key science objectives. Each observing mode is driven by a specific algorithm which ranks field-filter combinations of target fields to observe next. The output of the simulator is a detailed record of the activity of the telescope - such as position on the sky, slew activities, weather and various types of downtime - stored in a MySQL database. Sophisticated tools are required to mine this database in order to assess the degree of success of any simulated survey in some detail. An analysis pipeline has been created (SSTAR) which generates a standard report describing the basic characteristics of a simulated survey; a new analysis framework is being designed to allow for the inter-comparison of one or more simulated surveys and to perform more complex analyses in a pipeline fashion. Proprietary software is being used to interactively explore the database and to prototype reports for the new analysis pipeline, and we are working with the ASCOT team (http://ascot.astro.washington.edu) to determine the feasibility of creating our own interactive tools. The next phase of simulator development is being planned to include look-ahead to continue investigating the trade-offs of addressing multiple science goals within a single LSST survey.
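
    Mining such an output database amounts to SQL aggregation over the observation history. The sketch below uses an in-memory sqlite3 table for self-containedness; the table and column names are hypothetical stand-ins, not the actual OpSim schema.

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.execute(
            "CREATE TABLE obshistory (fieldid INT, filter TEXT,"
            " slewtime REAL, airmass REAL)")
        con.executemany("INSERT INTO obshistory VALUES (?, ?, ?, ?)", [
            (1, "r", 4.8, 1.1), (1, "g", 6.2, 1.3),
            (2, "r", 5.1, 1.0), (2, "i", 7.0, 1.5),
        ])
        # per-filter visit counts and mean slew time, the kind of summary
        # a report like SSTAR might tabulate
        for row in con.execute(
                "SELECT filter, COUNT(*), AVG(slewtime)"
                " FROM obshistory GROUP BY filter"):
            print(row)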

  14. Simulation results of corkscrew motion in DARHT-II

    SciTech Connect

    Chan, K. D.; Ekdahl, C. A.; Chen, Y. J.; Hughes, T. P.

    2003-01-01

    DARHT-II, the second axis of the Dual-Axis Radiographic Hydrodynamics Test Facility, is being commissioned. DARHT-II is a linear induction accelerator producing 2-microsecond electron beam pulses at 20 MeV and 2 kA. These 2-microsecond pulses will be chopped into four short pulses to produce time resolved x-ray images. Radiographic application requires the DARHT-II beam to have excellent beam quality, and it is important to study various beam effects that may cause quality degradation of a DARHT-II beam. One of the beam dynamic effects under study is 'corkscrew' motion. For corkscrew motion, the beam centroid is deflected off axis due to misalignments of the solenoid magnets. The deflection depends on the beam energy variation, which is expected to vary by ±0.5% during the 'flat-top' part of a beam pulse. Such chromatic aberration will result in broadening of beam spot size. In this paper, we will report simulation results of our study of corkscrew motion in DARHT-II. Sensitivities of beam spot size to various accelerator parameters and the strategy for minimizing corkscrew motion will be described. Measured magnet misalignment is used in the simulation.

  15. Results from tight and loose coupled multiphysics in nuclear fuels performance simulations using BISON

    SciTech Connect

    Novascone, S. R.; Spencer, B. W.; Andrs, D.; Williamson, R. L.; Hales, J. D.; Perez, D. M.

    2013-07-01

    The behavior of nuclear fuel in the reactor environment is affected by multiple physics, most notably heat conduction and solid mechanics, which can have a strong influence on each other. To provide credible solutions, a fuel performance simulation code must have the ability to obtain solutions for each of the physics, including coupling between them. Solution strategies for solving systems of coupled equations can be categorized as loosely coupled, where the individual physics are solved separately, keeping the solutions for the other physics fixed at each iteration, or tightly coupled, where the nonlinear solver simultaneously drives down the residual for each physics, taking into account the coupling between the physics in each nonlinear iteration. In this paper, we compare the performance of loosely and tightly coupled solution algorithms for thermomechanical problems involving coupled thermal and mechanical contact, which is a primary source of interdependence between thermal and mechanical solutions in fuel performance models. The results indicate that loosely coupled simulations require significantly more nonlinear iterations and may lead to convergence trouble when the thermal conductivity of the gap is too small. We also apply the tightly coupled solution strategy to a nuclear fuel simulation of an experiment in a test reactor. The results from these simulations indicate that convergence for either approach may be problem dependent, i.e., there may be problems for which a loosely coupled approach converges where a tightly coupled one will not, and vice versa.
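
    The loose-versus-tight distinction can be made concrete on a toy two-field system. Below, a made-up "thermal" equation and a made-up "expansion" equation are solved first by Picard iteration (each physics solved with the other held fixed) and then by Newton's method on the combined residual; all coefficients are invented for illustration and are unrelated to BISON.

        import numpy as np

        # Toy coupling: T = 400 + 50/(0.1 + u)  (heat across a gap-like u)
        #               u = 1e-3 * (T - 300)    (thermal expansion)
        def residual(x):
            T, u = x
            return np.array([T - 400.0 - 50.0 / (0.1 + u),
                             u - 1e-3 * (T - 300.0)])

        def jacobian(x):
            T, u = x
            return np.array([[1.0, 50.0 / (0.1 + u) ** 2],
                             [-1e-3, 1.0]])

        # Loosely coupled (Picard): alternate the two solves
        T, u = 300.0, 0.0
        for it in range(100):
            T = 400.0 + 50.0 / (0.1 + u)
            u = 1e-3 * (T - 300.0)
            if np.linalg.norm(residual([T, u])) < 1e-10:
                break
        print("Picard iterations:", it + 1)

        # Tightly coupled: Newton drives both residuals down at once
        x = np.array([300.0, 0.0])
        for it in range(100):
            r = residual(x)
            if np.linalg.norm(r) < 1e-10:
                break
            x = x - np.linalg.solve(jacobian(x), r)
        print("Newton iterations:", it)

    Shrinking the gap-conductance-like constants strengthens the inter-field feedback, which slows or breaks the Picard loop while Newton keeps converging, mirroring the gap-conductivity sensitivity reported above.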

  16. Results from Tight and Loose Coupled Multiphysics in Nuclear Fuels Performance Simulations using BISON

    SciTech Connect

    S. R. Novascone; B. W. Spencer; D. Andrs; R. L. Williamson; J. D. Hales; D. M. Perez

    2013-05-01

    The behavior of nuclear fuel in the reactor environment is affected by multiple physics, most notably heat conduction and solid mechanics, which can have a strong influence on each other. To provide credible solutions, a fuel performance simulation code must have the ability to obtain solutions for each of the physics, including coupling between them. Solution strategies for solving systems of coupled equations can be categorized as loosely coupled, where the individual physics are solved separately, keeping the solutions for the other physics fixed at each iteration, or tightly coupled, where the nonlinear solver simultaneously drives down the residual for each physics, taking into account the coupling between the physics in each nonlinear iteration. In this paper, we compare the performance of loosely and tightly coupled solution algorithms for thermomechanical problems involving coupled thermal and mechanical contact, which is a primary source of interdependence between thermal and mechanical solutions in fuel performance models. The results indicate that loosely coupled simulations require significantly more nonlinear iterations and may lead to convergence trouble when the thermal conductivity of the gap is too small. We also apply the tightly coupled solution strategy to a nuclear fuel simulation of an experiment in a test reactor. The results from these simulations indicate that convergence for either approach may be problem dependent, i.e., there may be problems for which a loosely coupled approach converges where a tightly coupled one will not, and vice versa.

  17. Linear-scaling source-sink algorithm for simulating time-resolved quantum transport and superconductivity

    NASA Astrophysics Data System (ADS)

    Weston, Joseph; Waintal, Xavier

    2016-04-01

    We report on a "source-sink" algorithm which allows one to calculate time-resolved physical quantities from a general nanoelectronic quantum system (described by an arbitrary time-dependent quadratic Hamiltonian) connected to infinite electrodes. Although mathematically equivalent to the nonequilibrium Green's function formalism, the approach is based on the scattering wave functions of the system. It amounts to solving a set of generalized Schrödinger equations that include an additional "source" term (coming from the time-dependent perturbation) and an absorbing "sink" term (the electrodes). The algorithm execution time scales linearly with both system size and simulation time, allowing one to simulate large systems (currently around 10^6 degrees of freedom) and/or large times (currently around 10^5 times the smallest time scale of the system). As an application we calculate the current-voltage characteristics of a Josephson junction for both short and long junctions, and recover the multiple Andreev reflection physics. We also discuss two intrinsically time-dependent situations: the relaxation time of a Josephson junction after a quench of the voltage bias, and the propagation of voltage pulses through a Josephson junction. In the case of a ballistic, long Josephson junction, we predict that a fast voltage pulse creates an oscillatory current whose frequency is controlled by the Thouless energy of the normal part. A similar effect is found for short junctions; a voltage pulse produces an oscillating current which, in the absence of electromagnetic environment, does not relax.
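
    A toy 1D illustration of the "sink" half of the idea: a tight-binding wave packet evolved under an effective Hamiltonian with an imaginary absorbing potential at the chain ends, so outgoing amplitude leaves the finite region instead of reflecting. The chain length, absorber profile, packet parameters, and RK4 stepping are invented for illustration and are not the authors' formulation.

        import numpy as np

        N, dt, steps = 400, 0.1, 3000
        H = np.zeros((N, N), complex)
        for i in range(N - 1):
            H[i, i + 1] = H[i + 1, i] = -1.0        # nearest-neighbour hopping

        # imaginary "sink" potential, strongest at the chain ends
        sink = np.zeros(N)
        edge = np.arange(40)
        sink[:40] = 0.1 * (1.0 - edge / 40.0) ** 2
        sink[-40:] = sink[:40][::-1]
        Heff = H - 1j * np.diag(sink)

        x = np.arange(N)
        psi = np.exp(-((x - 80.0) ** 2) / 50.0 + 1j * 0.8 * x)  # moving packet
        psi /= np.linalg.norm(psi)

        def f(p):
            # right-hand side of i d(psi)/dt = Heff psi
            return -1j * (Heff @ p)

        for _ in range(steps):                       # classic RK4 stepping
            k1 = f(psi)
            k2 = f(psi + 0.5 * dt * k1)
            k3 = f(psi + 0.5 * dt * k2)
            k4 = f(psi + dt * k3)
            psi = psi + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        print("remaining norm:", np.linalg.norm(psi) ** 2)  # decays as absorbed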

  18. An Adaptive Multigrid Algorithm for Simulating Solid Tumor Growth Using Mixture Models

    PubMed Central

    Wise, S.M.; Lowengrub, J.S.; Cristini, V.

    2010-01-01

    In this paper we give the details of the numerical solution of a three-dimensional multispecies diffuse interface model of tumor growth, which was derived in (Wise et al., J. Theor. Biol. 253 (2008)) and used to study the development of glioma in (Frieboes et al., NeuroImage 37 (2007)) and tumor invasion in (Bearer et al., Cancer Research, 69 (2009)) and (Frieboes et al., J. Theor. Biol. 264 (2010)). The model has a thermodynamic basis, is related to recently developed mixture models, and is capable of providing a detailed description of tumor progression. It utilizes a diffuse interface approach, whereby sharp tumor boundaries are replaced by narrow transition layers that arise due to differential adhesive forces among the cell-species. The model consists of fourth-order nonlinear advection-reaction-diffusion equations (of Cahn-Hilliard-type) for the cell-species coupled with reaction-diffusion equations for the substrate components. Numerical solution of the model is challenging because the equations are coupled, highly nonlinear, and numerically stiff. In this paper we describe a fully adaptive, nonlinear multigrid/finite difference method for efficiently solving the equations. We demonstrate the convergence of the algorithm and we present simulations of tumor growth in 2D and 3D that demonstrate the capabilities of the algorithm in accurately and efficiently simulating the progression of tumors with complex morphologies. PMID:21076663

  19. An Adaptive Multigrid Algorithm for Simulating Solid Tumor Growth Using Mixture Models.

    PubMed

    Wise, S M; Lowengrub, J S; Cristini, V

    2011-01-01

    In this paper we give the details of the numerical solution of a three-dimensional multispecies diffuse interface model of tumor growth, which was derived in (Wise et al., J. Theor. Biol. 253 (2008)) and used to study the development of glioma in (Frieboes et al., NeuroImage 37 (2007)) and tumor invasion in (Bearer et al., Cancer Research, 69 (2009)) and (Frieboes et al., J. Theor. Biol. 264 (2010)). The model has a thermodynamic basis, is related to recently developed mixture models, and is capable of providing a detailed description of tumor progression. It utilizes a diffuse interface approach, whereby sharp tumor boundaries are replaced by narrow transition layers that arise due to differential adhesive forces among the cell-species. The model consists of fourth-order nonlinear advection-reaction-diffusion equations (of Cahn-Hilliard-type) for the cell-species coupled with reaction-diffusion equations for the substrate components. Numerical solution of the model is challenging because the equations are coupled, highly nonlinear, and numerically stiff. In this paper we describe a fully adaptive, nonlinear multigrid/finite difference method for efficiently solving the equations. We demonstrate the convergence of the algorithm and we present simulations of tumor growth in 2D and 3D that demonstrate the capabilities of the algorithm in accurately and efficiently simulating the progression of tumors with complex morphologies. PMID:21076663

  20. Simulation of Propellant Loading System Senior Design Implement in Computer Algorithm

    NASA Technical Reports Server (NTRS)

    Bandyopadhyay, Alak

    2010-01-01

    Propellant loading from the Storage Tank to the External Tank is one of the most important and time-consuming pre-launch ground operations for the launch vehicle. The propellant loading system is a complex integrated system involving many physical components, such as the storage tank filled with cryogenic fluid at a very low temperature, the long pipeline connecting the storage tank with the external tank, the external tank along with the flare stack, and vent systems for releasing the excess fuel. Some of the parameters most useful for design purposes are predictions of the pre-chill time, the loading time, the amount of fuel lost, the maximum pressure rise, etc. The physics involved in the mathematical modeling is quite complex because the process is unsteady, there is phase change as some of the fuel passes from the liquid to the gas state, and there is conjugate heat transfer in the pipe walls as well as between the solid and fluid regions. The simulation is also very tedious and time consuming. Overall, this is a complex system, and the objective of the work is the students' involvement in the parametric study and optimization of the numerical modeling toward the design of such a system. The students first have to become familiar with and understand the physical process, the related mathematics, and the numerical algorithm. The work involves exploring (i) improved algorithms to make the transient simulation computationally effective (reduced CPU time) and (ii) a parametric study to evaluate design parameters by changing the operational conditions.

  1. Simulation of the Predictive Control Algorithm for Container Crane Operation using Matlab Fuzzy Logic Tool Box

    NASA Technical Reports Server (NTRS)

    Richardson, Albert O.

    1997-01-01

    This research investigated the use of fuzzy logic, via the Matlab Fuzzy Logic Toolbox, to design optimized controller systems. The engineering system for which the controller was designed and simulated was the container crane. The fuzzy logic algorithm that was investigated was the 'predictive control' algorithm. The plant dynamics of the container crane are representative of many important systems, including robotic arm movements. The container crane that was investigated had a trolley motor and a hoist motor. The total distance to be traveled by the trolley was 15 meters. The obstruction height was 5 meters. The crane height was 17.8 meters. The trolley mass was 7500 kilograms. The load mass was 6450 kilograms. Maximum trolley and rope velocities were 1.25 meters per sec. and 0.3 meters per sec., respectively. The fuzzy logic approach allowed the inclusion, in the controller model, of performance indices that are more effectively defined in linguistic terms. These include 'safety' and 'cargo swaying'. Two fuzzy inference systems were implemented using the Matlab simulation package, namely the Mamdani system (which relates fuzzy input variables to fuzzy output variables) and the Sugeno system (which relates fuzzy input variables to a crisp output variable). It was found that the Sugeno FIS is better suited to including aspects of those plant dynamics whose mathematical relationships can be determined.
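
    The Mamdani/Sugeno distinction is easy to see in code. Below is a hand-rolled zero-order Sugeno controller: fuzzy memberships on the inputs, but crisp rule outputs combined by a weighted average. The membership ranges, rules, and speeds are invented and have no connection to the crane parameters above.

        import numpy as np

        def tri(x, a, b, c):
            """Triangular membership function on [a, c] peaking at b."""
            return np.maximum(np.minimum((x - a) / (b - a + 1e-12),
                                         (c - x) / (c - b + 1e-12)), 0.0)

        def sugeno_speed(dist, sway):
            """Zero-order Sugeno: fuzzy antecedents, crisp consequents."""
            near, far = tri(dist, -1, 0, 5), tri(dist, 2, 15, 15)
            small, large = tri(sway, -0.1, 0, 0.1), tri(sway, 0.05, 0.3, 0.3)
            # rules: (firing strength, crisp trolley speed in m/s)
            rules = [(min(far, small), 1.25),   # far and steady -> full speed
                     (min(far, large), 0.6),    # far but swaying -> slow down
                     (min(near, small), 0.3),   # close and steady -> creep
                     (min(near, large), 0.1)]   # close and swaying -> nearly stop
            w = sum(r[0] for r in rules)
            return sum(r[0] * r[1] for r in rules) / (w + 1e-12)

        print(sugeno_speed(12.0, 0.02))   # ~1.25: far from target, little sway
        print(sugeno_speed(1.0, 0.2))     # ~0.10: close to target, large sway

    A Mamdani system would instead attach fuzzy sets to the output and defuzzify (e.g., by centroid), which is why Sugeno is the natural fit when crisp plant relationships are known.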

  2. A study of the dosimetry of small field photon beams used in intensity-modulated radiation therapy in inhomogeneous media: Monte Carlo simulations and algorithm comparisons and corrections

    NASA Astrophysics Data System (ADS)

    Jones, Andrew Osler

    There is an increasing interest in the use of inhomogeneity corrections for lung, air, and bone in radiotherapy treatment planning. Traditionally, corrections based on physical density have been used. Modern algorithms use the electron density derived from CT images. Small fields are used in both conformal radiotherapy and IMRT; however, their beam characteristics in inhomogeneous media have not been extensively studied. This work compares traditional and modern treatment planning algorithms to Monte Carlo simulations in and near low-density inhomogeneities. Field sizes ranging from 0.5 cm to 5 cm in diameter are projected onto a phantom containing inhomogeneities, and depth dose curves are compared. Comparisons of the Dose Perturbation Factors (DPF) are presented as functions of density and field size. Dose Correction Factors (DCF), which scale the algorithms to the Monte Carlo data, are compared for each algorithm. Physical scaling algorithms such as Batho and Equivalent Pathlength (EPL) predict an increase in dose for small fields passing through lung tissue, where Monte Carlo simulations show a sharp dose drop. The physical model-based collapsed cone convolution (CCC) algorithm correctly predicts the dose drop, but does not accurately predict the magnitude. Because the model-based algorithms do not correctly account for the change in backscatter, the dose drop predicted by CCC occurs further downstream compared to that predicted by the Monte Carlo simulations. Beyond the tissue inhomogeneity all of the algorithms studied predict dose distributions in close agreement with Monte Carlo simulations. Dose-volume relationships are important in understanding the effects of radiation to the lung. Dose within the lung is affected by a complex function of beam energy, lung tissue density, and field size. Dose algorithms vary in their abilities to correctly predict the dose to the lung tissue. A thorough analysis of the effects of density and field size on dose to the lung

  3. Mars Entry Atmospheric Data System Trajectory Reconstruction Algorithms and Flight Results

    NASA Technical Reports Server (NTRS)

    Karlgaard, Christopher D.; Kutty, Prasad; Schoenenberger, Mark; Shidner, Jeremy; Munk, Michelle

    2013-01-01

    The Mars Entry Atmospheric Data System is a part of the Mars Science Laboratory, Entry, Descent, and Landing Instrumentation project. These sensors are a system of seven pressure transducers linked to ports on the entry vehicle forebody to record the pressure distribution during atmospheric entry. These measured surface pressures are used to generate estimates of atmospheric quantities based on modeled surface pressure distributions. Specifically, angle of attack, angle of sideslip, dynamic pressure, Mach number, and freestream atmospheric properties are reconstructed from the measured pressures. Such data allows for the aerodynamics to become decoupled from the assumed atmospheric properties, allowing for enhanced trajectory reconstruction and performance analysis as well as an aerodynamic reconstruction, which has not been possible in past Mars entry reconstructions. This paper provides details of the data processing algorithms that are utilized for this purpose. The data processing algorithms include two approaches that have commonly been utilized in past planetary entry trajectory reconstruction, and a new approach for this application that makes use of the pressure measurements. The paper describes assessments of data quality and preprocessing, and results of the flight data reduction from atmospheric entry, which occurred on August 5th, 2012.

  4. A novel parallel-rotation algorithm for atomistic Monte Carlo simulation of dense polymer systems

    NASA Astrophysics Data System (ADS)

    Santos, S.; Suter, U. W.; Müller, M.; Nievergelt, J.

    2001-06-01

    We develop and test a new elementary Monte Carlo move for use in the off-lattice simulation of polymer systems. This novel Parallel-Rotation algorithm (ParRot) permits very efficient moves of torsion angles deep inside long chains in melts. The parallel-rotation move is extremely simple and is also demonstrated to be computationally efficient and appropriate for Monte Carlo simulation. The ParRot move does not affect the orientation of those parts of the chain outside the moving unit. The move consists of a concerted rotation around four adjacent skeletal bonds. No assumption is made concerning the backbone geometry other than that bond lengths and bond angles are held constant during the elementary move. Properly weighted sampling techniques are needed to ensure detailed balance because the new move involves a correlated change in four degrees of freedom along the chain backbone. The ParRot move is supplemented with the classical Metropolis Monte Carlo, Continuum-Configurational-Bias, and Reptation techniques in an isothermal-isobaric Monte Carlo simulation of melts of short and long chains. Comparisons are made with the capabilities of other Monte Carlo techniques to move the torsion angles in the middle of the chains. We demonstrate that ParRot constitutes a highly promising Monte Carlo move for the treatment of long polymer chains in the off-lattice simulation of realistic models of dense polymer systems.

  5. Some results on ethnic conflicts based on evolutionary game simulation

    NASA Astrophysics Data System (ADS)

    Qin, Jun; Yi, Yunfei; Wu, Hongrun; Liu, Yuhang; Tong, Xiaonian; Zheng, Bojin

    2014-07-01

    The force of ethnic separatism, essentially originating from the negative effects of ethnic identity, damages the stability and harmony of multiethnic countries. In order to eliminate the foundations of ethnic separatism and establish harmonious ethnic relationships, some scholars have proposed the viewpoint that ethnic harmony could be promoted by popularizing civic identity. However, this viewpoint has been discussed only from a philosophical perspective and still lacks the support of scientific evidence. Because ethnic groups and ethnic identity are products of evolution, and ethnic identity is a parochialism strategy from the perspective of game theory, this paper proposes an evolutionary game simulation model to study the relationship between civic identity and ethnic conflict based on evolutionary game theory. The simulation results indicate that: (1) the ratio of individuals with civic identity has a negative association with the frequency of ethnic conflicts; (2) ethnic conflict will not die out by killing all ethnic members once and for all, and it also cannot be reduced by forcible pressure, i.e., forcibly increasing the ratio of individuals with civic identity; (3) the average frequency of conflicts can stay at a low level if civic identity is promoted periodically and persistently.

  6. HOMs simulation and measurement results of IHEP02 cavity

    NASA Astrophysics Data System (ADS)

    Zheng, Hong-Juan; Zhai, Ji-Yuan; Zhao, Tong-Xian; Gao, Jie

    2015-11-01

    In accelerator RF cavities, there exists not only the fundamental mode used to accelerate the beam but also higher order modes (HOMs). The higher order modes excited by the beam can seriously affect beam quality, especially the modes with high R/Q. Although the 1.3 GHz low-loss 9-cell superconducting cavity is a candidate for the ILC high-gradient cavity, its higher order mode properties have not been studied carefully. Based on the existing low-loss cavity shape, IHEP designed and developed a large-grain-size 1.3 GHz low-loss 9-cell superconducting cavity (the IHEP02 cavity). The higher order mode coupler of IHEP02 follows the TESLA coupler design. As a result of limitations in the mechanical design, the distance between the higher order mode coupler and the end cell is larger than in the TESLA cavity. This paper reports measured results for higher order modes in the IHEP02 1.3 GHz low-loss 9-cell superconducting cavity. Using different methods, the external quality factors Qe of the dangerous mode passbands have been obtained. The results are compared with TESLA cavity results. The R/Q values of the first three passbands have also been obtained by simulation and compared with those of the TESLA cavity. Supported by the Knowledge Innovation Project of the Chinese Academy of Sciences.

  7. A divide-conquer-recombine algorithmic paradigm for large spatiotemporal quantum molecular dynamics simulations.

    PubMed

    Shimojo, Fuyuki; Hattori, Shinnosuke; Kalia, Rajiv K; Kunaseth, Manaschai; Mou, Weiwei; Nakano, Aiichiro; Nomura, Ken-ichi; Ohmura, Satoshi; Rajak, Pankaj; Shimamura, Kohei; Vashishta, Priya

    2014-05-14

    We introduce an extension of the divide-and-conquer (DC) algorithmic paradigm called divide-conquer-recombine (DCR) to perform large quantum molecular dynamics (QMD) simulations on massively parallel supercomputers, in which interatomic forces are computed quantum mechanically in the framework of density functional theory (DFT). In DCR, the DC phase constructs globally informed, overlapping local-domain solutions, which in the recombine phase are synthesized into a global solution encompassing large spatiotemporal scales. For the DC phase, we design a lean divide-and-conquer (LDC) DFT algorithm, which significantly reduces the prefactor of the O(N) computational cost for N electrons by applying a density-adaptive boundary condition at the peripheries of the DC domains. Our globally scalable and locally efficient solver is based on a hybrid real-reciprocal space approach that combines: (1) a highly scalable real-space multigrid to represent the global charge density; and (2) a numerically efficient plane-wave basis for local electronic wave functions and charge density within each domain. Hybrid space-band decomposition is used to implement the LDC-DFT algorithm on parallel computers. A benchmark test on an IBM Blue Gene/Q computer exhibits an isogranular parallel efficiency of 0.984 on 786 432 cores for a 50.3 × 10^6-atom SiC system. As a test of production runs, LDC-DFT-based QMD simulation involving 16 661 atoms is performed on the Blue Gene/Q to study on-demand production of hydrogen gas from water using LiAl alloy particles. As an example of the recombine phase, LDC-DFT electronic structures are used as a basis set to describe global photoexcitation dynamics with nonadiabatic QMD (NAQMD) and kinetic Monte Carlo (KMC) methods. The NAQMD simulations are based on the linear response time-dependent density functional theory to describe electronic excited states and a surface-hopping approach to describe transitions between the excited states. A series of

  8. A divide-conquer-recombine algorithmic paradigm for large spatiotemporal quantum molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Shimojo, Fuyuki; Hattori, Shinnosuke; Kalia, Rajiv K.; Kunaseth, Manaschai; Mou, Weiwei; Nakano, Aiichiro; Nomura, Ken-ichi; Ohmura, Satoshi; Rajak, Pankaj; Shimamura, Kohei; Vashishta, Priya

    2014-05-01

    We introduce an extension of the divide-and-conquer (DC) algorithmic paradigm called divide-conquer-recombine (DCR) to perform large quantum molecular dynamics (QMD) simulations on massively parallel supercomputers, in which interatomic forces are computed quantum mechanically in the framework of density functional theory (DFT). In DCR, the DC phase constructs globally informed, overlapping local-domain solutions, which in the recombine phase are synthesized into a global solution encompassing large spatiotemporal scales. For the DC phase, we design a lean divide-and-conquer (LDC) DFT algorithm, which significantly reduces the prefactor of the O(N) computational cost for N electrons by applying a density-adaptive boundary condition at the peripheries of the DC domains. Our globally scalable and locally efficient solver is based on a hybrid real-reciprocal space approach that combines: (1) a highly scalable real-space multigrid to represent the global charge density; and (2) a numerically efficient plane-wave basis for local electronic wave functions and charge density within each domain. Hybrid space-band decomposition is used to implement the LDC-DFT algorithm on parallel computers. A benchmark test on an IBM Blue Gene/Q computer exhibits an isogranular parallel efficiency of 0.984 on 786 432 cores for a 50.3 × 10^6-atom SiC system. As a test of production runs, LDC-DFT-based QMD simulation involving 16 661 atoms is performed on the Blue Gene/Q to study on-demand production of hydrogen gas from water using LiAl alloy particles. As an example of the recombine phase, LDC-DFT electronic structures are used as a basis set to describe global photoexcitation dynamics with nonadiabatic QMD (NAQMD) and kinetic Monte Carlo (KMC) methods. The NAQMD simulations are based on the linear response time-dependent density functional theory to describe electronic excited states and a surface-hopping approach to describe transitions between the excited states. A series of techniques

  9. Development of a computer algorithm for the detection of phase singularities and initial application to analyze simulations of atrial fibrillation.

    PubMed

    Zou, Renqiang; Kneller, James; Leon, L. Joshua; Nattel, Stanley

    2002-09-01

    Atrial fibrillation (AF) is a common cardiac arrhythmia, but its mechanisms are incompletely understood. The identification of phase singularities (PSs) has been used to define spiral waves involved in maintaining the arrhythmia, as well as daughter wavelets. In the past, PSs have often been identified manually. Automated PS detection algorithms have been described previously, but when we attempted to apply a previously developed algorithm we experienced problems with false positives that made the results difficult to use directly. We therefore developed a tool for PS identification that uses multiple strategies incorporating both image analysis and mathematical convolution for automated detection with optimized sensitivity and specificity, followed by manual verification. The tool was then applied to analyze PS behavior in simulations of AF maintained in the presence of spatially distributed acetylcholine effects in cell grids of varying size. These analyses indicated that in almost all cases, a single PS lasted throughout the simulation, corresponding to the central-core tip of a single spiral wave that maintained AF. The sustained PS always localized to an area of low acetylcholine concentration. When the grid became very small and no area of low acetylcholine concentration was surrounded by zones of higher concentration, AF could not be sustained. The behavior of PSs and the mechanisms of AF were qualitatively constant over an 11.1-fold range of atrial grid size, suggesting that the classical emphasis on tissue size as a primary determinant of fibrillatory behavior may be overstated. (c) 2002 American Institute of Physics. PMID:12779605
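
    The core of automated PS detection is a winding-number computation: summing wrapped phase differences around each elementary loop of the phase map and flagging loops where the sum is ±2π. A minimal sketch on a synthetic single-spiral phase map (the grid and test pattern are invented, and this omits the image-analysis and verification stages described above):

        import numpy as np

        def wrap(a):
            """Wrap phase differences into (-pi, pi]."""
            return (a + np.pi) % (2 * np.pi) - np.pi

        def phase_singularities(phase):
            """Sum wrapped phase differences around each 2x2 plaquette;
            a winding of +-2*pi marks a PS (returns a +1/-1 charge map)."""
            d1 = wrap(phase[:-1, 1:] - phase[:-1, :-1])   # top edge
            d2 = wrap(phase[1:, 1:] - phase[:-1, 1:])     # right edge
            d3 = wrap(phase[1:, :-1] - phase[1:, 1:])     # bottom edge
            d4 = wrap(phase[:-1, :-1] - phase[1:, :-1])   # left edge
            winding = d1 + d2 + d3 + d4
            return np.rint(winding / (2 * np.pi)).astype(int)

        # Synthetic single spiral: phase = polar angle around the grid centre
        y, x = np.mgrid[0:64, 0:64]
        phase = np.arctan2(y - 31.5, x - 31.5)
        charges = phase_singularities(phase)
        print(np.argwhere(charges != 0), charges.sum())  # one PS near centre

    On real or simulated AF maps this raw detector produces the false positives mentioned above, which is why the authors layer image analysis, convolution, and manual verification on top of it.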

  10. Worm algorithm and diagrammatic Monte Carlo: A new approach to continuous-space path integral Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Boninsegni, M.; Prokof'Ev, N. V.; Svistunov, B. V.

    2006-09-01

    A detailed description is provided of a new worm algorithm, enabling the accurate computation of thermodynamic properties of quantum many-body systems in continuous space, at finite temperature. The algorithm is formulated within the general path integral Monte Carlo (PIMC) scheme, but also allows one to perform quantum simulations in the grand canonical ensemble, as well as to compute off-diagonal imaginary-time correlation functions, such as the Matsubara Green function, simultaneously with diagonal observables. Another important innovation consists of the expansion of the attractive part of the pairwise potential energy into elementary (diagrammatic) contributions, which are then statistically sampled. This affords a complete microscopic account of the long-range part of the potential energy, while keeping the computational complexity of all updates independent of the size of the simulated system. The computational scheme allows for efficient calculations of the superfluid fraction and off-diagonal correlations in space-time, for system sizes which are orders of magnitude larger than those accessible to conventional PIMC. We present illustrative results for the superfluid transition in bulk liquid He4 in two and three dimensions, as well as the calculation of the chemical potential of hcp He4 .

  11. SLAC E144 Plots, Simulation Results, and Data

    DOE Data Explorer

    The 1997 E144 experiments at the Stanford Linear Accelerator Center (SLAC) utilized extremely high laser intensities and collided huge groups of photons together so violently that positron-electron pairs were briefly created, actual particles of matter and antimatter. Instead of matter exploding into heat and light, light actually became matter. That accomplishment opened a new path into the exploration of the interactions of electrons and photons, or quantum electrodynamics (QED). The E144 information at this website includes Feynman diagrams, simulation results, and data files. See also a series of frames showing the E144 laser colliding with a beam electron and producing an electron-positron pair at http://www.slac.stanford.edu/exp/e144/focpic/focpic.html, and lists of collaborators' papers, theses, and a page of press articles.

  12. Governance of complex systems: results of a sociological simulation experiment.

    PubMed

    Adelt, Fabian; Weyer, Johannes; Fink, Robin D

    2014-01-01

    Social sciences have discussed the governance of complex systems for a long time. The following paper tackles the issue by means of experimental sociology, in order to investigate the performance of different modes of governance empirically. The simulation framework developed is based on Esser's model of sociological explanation as well as on Kroneberg's model of frame selection. The performance of governance has been measured by means of three macro and two micro indicators. Surprisingly, central control mostly performs better than decentralised coordination. However, the results depend not only on the mode of governance: there is also a relation between performance and the composition of actor populations, which has not yet been investigated sufficiently. Practitioner Summary: Practitioners can gain insights into the functioning of complex systems and learn how to better manage them. Additionally, they are provided with indicators to measure the performance of complex systems. PMID:24456093

  13. Induced current electrical impedance tomography system: experimental results and numerical simulations.

    PubMed

    Zlochiver, Sharon; Radai, M Michal; Abboud, Shimon; Rosenfeld, Moshe; Dong, Xiu-Zhen; Liu, Rui-Gang; You, Fu-Sheng; Xiang, Hai-Yan; Shi, Xue-Tao

    2004-02-01

    In electrical impedance tomography (EIT), measurements of the surface potentials developed in response to applied currents are used to reconstruct the conductivity distribution. Practical implementation of EIT systems is known to be problematic due to the high sensitivity of such systems to noise, leading to poor imaging quality. In the present study, the performance of an induced current EIT (ICEIT) system, where eddy current is applied using magnetic induction, was studied by comparing the voltage measurements to simulated data and examining the imaging quality with respect to simulated reconstructions for several phantom configurations. A 3-coil, 32-electrode ICEIT system was built, and an iterative modified Newton-Raphson algorithm was developed for the solution of the inverse problem. The RMS norm between the simulated and the experimental voltages was found to be 0.08 ± 0.05 mV (<3%). Two regularization methods were implemented and compared: Marquardt regularization and Laplacian regularization (a bounded second-derivative regularization). While the Laplacian regularization method was found to be preferable for simulated data, it resulted in distinctive spatial artifacts for measured data. The experimental reconstructed images were found to be indicative of the angular positioning of the conductivity perturbations, though the radial sensitivity was low, especially when using the Marquardt regularization method. PMID:15005319
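
    The two regularization choices differ only in the penalty matrix of the regularized Newton-Raphson update, delta = (J^T J + lambda R)^(-1) J^T r. The sketch below contrasts an identity (Marquardt) penalty with a discrete-Laplacian (second-derivative) penalty on a synthetic Jacobian and residual; the sizes, lambda, and the 1D pixel ordering are invented for brevity and are not the paper's 32-electrode setup.

        import numpy as np

        rng = np.random.default_rng(5)
        n_meas, n_elem = 64, 100                 # measurements, conductivity pixels
        J = rng.normal(size=(n_meas, n_elem))    # stand-in sensitivity (Jacobian)
        r = rng.normal(size=n_meas)              # stand-in residual voltages

        lam = 1e-2
        R_marq = np.eye(n_elem)                  # Marquardt: penalise amplitude
        # Laplacian (bounded second derivative) on a 1D pixel ordering
        R_lap = 2 * np.eye(n_elem) - np.eye(n_elem, k=1) - np.eye(n_elem, k=-1)

        for name, R in (("Marquardt", R_marq), ("Laplacian", R_lap)):
            delta = np.linalg.solve(J.T @ J + lam * R, J.T @ r)
            print(name, "update norm:", np.linalg.norm(delta))

    The Laplacian penalty favors smooth conductivity updates, which explains why it behaves well on clean simulated data yet can produce structured artifacts when the measured data violate the smoothness assumption.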

  14. Development of region processing algorithm for HSTAMIDS: status and field test results

    NASA Astrophysics Data System (ADS)

    Ngan, Peter; Burke, Sean; Cresci, Roger; Wilson, Joseph N.; Gader, Paul; Ho, K. C.; Bartosz, Elizabeth; Duvoisin, Herbert

    2007-04-01

    The Region Processing Algorithm (RPA) has been developed by the Office of the Army Humanitarian Demining Research and Development (HD R&D) Program as part of improvements for the AN/PSS-14. The effort was a collaboration between the HD R&D Program, L-3 Communications CyTerra Corporation, the University of Florida, Duke University, and the University of Missouri. RPA has been integrated into and implemented in a real-time AN/PSS-14. The subject unit was used to collect data and was tested for its performance at three Army test sites within the United States of America. This paper describes the status of the technology and its recent test results.

  15. Improving Electron Transport Simulation in Mesoscopic Systems by Coupling a Classical Monte Carlo Algorithm to a Wigner Function Solver

    NASA Astrophysics Data System (ADS)

    García-García, J.; Martín, F.; Oriols, X.; Suñé, J.

    Because of their high switching speed, low power consumption, and reduced complexity to implement a given function, resonant tunneling diodes (RTDs) have recently been recognized as excellent candidates for digital circuit applications [1]. Device modeling and simulation are thus important, not only to understand mesoscopic transport properties, but also to provide guidance in optimal device design and fabrication. Several approaches have been used to this end. Among kinetic models, those based on the non-equilibrium Green function formalism [2] have gained increasing interest due to their ability to incorporate coherent and incoherent interactions in a unified formulation. The Wigner distribution function approach has also been used extensively to study quantum transport in RTDs [3-6]. The main limitations of this formulation are the semiclassical treatment of carrier-phonon interactions by means of the relaxation time approximation and the huge computational burden associated with the self-consistent solution of the Liouville and Poisson equations. This has imposed severe limitations on the spatial domains, which have been too small to allow the development of reliable simulation tools. Based on the Wigner function approach, we have developed a simulation tool that allows the simulation domains to be extended up to hundreds of nanometers without a significant increase in computer time [7]. This tool is based on the coupling between the Wigner distribution function (quantum Liouville equation) and the Boltzmann transport equation. The former is applied to the active region of the device including the double barrier, where quantum effects are present (quantum window, QW). The latter is solved by means of a Monte Carlo algorithm and applied to the outer regions of the device, where quantum effects are not expected to occur. Since the classical Monte Carlo algorithm is much less time consuming than the discretized version of the Wigner transport equation, we can considerably

  16. Results from CrIS/ATMS Obtained Using an "AIRS Version-6 Like" Retrieval Algorithm

    NASA Astrophysics Data System (ADS)

    Susskind, J.

    2015-12-01

    A main objective of AIRS/AMSU on EOS is to provide accurate sounding products that are used to generate climate data sets. Suomi NPP carries CrIS/ATMS that were designed as follow-ons to AIRS/AMSU. Our objective is to generate a long term climate data set of products derived from CrIS/ATMS to serve as a continuation of the AIRS/AMSU products. The Goddard DISC has generated AIRS/AMSU retrieval products, extending from September 2002 through real time, using the AIRS Science Team Version-6 retrieval algorithm. Level-3 gridded monthly mean values of these products, generated using AIRS Version-6, form a state of the art multi-year set of Climate Data Records (CDRs), which is expected to continue through 2022 and possibly beyond, as the AIRS instrument is extremely stable. The goal of this research is to develop and implement a CrIS/ATMS retrieval system to generate CDRs that are compatible with, and are of comparable quality to, those generated operationally using AIRS/AMSU data. The AIRS Science Team has made considerable improvements in AIRS Science Team retrieval methodology and is working on the development of an improved AIRS Science Team Version-7 retrieval methodology to be used to reprocess all AIRS data in the relatively near future. Research is underway by Dr. Susskind and co-workers at the NASA GSFC Sounder Research Team (SRT) towards the finalization of the AIRS Version-7 retrieval algorithm, the current version of which is called SRT AIRS Version-6.22. Dr. Susskind and co-workers have developed analogous retrieval methodology for analysis of CrIS/ATMS data, called SRT CrIS Version-6.22. Results will be presented that show that AIRS and CrIS products derived using a common further improved retrieval algorithm agree closely with each other and are both superior to AIRS Version 6. The goal of the AIRS Science Team is to continue to improve both AIRS and CrIS retrieval products and then use the improved retrieval methodology for the processing of past and

  17. Mid-Holocene permafrost: Results from CMIP5 simulations

    NASA Astrophysics Data System (ADS)

    Liu, Yeyi; Jiang, Dabang

    2016-01-01

    The distribution of frozen ground and active layer thickness in the Northern Hemisphere during the mid-Holocene (MH), and differences with respect to the preindustrial (PI) period, were investigated here using the Coupled Model Intercomparison Project Phase 5 (CMIP5) models. Two typical diagnostic methods, based respectively on soil temperature (Ts-based; a direct method) and air temperature (Ta-based; an indirect method), were employed to classify the categories and extents of frozen ground. In relation to orbitally induced changes in climate, and in turn in freezing and thawing indices, the MH permafrost extent was 20.5% (1.8%) smaller than the PI, whereas seasonally frozen ground increased by 9.2% (0.8%) in the Northern Hemisphere according to the Ts-based (Ta-based) method. Active layer thickness became larger, but by ≤ 1.0 m in most permafrost areas during the MH. Intermodel disagreement remains near the permafrost boundary in both the Ts-based and Ta-based results, with the former showing less agreement among the CMIP5 models because of larger variation in the ability of land models to represent permafrost processes. However, both methods reproduced the relatively degraded MH permafrost and increased active layer thickness (although with smaller magnitudes) seen in the data reconstruction. Disparity between simulation and reconstruction was found mainly in the seasonally frozen ground regions at low to middle latitudes, where the reconstruction suggested a reduction of the seasonally frozen ground extent to the north, whereas the simulation demonstrated a slight expansion to the south for the MH compared to the PI.
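
    For context, a common Ta-based (indirect) diagnostic of this kind is the surface frost number, computed from annual air-temperature freezing and thawing degree-day sums. The abstract does not specify the exact formulation or thresholds used, so the sketch below is only a plausible illustration of an air-temperature-based classification, with assumed cutoffs.

        import numpy as np

        def frost_number(t_air_daily):
            # Surface frost number F = sqrt(DDF) / (sqrt(DDF) + sqrt(DDT)),
            # with DDF/DDT the freezing/thawing degree-day sums (deg C day).
            # Assumed convention: F >= 0.5 -> permafrost, 0 < F < 0.5 ->
            # seasonally frozen ground (the paper's criteria may differ).
            t = np.asarray(t_air_daily, dtype=float)
            ddf = -t[t < 0].sum()        # freezing index
            ddt = t[t > 0].sum()         # thawing index
            if ddf == 0.0:
                return 0.0
            return np.sqrt(ddf) / (np.sqrt(ddf) + np.sqrt(ddt))

        # Idealized sinusoidal annual cycle: mean -2 C, amplitude 15 C.
        days = np.arange(365)
        t_daily = -2.0 + 15.0 * np.sin(2 * np.pi * days / 365)
        f = frost_number(t_daily)
        print(f"frost number = {f:.2f} ->",
              "permafrost" if f >= 0.5 else "seasonally frozen ground")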

  18. 3D radiative transfer in colliding wind binaries: Application of the SimpleX algorithm to 3D SPH simulations

    NASA Astrophysics Data System (ADS)

    Madura, Thomas; Clementel, Nicola; Kruip, Chael; Icke, Vincent; Gull, Theodore

    2014-09-01

    We present the first results of full 3D radiative transfer simulations of the colliding stellar winds in a massive binary system. We accomplish this by applying the SIMPLEX algorithm for 3D radiative transfer on an unstructured Delaunay grid to recent 3D smoothed particle hydrodynamics (SPH) simulations of the colliding winds in the binary system η Carinae. We use SIMPLEX to obtain detailed ionization fractions of hydrogen and helium, in 3D, at the resolution of the original SPH simulations. We show how the SIMPLEX simulations can be used to generate synthetic spectral data cubes for comparison to data obtained with the Hubble Space Telescope (HST)/Space Telescope Imaging Spectrograph as part of a multi-cycle program to map changes in η Car's extended interacting wind structures across one binary cycle. Comparison of the HST observations to the SIMPLEX models can help lead to more accurate constraints on the orbital, stellar, and wind parameters of the η Car system, such as the primary's mass-loss rate and the companion's temperature and luminosity. While we initially focus specifically on the η Car binary, the numerical methods employed can be applied to numerous other colliding wind (WR140, WR137, WR19) and dusty 'pinwheel' (WR104, WR98a) binary systems. One of the biggest remaining mysteries is how dust can form and survive in such systems that contain a hot, luminous O star. Coupled with 3D hydrodynamical simulations, SIMPLEX simulations have the potential to help determine the regions where dust can form and survive in these unique objects.

  19. Simulating Visual Learning and Optical Illusions via a Network-Based Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Siu, Theodore; Vivar, Miguel; Shinbrot, Troy

    We present a neural network model that uses a genetic algorithm to identify spatial patterns. We show that the model both learns and reproduces common visual patterns and optical illusions. Surprisingly, we find that the illusions generated are a direct consequence of the network architecture used. We discuss the implications of our results and the insights they give into how humans fall for optical illusions.
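
    The abstract gives no architectural details, so the following is only a generic sketch of the genetic-algorithm ingredient: a population of weight vectors for a single linear unit is evolved (truncation selection, uniform crossover, Gaussian mutation) to detect a fixed 3x3 spatial pattern. The stimuli, fitness function, and network are assumptions for illustration, not the authors' model.

        import numpy as np

        rng = np.random.default_rng(0)

        # Toy stimuli: noisy 3x3 vertical bars (label +1) vs horizontal bars (-1).
        def make_data(n=200):
            X, y = [], []
            for _ in range(n):
                img = rng.normal(0, 0.3, (3, 3))
                if rng.random() < 0.5:
                    img[:, 1] += 1.0; y.append(1.0)    # vertical bar
                else:
                    img[1, :] += 1.0; y.append(-1.0)   # horizontal bar
                X.append(img.ravel())
            return np.array(X), np.array(y)

        X, y = make_data()

        def fitness(w):
            return np.mean(np.sign(X @ w) == y)  # classification accuracy

        pop = rng.normal(0, 1, (60, 9))          # population of weight vectors
        for gen in range(100):
            f = np.array([fitness(w) for w in pop])
            parents = pop[np.argsort(f)[::-1][:20]]      # truncation selection
            children = []
            while len(children) < len(pop) - len(parents):
                a, b = parents[rng.integers(20, size=2)]
                mask = rng.random(9) < 0.5               # uniform crossover
                children.append(np.where(mask, a, b)
                                + rng.normal(0, 0.1, 9)) # Gaussian mutation
            pop = np.vstack([parents, children])

        best = max(pop, key=fitness)
        print(f"best accuracy = {fitness(best):.2f}")
        print("evolved receptive field:\n", best.reshape(3, 3).round(1))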

  20. Dissipative Particle Dynamics Simulations at Extreme Scale: GPU Algorithms, Implementation and Applications

    NASA Astrophysics Data System (ADS)

    Tang, Yu-Hang; Karniadakis, George; Crunch Team

    2014-03-01

    We present a scalable dissipative particle dynamics simulation code, fully implemented on Graphics Processing Units (GPUs) using a hybrid CUDA/MPI programming model, which achieves 10-30 times speedup on a single GPU over 16 CPU cores and almost linear weak scaling across a thousand nodes. A unified framework is developed within which the efficient generation of the neighbor list and the maintenance of particle data locality are addressed. Our algorithm generates strictly ordered neighbor lists in parallel, while the construction is deterministic and makes no use of atomic operations or sorting. Such a neighbor list leads to optimal data-loading efficiency when combined with a two-level particle reordering scheme. A faster in situ generation scheme for Gaussian random numbers is proposed using precomputed binary signatures. We designed custom transcendental functions that are fast and accurate for evaluating the pairwise interaction. Benchmarks demonstrate the speedup of our implementation over the CPU implementation as well as strong and weak scalability. A large-scale simulation of spontaneous vesicle formation consisting of 128 million particles was conducted to illustrate the practicality of our code in real-world applications. This work was supported by the new Department of Energy Collaboratory on Mathematics for Mesoscopic Modeling of Materials (CM4). Simulations were carried out at the Oak Ridge Leadership Computing Facility through the INCITE program under project BIP017.
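
    The GPU-specific machinery (deterministic warp-level construction, binary-signature random numbers) is beyond a short sketch, but the underlying cell-list idea that makes neighbor search O(N) can be shown compactly. This serial Python version is a reference illustration of the data structure only, not the paper's GPU algorithm; box size and cutoff are arbitrary.

        import numpy as np
        from collections import defaultdict

        def build_neighbor_list(pos, box, rcut):
            # O(N) neighbor search via cell lists: bin particles into cells of
            # edge >= rcut, then test only the 27 surrounding cells (periodic
            # cubic box; assumes at least 3 cells per dimension).
            ncell = max(3, int(box // rcut))
            cell_len = box / ncell
            cells = defaultdict(list)
            for i, p in enumerate(pos):
                cells[tuple((p // cell_len).astype(int) % ncell)].append(i)

            neighbors = defaultdict(list)
            rcut2 = rcut * rcut
            for (cx, cy, cz), members in list(cells.items()):
                for dx in (-1, 0, 1):
                    for dy in (-1, 0, 1):
                        for dz in (-1, 0, 1):
                            other = ((cx + dx) % ncell, (cy + dy) % ncell,
                                     (cz + dz) % ncell)
                            for i in members:
                                for j in cells.get(other, ()):
                                    if i < j:
                                        d = pos[i] - pos[j]
                                        d -= box * np.round(d / box)  # min image
                                        if d @ d < rcut2:
                                            neighbors[i].append(j)
            # Strictly ordered lists, echoing the paper's requirement.
            return {i: sorted(js) for i, js in neighbors.items()}

        rng = np.random.default_rng(1)
        pos = rng.random((500, 3)) * 10.0
        nl = build_neighbor_list(pos, box=10.0, rcut=1.0)
        print("pairs found:", sum(len(v) for v in nl.values()))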

  1. One-year results of an algorithmic approach to managing failed back surgery syndrome

    PubMed Central

    Avellanal, Martín; Diaz-Reganon, Gonzalo; Orts, Alejandro; Soto, Silvia

    2014-01-01

    BACKGROUND: Failed back surgery syndrome (FBSS) is a major clinical problem. Different etiologies with different incidence rates have been proposed. There are currently no standards regarding the management of these patients. Epiduroscopy is an endoscopic technique that may play a role in the management of FBSS. OBJECTIVE: To evaluate an algorithm for management of severe FBSS including epiduroscopy as a diagnostic and therapeutic tool. METHODS: A total of 133 patients with severe symptoms of FBSS (visual analogue scale score ≥7) and no response to pharmacological treatment and physical therapy were included. A six-step management algorithm was applied. Data, including patient demographics, pain and surgical procedure, were analyzed. In all cases, one or more objective causes of pain were established. Treatment success was defined as ≥50% long-term pain relief maintained during the first year of follow-up. Final allocation of patients was registered: good outcome with conservative treatment, surgical reintervention and palliative treatment with implantable devices. RESULTS: Of 122 patients enrolled, 59.84% underwent instrumented surgery and 40.16% a noninstrumented procedure. Most (64.75%) experienced significant pain relief with conventional pain clinic treatments; 15.57% required surgical treatment. Palliative spinal cord stimulation and spinal analgesia were applied in 9.84% and 2.46% of the cases, respectively. The most common diagnosis was epidural fibrosis, followed by disc herniation, global or lateral stenosis, and foraminal stenosis. CONCLUSIONS: A new six-step ladder approach to severe FBSS management that includes epiduroscopy was analyzed. Etiologies are accurately described and a useful role of epiduroscopy was confirmed. PMID:25222573

  2. Simulated Performance of Algorithms for the Localization of Radioactive Sources from a Position Sensitive Radiation Detecting System (COCAE)

    SciTech Connect

    Karafasoulis, K.; Zachariadou, K.; Seferlis, S.; Kaissas, I.; Potiriadis, C.; Lambropoulos, C.; Loukas, D.

    2011-12-13

    Simulation studies are presented regarding the performance of algorithms that localize point-like radioactive sources detected by a position sensitive portable radiation instrument (COCAE). The source direction is estimated by using the List Mode Maximum Likelihood Expectation Maximization (LM-ML-EM) imaging algorithm. Furthermore, the source-to-detector distance is evaluated by three different algorithms based on the photo-peak count information of each detecting layer, the quality of the reconstructed source image, and the triangulation method. These algorithms have been tested on a large number of simulated photons over a wide energy range (from 200 keV to 2 MeV) emitted by point-like radioactive sources located at different orientations and source-to-detector distances.
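
    Of the three distance estimators, triangulation is the easiest to make concrete: direction estimates from two detector positions are intersected in a least-squares sense. The sketch below is a generic two-ray closest-approach solution with made-up geometry and noise, meant only to illustrate the principle, not the COCAE implementation.

        import numpy as np

        def triangulate(o1, d1, o2, d2):
            # Least-squares "intersection" of two rays p_i = o_i + t_i * d_i:
            # returns the midpoint of the closest-approach segment.
            w = o2 - o1
            a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
            e, f = d1 @ w, d2 @ w
            denom = a * c - b * b            # -> 0 for near-parallel rays
            t1 = (c * e - b * f) / denom
            t2 = (b * e - a * f) / denom
            return 0.5 * ((o1 + t1 * d1) + (o2 + t2 * d2))

        rng = np.random.default_rng(2)
        source = np.array([2.0, 1.5, 3.0])               # true position (m)
        o1, o2 = np.zeros(3), np.array([0.5, 0.0, 0.0])  # detector positions

        def noisy_dir(o):                    # direction estimate, ~1 deg error
            d = source - o
            d = d / np.linalg.norm(d) + rng.normal(0, 0.017, 3)
            return d / np.linalg.norm(d)

        est = triangulate(o1, noisy_dir(o1), o2, noisy_dir(o2))
        print("estimated source:", est.round(2),
              "| position error:", round(np.linalg.norm(est - source), 2), "m")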

  3. Molecular simulation of aqueous electrolytes: Water chemical potential results and Gibbs-Duhem equation consistency tests

    NASA Astrophysics Data System (ADS)

    Moučka, Filip; Nezbeda, Ivo; Smith, William R.

    2013-09-01

    This paper deals with molecular simulation of the chemical potentials in aqueous electrolyte solutions for the water solvent and its relationship to chemical potential simulation results for the electrolyte solute. We use the Gibbs-Duhem equation linking the concentration dependence of these quantities to test the thermodynamic consistency of separate calculations of each quantity. We consider aqueous NaCl solutions at ambient conditions, using the standard SPC/E force field for water and the Joung-Cheatham force field for the electrolyte. We calculate the water chemical potential using the osmotic ensemble Monte Carlo algorithm by varying the number of water molecules at a constant amount of solute. We demonstrate numerical consistency of these results in terms of the Gibbs-Duhem equation in conjunction with our previous calculations of the electrolyte chemical potential. We present the chemical potential vs molality curves for both solvent and solute in the form of appropriately chosen analytical equations fitted to the simulation data. As a byproduct, in the context of the force fields considered, we also obtain values for the Henry convention standard molar chemical potential for aqueous NaCl using molality as the concentration variable and for the chemical potential of pure SPC/E water. These values are in reasonable agreement with the experimental values.
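
    At constant temperature and pressure the Gibbs-Duhem relation for a binary solution, written per kilogram of water, reads (1/M_w) dmu_w + m dmu_s = 0, so an independently fitted mu_s(m) predicts mu_w(m) up to a constant. The sketch below shows such a numerical consistency check; the fit coefficients are hypothetical placeholders, not the paper's fitted equations (only the 2RT ln m ideal term for a 1:1 electrolyte is standard).

        import numpy as np
        from scipy.integrate import quad

        M_W = 0.018015        # kg/mol, molar mass of water
        N_W = 1.0 / M_W       # moles of water per kg of solvent (~55.51)
        RT = 2.479            # kJ/mol at 298.15 K

        # Hypothetical analytic fit of the NaCl chemical potential (kJ/mol)
        # vs molality m (mol/kg); coefficients are placeholders.
        def mu_s(m):
            return -390.0 + 2 * RT * np.log(m) + 3.0 * np.sqrt(m) + 0.8 * m

        def dmu_s_dm(m):
            return 2 * RT / m + 1.5 / np.sqrt(m) + 0.8

        def delta_mu_w(m, m_ref=0.1):
            # Gibbs-Duhem: dmu_w/dm = -(m / N_W) * dmu_s/dm; integrating gives
            # mu_w(m) - mu_w(m_ref), to be compared with the directly simulated
            # water chemical potential as the consistency test.
            val, _ = quad(lambda x: -(x / N_W) * dmu_s_dm(x), m_ref, m)
            return val

        for m in (0.5, 1.0, 3.0, 6.0):
            print(f"m = {m:4.1f} mol/kg: "
                  f"mu_w(m) - mu_w(0.1) = {delta_mu_w(m):+.4f} kJ/mol")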

  4. Residual Elimination Algorithm Enhancements to Improve Foot Motion Tracking During Forward Dynamic Simulations of Gait.

    PubMed

    Jackson, Jennifer N; Hass, Chris J; Fregly, Benjamin J

    2015-11-01

    Patient-specific gait optimizations capable of predicting post-treatment changes in joint motions and loads could improve treatment design for gait-related disorders. To maximize potential clinical utility, such optimizations should utilize full-body three-dimensional patient-specific musculoskeletal models, generate dynamically consistent gait motions that reproduce pretreatment marker measurements closely, and achieve accurate foot motion tracking to permit deformable foot-ground contact modeling. This study enhances an existing residual elimination algorithm (REA) (Remy, C. D., and Thelen, D. G., 2009, "Optimal Estimation of Dynamically Consistent Kinematics and Kinetics for Forward Dynamic Simulation of Gait," ASME J. Biomech. Eng., 131(3), p. 031005) to achieve all three requirements within a single gait optimization framework. We investigated four primary enhancements to the original REA: (1) manual modification of tracked marker weights, (2) automatic modification of tracked joint acceleration curves, (3) automatic modification of algorithm feedback gains, and (4) automatic calibration of model joint and inertial parameter values. We evaluated the enhanced REA using a full-body three-dimensional dynamic skeletal model and movement data collected from a subject who performed four distinct gait patterns: walking, marching, running, and bounding. When all four enhancements were implemented together, the enhanced REA achieved dynamic consistency with lower marker tracking errors for all segments, especially the feet (mean root-mean-square (RMS) errors of 3.1 versus 18.4 mm), compared to the original REA. When the enhancements were implemented separately and in combinations, the most important one was automatic modification of tracked joint acceleration curves, while the least important was automatic modification of algorithm feedback gains. The enhanced REA provides a framework for future gait optimization studies that seek to predict subject-specific post-treatment changes in gait.

  5. Planning image-guided endovascular interventions: guidewire simulation using shortest path algorithms

    NASA Astrophysics Data System (ADS)

    Schafer, Sebastian; Singh, Vikas; Hoffmann, Kenneth R.; Noël, Peter B.; Xu, Jinhui

    2007-03-01

    Endovascular interventional procedures are being used more frequently in cardiovascular surgery. Unfortunately, procedural failure, e.g., vessel dissection, may occur and is often related to improper guidewire and/or device selection. To support the surgeon's decision process and because of the importance of the guidewire in positioning devices, we propose a method to determine the guidewire path prior to insertion using a model of its elastic potential energy coupled with a representative graph construction. The 3D vessel centerline and sizes are determined for a specified vessel. Points in planes perpendicular to the vessel centerline are generated. For each pair of consecutive planes, a vector set is generated which joins all points in these planes. We construct a graph representing these vector sets as nodes. The nodes representing adjacent vector sets are joined by edges with weights calculated as a function of the angle between the corresponding vectors (nodes). The optimal path through this weighted directed graph is then determined using shortest path algorithms, such as topological sort based shortest path algorithm or Dijkstra's algorithm. Volumetric data of an internal carotid artery phantom (Ø 3.5mm) were acquired. Several independent guidewire (Ø 0.4mm) placements were performed, and the 3D paths were determined using rotational angiography. The average RMS distance between the actual and the average simulated guidewire path was 0.7mm; the computation time to determine the path was 3 seconds. The ability to predict the guidewire path inside vessels may facilitate calculation of vessel-branch access and force estimation on devices and the vessel wall.
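
    A minimal version of this graph construction and search is sketched below: nodes are candidate vectors between points in consecutive cross-sectional planes, edges connect vectors sharing an endpoint, and edge weights grow with the bending angle, so Dijkstra's algorithm (via heapq) returns the minimum-bending path. The geometry and the angle-only weight are illustrative assumptions, not the paper's calibrated elastic-energy model.

        import heapq
        import numpy as np

        rng = np.random.default_rng(3)

        # Candidate points in planes perpendicular to a (toy) vessel centerline.
        n_planes, npts = 12, 7
        planes = [np.column_stack([rng.uniform(-1, 1, npts),
                                   rng.uniform(-1, 1, npts),
                                   np.full(npts, float(z))])
                  for z in range(n_planes)]

        def vec(i, a, b):              # candidate guidewire segment
            return planes[i + 1][b] - planes[i][a]

        def bend(u, v):                # edge weight: angle between segments
            c = (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))
            return float(np.arccos(np.clip(c, -1.0, 1.0)))

        # Node (i, a, b): vector from point a in plane i to point b in plane
        # i+1; edges join consecutive vectors sharing point b.
        n_seg = n_planes - 1
        dist, prev, pq = {}, {}, []
        for a in range(npts):
            for b in range(npts):
                dist[(0, a, b)] = 0.0
                heapq.heappush(pq, (0.0, (0, a, b)))

        goal = None
        while pq:
            d, node = heapq.heappop(pq)
            if d > dist.get(node, np.inf):
                continue               # stale queue entry
            i, a, b = node
            if i == n_seg - 1:
                goal = node            # first settled last-layer node is optimal
                break
            for c2 in range(npts):
                nxt = (i + 1, b, c2)
                nd = d + bend(vec(i, a, b), vec(i + 1, b, c2))
                if nd < dist.get(nxt, np.inf):
                    dist[nxt], prev[nxt] = nd, node
                    heapq.heappush(pq, (nd, nxt))

        path = [goal]
        while path[-1] in prev:
            path.append(prev[path[-1]])
        path.reverse()
        print(f"minimum total bending: {dist[goal]:.3f} rad "
              f"over {len(path)} segments")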

  6. PedMine – A simulated annealing algorithm to identify maximally unrelated individuals in population isolates

    PubMed Central

    Douglas, Julie A.; Sandefur, Conner I.

    2010-01-01

    In family-based genetic studies, it is often useful to identify a subset of unrelated individuals. When such studies are conducted in population isolates, however, most if not all individuals are often detectably related to each other. To identify a set of maximally unrelated (or equivalently, minimally related) individuals, we have implemented simulated annealing, a general-purpose algorithm for solving difficult combinatorial optimization problems. We illustrate our method on data from a genetic study in the Old Order Amish of Lancaster County, Pennsylvania, a population isolate derived from a modest number of founders. Given one or more pedigrees, our program automatically and rapidly extracts a fixed number of maximally unrelated individuals. PMID:18321883
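
    A stripped-down version of this optimization is sketched below, with a random symmetric matrix standing in for real pedigree-derived kinship coefficients: the energy is the total pairwise kinship within the chosen subset, a move swaps a selected individual for an unselected one, and a geometric cooling schedule drives the Metropolis acceptance rule. Subset size, moves, and schedule are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(4)
        n, k = 200, 20                 # population size, subset size to extract

        # Random symmetric kinship matrix (stand-in for pedigree kinship).
        K = rng.random((n, n)) * 0.25
        K = (K + K.T) / 2
        np.fill_diagonal(K, 0.0)

        def energy(subset):
            idx = np.fromiter(subset, dtype=int)
            return K[np.ix_(idx, idx)].sum() / 2   # total pairwise kinship

        current = set(rng.choice(n, k, replace=False).tolist())
        e = energy(current)
        best, best_e = set(current), e
        T = 1.0
        for step in range(20000):
            out = rng.choice(list(current))              # member to drop
            inn = rng.choice(list(set(range(n)) - current))  # member to add
            cand = (current - {out}) | {inn}
            ce = energy(cand)
            # Metropolis rule: always accept improvements, sometimes uphill.
            if ce < e or rng.random() < np.exp(-(ce - e) / T):
                current, e = cand, ce
                if e < best_e:
                    best, best_e = set(current), e
            T *= 0.9995                                   # geometric cooling

        print(f"minimal total kinship found: {best_e:.3f}")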

  7. A TR-induced algorithm for hot spots elimination through CT-scan HIFU simulations

    NASA Astrophysics Data System (ADS)

    Leduc, Nicolas; Okita, Kohei; Sugiyama, Kazuyasu; Takagi, Shu; Matsumoto, Yoichiro

    2011-09-01

    Although HIFU techniques are now widespread for imaging and treatment, they are still limited by distortion of the wavefront due to refraction and reflection by the inhomogeneous media inside the human body. The CT-scan Time Reversal (TR) procedure has emerged as a promising candidate for focus control. A parallelized finite-difference time-domain code is used to simulate TR-enhanced propagation through elements of the human body and to implement a simple algorithm addressing the issue of grating lobes, i.e., secondary pressure peaks caused by the natural diffraction of phased arrays and enhanced by medium heterogeneity. Using an iterative, progressive process combining secondary sound sources and independent signal summation, the primary peak is strengthened while secondary peaks are increasingly obliterated. This method supports the feasibility of precise modification and enhancement of the pressure profile in the targeted area through Time Reversal based solutions.

  8. Non-equilibrium molecular dynamics simulation of nanojet injection with adaptive-spatial decomposition parallel algorithm.

    PubMed

    Shin, Hyun-Ho; Yoon, Woong-Sup

    2008-07-01

    An adaptive spatial-decomposition parallel algorithm was developed to increase computational efficiency in molecular dynamics simulations of nano-fluids. Injection of a liquid argon jet with a scale of 17.6 molecular diameters was investigated. A solid annular platinum injector was simulated simultaneously with the liquid injectant by adopting a solid modeling technique that incorporates phantom atoms. The viscous heat was naturally discharged through the solids, so the liquid-boiling problem was avoided without separate temperature-control methods. Parametric investigations of injection speed, wall temperature, and injector length were made. A sudden pressure drop at the orifice exit causes flash boiling of the liquid departing the nozzle, with strong evaporation on the surface of the liquid, while rendering a slender jet. Raising the injection speed and the wall temperature activates the surface evaporation, concurrent with a reduction in the jet breakup length and the drop size. PMID:19051924

  9. Computer simulation and evaluation of edge detection algorithms and their application to automatic path selection

    NASA Technical Reports Server (NTRS)

    Longendorfer, B. A.

    1976-01-01

    The construction of an autonomous roving vehicle requires the development of complex data-acquisition and processing systems, which determine the path along which the vehicle travels. Thus, a vehicle must possess algorithms which can (1) reliably detect obstacles by processing sensor data, (2) maintain a constantly updated model of its surroundings, and (3) direct its immediate actions to further a long range plan. The first function consisted of obstacle recognition. Obstacles may be identified by the use of edge detection techniques. Therefore, the Kalman Filter was implemented as part of a large scale computer simulation of the Mars Rover. The second function consisted of modeling the environment. The obstacle must be reconstructed from its edges, and the vast amount of data must be organized in a readily retrievable form. Therefore, a Terrain Modeller was developed which assembled and maintained a rectangular grid map of the planet. The third function consisted of directing the vehicle's actions.

  10. Merging tree ring chronologies and climate system model simulated temperature by optimal interpolation algorithm in North America

    NASA Astrophysics Data System (ADS)

    Chen, Xin; Xing, Pei; Luo, Yong; Zhao, Zongci; Nie, Suping; Huang, Jianbin; Wang, Shaowu; Tian, Qinhua

    2015-04-01

    A new dataset of annual mean surface temperature over North America during the past 500 years has been constructed using an optimal interpolation (OI) algorithm. In total, 149 series were screened from the International Tree Ring Data Bank (ITRDB), including 69 maximum latewood density (MXD) and 80 tree-ring width (TRW) chronologies. The simulated annual mean surface temperature derives from the past1000 experiment of the Community Climate System Model version 4 (CCSM4). Unlike existing research that applies data assimilation approaches to general circulation model (GCM) simulations, the errors of both the climate model simulation and the tree-ring reconstruction were considered, with a view to combining the two parts in an optimal way. Variance matching (VM) was employed to calibrate the tree-ring chronologies against CRUTEM4v, and the corresponding errors were estimated through a leave-one-out process. The background error covariance matrix was estimated statistically from samples of simulation results in a running 30-year window; it was calculated locally within the scanning range (2000 km in this research). Thus, the merging process proceeds with a time-varying local gain matrix. The merging method (MM) was tested in two kinds of experiments, and the results indicated that the standard deviation of errors can be reduced to about 0.3 degrees Celsius below that of the tree-ring reconstructions and 0.5 degrees Celsius below that of the model simulation. Obvious decadal variability can be identified in the MM results over the recent period, including the evident cooling (0.10 degrees per decade) in the 1940s-60s, where the model simulation instead exhibits a weak warming trend (0.05 degrees per decade). The MM results revealed a compromise spatial pattern of the linear trend of surface temperature during a typical period (1601-1800 AD) of the Little Ice Age, which basically accorded with the phase transitions of the Pacific decadal oscillation (PDO) and
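
    The optimal interpolation step itself is compact: the analysis is the model background plus a gain-weighted innovation, x_a = x_b + K(y - H x_b) with K = B H^T (H B H^T + R)^(-1). The sketch below applies this update on a toy 1-D grid with synthetic values; the paper's time-varying B estimated from 30-year windows is replaced here by an assumed fixed distance-based covariance with a 2000 km cutoff.

        import numpy as np

        # Toy 1-D grid of background temperatures and three proxy sites.
        ngrid = 50
        x_grid = np.linspace(0, 5000, ngrid)           # km
        x_b = 0.5 * np.sin(x_grid / 800.0)             # background anomaly field

        # H picks the grid cells containing the (tree-ring) proxy sites.
        obs_cells = np.array([5, 22, 40])
        H = np.zeros((3, ngrid))
        H[np.arange(3), obs_cells] = 1.0
        y = x_b[obs_cells] + np.array([0.4, -0.3, 0.2])  # synthetic proxies

        # Distance-based background covariance, localized at 2000 km.
        L, cutoff, sigma_b = 600.0, 2000.0, 0.3
        d = np.abs(x_grid[:, None] - x_grid[None, :])
        B = sigma_b**2 * np.exp(-(d / L) ** 2) * (d < cutoff)
        R = 0.2**2 * np.eye(3)                         # proxy error covariance

        # Optimal interpolation update: x_a = x_b + K (y - H x_b).
        K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
        x_a = x_b + K @ (y - H @ x_b)
        print("innovations:", np.round(y - H @ x_b, 2))
        print("analysis increment at obs cells:",
              np.round((x_a - x_b)[obs_cells], 2))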

  11. The Ground Flash Fraction Retrieval Algorithm Employing Differential Evolution: Simulations and Applications

    NASA Technical Reports Server (NTRS)

    Koshak, William; Solakiewicz, Richard

    2012-01-01

    The ability to estimate the fraction of ground flashes in a set of flashes observed by a satellite lightning imager, such as the future GOES-R Geostationary Lightning Mapper (GLM), would likely improve operational and scientific applications (e.g., severe weather warnings, lightning nitrogen oxides studies, and global electric circuit analyses). A Bayesian inversion method, called the Ground Flash Fraction Retrieval Algorithm (GoFFRA), was recently developed for estimating the ground flash fraction. The method uses a constrained mixed exponential distribution model to describe a particular lightning optical measurement called the Maximum Group Area (MGA). To obtain the optimum model parameters (one of which is the desired ground flash fraction), a scalar function must be minimized. This minimization is difficult because of two problems: (1) Label Switching (LS), and (2) Parameter Identity Theft (PIT). The LS problem is well known in the literature on mixed exponential distributions, and the PIT problem was discovered in this study. Each problem occurs when one allows the numerical minimizer to freely roam through the parameter search space; this allows certain solution parameters to interchange roles which leads to fundamental ambiguities, and solution error. A major accomplishment of this study is that we have employed a state-of-the-art genetic-based global optimization algorithm called Differential Evolution (DE) that constrains the parameter search in such a way as to remove both the LS and PIT problems. To test the performance of the GoFFRA when DE is employed, we applied it to analyze simulated MGA datasets that we generated from known mixed exponential distributions. Moreover, we evaluated the GoFFRA/DE method by applying it to analyze actual MGAs derived from low-Earth orbiting lightning imaging sensor data; the actual MGA data were classified as either ground or cloud flash MGAs using National Lightning Detection Network[TM] (NLDN) data. Solution error
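
    A reduced version of the inversion can be written with SciPy's differential_evolution: draw synthetic MGA-like samples from a two-component exponential mixture, then minimize the negative log-likelihood with bounds chosen so the two scale parameters cannot swap roles, which mirrors the constraint-based cure for label switching that the abstract describes. The mixture values and disjoint bounds are assumptions for illustration; the GoFFRA's actual constrained model and DE settings are not reproduced here.

        import numpy as np
        from scipy.optimize import differential_evolution

        rng = np.random.default_rng(5)

        # Synthetic "MGA" data: ground-flash and cloud-flash populations.
        alpha_true, s1_true, s2_true = 0.3, 50.0, 400.0   # fraction, scales
        n = 4000
        ground = rng.random(n) < alpha_true
        data = np.where(ground, rng.exponential(s1_true, n),
                        rng.exponential(s2_true, n))

        def nll(params):
            # Negative log-likelihood of a two-component exponential mixture.
            alpha, s1, s2 = params
            pdf = (alpha * np.exp(-data / s1) / s1
                   + (1 - alpha) * np.exp(-data / s2) / s2)
            return -np.sum(np.log(pdf + 1e-300))

        # Disjoint bounds force s1 < s2, removing the label-switching ambiguity.
        bounds = [(0.0, 1.0), (1.0, 200.0), (200.0, 2000.0)]
        result = differential_evolution(nll, bounds, seed=0, tol=1e-8)
        alpha_hat, s1_hat, s2_hat = result.x
        print(f"ground-flash fraction: {alpha_hat:.3f} (true {alpha_true})")
        print(f"scales: {s1_hat:.1f}, {s2_hat:.1f} "
              f"(true {s1_true}, {s2_true})")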

  12. Time-step Considerations in Particle Simulation Algorithms for Coulomb Collisions in Plasmas

    SciTech Connect

    Cohen, B I; Dimits, A; Friedman, A; Caflisch, R

    2009-10-29

    The accuracy of first-order Euler and higher-order time-integration algorithms for grid-based Langevin-equation collision models is assessed in a specific relaxation test problem. We show that statistical noise errors can overshadow time-step errors and argue that statistical noise errors can be conflated with time-step effects. Using a higher-order integration scheme may not achieve any benefit in accuracy for examples of practical interest. We also investigate the collisional relaxation of an initial electron-ion relative drift and the collisional relaxation to a resistive steady state, in which a quasi-steady current is driven by a constant applied electric field, as functions of the time step used to resolve the collision processes, using binary and grid-based test-particle Langevin-equation models. We compare results from two grid-based Langevin-equation collision algorithms to results from a binary collision algorithm for modeling electron-ion collisions. Some guidance is provided regarding how large a time step can be used, compared to the inverse of the characteristic collision frequency, for specific relaxation processes.
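
    The time-step question can be made concrete on the simplest relaxation test problem: decay of a drift under drag plus noise, dv = -nu v dt + sigma sqrt(2 nu) dW, which has an exact discrete (Ornstein-Uhlenbeck) update, so the bias of a first-order Euler step can be compared directly against the 1/sqrt(N) statistical noise floor. This toy model and its parameters are assumptions for illustration, not the paper's grid-based collision operator.

        import numpy as np

        rng = np.random.default_rng(6)
        nu, sigma = 1.0, 0.5          # collision frequency, thermal speed
        v0, t_end, n_part = 2.0, 2.0, 20000

        def relax(dt, exact=False):
            # Ensemble relaxation of an initial drift v0; returns mean velocity.
            v = np.full(n_part, v0)
            for _ in range(int(round(t_end / dt))):
                xi = rng.standard_normal(n_part)
                if exact:               # exact OU update, any dt
                    a = np.exp(-nu * dt)
                    v = v * a + sigma * np.sqrt(1 - a * a) * xi
                else:                   # first-order Euler(-Maruyama)
                    v = v - nu * v * dt + sigma * np.sqrt(2 * nu * dt) * xi
            return v.mean()

        truth = v0 * np.exp(-nu * t_end)
        noise_floor = sigma / np.sqrt(n_part)   # ~1-sigma statistical error
        print(f"exact mean drift at t={t_end}: {truth:.4f}, "
              f"noise floor ~{noise_floor:.4f}")
        for dt in (0.5, 0.2, 0.05, 0.01):
            print(f"dt={dt:5.2f}: Euler bias = {relax(dt) - truth:+.4f}")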

  13. Dynamic simulation of concentrated macromolecular solutions with screened long-range hydrodynamic interactions: Algorithm and limitations

    PubMed Central

    Ando, Tadashi; Chow, Edmond; Skolnick, Jeffrey

    2013-01-01

    Hydrodynamic interactions exert a critical effect on the dynamics of macromolecules. As the concentration of macromolecules increases, by analogy to the behavior of semidilute polymer solutions or the flow in porous media, one might expect hydrodynamic screening to occur. Hydrodynamic screening would have implications both for the understanding of macromolecular dynamics as well as practical implications for the simulation of concentrated macromolecular solutions, e.g., in cells. Stokesian dynamics (SD) is one of the most accurate methods for simulating the motions of N particles suspended in a viscous fluid at low Reynolds number, in that it considers both far-field and near-field hydrodynamic interactions. This algorithm traditionally involves an O(N^3) operation to compute Brownian forces at each time step, although asymptotically faster but more complex SD methods are now available. Motivated by the idea of hydrodynamic screening, the far-field part of the hydrodynamic matrix in SD may be approximated by a diagonal matrix, which is equivalent to assuming that long-range hydrodynamic interactions are completely screened. This approximation allows sparse matrix methods to be used, which can reduce the apparent computational scaling to O(N). Previously there were several simulation studies using this approximation for monodisperse suspensions. Here, we employ newly designed preconditioned iterative methods for both the computation of Brownian forces and the solution of linear systems, and consider the validity of this approximation in polydisperse suspensions. We evaluate the accuracy of the diagonal approximation method using an intracellular-like suspension. The diffusivities of particles obtained with this approximation are close to those with the original method. However, this approximation underestimates intermolecular correlated motions, which is a trade-off between accuracy and computing efficiency. The new method makes it possible to perform large-scale and long-time simulations of concentrated macromolecular solutions.

  14. Dynamic simulation of concentrated macromolecular solutions with screened long-range hydrodynamic interactions: Algorithm and limitations

    NASA Astrophysics Data System (ADS)

    Ando, Tadashi; Chow, Edmond; Skolnick, Jeffrey

    2013-09-01

    Hydrodynamic interactions exert a critical effect on the dynamics of macromolecules. As the concentration of macromolecules increases, by analogy to the behavior of semidilute polymer solutions or the flow in porous media, one might expect hydrodynamic screening to occur. Hydrodynamic screening would have implications both for the understanding of macromolecular dynamics as well as practical implications for the simulation of concentrated macromolecular solutions, e.g., in cells. Stokesian dynamics (SD) is one of the most accurate methods for simulating the motions of N particles suspended in a viscous fluid at low Reynolds number, in that it considers both far-field and near-field hydrodynamic interactions. This algorithm traditionally involves an O(N^3) operation to compute Brownian forces at each time step, although asymptotically faster but more complex SD methods are now available. Motivated by the idea of hydrodynamic screening, the far-field part of the hydrodynamic matrix in SD may be approximated by a diagonal matrix, which is equivalent to assuming that long-range hydrodynamic interactions are completely screened. This approximation allows sparse matrix methods to be used, which can reduce the apparent computational scaling to O(N). Previously there were several simulation studies using this approximation for monodisperse suspensions. Here, we employ newly designed preconditioned iterative methods for both the computation of Brownian forces and the solution of linear systems, and consider the validity of this approximation in polydisperse suspensions. We evaluate the accuracy of the diagonal approximation method using an intracellular-like suspension. The diffusivities of particles obtained with this approximation are close to those with the original method. However, this approximation underestimates intermolecular correlated motions, which is a trade-off between accuracy and computing efficiency. The new method makes it possible to perform large-scale and long-time simulations of concentrated macromolecular solutions.

  15. An accelerated algorithm for discrete stochastic simulation of reaction–diffusion systems using gradient-based diffusion and tau-leaping

    PubMed Central

    Koh, Wonryull; Blackwell, Kim T.

    2011-01-01

    Stochastic simulation of reaction–diffusion systems enables the investigation of stochastic events arising from the small numbers and heterogeneous distribution of molecular species in biological cells. Stochastic variations in intracellular microdomains and in diffusional gradients play a significant part in the spatiotemporal activity and behavior of cells. Although an exact stochastic simulation that simulates every individual reaction and diffusion event gives a most accurate trajectory of the system's state over time, it can be too slow for many practical applications. We present an accelerated algorithm for discrete stochastic simulation of reaction–diffusion systems designed to improve the speed of simulation by reducing the number of time-steps required to complete a simulation run. This method is unique in that it employs two strategies that have not been incorporated in existing spatial stochastic simulation algorithms. First, diffusive transfers between neighboring subvolumes are based on concentration gradients. This treatment necessitates sampling of only the net or observed diffusion events from higher to lower concentration gradients rather than sampling all diffusion events regardless of local concentration gradients. Second, we extend the non-negative Poisson tau-leaping method that was originally developed for speeding up nonspatial or homogeneous stochastic simulation algorithms. This method calculates each leap time in a unified step for both reaction and diffusion processes while satisfying the leap condition that the propensities do not change appreciably during the leap and ensuring that leaping does not cause molecular populations to become negative. Numerical results are presented that illustrate the improvement in simulation speed achieved by incorporating these two new strategies. PMID:21513371
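
    The Poisson tau-leap update at the heart of such accelerations fits in a few lines: over a leap of length tau, each reaction channel fires Poisson(a_j(x) tau) times, provided tau is small enough that the propensities stay nearly constant and populations stay non-negative. The sketch below applies it to a simple reversible isomerization A <-> B with assumed rate constants, not to the paper's gradient-based reaction-diffusion algorithm.

        import numpy as np

        rng = np.random.default_rng(7)

        # Reversible isomerization A <-> B with mass-action propensities.
        k_f, k_r = 0.8, 0.4
        stoich = np.array([[-1, +1],    # A -> B
                           [+1, -1]])   # B -> A

        def propensities(x):
            return np.array([k_f * x[0], k_r * x[1]])

        def tau_leap(x0, t_end, tau):
            x, t = np.array(x0, dtype=int), 0.0
            while t < t_end:
                a = propensities(x)
                tau_local = min(tau, t_end - t)
                while True:
                    k = rng.poisson(a * tau_local)  # firings per channel
                    x_new = x + k @ stoich
                    if (x_new >= 0).all():          # leap keeps counts >= 0
                        break
                    tau_local /= 2                  # shrink leap and redraw
                x, t = x_new, t + tau_local
            return x

        x_final = tau_leap([1000, 0], t_end=10.0, tau=0.05)
        print("final counts (A, B):", x_final,
              "| expected equilibrium A ~", round(1000 * k_r / (k_f + k_r)))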

  16. Airborne ICESat-2 simulator (MABEL) results from Greenland

    NASA Astrophysics Data System (ADS)

    Neumann, T.; Markus, T.; Brunt, K. M.; Walsh, K.; Hancock, D.; Cook, W. B.; Brenner, A. C.; Csatho, B. M.; De Marco, E.

    2012-12-01

    The Ice, Cloud, and land Elevation Satellite-2 (ICESat-2) is a next-generation laser altimeter designed to continue key observations of sea ice freeboard, ice sheet elevation change, vegetation canopy height, earth surface elevation, and sea surface height. Scheduled for launch in mid-2016, ICESat-2 will collect data between 88 degrees north and south using a high-repetition-rate (10 kHz) laser operating at 532 nm and a photon-counting detection strategy. Our airborne simulator, the Multiple Altimeter Beam Experimental Lidar (MABEL), uses a similar photon-counting measurement strategy and operates at 532 nm (16 beams) and 1064 nm (8 beams) to collect data similar to what we expect from ICESat-2. The comparison between frequencies allows for studies of possible penetration of green light into water or snow. MABEL collects more spatially dense data than ICESat-2 (2 cm along-track vs. 70 cm along-track for ICESat-2) and has a smaller footprint (2 m nominal diameter vs. 10 m nominal diameter for ICESat-2), requiring geometric and radiometric scaling to relate MABEL data to simulated ICESat-2 data. We based MABEL out of Keflavik, Iceland, during April 2012 and collected ~100 hours of data from 20 km altitude over a variety of targets. MABEL collected sea ice data over the Nares Strait and off the east coast of Greenland, the latter flight in coordination with NASA's Operation IceBridge, which collected ATM data along the same track within 90 minutes of the MABEL data collection. MABEL flew a variety of lines over Greenland in the southwest, the Jakobshavn region, and the ice sheet interior, including 4 hours of coincident data with Operation IceBridge in southwest Greenland. MABEL flew a number of calibration sites, including corner cubes in Svalbard, Summit Station (where a GPS survey of the surface elevation was collected within an hour of our overflight), and well-surveyed targets in Iceland and western Greenland. In this presentation, we present an overview of these data.

  17. RFI in hybrid loops - Simulation and experimental results.

    NASA Technical Reports Server (NTRS)

    Ziemer, R. E.; Nelson, D. R.; Raghavan, H. R.

    1972-01-01

    A digital simulation of an imperfect second-order hybrid phase-locked loop (HPLL) operating in radio-frequency interference (RFI) is described. Its performance is characterized in terms of the phase error variance and the phase error probability density function (PDF). Monte Carlo simulation is used to show that the HPLL can be superior to conventional phase-locked loops in RFI backgrounds when minimum phase error variance is the goodness criterion. Similar experimentally obtained data are given in support of the simulation data.

  18. Simulation of optical diagnostics for crystal growth: models and results

    NASA Astrophysics Data System (ADS)

    Banish, Michele R.; Clark, Rodney L.; Kathman, Alan D.; Lawson, Shelah M.

    1991-12-01

    A computer simulation of a two-color holographic interferometric (TCHI) optical system was performed using a physical (wave) optics model. This model accurately simulates propagation through time-varying, 2-D or 3-D concentration and temperature fields as a wave phenomenon. The model calculates wavefront deformations that can be used to generate fringe patterns. This simulation modeled a proposed triglycine sulfate (TGS) flight experiment by propagating through the simplified onion-like refractive index distribution of the growing crystal and calculating the recorded wavefront deformation. The phase of this wavefront was used to generate sample interferograms that map the index-of-refraction variation. Two such fringe patterns, generated at different wavelengths, were used to extract the original temperature and concentration field characteristics within the growth chamber. This demonstrates the feasibility of this TCHI crystal growth diagnostic technique. The simulation provides feedback to the experimental design process.

  19. Results of a Flight Simulation Software Methods Survey

    NASA Technical Reports Server (NTRS)

    Jackson, E. Bruce

    1995-01-01

    A ten-page questionnaire was mailed to members of the AIAA Flight Simulation Technical Committee in the spring of 1994. The survey inquired about various aspects of developing and maintaining flight simulation software, as well as a few questions dealing with characterization of each facility. As of this report, 19 completed surveys (out of 74 sent out) have been received. This paper summarizes those responses.

  20. A time-split finite-volume algorithm for three-dimensional flow-field simulation

    NASA Technical Reports Server (NTRS)

    Hung, C. M.; Kordulla, W.

    1983-01-01

    A general finite-volume algorithm is developed for solving the three-dimensional, time-dependent, compressible Navier-Stokes equations for high-Reynolds-number flows over arbitrary geometries. This algorithm adapts MacCormack's (1982) explicit-implicit scheme to a time-split, three-dimensional finite-volume concept in a general coordinate system. It is shown that the thin-layer approximation in all three spatial directions significantly reduces the cost of evaluating viscous terms and allows the algorithm to handle more complicated geometries with wall boundaries in two or all three directions. The calculated results using this method are found to be in good agreement with experimental measurements of blunt-fin-induced shock-wave and boundary-layer interaction problems. Observations of the existence of peak pressure, primary horseshoe and secondary vortices, and reversed supersonic zones show that computational fluid dynamics can effectively supplement wind tunnel tests for aerodynamic design as well as for understanding basic fluid dynamics.

  1. Simulation of mid-infrared clutter rejection. 1: One-dimensional LMS spatial filter and adaptive threshold algorithms.

    PubMed

    Longmire, M S; Milton, A F; Takken, E H

    1982-11-01

    Several 1-D signal processing techniques have been evaluated by simulation with a digital computer using high-spatial-resolution (0.15 mrad) noise data gathered from back-lit clouds and uniform sky with a scanning data collection system operating in the 4.0-4.8-micrometer spectral band. Two ordinary bandpass filters and a least-mean-square (LMS) spatial filter were evaluated in combination with a fixed or adaptive threshold algorithm. The combination of a 1-D LMS filter and a 1-D adaptive threshold sensor was shown to reject extreme cloud clutter effectively and to provide nearly equal signal detection in a clear and a cluttered sky, at least in systems whose NEI (noise-equivalent irradiance) exceeds 1.5 x 10^-13 W/cm^2 and whose spatial resolution is better than 0.15 x 0.36 mrad. A summary gives highlights of the work, key numerical results, and conclusions. PMID:20396326
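
    The two ingredients, a 1-D LMS prediction filter that whitens correlated clutter and a threshold that adapts to the local residual level, can be sketched as below. The synthetic scan line, filter length, step size, and threshold constant are assumptions for illustration (a normalized-LMS variant is used for step-size stability), not the values tuned in the study.

        import numpy as np

        rng = np.random.default_rng(8)

        # Synthetic scan line: smooth cloud clutter + noise + one point target.
        n, order, mu = 2000, 8, 0.02
        clutter = np.convolve(rng.normal(0, 1, n),
                              np.ones(50) / 50, mode="same") * 5
        x = clutter + rng.normal(0, 0.3, n)
        x[1200] += 4.0                       # point-source signal

        # LMS linear predictor: e[k] = x[k] - w . u, then w += mu * e * u.
        w = np.zeros(order)
        resid = np.zeros(n)
        for k in range(order, n):
            u = x[k - order:k][::-1]         # most recent sample first
            e = x[k] - w @ u
            w += mu * e * u / (u @ u + 1e-9) # normalized LMS step
            resid[k] = e

        # Adaptive threshold: constant times a running residual RMS estimate.
        window, c = 100, 5.0
        rms = np.sqrt(np.convolve(resid**2, np.ones(window) / window,
                                  mode="same"))
        detections = np.flatnonzero(np.abs(resid) > c * np.maximum(rms, 1e-6))
        print("detections at samples:", detections)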

  2. Documenting the NASA Armstrong Flight Research Center Oblate Earth Simulation Equations of Motion and Integration Algorithm

    NASA Technical Reports Server (NTRS)

    Clarke, R.; Lintereur, L.; Bahm, C.

    2016-01-01

    A desire for more complete documentation of the National Aeronautics and Space Administration (NASA) Armstrong Flight Research Center (AFRC), Edwards, California, legacy code used in the core simulation has led to this effort to fully document the oblate Earth six-degree-of-freedom equations of motion and integration algorithm. The authors of this report have taken much of the earlier work of the simulation engineering group and used it as a jumping-off point for this report. The largest addition this report makes is that each element of the equations of motion is traced back to first principles and at no point is the reader forced to take an equation on faith alone. There are no discoveries of previously unknown principles contained in this report; this report is a collection and presentation of textbook principles. The value of this report is that those textbook principles are herein documented in standard nomenclature that matches the form of the computer code DERIVC. Previous handwritten notes form much of the backbone of this work; however, in almost every area, derivations are explicitly shown to assure the reader that the equations which make up the oblate Earth version of the computer routine, DERIVC, are correct.

  3. Prediction of Flood Warning in Taiwan Using Nonlinear SVM with Simulated Annealing Algorithm

    NASA Astrophysics Data System (ADS)

    Lee, C.

    2013-12-01

    Flooding is an important issue in Taiwan because the island's narrow, high topography makes many of its rivers steep. Tropical depressions, such as typhoons, frequently cause these rivers to flood, and every time a typhoon passes through Taiwan, floods occur along some of them. Predicting river flow under extreme rainfall circumstances is therefore important for the government when announcing flood warnings. In Taiwan, warnings are classified into three levels according to warning water levels. The purpose of this study is to predict the flood warning level from information on precipitation, rainfall duration, and riverbed slope. To classify the flood warning level from this information, a machine learning model, the nonlinear support vector machine (SVM), is formulated. In addition, simulated annealing (SA), a probabilistic heuristic algorithm, is used to determine the optimal parameters of the SVM model. A case study of flood-prone rivers of different gradients in Taiwan is conducted. The contribution of this SVM model with simulated annealing is its capability to make efficient flood-warning announcements and to help keep residents along the rivers safe from flood danger.
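
    A compact version of the SA-plus-SVM coupling can be written with scikit-learn. In the sketch below, synthetic three-class "warning level" data stand in for the study's precipitation, duration, and slope features, and SA searches over (log10 C, log10 gamma) using cross-validated accuracy; neighborhood size, cooling rate, and bounds are all illustrative assumptions.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import SVC

        rng = np.random.default_rng(9)

        # Stand-in for (precipitation, duration, slope) -> 3 warning levels.
        X, y = make_classification(n_samples=400, n_features=3,
                                   n_informative=3, n_redundant=0,
                                   n_classes=3, n_clusters_per_class=1,
                                   random_state=0)

        def score(log_c, log_g):
            clf = SVC(C=10.0**log_c, gamma=10.0**log_g, kernel="rbf")
            return cross_val_score(clf, X, y, cv=5).mean()

        # Simulated annealing over the SVM hyperparameters.
        state = np.array([0.0, 0.0])
        s = score(*state)
        best, best_s = state.copy(), s
        T = 1.0
        for step in range(100):
            cand = state + rng.normal(0, 0.5, 2)
            cand = np.clip(cand, [-2, -4], [4, 2])   # keep parameters in range
            cs = score(*cand)
            # Metropolis rule (maximizing accuracy).
            if cs > s or rng.random() < np.exp((cs - s) / T):
                state, s = cand, cs
                if s > best_s:
                    best, best_s = state.copy(), s
            T *= 0.95

        print(f"best CV accuracy {best_s:.3f} at "
              f"C=10^{best[0]:.2f}, gamma=10^{best[1]:.2f}")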

  4. Modeling multiple communities of interest for interactive simulation and gaming: the dynamic adversarial gaming algorithm project

    NASA Astrophysics Data System (ADS)

    Santos, Eugene, Jr.; Zhao, Qunhua; Pratto, Felicia; Pearson, Adam R.; McQueary, Bruce; Breeden, Andy; Krause, Lee

    2007-04-01

    Nowadays, there is an increasing demand for the military to conduct operations that are beyond traditional warfare. In these operations, analyzing and understanding those who are involved in the situation, how they are going to behave, and why they behave in certain ways is critical for success. The challenge lies in that behavior does not simply follow universal/fixed doctrines; it is significantly influenced by soft factors (i.e. cultural factors, societal norms, etc.). In addition, there is rarely just one isolated enemy; the behaviors and responses of all groups in the region, and the dynamics of the interaction among them composes an important part of the whole picture. The Dynamic Adversarial Gaming Algorithm (DAGA) project aims to provide a wargaming environment for automation of simulating dynamics of geopolitical crisis and eventually be applied to military simulation and training domain, and/or commercial gaming arena. The focus of DAGA is on modeling communities of interest (COIs), where various individuals, groups, and organizations as well as their interactions are captured. The framework should provide a context for COIs to interact with each other and influence others' behaviors. These behaviors must incorporate soft factors by modeling cultural knowledge. We do so by representing cultural variables and their influence on behavior using probabilistic networks. In this paper, we describe our COI modeling, the development of cultural networks, the interaction architecture, and a prototype of DAGA.

  5. Real-Time Simulation for Verification and Validation of Diagnostic and Prognostic Algorithms

    NASA Technical Reports Server (NTRS)

    Aguilar, Robet; Luu, Chuong; Santi, Louis M.; Sowers, T. Shane

    2005-01-01

    To verify that a health management system (HMS) performs as expected, a virtual system simulation capability, including interaction with the associated platform or vehicle, very likely will need to be developed. The rationale for developing this capability is discussed and includes the limited capability to seed faults into the actual target system due to the risk of potential damage to high value hardware. The capability envisioned would accurately reproduce the propagation of a fault or failure as observed by sensors located at strategic locations on and around the target system and would also accurately reproduce the control system and vehicle response. In this way, HMS operation can be exercised over a broad range of conditions to verify that it meets requirements for accurate, timely response to actual faults with adequate margin against false and missed detections. An overview is also presented of a real-time rocket propulsion health management system laboratory which is available for future rocket engine programs. The health management elements and approaches of this lab are directly applicable for future space systems. In this paper the various components are discussed and the general fault detection, diagnosis, isolation and the response (FDIR) concept is presented. Additionally, the complexities of V&V (Verification and Validation) for advanced algorithms and the simulation capabilities required to meet the changing state-of-the-art in HMS are discussed.

  6. Advanced Transport Delay Compensation Algorithms: Results of Delay Measurement and Piloted Performance Tests

    NASA Technical Reports Server (NTRS)

    Guo, Liwen; Cardullo, Frank M.; Kelly, Lon C.

    2007-01-01

    This report summarizes the results of delay measurement and piloted performance tests that were conducted to assess the effectiveness of the adaptive compensator and the state space compensator for alleviating the phase distortion of transport delay in the visual system of the VMS at the NASA Langley Research Center. Piloted simulation tests were conducted to assess the effectiveness of two novel compensators in comparison to the McFarland predictor and the baseline system with no compensation. Thirteen pilots with heterogeneous flight experience executed straight-in and offset approaches, at various delay configurations, on a flight simulator where different predictors were applied to compensate for transport delay. The glideslope and touchdown errors, power spectral density of the pilot control inputs, NASA Task Load Index, and Cooper-Harper rating of the handling qualities were employed for the analyses. The overall analyses show that the adaptive predictor results in slightly poorer compensation for short added delay (up to 48 ms) and better compensation for long added delay (up to 192 ms) than the McFarland compensator. The analyses also show that the state space predictor is somewhat superior to the McFarland compensator for short delay and significantly superior for long delay.

  7. The Treatment Results of a Standard Algorithm for Choosing the Best Entry Vessel for Intravenous Port Implantation

    PubMed Central

    Wei, Wen-Cheng; Wu, Ching-Yang; Wu, Ching-Feng; Fu, Jui-Ying; Su, Ta-Wei; Yu, Sheng-Yueh; Kao, Tsung-Chi; Ko, Po-Jen

    2015-01-01

    Vascular cutdown and echo-guided puncture methods have their own limitations under certain conditions, and there was no available algorithm for choosing the entry vessel. A standard algorithm was introduced to help choose the entry vessel location, based on our clinical experience and a review of the literature. The goal of this study is to analyze the treatment results of the standard algorithm used to choose the entry vessel for intravenous port implantation. During the period between March 2012 and March 2013, 507 patients who received intravenous port implantation for advanced chemotherapy were included in this study. The choice of entry vessel followed the standard algorithm. All clinical characteristics were collected, and complication rates and incidence were further analyzed. Compared with our clinical experience in 2006, the procedure-related complication rate declined from 1.09% to 0.4%, whereas the late complication rate decreased from 19.97% to 3.55%. No more pneumothorax, hematoma, catheter kinking, fractures, or pocket erosion were identified after using the standard algorithm. In surviving oncology patients, 98% of implanted ports could serve as a functional vascular access fitting therapeutic needs. This standard algorithm for choosing the best entry vessel is a simple guideline that is easy to follow. The algorithm has excellent efficiency and can minimize complication rates and incidence. PMID:26287429

  8. Land Surface Model and Particle Swarm Optimization Algorithm Based on the Model-Optimization Method for Improving Soil Moisture Simulation in a Semi-Arid Region

    PubMed Central

    Yang, Qidong; Zuo, Hongchao; Li, Weidong

    2016-01-01

    Improving the capability of land-surface process models to simulate soil moisture assists in better understanding the atmosphere-land interaction. In semi-arid regions, due to limited near-surface observational data and large errors in large-scale parameters obtained by remote sensing, there exist uncertainties in land surface parameters, which can cause large offsets between the simulated results of land-surface process models and the observational data for soil moisture. In this study, observational data from the Semi-Arid Climate Observatory and Laboratory (SACOL) station in the semi-arid loess plateau of China were divided into three datasets: summer, autumn, and summer-autumn. By combining the particle swarm optimization (PSO) algorithm and the land-surface process model SHAW (Simultaneous Heat and Water), the soil and vegetation parameters that are related to soil moisture but difficult to obtain by observation are optimized using the three datasets. On this basis, the SHAW model was run with the optimized parameters to simulate the characteristics of the land-surface process in the semi-arid loess plateau. Simultaneously, the default SHAW model was run with the same atmospheric forcing as a comparison test. Simulation results revealed the following: parameters optimized by the particle swarm optimization algorithm in all simulation tests improved the simulation of soil moisture and latent heat flux; differences between simulated results and observational data are clearly reduced, but simulation tests adopting optimized parameters cannot simultaneously improve the simulation results for the net radiation, sensible heat flux, and soil temperature. Optimized soil and vegetation parameters based on different datasets have the same order of magnitude but are not identical; soil parameters vary only to a small degree, but the variation range of vegetation parameters is large. PMID:26991786
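
    The model-optimization coupling has a simple generic form: each particle is a candidate parameter vector, its fitness is the mismatch between model output and observations, and velocities are updated from personal and global bests. The sketch below calibrates a toy two-parameter soil-moisture bucket model rather than SHAW, whose parameters and forcing are far richer; model form, PSO coefficients, and bounds are assumptions.

        import numpy as np

        rng = np.random.default_rng(10)

        # Toy "land-surface model": soil moisture decays at rate a and gains
        # rain with efficiency b; "observations" come from true parameters.
        rain = rng.random(200) * (rng.random(200) < 0.2)

        def model(a, b):
            sm, w = np.empty(200), 0.3
            for t in range(200):
                w = w * (1 - a) + b * rain[t]
                sm[t] = w
            return sm

        obs = model(0.05, 0.4) + rng.normal(0, 0.01, 200)

        def rmse(p):
            return np.sqrt(np.mean((model(*p) - obs) ** 2))

        # Particle swarm optimization over (a, b) in [0, 1]^2.
        n_p, w_in, c1, c2 = 30, 0.7, 1.5, 1.5
        pos = rng.random((n_p, 2))
        vel = np.zeros((n_p, 2))
        pbest = pos.copy()
        pbest_f = np.array([rmse(p) for p in pos])
        g = pbest[pbest_f.argmin()].copy()
        for it in range(60):
            r1, r2 = rng.random((2, n_p, 1))
            vel = w_in * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
            pos = np.clip(pos + vel, 0.0, 1.0)
            f = np.array([rmse(p) for p in pos])
            better = f < pbest_f
            pbest[better], pbest_f[better] = pos[better], f[better]
            g = pbest[pbest_f.argmin()].copy()

        print(f"calibrated (a, b) = ({g[0]:.3f}, {g[1]:.3f}), "
              f"RMSE = {rmse(g):.4f}")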

  9. Land Surface Model and Particle Swarm Optimization Algorithm Based on the Model-Optimization Method for Improving Soil Moisture Simulation in a Semi-Arid Region.

    PubMed

    Yang, Qidong; Zuo, Hongchao; Li, Weidong

    2016-01-01

    Improving the capability of land-surface process models to simulate soil moisture assists in better understanding the atmosphere-land interaction. In semi-arid regions, due to limited near-surface observational data and large errors in large-scale parameters obtained by remote sensing, there exist uncertainties in land surface parameters, which can cause large offsets between the simulated results of land-surface process models and the observational data for soil moisture. In this study, observational data from the Semi-Arid Climate Observatory and Laboratory (SACOL) station in the semi-arid loess plateau of China were divided into three datasets: summer, autumn, and summer-autumn. By combining the particle swarm optimization (PSO) algorithm and the land-surface process model SHAW (Simultaneous Heat and Water), the soil and vegetation parameters that are related to soil moisture but difficult to obtain by observation are optimized using the three datasets. On this basis, the SHAW model was run with the optimized parameters to simulate the characteristics of the land-surface process in the semi-arid loess plateau. Simultaneously, the default SHAW model was run with the same atmospheric forcing as a comparison test. Simulation results revealed the following: parameters optimized by the particle swarm optimization algorithm in all simulation tests improved the simulation of soil moisture and latent heat flux; differences between simulated results and observational data are clearly reduced, but simulation tests adopting optimized parameters cannot simultaneously improve the simulation results for the net radiation, sensible heat flux, and soil temperature. Optimized soil and vegetation parameters based on different datasets have the same order of magnitude but are not identical; soil parameters vary only to a small degree, but the variation range of vegetation parameters is large. PMID:26991786

  10. Evaluating the sensitivity of the optimization of acquisition geometry to the choice of reconstruction algorithm in digital breast tomosynthesis through a simulation study

    NASA Astrophysics Data System (ADS)

    Zeng, Rongping; Park, Subok; Bakic, Predrag; Myers, Kyle J.

    2015-02-01

    Due to the limited number of views and limited angular span in digital breast tomosynthesis (DBT), the acquisition geometry design is an important factor that affects the image quality. Therefore, intensive studies have been conducted regarding the optimization of the acquisition geometry. However, different reconstruction algorithms were used in most of the reported studies. Because each type of reconstruction algorithm can provide images with its own image resolution, noise properties and artifact appearance, it is unclear whether the optimal geometries concluded for the DBT system in one study can be generalized to the DBT systems with a reconstruction algorithm different to the one applied in that study. Hence, we investigated the effect of the reconstruction algorithm on the optimization of acquisition geometry parameters through carefully designed simulation studies. Our results show that using various reconstruction algorithms, including the filtered back-projection, the simultaneous algebraic reconstruction technique, the maximum-likelihood method and the total-variation regularized least-square method, gave similar performance trends for the acquisition parameters for detecting lesions. The consistency of system ranking indicates that the choice of the reconstruction algorithm may not be critical for DBT system geometry optimization.

  11. Evaluating the sensitivity of the optimization of acquisition geometry to the choice of reconstruction algorithm in digital breast tomosynthesis through a simulation study.

    PubMed

    Zeng, Rongping; Park, Subok; Bakic, Predrag; Myers, Kyle J

    2015-02-01

    Due to the limited number of views and limited angular span in digital breast tomosynthesis (DBT), the acquisition geometry design is an important factor that affects the image quality. Therefore, intensive studies have been conducted regarding the optimization of the acquisition geometry. However, different reconstruction algorithms were used in most of the reported studies. Because each type of reconstruction algorithm can provide images with its own image resolution, noise properties and artifact appearance, it is unclear whether the optimal geometries concluded for the DBT system in one study can be generalized to the DBT systems with a reconstruction algorithm different to the one applied in that study. Hence, we investigated the effect of the reconstruction algorithm on the optimization of acquisition geometry parameters through carefully designed simulation studies. Our results show that using various reconstruction algorithms, including the filtered back-projection, the simultaneous algebraic reconstruction technique, the maximum-likelihood method and the total-variation regularized least-square method, gave similar performance trends for the acquisition parameters for detecting lesions. The consistency of system ranking indicates that the choice of the reconstruction algorithm may not be critical for DBT system geometry optimization. PMID:25591807

  12. MixSim: An R Package for Simulating Data to Study Performance of Clustering Algorithms

    SciTech Connect

    Melnykov, Volodymyr; Chen, Wei-Chen; Maitra, Ranjan

    2012-01-01

    The R package MixSim is a new tool that allows simulating mixtures of Gaussian distributions with different levels of overlap between mixture components. Pairwise overlap, defined as the sum of two misclassification probabilities, measures the degree of interaction between components and can be readily employed to control the clustering complexity of datasets simulated from mixtures. These datasets can then be used for systematic performance investigation of clustering and finite mixture modeling algorithms. Other capabilities of MixSim include computing the exact overlap for Gaussian mixtures, simulating Gaussian and non-Gaussian data, simulating outliers and noise variables, calculating various measures of agreement between two partitionings, and constructing parallel distribution plots for the graphical display of finite mixture models. All features of the package are illustrated in great detail. The utility of the package is highlighted through a small comparison study of several popular clustering algorithms.

  13. A Generalized Fast Frequency Sweep Algorithm for Coupled Circuit-EM Simulations

    SciTech Connect

    Ouyang, G; Jandhyala, V; Champagne, N; Sharpe, R; Fasenfest, B J; Rockway, J D

    2004-12-14

    An Asymptotic Wave Expansion (AWE) technique is implemented in the EIGER computational electromagnetics code. The AWE fast frequency sweep is formed by separating the components of the integral equations by frequency dependence, then using this information to find a rational function approximation of the results. The standard AWE method is generalized to work for several integral equations, including the EFIE for conductors and the PMCHWT for dielectrics. The method is also expanded to work for two types of coupled circuit-EM problems as well as lumped load circuit elements. After a simple bisecting adaptive sweep algorithm is developed, dramatic speed improvements are seen for several example problems.
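
    A minimal sketch of the idea behind an AWE-style sweep, shown for a generic frequency-parameterized linear system (G + sC)x = b rather than EIGER's integral-equation operators (all matrices below are invented stand-ins): moments about an expansion point are converted into a Padé rational approximation that is cheap to evaluate across the band.

      import numpy as np
      from scipy.interpolate import pade

      rng = np.random.default_rng(0)
      n = 50
      G = rng.standard_normal((n, n)) + n * np.eye(n)  # well-conditioned stand-in
      C = rng.standard_normal((n, n))
      b = rng.standard_normal(n)
      probe = rng.standard_normal(n)                   # scalar output y = probe @ x

      s0 = 1.0
      Ainv = np.linalg.inv(G + s0 * C)   # one factorization, reused for all moments
      moments, v = [], Ainv @ b          # v = x_0; recursion x_k = -A^{-1} C x_{k-1}
      for _ in range(8):
          moments.append(probe @ v)
          v = -Ainv @ (C @ v)

      p, q = pade(moments, 4)            # rational approximant of y(s0 + ds)
      ds = 0.05
      print(p(ds) / q(ds))                                   # fast sweep value
      print(probe @ np.linalg.solve(G + (s0 + ds) * C, b))   # direct check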

  14. Flight test results of a vector-based failure detection and isolation algorithm for a redundant strapdown inertial measurement unit

    NASA Technical Reports Server (NTRS)

    Morrell, F. R.; Bailey, M. L.; Motyka, P. R.

    1988-01-01

    Flight test results of a vector-based fault-tolerant algorithm for a redundant strapdown inertial measurement unit are presented. Because the inertial sensors provide flight-critical information for flight control and navigation, failure detection and isolation is developed in terms of a multi-level structure. Threshold compensation techniques for gyros and accelerometers, developed to enhance the sensitivity of the failure detection process to low-level failures, are presented. Four flight tests, conducted in a commercial transport type environment, were used to determine the ability of the failure detection and isolation algorithm to detect failure signals such as hard-over, null, or bias-shift failures. The algorithm provided timely detection and correct isolation of flight-control and low-level failures. The flight tests of the vector-based algorithm demonstrated its capability to provide false-alarm-free, dual fail-operational performance for the skewed array of inertial sensors.
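
    The flavor of such vector-based detection can be illustrated with the textbook parity-vector construction for a skewed redundant array; this is a generic sketch, not the flight algorithm, and the geometry, noise level, and threshold are invented.

      import numpy as np
      from scipy.linalg import null_space

      rng = np.random.default_rng(1)
      H = rng.standard_normal((6, 3))
      H /= np.linalg.norm(H, axis=1, keepdims=True)  # six skewed sensing axes
      V = null_space(H.T).T                          # parity matrix: V @ H == 0

      omega = np.array([0.1, -0.2, 0.05])            # true body rate
      m = H @ omega + 1e-4 * rng.standard_normal(6)  # measurements plus noise
      m[2] += 0.05                                   # inject a bias failure on sensor 2

      p = V @ m                                      # parity vector, blind to omega
      if np.linalg.norm(p) > 1e-3:                   # detection threshold
          # isolate: the failed sensor's signature is the matching column of V
          scores = np.abs(V.T @ p) / np.linalg.norm(V, axis=0)
          print("failure detected, isolated sensor:", np.argmax(scores))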

  15. Albedo in the ATIC Experiment: Results of Measurements and Simulation

    NASA Technical Reports Server (NTRS)

    Sokolskaya, N. V.; Adams, J. H., Jr.; Ahn, H. S.; Bashindzhagyan, G. L.; Batkov, K. E.; Chang, J.; Christl, M.; Fazely, A. R.; Ganel, O.; Gunasingha, R. M.

    2004-01-01

    Characteristics of albedo, or backscatter current, providing a 'background' for calorimeter experiments in high energy cosmic rays are analyzed. The comparison of experimental data obtained in the flights of the ATIC spectrometer is made with simulations performed using the GEANT 3.21 code. The influence of the backscatter on charge resolution in the ATIC experiment is discussed.

  16. SOME RESULTS OF A SIMULATION OF AN URBAN SCHOOL DISTRICT.

    ERIC Educational Resources Information Center

    SISSON, ROGER L.

    A computer program which simulates the gross operational features of a large urban school district is designed to predict school district policy variables on a year-to-year basis. The model explores the consequences of varying such district parameters as student population, staff, computer equipment, numbers and sizes of school buildings, salary,…

  17. SIMULATION OF DNAPL DISTRIBUTION RESULTING FROM MULTIPLE SOURCES

    EPA Science Inventory

    A three-dimensional and three-phase (water, NAPL and gas) numerical simulator, called NAPL, was employed to study the interaction between DNAPL (PCE) plumes in a variably saturated porous media. Several model verification tests have been performed, including a series of 2-D labo...

  18. From Simulation to Real Robots with Predictable Results: Methods and Examples

    NASA Astrophysics Data System (ADS)

    Balakirsky, S.; Carpin, S.; Dimitoglou, G.; Balaguer, B.

    From a theoretical perspective, one may easily argue (as we will in this chapter) that simulation accelerates the algorithm development cycle. However, in practice many in the robotics development community share the sentiment that “Simulation is doomed to succeed” (Brooks, R., Matarić, M., Robot Learning, Kluwer Academic Press, Hingham, MA, 1993, p. 209). This comes in large part from the fact that many simulation systems are brittle; they do a fair-to-good job of simulating the expected, and fail to simulate the unexpected. It is the authors' belief that a simulation system is only as good as its models, and that deficiencies in these models lead to the majority of these failures. This chapter will attempt to address these deficiencies by presenting a systematic methodology with examples for the development of both simulated mobility models and sensor models for use with one of today's leading simulation engines. Techniques for using simulation for algorithm development leading to real-robot implementation will be presented, as well as opportunities for involvement in international robotics competitions based on these techniques.

  19. FINAL SIMULATION RESULTS FOR DEMONSTRATION CASE 1 AND 2

    SciTech Connect

    David Sloan; Woodrow Fiveland

    2003-10-15

    The goal of this DOE Vision-21 project work scope was to develop an integrated suite of software tools that could be used to simulate and visualize advanced plant concepts. Existing process simulation software did not meet the DOE's objective of ''virtual simulation'' which was needed to evaluate complex cycles. The overall intent of the DOE was to improve predictive tools for cycle analysis, and to improve the component models that are used in turn to simulate equipment in the cycle. Advanced component models are available; however, a generic coupling capability that would link the advanced component models to the cycle simulation software remained to be developed. In the current project, the coupling of the cycle analysis and cycle component simulation software was based on an existing suite of programs. The challenge was to develop a general-purpose software and communications link between the cycle analysis software Aspen Plus® (marketed by Aspen Technology, Inc.), and specialized component modeling packages, as exemplified by industrial proprietary codes (utilized by ALSTOM Power Inc.) and the FLUENT® computational fluid dynamics (CFD) code (provided by Fluent Inc). A software interface and controller, based on an open CAPE-OPEN standard, has been developed and extensively tested. Various test runs and demonstration cases have been utilized to confirm the viability and reliability of the software. ALSTOM Power was tasked with the responsibility to select and run two demonstration cases to test the software--(1) a conventional steam cycle (designated as Demonstration Case 1), and (2) a combined cycle test case (designated as Demonstration Case 2). Demonstration Case 1 is a 30 MWe coal-fired power plant for municipal electricity generation, while Demonstration Case 2 is a 270 MWe, natural gas-fired, combined cycle power plant. Sufficient data was available from the operation of both power plants to complete the cycle configurations. Three runs

  20. 2HOT: An Improved Parallel Hashed Oct-Tree N-Body Algorithm for Cosmological Simulation

    DOE PAGESBeta

    Warren, Michael S.

    2014-01-01

    We report on improvements made over the past two decades to our adaptive treecode N-body method (HOT). A mathematical and computational approach to the cosmological N-body problem is described, with performance and scalability measured up to 256k (2^18) processors. We present error analysis and scientific application results from a series of more than ten 69-billion-particle (4096^3) cosmological simulations, accounting for 4×10^20 floating point operations. These results include the first simulations using the new constraints on the standard model of cosmology from the Planck satellite. Our simulations set a new standard for accuracy and scientific throughput, while meeting or exceeding the computational efficiency of the latest generation of hybrid TreePM N-body methods.
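
    The "hashed" ingredient of a hashed oct-tree is the space-filling-curve key: each particle's quantized coordinates are bit-interleaved into a single Morton (Z-order) integer, so tree cells can live in a hash table and the domain can be decomposed by cutting the key-sorted particle list. A minimal sketch (not the 2HOT implementation):

      import numpy as np

      def morton_key(pos, bits=10):
          # Interleave `bits` bits of each of x, y, z in [0, 1) into one key.
          q = np.minimum((np.asarray(pos) * (1 << bits)).astype(np.uint64),
                         (1 << bits) - 1)
          key = 0
          for b in range(bits):
              for d in range(3):
                  key |= ((int(q[d]) >> b) & 1) << (3 * b + d)
          return key

      print(morton_key([0.25, 0.5, 0.75]))
      # nearby particles receive nearby keys, so sorting by key clusters them
      # spatially, which also gives a convenient parallel domain decomposition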

  1. Simulating California reservoir operation using the classification and regression-tree algorithm combined with a shuffled cross-validation scheme

    NASA Astrophysics Data System (ADS)

    Yang, Tiantian; Gao, Xiaogang; Sorooshian, Soroosh; Li, Xin

    2016-03-01

    The controlled outflows from a reservoir or dam depend heavily on the decisions made by the reservoir operators rather than on a natural hydrological process. Differences exist between the natural upstream inflows to reservoirs and the controlled outflows from reservoirs that supply the downstream users. With the decision makers' awareness of the changing climate, reservoir management requires adaptable means to incorporate more information into decision making, such as water delivery requirements, environmental constraints, and dry/wet conditions. In this paper, a robust reservoir outflow simulation model is presented, which incorporates one of the well-developed data-mining models (Classification and Regression Tree, CART) to predict the complicated human-controlled reservoir outflows and extract the reservoir operation patterns. A shuffled cross-validation approach is further implemented to improve CART's predictive performance. An application study of nine major reservoirs in California is carried out. Results produced by the enhanced CART, the original CART, and a random forest are compared with observations. The statistical measurements show that the enhanced CART and the random forest generally outperform the CART control run, and that the enhanced CART algorithm gives better predictive performance than the random forest in simulating the peak flows. The results also show that the proposed model is able to consistently and reasonably predict the expert release decisions. Experiments indicate that the release operation at Oroville Lake is dominated by the SWP allocation amount and that reservoirs with low elevation are more sensitive to inflow amount than others.
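
    A minimal sketch of the modeling setup, with scikit-learn's CART implementation standing in for the authors' model; the features (inflow, storage elevation, month) and their synthetic relationship to releases are invented for illustration.

      import numpy as np
      from sklearn.tree import DecisionTreeRegressor
      from sklearn.model_selection import KFold, cross_val_score

      rng = np.random.default_rng(0)
      n = 1000
      X = np.column_stack([
          rng.gamma(2.0, 50.0, n),     # inflow
          rng.uniform(600, 900, n),    # storage elevation
          rng.integers(1, 13, n),      # month (proxy for delivery schedule)
      ])
      y = (0.6 * X[:, 0] + 0.1 * (X[:, 1] - 600)
           + 20 * np.sin(X[:, 2]) + rng.normal(0, 5, n))  # synthetic releases

      tree = DecisionTreeRegressor(max_depth=6, min_samples_leaf=20)
      cv = KFold(n_splits=5, shuffle=True, random_state=0)  # the shuffled scheme
      print(cross_val_score(tree, X, y, cv=cv, scoring="r2").mean())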

  2. Study on efficiency of time computation in x-ray imaging simulation base on Monte Carlo algorithm using graphics processing unit

    NASA Astrophysics Data System (ADS)

    Setiani, Tia Dwi; Suprijadi, Haryanto, Freddy

    2016-03-01

    Monte Carlo (MC) is one of the most powerful techniques for simulation in x-ray imaging. The MC method can simulate radiation transport within matter with high accuracy and provides a natural way to simulate radiation transport in complex systems. One of the MC-based codes widely used for radiographic image simulation is MC-GPU, a code developed by Andreu Badal. This study investigated the computation time of x-ray imaging simulation on a GPU (Graphics Processing Unit) compared to a standard CPU (Central Processing Unit). Furthermore, the effect of physical parameters on the quality of radiographic images and a comparison of the image quality resulting from simulation on the GPU and CPU are evaluated in this paper. The simulations were run on a CPU in serial, and on two GPUs with 384 cores and 2304 cores. In the GPU simulations, each core calculates one photon, so a large number of photons are calculated simultaneously. Results show that the simulations on the GPU were significantly accelerated compared to the CPU. Simulations on the 2304-core GPU ran about 64-114 times faster than on the CPU, while simulations on the 384-core GPU ran about 20-31 times faster than on a single CPU core. Another result shows that optimum image quality was obtained with the number of histories starting from 10^8 and energies from 60 keV to 90 keV. Analyzed statistically, the quality of the GPU and CPU images is essentially the same.

  3. Rarefied gas flow simulations using high-order gas-kinetic unified algorithms for Boltzmann model equations

    NASA Astrophysics Data System (ADS)

    Li, Zhi-Hui; Peng, Ao-Ping; Zhang, Han-Xin; Yang, Jaw-Yen

    2015-04-01

    This article reviews rarefied gas flow computations based on nonlinear model Boltzmann equations using deterministic high-order gas-kinetic unified algorithms (GKUA) in phase space. The nonlinear Boltzmann model equations considered include the BGK model, the Shakhov model, the Ellipsoidal Statistical model and the Morse model. Several high-order gas-kinetic unified algorithms, which combine the discrete velocity ordinate method in velocity space and the compact high-order finite-difference schemes in physical space, are developed. The parallel strategies implemented with the accompanying algorithms are of equal importance. Accurate computations of rarefied gas flow problems using various kinetic models over wide ranges of Mach numbers 1.2-20 and Knudsen numbers 0.0001-5 are reported. The effects of different high resolution schemes on the flow resolution under the same discrete velocity ordinate method are studied. A conservative discrete velocity ordinate method to ensure the kinetic compatibility condition is also implemented. The present algorithms are tested for the one-dimensional unsteady shock-tube problems with various Knudsen numbers, the steady normal shock wave structures for different Mach numbers, the two-dimensional flows past a circular cylinder and a NACA 0012 airfoil to verify the present methodology and to simulate gas transport phenomena covering various flow regimes. Illustrations of large scale parallel computations of three-dimensional hypersonic rarefied flows over the reusable sphere-cone satellite and the re-entry spacecraft using almost the largest computer systems available in China are also reported. The present computed results are compared with the theoretical prediction from gas dynamics, related DSMC results, slip N-S solutions and experimental data, and good agreement can be found. The numerical experience indicates that although the direct model Boltzmann equation solver in phase space can be computationally expensive

  4. An algorithm for automatic measurement of stimulation thresholds: clinical performance and preliminary results.

    PubMed

    Danilovic, D; Ohm, O J; Stroebel, J; Breivik, K; Hoff, P I; Markowitz, T

    1998-05-01

    We have developed an algorithmic method for automatic determination of stimulation thresholds in both cardiac chambers in patients with intact atrioventricular (AV) conduction. The algorithm utilizes ventricular sensing, may be used with any type of pacing leads, and may be downloaded via telemetry links into already implanted dual-chamber Thera pacemakers. Thresholds are determined with 0.5 V amplitude and 0.06 ms pulse-width resolution in unipolar, bipolar, or both lead configurations, with a programmable sampling interval from 2 minutes to 48 hours. Measured values are stored in the pacemaker memory for later retrieval and do not influence permanent output settings. The algorithm was intended to gather information on continuous behavior of stimulation thresholds, which is important in the formation of strategies for programming pacemaker outputs. Clinical performance of the algorithm was evaluated in eight patients who received bipolar tined steroid-eluting leads and were observed for a mean of 5.1 months. Patient safety was not compromised by the algorithm, except for the possibility of pacing during the physiologic refractory period. Methods for discrimination of incorrect data points were developed and incorrect values were discarded. Fine resolution threshold measurements collected during this study indicated that: (1) there were great differences in magnitude of threshold peaking in different patients; (2) the initial intensive threshold peaking was usually followed by another less intensive but longer-lasting wave of threshold peaking; (3) the pattern of tissue reaction in the atrium appeared different from that in the ventricle; and (4) threshold peaking in the bipolar lead configuration was greater than in the unipolar configuration. The algorithm proved to be useful in studying ambulatory thresholds. PMID:9604237

  5. Novel models and algorithms of load balancing for variable-structured collaborative simulation under HLA/RTI

    NASA Astrophysics Data System (ADS)

    Yue, Yingchao; Fan, Wenhui; Xiao, Tianyuan; Ma, Cheng

    2013-07-01

    High level architecture (HLA) is the open standard in the collaborative simulation field. Scholars have been paying close attention to theoretical research on, and engineering applications of, collaborative simulation based on HLA/RTI, which extends HLA in various aspects such as functionality and efficiency. However, the load balancing problem of HLA collaborative simulation has received insufficient study. Without load balancing, collaborative simulation under HLA/RTI may suffer performance reduction or even fatal errors. In this paper, load balancing is divided into static and dynamic problems. A multi-objective model is established and the randomness of model parameters is taken into consideration for static load balancing, which makes the model more credible. A Monte Carlo based optimization algorithm (MCOA) is devised to achieve static load balance. For dynamic load balancing, a new type of dynamic load balancing problem is put forward with regard to variable-structured collaborative simulation under HLA/RTI. In order to minimize the influence on the running collaborative simulation, an ordinal optimization based algorithm (OOA) is developed to shorten the optimization time. Furthermore, the two algorithms are adopted in simulation experiments of different scenarios, which demonstrate their effectiveness and efficiency. An engineering experiment on collaborative simulation under HLA/RTI of high speed electric multiple units (EMU) is also conducted to identify the credibility of the proposed models and the supportive utility of MCOA and OOA for practical engineering systems. The proposed research ensures compatibility with traditional HLA, enhances the ability to assign simulation loads onto computing units both statically and dynamically, improves the performance of the collaborative simulation system, and makes full use of the hardware resources.
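
    A toy illustration of Monte Carlo based static balancing (a caricature of MCOA, with invented loads): sample random federate-to-host assignments and keep the one with the smallest worst-case host load.

      import numpy as np

      rng = np.random.default_rng(0)
      loads = rng.uniform(1, 10, size=20)   # estimated load of each federate
      n_hosts = 4

      best_assign, best_cost = None, np.inf
      for _ in range(5000):                 # random sampling of assignments
          assign = rng.integers(0, n_hosts, size=loads.size)
          cost = max(loads[assign == h].sum() for h in range(n_hosts))
          if cost < best_cost:
              best_assign, best_cost = assign, cost

      print("max host load:", best_cost, "ideal:", loads.sum() / n_hosts)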

  6. Lunar Regolith Characterization for Simulant Design and Evaluation using Figure of Merit Algorithms

    NASA Technical Reports Server (NTRS)

    Schrader, Christian M.; Rickman, Douglas L.; McLemore, Carole A.; Fikes, John C.; Stoeser, Douglas B.; Wentworth, Susan J.; McKay, David S.

    2009-01-01

    NASA's Marshall Space Flight Center (MSFC), in conjunction with the United States Geological Survey (USGS) and aided by personnel from the Astromaterials Research and Exploration Science group at Johnson Space Center (ARES-JSC), is implementing a new data acquisition strategy to support the development and evaluation of lunar regolith simulants. The first analyses of lunar regolith samples by the simulant group were carried out in early 2008 on samples from Apollo 16 core 64001/64002. The results of these analyses are combined with data compiled from the literature to generate a reference composition and particle size distribution (PSD) for lunar highlands regolith. In this paper we present the specifics of particle-type composition and PSD for this reference composition. Furthermore, we use Figure-of-Merit (FoM) routines to measure the characteristics of a number of lunar regolith simulants against this reference composition. The lunar highlands regolith reference composition and the FoM results are presented to guide simulant producers and simulant users in their research and development processes.
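
    One plausible form of a composition figure of merit, sketched with invented mineral fractions (the actual MSFC/USGS FoM routines are more elaborate and also score particle size and shape):

      import numpy as np

      reference = {"plagioclase": 0.70, "glass": 0.20,
                   "pyroxene": 0.07, "olivine": 0.03}
      simulant = {"plagioclase": 0.60, "glass": 0.30,
                  "pyroxene": 0.08, "olivine": 0.02}

      r = np.array([reference[k] for k in reference])
      s = np.array([simulant[k] for k in reference])
      fom = 1.0 - 0.5 * np.abs(r - s).sum()  # 1 = perfect match, 0 = disjoint
      print(f"composition FoM: {fom:.3f}")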

  7. Monte Carlo simulation methods in moment-based scale-bridging algorithms for thermal radiative-transfer problems

    NASA Astrophysics Data System (ADS)

    Densmore, J. D.; Park, H.; Wollaber, A. B.; Rauenzahn, R. M.; Knoll, D. A.

    2015-03-01

    We present a moment-based acceleration algorithm applied to Monte Carlo simulation of thermal radiative-transfer problems. Our acceleration algorithm employs a continuum system of moments to accelerate convergence of stiff absorption-emission physics. The combination of energy-conserving tallies and the use of an asymptotic approximation in optically thick regions remedy the difficulties of local energy conservation and mitigation of statistical noise in such regions. We demonstrate the efficiency and accuracy of the developed method. We also compare directly to the standard linearization-based method of Fleck and Cummings [1]. A factor of 40 reduction in total computational time is achieved with the new algorithm for an equivalent (or more accurate) solution as compared with the Fleck-Cummings algorithm.

  8. Monte Carlo simulation methods in moment-based scale-bridging algorithms for thermal radiative-transfer problems

    SciTech Connect

    Densmore, J.D.; Park, H.; Wollaber, A.B.; Rauenzahn, R.M.; Knoll, D.A.

    2015-03-01

    We present a moment-based acceleration algorithm applied to Monte Carlo simulation of thermal radiative-transfer problems. Our acceleration algorithm employs a continuum system of moments to accelerate convergence of stiff absorption–emission physics. The combination of energy-conserving tallies and the use of an asymptotic approximation in optically thick regions remedy the difficulties of local energy conservation and mitigation of statistical noise in such regions. We demonstrate the efficiency and accuracy of the developed method. We also compare directly to the standard linearization-based method of Fleck and Cummings [1]. A factor of 40 reduction in total computational time is achieved with the new algorithm for an equivalent (or more accurate) solution as compared with the Fleck–Cummings algorithm.

  9. Photometric redshifts with the quasi Newton algorithm (MLPQNA): Results in the PHAT1 contest

    NASA Astrophysics Data System (ADS)

    Cavuoti, S.; Brescia, M.; Longo, G.; Mercurio, A.

    2012-10-01

    Context. Since the advent of modern multiband digital sky surveys, photometric redshifts (photo-z's) have become relevant if not crucial to many fields of observational cosmology, such as the characterization of cosmic structures and weak and strong lensing. Aims: We describe an application to an astrophysical context, namely the evaluation of photometric redshifts, of MLPQNA, which is a machine-learning method based on the quasi Newton algorithm. Methods: Theoretical methods for photo-z evaluation are based on the interpolation of a priori knowledge (spectroscopic redshifts or SED templates), and they represent an ideal comparison ground for neural network-based methods. The MultiLayer Perceptron with quasi Newton learning rule (MLPQNA) described here is an effective computing implementation of neural networks exploited for the first time to solve regression problems in the astrophysical context. It is offered to the community through the DAMEWARE (DAta Mining & Exploration Web Application REsource) infrastructure. Results: The PHAT contest (Hildebrandt et al. 2010, A&A, 523, A31) provides a standard dataset to test old and new methods for photometric redshift evaluation, together with a set of statistical indicators that allow a straightforward comparison among different methods. The MLPQNA model was applied to the whole PHAT1 dataset of 1984 objects after an optimization of the model performed with the 515 available spectroscopic redshifts as the training set. When applied to the PHAT1 dataset, MLPQNA obtains the best bias accuracy (0.0006) and very competitive accuracies in terms of scatter (0.056) and outlier percentage (16.3%), scoring as the second most effective empirical method among those that have so far participated in the contest. MLPQNA shows better generalization capabilities than most other empirical methods especially in the presence of underpopulated regions of the knowledge base.
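
    The core idea, a multilayer perceptron trained with a quasi-Newton rule for photo-z regression, can be sketched with scikit-learn's L-BFGS solver standing in for MLPQNA's learning rule; the magnitudes and redshifts below are synthetic, not PHAT1 data.

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(0)
      mags = rng.uniform(18, 25, size=(2000, 5))   # five photometric bands
      z = 0.1 * (mags[:, 1] - mags[:, 3]) ** 2 + 0.02 * mags[:, 0] - 0.3
      z += rng.normal(0, 0.02, z.size)             # "spectroscopic" targets

      model = MLPRegressor(hidden_layer_sizes=(32, 32), solver="lbfgs",
                           max_iter=2000, random_state=0)
      model.fit(mags[:500], z[:500])               # small training set, as in PHAT1
      pred = model.predict(mags[500:])
      print("bias:", np.mean(pred - z[500:]), "scatter:", np.std(pred - z[500:]))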

  10. Comparison of image deconvolution algorithms on simulated and laboratory infrared images

    SciTech Connect

    Proctor, D.

    1994-11-15

    We compare Maximum Likelihood, Maximum Entropy, Accelerated Lucy-Richardson, Weighted Goodness of Fit, and Pixon reconstructions of simple scenes as a function of signal-to-noise ratio for simulated images with randomly generated noise. Reconstruction results of infrared images taken with the TAISIR (Temperature and Imaging System InfraRed) are also discussed.
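
    For reference, one member of the compared family, a plain (unaccelerated) Lucy-Richardson iteration, is compact enough to sketch directly; the two-point scene and Gaussian PSF are toys.

      import numpy as np
      from scipy.signal import fftconvolve

      def lucy_richardson(image, psf, n_iter=50):
          psf_flip = psf[::-1, ::-1]
          estimate = np.full_like(image, image.mean())
          for _ in range(n_iter):
              blurred = fftconvolve(estimate, psf, mode="same")
              ratio = image / np.maximum(blurred, 1e-12)
              estimate *= fftconvolve(ratio, psf_flip, mode="same")
          return estimate

      rng = np.random.default_rng(0)
      scene = np.zeros((64, 64))
      scene[20, 20] = scene[40, 45] = 100.0
      x, y = np.mgrid[-7:8, -7:8]
      psf = np.exp(-(x**2 + y**2) / 8.0)
      psf /= psf.sum()
      noisy = rng.poisson(fftconvolve(scene, psf, mode="same").clip(0)) * 1.0
      peak = np.unravel_index(lucy_richardson(noisy, psf).argmax(), (64, 64))
      print("brightest recovered pixel:", peak)  # should match a point source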

  11. Configuration of the electron transport algorithm of PENELOPE to simulate ion chambers

    NASA Astrophysics Data System (ADS)

    Sempau, J.; Andreo, P.

    2006-07-01

    The stability of the electron transport algorithm implemented in the Monte Carlo code PENELOPE with respect to variations of its step length is analysed in the context of the simulation of ion chambers used in photon and electron dosimetry. More precisely, the degree of violation of the Fano theorem is quantified (to the 0.1% level) as a function of the simulation parameters that determine the step size. To meet the premises of the theorem, we define an infinite graphite phantom with a cavity delimited by two parallel planes (i.e., a slab) and filled with a 'gas' that has the same composition as graphite but a mass density a thousand-fold smaller. The cavity walls and the gas have identical cross sections, including the density effect associated with inelastic collisions. Electrons with initial kinetic energies equal to 0.01, 0.1, 1, 10 or 20 MeV are generated in the wall and in the gas with a uniform intensity per unit mass. Two configurations, motivated by the design of pancake- and thimble-type chambers, are considered, namely, with the initial direction of emission perpendicular or parallel to the gas-wall interface. This version of the Fano test avoids the need of photon regeneration and the calculation of photon energy absorption coefficients, two ingredients that are common to some alternative definitions of equivalent tests. In order to reduce the number of variables in the analysis, a global new simulation parameter, called the speedup parameter (a), is introduced. It is shown that setting a = 0.2, corresponding to values of the usual PENELOPE parameters of C1 = C2 = 0.02 and values of WCC and WCR that depend on the initial and absorption energies, is appropriate for maximum tolerances of the order of 0.2% with respect to an analogue, i.e., interaction-by-interaction, simulation of the same problem. The precise values of WCC and WCR do not seem to be critical to achieve this level of accuracy. The step-size dependence of the absorbed dose is explained in

  12. Near real-time expectation-maximization algorithm: computational performance and passive millimeter-wave imaging field test results

    NASA Astrophysics Data System (ADS)

    Reynolds, William R.; Talcott, Denise; Hilgers, John W.

    2002-07-01

    A new iterative algorithm (EMLS) via the expectation maximization method is derived for extrapolating a non-negative object function from noisy, diffraction-blurred image data. The algorithm has the following desirable attributes: fast convergence is attained for high-frequency object components, it is less sensitive to constraint parameters, and it accommodates randomly missing data. Speed and convergence results are presented. Field test imagery was obtained with a passive millimeter-wave imaging sensor having a 30.5 cm aperture. The algorithm was implemented and tested in near real time using the field test imagery. Theoretical and experimental results using the field test imagery will be compared using an effective-aperture measure of resolution increase. The effective-aperture measure, based on examination of the edge-spread function, will be detailed.

  13. Direct Numerical Simulation of Acoustic Waves Interacting with a Shock Wave in a Quasi-1D Convergent-Divergent Nozzle Using an Unstructured Finite Volume Algorithm

    NASA Technical Reports Server (NTRS)

    Bui, Trong T.; Mankbadi, Reda R.

    1995-01-01

    Numerical simulation of a very small amplitude acoustic wave interacting with a shock wave in a quasi-1D convergent-divergent nozzle is performed using an unstructured finite volume algorithm with a piecewise-linear, least-squares reconstruction, Roe flux difference splitting, and second-order MacCormack time marching. First, the spatial accuracy of the algorithm is evaluated for steady flows with and without the normal shock by running the simulation with a sequence of successively finer meshes. Then the accuracy of the Roe flux difference splitting near the sonic transition point is examined for different reconstruction schemes. Finally, the unsteady numerical solutions with the acoustic perturbation are presented and compared with linear theory results.
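
    The second-order MacCormack time marching used above is easiest to see on a model problem; this sketch applies the predictor-corrector pair to 1D linear advection (not the quasi-1D Euler system or the unstructured reconstruction), with boundaries left untreated.

      import numpy as np

      nx, a, cfl = 200, 1.0, 0.8
      x = np.linspace(0.0, 1.0, nx)
      dx = x[1] - x[0]
      dt = cfl * dx / a
      u = np.exp(-200 * (x - 0.3) ** 2)   # small-amplitude pulse

      for _ in range(100):
          up = u.copy()
          up[:-1] = u[:-1] - a * dt / dx * (u[1:] - u[:-1])    # predictor (forward)
          un = u.copy()
          un[1:] = 0.5 * (u[1:] + up[1:]
                          - a * dt / dx * (up[1:] - up[:-1]))  # corrector (backward)
          u = un

      print("pulse has advected to x ~", x[np.argmax(u)])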

  14. Simulations Build Efficacy: Empirical Results from a Four-Week Congressional Simulation

    ERIC Educational Resources Information Center

    Mariani, Mack; Glenn, Brian J.

    2014-01-01

    This article describes a four-week congressional committee simulation implemented in upper level courses on Congress and the Legislative process at two liberal arts colleges. We find that the students participating in the simulation possessed high levels of political knowledge and confidence in their political skills prior to the simulation. An…

  15. Measurement based simulation of microscope deviations for evaluation of stitching algorithms for the extension of Fourier-based alignment

    NASA Astrophysics Data System (ADS)

    Engelke, Florian; Kästner, Markus; Reithmeier, Eduard

    2013-05-01

    Image stitching is a technique used to measure large surface areas with high resolution while maintaining a large field of view. We work on improving data fusion by stitching in the field of microscopic analysis of technical surfaces for structures and roughness. Guidance errors and imaging errors such as noise cause problems for seamless image fusion of technical surfaces. The optical imaging errors of 3D microscopes, such as confocal microscopes and white-light interferometers, as well as the guidance errors of their automated positioning systems, have been measured to create software that simulates automated measurements of known surfaces with specific deviations, in order to test new stitching algorithms. We measured and incorporated radial image distortion, interferometer reference-mirror shape deviations, statistical noise, drift of the positional axis, on-axis accuracy and repeatability of the positioning stages used, and misalignment of the CCD chip with respect to the axes of motion. We used the resulting simulation of the measurement process to test a new image registration technique that allows the use of correlation of images by fast Fourier transform for small overlaps between single measurements.
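
    The Fourier-based alignment being stress-tested is essentially phase correlation; a minimal sketch for a known integer shift between two synthetic tiles:

      import numpy as np

      rng = np.random.default_rng(0)
      base = rng.random((128, 128))
      shifted = np.roll(base, shift=(7, -12), axis=(0, 1))  # known displacement

      F1, F2 = np.fft.fft2(base), np.fft.fft2(shifted)
      spec = F1 * np.conj(F2)
      corr = np.fft.ifft2(spec / np.maximum(np.abs(spec), 1e-12)).real
      dy, dx = np.unravel_index(corr.argmax(), corr.shape)
      # map peaks past the midpoint to negative shifts
      dy, dx = [d - n if d > n // 2 else d for d, n in zip((dy, dx), corr.shape)]
      print("estimated shift:", dy, dx)  # (-7, 12) in this sign convention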

  16. Shock focusing flow field simulated by a high-resolution numerical algorithm

    NASA Astrophysics Data System (ADS)

    Jung, Y. G.; Chang, K. S.

    2012-11-01

    A shock-focusing concave reflector is a very simple and effective tool for obtaining a high-pressure pulse wave near the physical focal point. In the past, many optical images were obtained through experimental studies. However, measurement of field variables is not easy because the phenomenon is of short duration and the magnitude of the shock waves varies from pulse to pulse due to poor reproducibility. Using a wave propagation algorithm and the Cartesian embedded boundary method, we have successfully obtained numerical schlieren images that resemble the experimental results. Through the numerical results, various field variables, such as pressure, density, and vorticity, become available for the better understanding and design of shock-focusing devices.

  17. The Multi Level Multi Domain (MLMD) method: a semi-implicit adaptive algorithm for Particle In Cell plasma simulations

    NASA Astrophysics Data System (ADS)

    Innocenti, Maria Elena; Beck, Arnaud; Markidis, Stefano; Lapenta, Giovanni

    2013-10-01

    Particle in Cell (PIC) simulations of plasmas are no longer bound by the stability constraints of explicit algorithms. Semi-implicit and fully implicit methods allow the use of larger grid spacings and time steps. Adaptive Mesh Refinement (AMR) techniques permit the simulation resolution to be changed locally. The code proposed in Innocenti et al., 2013 and Beck et al., 2013 is, however, the first to combine the advantages of both. The use of the Implicit Moment Method allows the resolution used in each level to be tailored to the physical scales of interest and high Refinement Factors (RFs) to be used between the levels. The Multi Level Multi Domain (MLMD) structure, where all levels are simulated as complete domains, combines algorithmic and practical advantages. The different levels evolve according to the local dynamics and achieve optimal level interlocking. Also, the capabilities of the object-oriented programming model are fully exploited. The MLMD algorithm is demonstrated with magnetic reconnection and collisionless shock simulations with very high RFs between the levels. Notable computational gains are achieved with respect to simulations performed on the entire domain with the higher resolution. Beck A. et al. (2013). submitted. Innocenti M. E. et al. (2013). JCP, 238(0):115-140.

  18. Noise characterization of block-iterative reconstruction algorithms: II. Monte Carlo simulations.

    PubMed

    Soares, Edward J; Glick, Stephen J; Hoppin, John W

    2005-01-01

    In Soares et al. (2000), the ensemble statistical properties of the rescaled block-iterative expectation-maximization (RBI-EM) reconstruction algorithm and rescaled block-iterative simultaneous multiplicative algebraic reconstruction technique (RBI-SMART) were derived. Included in this analysis were the special cases of RBI-EM, maximum-likelihood EM (ML-EM) and ordered-subset EM (OS-EM), and the special case of RBI-SMART, SMART. Explicit expressions were found for the ensemble mean, covariance matrix, and probability density function of RBI reconstructed images, as a function of iteration number. The theoretical formulations relied on one approximation, namely that the noise in the reconstructed image was small compared to the mean image. In this paper, we evaluate the predictions of the theory by using Monte Carlo methods to calculate the sample statistical properties of each algorithm and then compare the results with the theoretical formulations. In addition, the validity of the approximation will be justified. PMID:15638190
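
    As context for the special cases analyzed above, the basic ML-EM multiplicative update (which the RBI family generalizes by rescaling and subsetting) can be sketched on a dense toy system; in emission tomography A would be the sparse projector and y the measured counts.

      import numpy as np

      rng = np.random.default_rng(0)
      A = rng.random((40, 25))        # system matrix (detector bins x voxels)
      x_true = rng.random(25) * 10
      y = rng.poisson(A @ x_true)     # noisy projection data

      x = np.ones(25)                 # strictly positive starting image
      sens = A.sum(axis=0)            # sensitivity image, A^T 1
      for _ in range(200):
          x *= (A.T @ (y / np.maximum(A @ x, 1e-12))) / sens
      print("relative error:",
            np.linalg.norm(x - x_true) / np.linalg.norm(x_true))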

  19. Projector Augmented Wave (PAW) Datasets for Multi-Mbar Simulations: An Evolutionary Algorithm Based Recipe

    NASA Astrophysics Data System (ADS)

    Sarkar, K.; Topsakal, M.; Wentzcovitch, R. M.

    2015-12-01

    We attempt to achieve the accuracy of the full-potential linearized augmented-plane-wave (FLAPW) method, as implemented in the WIEN2k code, at the favorable computational efficiency of the projector augmented wave (PAW) method for ab initio calculations of solids. For decades, PAW datasets have been generated by manually choosing their parameters and visually inspecting the logarithmic derivatives, partial waves, and projector basis set. In addition to being tedious and error-prone, this procedure is inadequate because it is impractical to manually explore the full parameter space, as an infinite number of PAW parameter sets for a given augmentation radius can be generated while maintaining all the constraints on logarithmic derivatives and basis sets. Performance verification of all plausible solutions against FLAPW is also impractical. Here we report the development of a hybrid algorithm to construct optimized PAW basis sets that can closely reproduce FLAPW results from zero to ultra-high pressures. The approach applies evolutionary computing (EC) to generate optimum PAW parameter sets using the ATOMPAW code. We use the Quantum ESPRESSO distribution to generate equations of state (EOS), which are compared against target EOSs computed with WIEN2k. Softer PAW potentials that reproduce FLAPW EOSs even more closely can be found with this method. We demonstrate its working principles and workability by optimizing PAW basis functions for carbon, magnesium, aluminum, silicon, calcium, and iron atoms. The algorithm requires minimal user intervention in the sense that there is no requirement for visual inspection of logarithmic derivatives or of projector functions.

  20. Genetic Algorithm Based Simulated Annealing Method for Solving Unit Commitment Problem in Utility System

    NASA Astrophysics Data System (ADS)

    Rajan, C. Christober Asir

    2010-10-01

    The objective of this paper is to find the generation schedule that minimizes the total operating cost subject to a variety of constraints; that is, to find the optimal generating unit commitment in the power system for the next H hours. Genetic Algorithms (GAs) are general-purpose optimization techniques based on principles inspired by biological evolution, using metaphors of mechanisms such as natural selection, genetic recombination, and survival of the fittest. Here, the unit commitment schedule is coded as a string of symbols. An initial population of parent solutions is generated at random, with each schedule formed by committing all the units according to their initial status ("flat start"). The parents are obtained from a pre-defined set of solutions, i.e., each solution is adjusted to meet the requirements. Then, a random recommitment is carried out with respect to the units' minimum down times, and simulated annealing (SA) improves the solutions. A 66-bus utility power system with twelve generating units in India demonstrates the effectiveness of the proposed approach. Numerical results compare the cost solutions and computation time obtained using the Genetic Algorithm method and other conventional methods.
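
    A toy sketch of the GA-plus-SA flavor on an invented three-unit, six-hour problem; real constraints such as minimum up/down times, ramp rates, and the 66-bus system are omitted.

      import numpy as np

      rng = np.random.default_rng(0)
      demand = np.array([150, 200, 300, 280, 220, 180])
      cap = np.array([100, 150, 200])
      run_cost = np.array([10, 8, 12])

      def cost(s):                    # s: units x hours binary schedule
          shortfall = np.maximum(demand - cap @ s, 0).sum()
          return (run_cost @ s).sum() + 1e4 * shortfall  # penalize unmet load

      pop = [rng.integers(0, 2, (3, 6)) for _ in range(30)]  # random parents
      for _ in range(200):                                   # GA generations
          pop.sort(key=cost)
          child = np.where(rng.random((3, 6)) < 0.5, pop[0], pop[1])  # crossover
          child ^= rng.random((3, 6)) < 0.05                 # mutation
          pop[-1] = child                                    # replace the worst

      best, T = pop[0], 50.0                                 # SA refinement
      for _ in range(2000):
          cand = best.copy()
          cand[rng.integers(3), rng.integers(6)] ^= 1
          if (cost(cand) < cost(best)
                  or rng.random() < np.exp((cost(best) - cost(cand)) / T)):
              best = cand
          T *= 0.995
      print("best cost:", cost(best))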

  1. Fault induction dynamic model, suitable for computer simulation: Simulation results and experimental validation

    NASA Astrophysics Data System (ADS)

    Baccarini, Lane Maria Rabelo; de Menezes, Benjamim Rodrigues; Caminhas, Walmir Matos

    2010-01-01

    The study of induction motor behavior under abnormal conditions, and the ability to detect and predict these conditions, has been an area of increasing interest. Early detection and diagnosis of incipient faults are desirable for interactive evaluation of the running condition, product quality guarantees, and improved operational efficiency of induction motors. The main difficulty in this task is the lack of accurate analytical models to describe a faulty motor. This paper proposes a dynamic model to analyze electrical and mechanical faults in induction machines that includes net asymmetries and load conditions. The model permits analysis of the interactions between different faults in order to detect possible false alarms. Simulations and experiments were performed to confirm the validity of the model.

  2. Direct drive: Simulations and results from the National Ignition Facility

    DOE PAGESBeta

    Radha, P. B.; Hohenberger, M.; Edgell, D. H.; Marozas, J. A.; Marshall, F. J.; Michel, D. T.; Rosenberg, M. J.; Seka, W.; Shvydky, A.; Boehly, T. R.; et al

    2016-04-19

    Here, the direct-drive implosion physics is being investigated at the National Ignition Facility. The primary goal of the experiments is twofold: to validate modeling related to implosion velocity and to estimate the magnitude of hot-electron preheat. Implosion experiments indicate that the energetics is well-modeled when cross-beam energy transfer (CBET) is included in the simulation and an overall multiplier to the CBET gain factor is employed; time-resolved scattered light and scattered-light spectra display the correct trends. Trajectories from backlit images are well modeled, although those from measured self-emission images indicate increased shell thickness and reduced shell density relative to simulations. Sensitivity analyses indicate that the most likely cause for the density reduction is nonuniformity growth seeded by laser imprint and not laser-energy coupling. Hot-electron preheat is at tolerable levels in the ongoing experiments, although it is expected to increase after the mitigation of CBET. Future work will include continued model validation, imprint measurements, and mitigation of CBET and hot-electron preheat.

  3. Direct drive: Simulations and results from the National Ignition Facility

    NASA Astrophysics Data System (ADS)

    Radha, P. B.; Hohenberger, M.; Edgell, D. H.; Marozas, J. A.; Marshall, F. J.; Michel, D. T.; Rosenberg, M. J.; Seka, W.; Shvydky, A.; Boehly, T. R.; Collins, T. J. B.; Campbell, E. M.; Craxton, R. S.; Delettrez, J. A.; Dixit, S. N.; Frenje, J. A.; Froula, D. H.; Goncharov, V. N.; Hu, S. X.; Knauer, J. P.; McCrory, R. L.; McKenty, P. W.; Meyerhofer, D. D.; Moody, J.; Myatt, J. F.; Petrasso, R. D.; Regan, S. P.; Sangster, T. C.; Sio, H.; Skupsky, S.; Zylstra, A.

    2016-05-01

    Direct-drive implosion physics is being investigated at the National Ignition Facility. The primary goal of the experiments is twofold: to validate modeling related to implosion velocity and to estimate the magnitude of hot-electron preheat. Implosion experiments indicate that the energetics is well-modeled when cross-beam energy transfer (CBET) is included in the simulation and an overall multiplier to the CBET gain factor is employed; time-resolved scattered light and scattered-light spectra display the correct trends. Trajectories from backlit images are well modeled, although those from measured self-emission images indicate increased shell thickness and reduced shell density relative to simulations. Sensitivity analyses indicate that the most likely cause for the density reduction is nonuniformity growth seeded by laser imprint and not laser-energy coupling. Hot-electron preheat is at tolerable levels in the ongoing experiments, although it is expected to increase after the mitigation of CBET. Future work will include continued model validation, imprint measurements, and mitigation of CBET and hot-electron preheat.

  4. Some Results of Weak Anticipative Concept Applied in Simulation Based Decision Support in Enterprise

    NASA Astrophysics Data System (ADS)

    Kljajić, Miroljub; Kofjač, Davorin; Kljajić Borštnar, Mirjana; Škraba, Andrej

    2010-11-01

    Simulation models are used for decision support and learning in enterprises and in schools. Three cases of successful applications demonstrate the usefulness of weak anticipative information. Job shop scheduling with a makespan criterion presents a real case of customized flexible furniture production optimization; a genetic algorithm for job shop scheduling optimization is presented. Simulation-based inventory control describes inventory optimization for products with stochastic lead time and demand; dynamic programming and fuzzy control algorithms reduce the total cost without producing stock-outs in most cases. The value of decision-making information based on simulation is also discussed. All cases are discussed from the optimization, modeling, and learning points of view.

  5. Implicit electrostatic particle-in-cell/Monte Carlo simulation for the magnetized plasma: Algorithms and application in gas-inductive breakdown

    NASA Astrophysics Data System (ADS)

    Wang, Hong-Yu; Sun, Peng; Jiang, Wei; Zhou, Jie; Xie, Bai-Song

    2015-06-01

    An implicit electrostatic particle-in-cell/Monte Carlo (PIC/MC) algorithm is developed for simulating magnetized discharge devices. The inductive driving force can be considered. The direct implicit PIC algorithm (DIPIC) and an energy conservation scheme are applied together, and grid heating can be eliminated in most cases. A tensor-susceptibility Poisson equation is constructed. Its discrete form is built up by a hybrid scheme in one-dimensional (1D) and two-dimensional (2D) cylindrical systems. A semi-coarsening multigrid method is used to solve the discrete system. The algorithm is applied to simulate the cylindrical magnetized target fusion (MTF) pre-ionization process and obtains qualitatively correct results. The potential application of the algorithm is discussed briefly. Project supported by the National Natural Science Foundation of China (Grant Nos. 11275007, 11105057, 11175023, and 11275039). One of the authors (Wang H Y) is supported by the Program for Liaoning Excellent Talents in University (Grant No. LJQ2012098).

  6. SU-E-J-89: Comparative Analysis of MIM and Velocity’s Image Deformation Algorithm Using Simulated KV-CBCT Images for Quality Assurance

    SciTech Connect

    Cline, K; Narayanasamy, G; Obediat, M; Stanley, D; Stathakis, S; Kirby, N; Kim, H

    2015-06-15

    Purpose: Deformable image registration (DIR) is used routinely in the clinic without a formalized quality assurance (QA) process. Using simulated deformations to digitally deform images in a known way and comparing to DIR algorithm predictions is a powerful technique for DIR QA. This technique must also simulate realistic image noise and artifacts, especially between modalities. This study developed an algorithm to create simulated daily kV cone-beam computed-tomography (CBCT) images from CT images for DIR QA between these modalities. Methods: A Catphan and physical head-and-neck phantom, with known deformations, were used. CT and kV-CBCT images of the Catphan were utilized to characterize the changes in Hounsfield units, noise, and image cupping that occur between these imaging modalities. The algorithm then imprinted these changes onto a CT image of the deformed head-and-neck phantom, thereby creating a simulated-CBCT image. CT and kV-CBCT images of the undeformed and deformed head-and-neck phantom were also acquired. The Velocity and MIM DIR algorithms were applied between the undeformed CT image and each of the deformed CT, CBCT, and simulated-CBCT images to obtain predicted deformations. The error between the known and predicted deformations was used as a metric to evaluate the quality of the simulated-CBCT image. Ideally, the simulated-CBCT image registration would produce the same accuracy as the deformed CBCT image registration. Results: For Velocity, the mean error was 1.4 mm for the CT-CT registration, 1.7 mm for the CT-CBCT registration, and 1.4 mm for the CT-simulated-CBCT registration. These same numbers were 1.5, 4.5, and 5.9 mm, respectively, for MIM. Conclusion: All cases produced similar accuracy for Velocity. MIM produced similar values of accuracy for CT-CT registration, but was not as accurate for CT-CBCT registrations. The MIM simulated-CBCT registration followed this same trend, but overestimated MIM DIR errors relative to the CT
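
    The simulated-CBCT idea, imprinting measured degradations onto a CT image, can be caricatured with just two of the listed effects, noise and cupping; the magnitudes and phantom below are made up.

      import numpy as np

      rng = np.random.default_rng(0)
      ct = np.zeros((256, 256))
      ct[64:192, 64:192] = 40.0                   # toy soft-tissue block (HU)

      yy, xx = np.mgrid[:256, :256]
      r = np.hypot(yy - 128, xx - 128) / 128.0
      cupping = -30.0 * (1.0 - r**2).clip(0)      # HU depression toward center
      noise = rng.normal(0.0, 15.0, ct.shape)     # CBCT-level noise (assumed sigma)

      simulated_cbct = ct + cupping + noise
      print("center vs edge HU:",
            simulated_cbct[128, 128].round(), simulated_cbct[128, 70].round())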

  7. Implementation and Simulation Results using Autonomous Aerobraking Development Software

    NASA Technical Reports Server (NTRS)

    Maddock, Robert W.; DwyerCianciolo, Alicia M.; Bowes, Angela; Prince, Jill L. H.; Powell, Richard W.

    2011-01-01

    An Autonomous Aerobraking software system is currently under development with support from the NASA Engineering and Safety Center (NESC) that would move typically ground-based operations functions onboard an aerobraking spacecraft, reducing mission risk and mission cost. The suite of software that will enable autonomous aerobraking is the Autonomous Aerobraking Development Software (AADS) and consists of an ephemeris model, onboard atmosphere estimator, temperature and loads prediction, and a maneuver calculation. The software calculates the maneuver time, magnitude, and direction commands to maintain the spacecraft periapsis parameters within design structural load and/or thermal constraints. The AADS is currently being tested in simulations at Mars, with plans to also evaluate feasibility and performance at Venus and Titan.

  8. Simulation of heat waves in climate models using large deviation algorithms

    NASA Astrophysics Data System (ADS)

    Ragone, Francesco; Bouchet, Freddy; Wouters, Jeroen

    2016-04-01

    One of the goals of climate science is to characterize the statistics of extreme, potentially dangerous events (e.g., exceptionally intense precipitation, wind gusts, heat waves) in the present and future climate. The study of extremes is however hindered both by a lack of past observational data for events with a return time larger than decades or centuries, and by the large computational cost required to properly sample extreme statistics with state-of-the-art climate models. The study of the dynamics leading to extreme events is especially difficult as it requires hundreds or thousands of realizations of the dynamical paths leading to similar extremes. We will discuss here a new numerical algorithm, based on large deviation theory, that allows very rare events to be sampled efficiently in complex climate models. A large ensemble of realizations is run in parallel, and selection and cloning procedures are applied in order to oversample the trajectories leading to the extremes of interest. The statistics and characteristic dynamics of the extremes can then be computed on a much larger sample of events. This kind of importance sampling method belongs to a class of genetic algorithms that have been successfully applied in other scientific fields (statistical mechanics, complex biomolecular dynamics), decreasing by orders of magnitude the numerical cost required to sample extremes with respect to standard direct numerical sampling. We study the applicability of this method to the computation of the statistics of European surface temperatures with the Planet Simulator (Plasim), an intermediate-complexity general circulation model of the atmosphere. We demonstrate the efficiency of the method by comparing its performance against standard approaches. Dynamical paths leading to heat waves are studied, elucidating the relation of Plasim heat waves to blocking events and the dynamics leading to these events. We then discuss the feasibility of this
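
    The selection-and-cloning step at the heart of such genealogical large-deviation algorithms can be sketched on a toy autoregressive "temperature" process, with the climate model reduced to one update line; k is the biasing parameter.

      import numpy as np

      rng = np.random.default_rng(0)
      n_traj, n_steps, k = 500, 100, 0.5   # k > 0 biases toward warm paths
      T = np.zeros(n_traj)
      log_norm = 0.0
      for _ in range(n_steps):
          T = 0.9 * T + rng.normal(0, 1, n_traj)  # advance every ensemble member
          w = np.exp(k * T)                       # weight by the observable
          log_norm += np.log(w.mean())
          idx = rng.choice(n_traj, size=n_traj, p=w / w.sum())
          T = T[idx]                              # clone warm paths, drop cold ones

      # the running normalization estimates the scaled cumulant generating function
      print("lambda(k) ~", log_norm / n_steps,
            "; biased-ensemble mean:", T.mean())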

  9. Finite-Difference Algorithm for Simulating 3D Electromagnetic Wavefields in Conductive Media

    NASA Astrophysics Data System (ADS)

    Aldridge, D. F.; Bartel, L. C.; Knox, H. A.

    2013-12-01

    Electromagnetic (EM) wavefields are routinely used in geophysical exploration for detection and characterization of subsurface geological formations of economic interest. Recorded EM signals depend strongly on the current conductivity of geologic media. Hence, they are particularly useful for inferring fluid content of saturated porous bodies. In order to enhance understanding of field-recorded data, we are developing a numerical algorithm for simulating three-dimensional (3D) EM wave propagation and diffusion in heterogeneous conductive materials. Maxwell's equations are combined with isotropic constitutive relations to obtain a set of six, coupled, first-order partial differential equations governing the electric and magnetic vectors. An advantage of this system is that it does not contain spatial derivatives of the three medium parameters electric permittivity, magnetic permeability, and current conductivity. Numerical solution methodology consists of explicit, time-domain finite-differencing on a 3D staggered rectangular grid. Temporal and spatial FD operators have order 2 and N, where N is user-selectable. We use an artificially-large electric permittivity to maximize the FD timestep, and thus reduce execution time. For the low frequencies typically used in geophysical exploration, accuracy is not unduly compromised. Grid boundary reflections are mitigated via convolutional perfectly matched layers (C-PMLs) imposed at the six grid flanks. A shared-memory-parallel code implementation via OpenMP directives enables rapid algorithm execution on a multi-thread computational platform. Good agreement is obtained in comparisons of numerically-generated data with reference solutions. EM wavefields are sourced via point current density and magnetic dipole vectors. Spatially-extended inductive sources (current carrying wire loops) are under development. We are particularly interested in accurate representation of high-conductivity sub-grid-scale features that are common
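
    The conductive-medium ingredient is clearest in a 1D analogue of the staggered-grid update; this sketch uses a semi-implicit treatment of the sigma*E loss term, with invented grid sizes and conductivity and no absorbing boundaries.

      import numpy as np

      nz, nt = 400, 600
      dz = 1.0
      c = 3e8
      dt = 0.5 * dz / c
      eps = np.full(nz, 8.85e-12)
      mu = 4e-7 * np.pi
      sigma = np.zeros(nz)
      sigma[250:] = 1e-3                 # conductive half-space

      E, H = np.zeros(nz), np.zeros(nz - 1)
      for n in range(nt):
          H += dt / (mu * dz) * (E[1:] - E[:-1])              # Faraday update
          a = 1 - sigma[1:-1] * dt / (2 * eps[1:-1])          # semi-implicit
          b = 1 + sigma[1:-1] * dt / (2 * eps[1:-1])          #   sigma*E terms
          E[1:-1] = (a * E[1:-1]
                     + dt / (eps[1:-1] * dz) * (H[1:] - H[:-1])) / b
          E[50] += np.exp(-((n - 60) / 20.0) ** 2)            # soft Gaussian source

      print("peak |E| vacuum:", np.abs(E[:240]).max(),
            " conductor:", np.abs(E[260:]).max())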

  10. Accelerating dissipative particle dynamics simulations on GPUs: Algorithms, numerics and applications

    NASA Astrophysics Data System (ADS)

    Tang, Yu-Hang; Karniadakis, George Em

    2014-11-01

    We present a scalable dissipative particle dynamics simulation code, fully implemented on Graphics Processing Units (GPUs) using a hybrid CUDA/MPI programming model, which achieves 10-30 times speedup on a single GPU over 16 CPU cores and almost linear weak scaling across a thousand nodes. A unified framework is developed within which the efficient generation of the neighbor list and the maintenance of particle data locality are addressed. Our algorithm generates strictly ordered neighbor lists in parallel, while the construction is deterministic and makes no use of atomic operations or sorting. Such a neighbor list leads to optimal data loading efficiency when combined with a two-level particle reordering scheme. A faster in situ generation scheme for Gaussian random numbers is proposed using precomputed binary signatures. We designed custom transcendental functions that are fast and accurate for evaluating the pairwise interaction. The correctness and accuracy of the code is verified through a set of test cases simulating Poiseuille flow and spontaneous vesicle formation. Computer benchmarks demonstrate the speedup of our implementation over the CPU implementation as well as strong and weak scalability. A large-scale simulation of spontaneous vesicle formation consisting of 128 million particles was conducted to further illustrate the practicality of our code in real-world applications.
    Catalogue identifier: AETN_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AETN_v1_0.html
    Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland
    Licensing provisions: GNU General Public License, version 3
    No. of lines in distributed program, including test data, etc.: 1 602 716
    No. of bytes in distributed program, including test data, etc.: 26 489 166
    Distribution format: tar.gz
    Programming language: C/C++, CUDA C/C++, MPI
    Computer: Any computer having nVidia GPGPUs with compute capability 3.0
    Operating system: Linux
    Has the code been

  11. Simulating the spatial distribution of snow pack and snow melt runoff with different snow melt algorithms in a physics based watershed model

    NASA Astrophysics Data System (ADS)

    Follum, M. L.; Downer, C. W.; Niemann, J. D.

    2014-12-01

    The Gridded Surface Subsurface Hydrologic Analysis (GSSHA) model is a fully distributed, physics-based, continuous watershed simulator developed and applied by the US Army Engineer Research and Development Center (ERDC). By taking advantage of the inherent spatial and temporal variability contained in the model, we are able to simulate the effects on snow accumulation and melt of slope and solar shading due to topography, vegetation, and hydrometeorology varying with elevation. Combined with vertical and lateral melt-water transport algorithms, we are able to simulate the spatial distribution of the snow pack over time and the melt water discharge at Senator Beck Basin, a small (2.9 km2) basin in southern Colorado. We simulate snow accumulation and melt at the basin using three different snow melt algorithms in the GSSHA model and compare model results to satellite-derived area maps, point snow water equivalent data, point soil moisture data, and streamflow at the basin outlet. Results indicate that energy-balance-based methods produce better results than temperature-based methods.
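
    The temperature-based end of the compared spectrum is the classic degree-day rule; a sketch with an invented melt coefficient and synthetic daily forcing:

      import numpy as np

      rng = np.random.default_rng(0)
      temp = -5 * np.cos(np.linspace(0, 2 * np.pi, 365)) + rng.normal(0, 3, 365)
      precip = rng.gamma(0.5, 4.0, 365)            # mm/day

      cm, t_base = 3.0, 0.0                        # cm: mm per degree-day (assumed)
      swe, melt = 0.0, np.zeros(365)
      for d in range(365):
          if temp[d] <= t_base:
              swe += precip[d]                     # snowfall accumulates as SWE
          else:
              melt[d] = min(swe, cm * (temp[d] - t_base))  # degree-day melt
              swe -= melt[d]
      print("peak melt day:", melt.argmax(), " total melt:", melt.sum().round())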

  12. Generalization of the FDTD algorithm for simulations of hydrodynamic nonlinear Drude model

    SciTech Connect

    Liu Jinjie; Brio, Moysey; Zeng Yong; Zakharian, Armis R.; Hoyer, Walter; Koch, Stephan W.; Moloney, Jerome V.

    2010-08-20

    In this paper we present a numerical method for solving a three-dimensional cold-plasma system that describes electron gas dynamics driven by an external electromagnetic wave excitation. The nonlinear Drude dispersion model is derived from the cold-plasma fluid equations and is coupled to the Maxwell's field equations. The Finite-Difference Time-Domain (FDTD) method is applied for solving the Maxwell's equations in conjunction with the time-split semi-implicit numerical method for the nonlinear dispersion and a physics based treatment of the discontinuity of the electric field component normal to the dielectric-metal interface. The application of the proposed algorithm is illustrated by modeling light pulse propagation and second-harmonic generation (SHG) in metallic metamaterials (MMs), showing good agreement between computed and published experimental results.

  13. Parallelizable flood fill algorithm and corrective interface tracking approach applied to the simulation of multiple finite size bubbles merging with a free surface

    NASA Astrophysics Data System (ADS)

    Lafferty, Nathan; Badreddine, Hassan; Niceno, Bojan; Prasser, Horst-Michael

    2015-11-01

    A parallelizable flood fill algorithm is developed for identifying and tracking closed regions of fluid (dispersed phases) in CFD simulations of multiphase flows. It is used in conjunction with a newly developed method, corrective interface tracking, for simulating finite-size dispersed bubbly flows in which the bubbles are too small relative to the grid to be simulated accurately with interface tracking techniques, yet too large relative to the grid for Lagrangian particle tracking techniques. The latter situation arises when local bubble-induced turbulence is resolved or modeled with LES. With corrective interface tracking, the governing equations are solved on a static Eulerian grid. A correcting force, derived from hydrodynamic forces based on empirical correlations, is applied to each bubble, which is then advected using interface tracking techniques. This method yields accurate fluid-gas two-way coupling, bubble shapes, and terminal rise velocities. The flood fill algorithm and corrective interface tracking technique are applied to an air/water simulation of multiple bubbles rising and merging with a free surface. They are then validated against the same simulation performed using only interface tracking on a much finer grid.
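
    The region-identification step can be pictured with a minimal serial flood fill. The Python sketch below labels 4-connected gas regions on a 2-D phase-indicator grid using a breadth-first fill; the paper's algorithm is its parallelizable 3-D counterpart, and the names here are illustrative.

        from collections import deque
        import numpy as np

        def label_regions(gas):
            """Label 4-connected True regions of a 2-D boolean array."""
            labels = np.zeros(gas.shape, dtype=int)
            count = 0
            for seed in zip(*np.nonzero(gas)):
                if labels[seed]:
                    continue                    # already swept into a region
                count += 1
                labels[seed] = count
                queue = deque([seed])
                while queue:                    # breadth-first flood fill
                    i, j = queue.popleft()
                    for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
                        if (0 <= ni < gas.shape[0] and 0 <= nj < gas.shape[1]
                                and gas[ni, nj] and not labels[ni, nj]):
                            labels[ni, nj] = count
                            queue.append((ni, nj))
            return labels, count

        gas = np.array([[1, 1, 0, 0],
                        [0, 1, 0, 1],
                        [0, 0, 0, 1]], dtype=bool)
        labels, n = label_regions(gas)          # n == 2 separate bubbles
        print(n); print(labels)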

  14. Results with an Algorithmic Approach to Hybrid Repair of the Aortic Arch

    PubMed Central

    Andersen, Nicholas D.; Williams, Judson B.; Hanna, Jennifer M.; Shah, Asad A.; McCann, Richard L.; Hughes, G. Chad

    2013-01-01

    Objective: Hybrid repair of the transverse aortic arch may allow for aortic arch repair with reduced morbidity in patients who are suboptimal candidates for conventional open surgery. Here, we present our results with an algorithmic approach to hybrid arch repair based upon the extent of aortic disease and patient comorbidities. Methods: Between August 2005 and January 2012, 87 patients underwent hybrid arch repair by three principal procedures: zone 1 endograft coverage with extra-anatomic left carotid revascularization (zone 1, n=19), zone 0 endograft coverage with aortic arch debranching (zone 0, n=48), or total arch replacement with staged stented elephant trunk completion (stented elephant trunk, n=20). Results: The mean patient age was 64 years, and the mean expected in-hospital mortality rate was 16.3% as calculated by the EuroSCORE II; 22% (n=19) of operations were non-elective. Sternotomy, cardiopulmonary bypass, and deep hypothermic circulatory arrest were required in 78% (n=68), 45% (n=39), and 31% (n=27) of patients, respectively, to allow for total arch replacement, arch debranching, or other concomitant cardiac procedures, including ascending ± hemi-arch replacement in 17% (n=8) of patients undergoing zone 0 repair. All stented elephant trunk procedures (n=20) and 19% (n=9) of zone 0 procedures were staged, with 41% (n=12) of patients undergoing staged repair during a single hospitalization. The 30-day/in-hospital rates of stroke and permanent paraplegia/paraparesis were 4.6% (n=4) and 1.2% (n=1), respectively. Three of 27 (11.1%) patients with a native ascending aorta zone 0 proximal landing zone experienced retrograde type A dissection following endograft placement. The overall in-hospital mortality rate was 5.7% (n=5); however, 30-day/in-hospital mortality increased to 14.9% (n=13) due to eight 30-day out-of-hospital deaths. Native ascending aorta zone 0 endograft placement was found to be the only univariate predictor of 30-day/in-hospital mortality

  15. Chromium coatings by HVOF thermal spraying: Simulation and practical results

    SciTech Connect

    Knotek, O.; Lugscheider, E.; Jokiel, P.; Schnaut, U.; Wiemers, A.

    1994-12-31

    Within recent years, High Velocity Oxygen-Fuel (HVOF) thermal spraying has come to be considered an asset to the family of thermal spraying processes. Especially for spray materials with melting points below 3,000 K it has proven successful, since it shows advantages when compared to coating processes that produce similar qualities. HVOF thermally sprayed coatings seem particularly advantageous for extending thermal spraying into applications requiring rather low coating thicknesses, e.g. about 50-100 µm. The usual evaluation of optimized spraying parameters, including spray distance, traverse speed, gas flow rates, etc., is, however, based on numerous and extensive experiments laid out by trial and error or statistical experimental design, and is thus expensive: manpower and material are required, spray systems are occupied by experimental work, and the optimality of the solution is called into question whenever, for instance, a new powder fraction or nozzle is used. In this paper, the possibility of reducing such experimental effort through modeling and simulation is exemplified for the production of thin chromium coatings with a CDS™ HVOF system. The aim is the production of thermally sprayed chromium coatings competing with galvanic hard chromium platings, which are applied to reduce friction and corrosion but are environmentally disadvantageous during their production.

  16. Stellar populations of stellar halos: Results from the Illustris simulation

    NASA Astrophysics Data System (ADS)

    Cook, B. A.; Conroy, C.; Pillepich, A.; Hernquist, L.

    2016-08-01

    The influence of both major and minor mergers is expected to significantly affect gradients of stellar ages and metallicities in the outskirts of galaxies. Measurements of observed gradients are beginning to reach large radii in galaxies, but a theoretical framework for connecting the findings to a picture of galactic build-up is still in its infancy. We analyze stellar populations of a statistically representative sample of quiescent galaxies over a wide mass range from the Illustris simulation. We measure metallicity and age profiles in the stellar halos of quiescent Illustris galaxies ranging in stellar mass from 10^10 to 10^12 M⊙, accounting for observational projection and luminosity-weighting effects. We find wide variance in stellar population gradients between galaxies of similar mass, with typical gradients agreeing with observed galaxies. We show that, at fixed mass, the fraction of stars born in-situ within galaxies is correlated with the metallicity gradient in the halo, confirming that stellar halos contain unique information about the build-up and merger histories of galaxies.
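
    A luminosity-weighted profile of the kind measured here reduces to a weighted binned average. The short Python sketch below illustrates the idea on mock star-particle data; it is schematic only and uses hypothetical names, not the Illustris analysis pipeline.

        import numpy as np

        def weighted_metallicity_profile(r, z_met, lum, r_edges):
            """Luminosity-weighted mean metallicity in radial bins."""
            which = np.digitize(r, r_edges)          # bin index per star particle
            profile = np.full(len(r_edges) - 1, np.nan)
            for k in range(1, len(r_edges)):
                sel = which == k
                if lum[sel].sum() > 0:
                    profile[k - 1] = np.average(z_met[sel], weights=lum[sel])
            return profile

        rng = np.random.default_rng(1)
        r = rng.random(10000) * 100.0                # radii in kpc (mock)
        z_met = 0.02 * np.exp(-r / 50.0) * rng.lognormal(0, 0.3, r.size)
        lum = rng.lognormal(0, 1.0, r.size)
        print(weighted_metallicity_profile(r, z_met, lum, np.linspace(0, 100, 11)))

    A halo gradient is then simply the slope of such a profile (e.g. of log metallicity against log radius) over the outer bins.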

  17. SLUDGE BATCH 4 SIMULANT FLOWSHEET STUDIES: PHASE II RESULTS

    SciTech Connect

    Stone, M.; Best, D.

    2006-09-12

    The Defense Waste Processing Facility (DWPF) will transition from Sludge Batch 3 (SB3) processing to Sludge Batch 4 (SB4) processing in early fiscal year 2007. Tests were conducted using non-radioactive simulants of the expected SB4 composition to determine the impact of varying the acid stoichiometry during the Sludge Receipt and Adjustment Tank (SRAT) process. The work was conducted to meet Technical Task Request (TTR) HLW/DWPF/TTR-2004-0031 and followed the guidelines of a Task Technical and Quality Assurance Plan (TT&QAP). The flowsheet studies were performed to evaluate potential chemical processing issues, hydrogen generation rates, and process slurry rheological properties as a function of acid stoichiometry. Initial SB4 flowsheet studies were conducted to guide decisions during the sludge batch preparation process. These studies were conducted with the SB4 composition as estimated at the time of the study; the composition has since changed slightly due to changes in the sludges blended to prepare SB4 and in the estimated SB3 heel mass. The following TTR requirements were addressed in this testing: (1) hydrogen and nitrous oxide generation rates as a function of acid stoichiometry; (2) acid quantities and processing times required for mercury removal; (3) acid quantities and processing times required for nitrite destruction; and (4) the impact of SB4 composition (in particular, oxalate, manganese, nickel, mercury, and aluminum) on DWPF processing (e.g., acid addition strategy, foaming, hydrogen generation, REDOX control, and rheology).

  18. Layered analytical radiative transfer model for simulating water color of coastal waters and algorithm development

    NASA Astrophysics Data System (ADS)

    Bostater, Charles R., Jr.; Huddleston, Lisa H.

    2000-12-01

    A remote sensing reflectance model, which describes the transfer of irradiant light within a homogeneous water column, has previously been used by Bostater et al. to simulate the nadir-viewing reflectance just above or below the water surface. Wavelength-dependent features in the water surface reflectance depend upon the nature of the downwelling irradiance, the bottom reflectance, and the water absorption and backscatter coefficients. The latter two are very important coefficients; they depend upon the constituents in the water, and both vary as a function of depth and wavelength in actual water bodies. This paper describes a preliminary approach to the analytical solution of the radiative transfer equations in a two-stream representation of the irradiance field, with variable coefficients due to the depth-dependent concentrations of substances such as chlorophyll pigments, dissolved organic matter, and suspended particulate matter. The analytical model formulation makes use of analytically based solutions to the 2-flow equations. However, in this paper we describe the use of the unique Cauchy boundary conditions previously employed, along with a matrix solution, to allow for the prediction of synthetic water surface reflectance signatures within a nonhomogeneous medium. Observed reflectance signatures, as well as model-derived 'synthetic signatures', are processed using efficient algorithms which demonstrate that the error induced by the layered matrix approach is much less than 1 percent when compared to the analytical homogeneous water column solution. The influence of vertical gradients of water constituents may be extremely important in remote sensing of coastal water constituents, as well as in remote sensing of submerged targets and of different bottom types such as corals, sea grasses, and sand.
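
    To make the layered matrix approach concrete, the Python sketch below discretizes the two-flow equations dE_d/dz = -(a+b)E_d + bE_u and dE_u/dz = (a+b)E_u - bE_d layer by layer, with a known downwelling irradiance at the surface and a bottom reflectance closing the system, and solves the assembled linear system directly. This is a schematic finite-difference stand-in for the analytical layered solution described above; coefficient names and values are assumptions.

        import numpy as np

        def two_flow_layered(a, b, dz, ed_surface, r_bottom):
            """Solve the 2-flow equations over n layers as one linear system.
            a, b: per-layer absorption and backscatter coefficients (1/m)."""
            n = len(a)
            size = 2 * (n + 1)                  # E_d and E_u at each interface
            A = np.zeros((size, size))
            rhs = np.zeros(size)
            Ed = lambda i: 2 * i                # column-index helpers
            Eu = lambda i: 2 * i + 1
            A[0, Ed(0)] = 1.0; rhs[0] = ed_surface        # surface condition
            A[1, Eu(n)] = 1.0; A[1, Ed(n)] = -r_bottom    # bottom: E_u = R_b * E_d
            row = 2
            for i in range(n):                  # midpoint differences in layer i
                c = a[i] + b[i]
                A[row, Ed(i + 1)] = 1 / dz + c / 2
                A[row, Ed(i)] = -1 / dz + c / 2
                A[row, Eu(i)] = A[row, Eu(i + 1)] = -b[i] / 2
                row += 1
                A[row, Eu(i + 1)] = 1 / dz - c / 2
                A[row, Eu(i)] = -1 / dz - c / 2
                A[row, Ed(i)] = A[row, Ed(i + 1)] = b[i] / 2
                row += 1
            x = np.linalg.solve(A, rhs)
            return x[0::2], x[1::2]             # E_d, E_u at the interfaces

        # Ten layers with a chlorophyll-like absorption bulge at mid-depth
        a = np.full(10, 0.3); a[4:7] = 0.8
        ed, eu = two_flow_layered(a, np.full(10, 0.05), 0.5, 1.0, 0.2)
        print("surface reflectance E_u(0)/E_d(0):", eu[0] / ed[0])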

  19. Generalized Scalable Multiple Copy Algorithms for Molecular Dynamics Simulations in NAMD.

    PubMed

    Jiang, Wei; Phillips, James C; Huang, Lei; Fajer, Mikolai; Meng, Yilin; Gumbart, James C; Luo, Yun; Schulten, Klaus; Roux, Benoît

    2014-03-01

    Computational methodologies that couple the dynamical evolution of a set of replicated copies of a system of interest offer powerful and flexible approaches to characterize complex molecular processes. Such multiple copy algorithms (MCAs) can be used to enhance sampling, compute reversible work and free energies, as well as refine transition pathways. Widely used examples of MCAs include temperature and Hamiltonian-tempering replica-exchange molecular dynamics (T-REMD and H-REMD), alchemical free energy perturbation with lambda replica-exchange (FEP/λ-REMD), umbrella sampling with Hamiltonian replica exchange (US/H-REMD), and string method with swarms-of-trajectories conformational transition pathways. Here, we report a robust and general implementation of MCAs for molecular dynamics (MD) simulations in the highly scalable program NAMD built upon the parallel programming system Charm++. Multiple concurrent NAMD instances are launched with internal partitions of Charm++ and located continuously within a single communication world. Messages between NAMD instances are passed by low-level point-to-point communication functions, which are accessible through NAMD's Tcl scripting interface. The communication-enabled Tcl scripting provides a sustainable application interface for end users to realize generalized MCAs without modifying the source code. Illustrative applications of MCAs with fine-grained inter-copy communication structure, including global lambda exchange in FEP/λ-REMD, window swapping US/H-REMD in multidimensional order parameter space, and string method with swarms-of-trajectories were carried out on IBM Blue Gene/Q to demonstrate the versatility and massive scalability of the present implementation. PMID:24944348
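
    The exchange step at the heart of these MCAs is compact enough to show in full. Below is a generic Python sketch of the Metropolis criterion for a nearest-neighbor swap in temperature replica exchange; in NAMD this logic lives in communication-enabled Tcl scripts that pass replica energies via point-to-point messages, so the sketch is a language-neutral illustration, not NAMD's interface.

        import math, random

        def attempt_neighbor_swaps(replicas):
            """One sweep of nearest-neighbor exchange attempts for T-REMD.
            replicas: list of dicts with keys 'beta' (1/kT) and 'energy'."""
            for i in range(len(replicas) - 1):
                lo, hi = replicas[i], replicas[i + 1]
                # Metropolis: accept with prob min(1, exp[(beta_i - beta_j)(E_i - E_j)])
                delta = (lo['beta'] - hi['beta']) * (hi['energy'] - lo['energy'])
                if delta <= 0 or random.random() < math.exp(-delta):
                    lo['beta'], hi['beta'] = hi['beta'], lo['beta']  # swap temperatures

        random.seed(0)
        replicas = [{'beta': b, 'energy': e}
                    for b, e in zip((1.00, 0.90, 0.81), (-105.0, -98.0, -91.0))]
        attempt_neighbor_swaps(replicas)
        print([r['beta'] for r in replicas])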

  20. Development of a Massively Parallel Particle-Mesh Algorithm for Simulations of Galaxy Dynamics and Plasmas

    NASA Astrophysics Data System (ADS)

    Wallin, John

    1996-01-01

    Particle-mesh calculations treat forces and potentials as field quantities which are represented approximately on a mesh. A system of particles is mapped onto this mesh as a density distribution of mass or charge. The Fourier transform is used to convolve this distribution with the Green's function of the potential, and a finite-difference scheme is used to calculate the forces acting on the particles. The computation time scales as Ng log Ng, where Ng is the size of the computational grid. In contrast, the particle-particle method relies on direct summation, so the time for each calculation scales as Np^2, where Np is the number of particles. The particle-mesh method is best suited to simulations with a fixed minimum resolution and to collisionless systems, while hierarchical tree codes have proven superior for collisional systems where two-body interactions are important. Particle-mesh methods still dominate in plasma physics, where collisionless systems are modeled. The CM-200 Connection Machine produced by Thinking Machines Corp. is a data-parallel system. On this system, the front-end computer controls the timing and execution of the parallel processing units. The programming paradigm is Single-Instruction, Multiple-Data (SIMD). The processors on the CM-200 are connected in an N-dimensional hypercube; the largest number of links a message will ever have to make is N. As in all parallel computing, the efficiency of an algorithm is primarily determined by the fraction of time spent communicating compared to that spent computing. Because of the topology of the processors, nearest-neighbor communication is more efficient than general communication.
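
    The whole pipeline described above — density deposit, FFT convolution with the Green's function, and force interpolation back to the particles — fits in a few lines of modern array code. The following 2-D Python/NumPy sketch uses nearest-grid-point (NGP) assignment and normalized units; it illustrates the O(Ng log Ng) method itself, not the CM-200 data-parallel implementation.

        import numpy as np

        def particle_mesh_accel(pos, ng, box):
            """2-D particle-mesh step: deposit, Poisson solve via FFT, gather."""
            h = box / ng
            idx = (pos / h).astype(int) % ng
            rho = np.zeros((ng, ng))
            np.add.at(rho, (idx[:, 0], idx[:, 1]), 1.0)   # NGP mass deposit
            k = 2 * np.pi * np.fft.fftfreq(ng, d=h)
            kx, ky = np.meshgrid(k, k, indexing='ij')
            k2 = kx**2 + ky**2
            k2[0, 0] = 1.0                                 # avoid divide-by-zero
            phi_k = -4 * np.pi * np.fft.fft2(rho) / k2     # Green's function -4*pi/k^2
            phi_k[0, 0] = 0.0                              # drop the zero (mean) mode
            phi = np.fft.ifft2(phi_k).real
            gx, gy = np.gradient(-phi, h)                  # finite-difference force field
            return np.stack((gx[idx[:, 0], idx[:, 1]],     # NGP gather to particles
                             gy[idx[:, 0], idx[:, 1]]), axis=1)

        rng = np.random.default_rng(2)
        pos = rng.random((1000, 2)) * 64.0
        print(particle_mesh_accel(pos, 64, 64.0)[:3])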