Stochastic Formal Correctness of Numerical Algorithms
NASA Technical Reports Server (NTRS)
Daumas, Marc; Lester, David; Martin-Dorel, Erik; Truffert, Annick
2009-01-01
We provide a framework to bound the probability that accumulated errors in numerical algorithms never exceed a given threshold. Such algorithms are used, for example, in aircraft and nuclear power plants. This report contains simple formulas based on Lévy's and Markov's inequalities, and it presents a formal theory of random variables with a special focus on producing concrete results. We selected four very common applications that fit our framework and cover the common practices of systems that evolve over a long time. For the first two applications, we compute the number of bits that remain continuously significant with a probability of failure around one in a billion, where worst-case analysis concludes that no significant bit remains. We use PVS because such formal tools force the explicit statement of all hypotheses and prevent incorrect uses of theorems.
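The report's bounds are developed formally in PVS; as a rough plain-Python illustration of the style of argument (our own sketch, not the paper's formalization), Chebyshev's inequality (Markov's inequality applied to the squared sum) bounds the probability that a sum of n independent rounding errors, each modeled as uniform on [-u/2, u/2], exceeds a threshold t:

```python
def accumulated_error_bound(n, u, t):
    """Chebyshev bound on P(|S_n| >= t) for S_n = sum of n independent
    zero-mean rounding errors, each uniform on [-u/2, u/2].
    Var(S_n) = n * u**2 / 12, so P(|S_n| >= t) <= Var(S_n) / t**2."""
    var = n * u**2 / 12.0
    return min(1.0, var / t**2)

# Example: 10**6 accumulations at double-precision unit roundoff (~2**-53),
# asking that the total error stay below 2**-40.
p = accumulated_error_bound(10**6, 2.0**-53, 2.0**-40)
```

Raising the threshold t tightens the bound, which is the qualitative behavior the report quantifies formally.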
Numerical Algorithms Based on Biorthogonal Wavelets
NASA Technical Reports Server (NTRS)
Ponenti, Pj.; Liandrat, J.
1996-01-01
Wavelet bases are used to generate spaces of approximation for the resolution of bidimensional elliptic and parabolic problems. Under some specific hypotheses relating the properties of the wavelets to the order of the involved operators, it is shown that an approximate solution can be built. This approximation is then stable and converges towards the exact solution. It is designed such that fast algorithms involving biorthogonal multiresolution analyses can be used to solve the corresponding numerical problems. Detailed algorithms are provided as well as the results of numerical tests on partial differential equations defined on the bidimensional torus.
Mathematical model and numerical algorithm for aerodynamical flow
NASA Astrophysics Data System (ADS)
Shaydurov, V.; Shchepanovskaya, G.; Yakubovich, M.
2016-10-01
In the paper, a mathematical model and a numerical algorithm are proposed for modeling an air flow. The proposed model is based on the time-dependent Navier-Stokes equations for viscous heat-conducting gas. The energy equation and the state equations are modified to account for two kinds of "internal" energy. The first one is the usual translational and rotational energy of molecules which defines the thermodynamical temperature and the pressure. The second one is the subgrid energy of small turbulent eddies. A numerical algorithm is proposed for solving the formulated initial-boundary value problem as a combination of the semi-Lagrangian approximation for Lagrange transport derivatives and the conforming finite element method for other terms. A numerical example illustrates these approaches.
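The semi-Lagrangian treatment of the transport derivative can be illustrated in one dimension: each grid point is traced back along its characteristic and the field is interpolated at the departure point (a minimal sketch of the general idea; the paper combines this with conforming finite elements for the remaining terms):

```python
import numpy as np

def semi_lagrangian_step(u, a, dx, dt):
    """One semi-Lagrangian step for u_t + a*u_x = 0 on a periodic grid:
    trace each grid point back along its characteristic and interpolate
    the old field at the departure point."""
    n = len(u)
    x = np.arange(n) * dx
    departure = (x - a * dt) % (n * dx)      # foot of the characteristic
    # periodic linear interpolation at the departure points
    return np.interp(departure, x, u, period=n * dx)

# Advect a Gaussian bump one full period; it should return to its start.
n, dx, a = 200, 1.0 / 200, 1.0
x = np.arange(n) * dx
u0 = np.exp(-200.0 * (x - 0.5) ** 2)
u = u0.copy()
dt = 2.0 * dx        # semi-Lagrangian schemes tolerate CFL numbers above 1
steps = int(round(1.0 / (a * dt)))
for _ in range(steps):
    u = semi_lagrangian_step(u, a, dx, dt)
```

Here a*dt is an exact multiple of dx, so the interpolation is exact and the bump returns unchanged; for general time steps, linear interpolation introduces mild numerical diffusion.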
Analyzing milestoning networks for molecular kinetics: Definitions, algorithms, and examples
NASA Astrophysics Data System (ADS)
Viswanath, Shruthi; Kreuzer, Steven M.; Cardenas, Alfredo E.; Elber, Ron
2013-11-01
Network representations are becoming increasingly popular for analyzing kinetic data from techniques like Milestoning, Markov State Models, and Transition Path Theory. Mapping continuous phase space trajectories into a relatively small number of discrete states helps in visualization of the data and in dissecting complex dynamics into concrete mechanisms. However, not only are molecular networks derived from molecular dynamics simulations growing in number, they are also getting increasingly complex, owing partly to the growth in computer power that allows us to generate longer and better converged trajectories. The increased complexity of the networks makes simple interpretation of, and qualitative insight into, the molecular systems more difficult to achieve. In this paper, we focus on various network representations of kinetic data and on algorithms to identify important edges and pathways in these networks. The kinetic data can be local and partial (such as the value of rate coefficients between states) or an exact solution to kinetic equations for the entire system (such as the stationary flux between vertices). In particular, we focus on the Milestoning method, which provides fluxes as its main output. We propose Global Maximum Weight Pathways as a useful tool for analyzing molecular mechanisms in Milestoning networks. A closely related definition was made in the context of Transition Path Theory. We consider three algorithms to find Global Maximum Weight Pathways: Recursive Dijkstra's, Edge-Elimination, and Edge-List Bisection. The asymptotic efficiency of the algorithms is analyzed, and numerical tests on finite networks show that the Edge-List Bisection and Recursive Dijkstra's algorithms are most efficient for sparse and dense networks, respectively. Pathways are illustrated for two examples: helix unfolding and membrane permeation. Finally, we illustrate that networks based on local kinetic information can lead to incorrect interpretation of molecular mechanisms.
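One common formalization of a maximum-weight pathway maximizes the minimum edge weight (e.g., flux) along the path; below is a Dijkstra-style widest-path sketch under that assumption (our own illustration, not necessarily the paper's exact definition or its Recursive Dijkstra's variant):

```python
import heapq

def widest_path(graph, src, dst):
    """Dijkstra-like search for the path maximizing the minimum edge
    weight (bottleneck) between src and dst.
    graph: {u: [(v, weight), ...]} with non-negative weights."""
    best = {src: float("inf")}      # best bottleneck found so far
    prev = {}
    heap = [(-best[src], src)]      # max-heap via negated bottlenecks
    while heap:
        bneg, u = heapq.heappop(heap)
        if -bneg < best.get(u, -1.0):
            continue                # stale heap entry
        for v, w in graph.get(u, []):
            cand = min(-bneg, w)    # bottleneck of the path through u
            if cand > best.get(v, 0.0):
                best[v] = cand
                prev[v] = u
                heapq.heappush(heap, (-cand, v))
    path, node = [], dst
    while node != src:
        path.append(node)
        node = prev[node]
    path.append(src)
    return path[::-1], best[dst]

g = {"A": [("B", 5.0), ("C", 1.0)], "B": [("D", 3.0)], "C": [("D", 4.0)]}
path, bottleneck = widest_path(g, "A", "D")   # A-B-D, bottleneck 3.0
```

In a Milestoning network the edge weights would be the computed fluxes between milestones.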
Parallel processing of numerical transport algorithms
Wienke, B.R.; Hiromoto, R.E.
1984-01-01
The multigroup, discrete ordinates representation for the linear transport equation enjoys widespread computational use and popularity. Serial solution schemes and numerical algorithms developed over the years provide a timely framework for parallel extension. On the Denelcor HEP, we investigate the parallel structure and extension of a number of standard S_n approaches. Concurrent inner sweeps, coupled acceleration techniques, synchronized inner-outer loops, and chaotic iteration are described, and results of computations are contrasted. The multigroup representation and serial iteration methods are also detailed. The basic iterative S_n method lends itself to parallel tasking, potentially affording an effective medium for performing transport calculations on future architectures. This analysis represents a first attempt to extend serial S_n algorithms to parallel environments and provides good baseline estimates on ease of parallel implementation, relative algorithm efficiency, comparative speedup, and some future directions. We find that the basic inner-outer and chaotic iteration strategies both easily support comparably high degrees of parallelism. Both accommodate parallel rebalance and diffusion acceleration and appear to be robust and viable parallel techniques for S_n production work.
Adaptive Numerical Algorithms in Space Weather Modeling
NASA Technical Reports Server (NTRS)
Toth, Gabor; van der Holst, Bart; Sokolov, Igor V.; DeZeeuw, Darren; Gombosi, Tamas I.; Fang, Fang; Manchester, Ward B.; Meng, Xing; Najib, Dalal; Powell, Kenneth G.; Stout, Quentin F.; Glocer, Alex; Ma, Ying-Juan; Opher, Merav
2010-01-01
Space weather describes the various processes in the Sun-Earth system that present danger to human health and technology. The goal of space weather forecasting is to provide an opportunity to mitigate these negative effects. Physics-based space weather modeling is characterized by disparate temporal and spatial scales as well as by different physics in different domains. A multi-physics system can be modeled by a software framework comprising several components. Each component corresponds to a physics domain, and each component is represented by one or more numerical models. The publicly available Space Weather Modeling Framework (SWMF) can execute and couple together several components distributed over a parallel machine in a flexible and efficient manner. The framework also allows resolving disparate spatial and temporal scales with independent spatial and temporal discretizations in the various models. Several of the computationally most expensive domains of the framework are modeled by the Block-Adaptive Tree Solar wind Roe Upwind Scheme (BATS-R-US) code that can solve various forms of the magnetohydrodynamics (MHD) equations, including Hall, semi-relativistic, multi-species and multi-fluid MHD, anisotropic pressure, radiative transport and heat conduction. Modeling disparate scales within BATS-R-US is achieved by a block-adaptive mesh both in Cartesian and generalized coordinates. Most recently we have created a new core for BATS-R-US: the Block-Adaptive Tree Library (BATL) that provides a general toolkit for creating, load balancing and message passing in a 1, 2 or 3 dimensional block-adaptive grid. We describe the algorithms of BATL and demonstrate its efficiency and scaling properties for various problems. BATS-R-US uses several time-integration schemes to address multiple time-scales: explicit time stepping with fixed or local time steps, partially steady-state evolution, point-implicit, semi-implicit, explicit/implicit, and fully implicit numerical schemes.
Adaptive numerical algorithms in space weather modeling
NASA Astrophysics Data System (ADS)
Tóth, Gábor; van der Holst, Bart; Sokolov, Igor V.; De Zeeuw, Darren L.; Gombosi, Tamas I.; Fang, Fang; Manchester, Ward B.; Meng, Xing; Najib, Dalal; Powell, Kenneth G.; Stout, Quentin F.; Glocer, Alex; Ma, Ying-Juan; Opher, Merav
2012-02-01
Space weather describes the various processes in the Sun-Earth system that present danger to human health and technology. The goal of space weather forecasting is to provide an opportunity to mitigate these negative effects. Physics-based space weather modeling is characterized by disparate temporal and spatial scales as well as by different relevant physics in different domains. A multi-physics system can be modeled by a software framework comprising several components. Each component corresponds to a physics domain, and each component is represented by one or more numerical models. The publicly available Space Weather Modeling Framework (SWMF) can execute and couple together several components distributed over a parallel machine in a flexible and efficient manner. The framework also allows resolving disparate spatial and temporal scales with independent spatial and temporal discretizations in the various models. Several of the computationally most expensive domains of the framework are modeled by the Block-Adaptive Tree Solarwind Roe-type Upwind Scheme (BATS-R-US) code that can solve various forms of the magnetohydrodynamic (MHD) equations, including Hall, semi-relativistic, multi-species and multi-fluid MHD, anisotropic pressure, radiative transport and heat conduction. Modeling disparate scales within BATS-R-US is achieved by a block-adaptive mesh both in Cartesian and generalized coordinates. Most recently we have created a new core for BATS-R-US: the Block-Adaptive Tree Library (BATL) that provides a general toolkit for creating, load balancing and message passing in a 1, 2 or 3 dimensional block-adaptive grid. We describe the algorithms of BATL and demonstrate its efficiency and scaling properties for various problems. BATS-R-US uses several time-integration schemes to address multiple time-scales: explicit time stepping with fixed or local time steps, partially steady-state evolution, point-implicit, semi-implicit, explicit/implicit, and fully implicit time stepping.
A Numerical Algorithm for Finding Solution of Cross-Coupled Algebraic Riccati Equations
NASA Astrophysics Data System (ADS)
Mukaidani, Hiroaki; Yamamoto, Seiji; Yamamoto, Toru
In this letter, a computational approach for solving cross-coupled algebraic Riccati equations (CAREs) is investigated. The main purpose of this letter is to propose a new algorithm that combines Newton's method with a gradient-based iterative (GI) algorithm for solving CAREs. In particular, it is noteworthy that quadratic convergence under an appropriate initial condition and a reduction in dimensions for matrix computation are both achieved. A numerical example is provided to demonstrate the efficiency of the proposed algorithm.
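For the standard (single) algebraic Riccati equation, Newton's method takes the classical Newton-Kleinman form, solving one Lyapunov equation per step. A small dense sketch of that building block follows (our own illustration of the Newton ingredient, not the letter's cross-coupled Newton/GI hybrid):

```python
import numpy as np

def solve_lyapunov(Ac, M):
    """Solve Ac.T @ X + X @ Ac = -M by Kronecker vectorization
    (fine for small n; production codes use Bartels-Stewart)."""
    n = Ac.shape[0]
    L = np.kron(np.eye(n), Ac.T) + np.kron(Ac.T, np.eye(n))
    x = np.linalg.solve(L, -M.flatten(order="F"))
    return x.reshape((n, n), order="F")

def newton_are(A, B, Q, X0, iters=10):
    """Newton-Kleinman iteration for A.T X + X A - X B B.T X + Q = 0:
    each Newton step solves a Lyapunov equation with the closed-loop
    matrix A - B K, which is where quadratic convergence comes from."""
    X = X0
    for _ in range(iters):
        K = B.T @ X
        Ac = A - B @ K
        X = solve_lyapunov(Ac, Q + K.T @ K)
    return X

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # double integrator
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
X = newton_are(A, B, Q, X0=np.ones((2, 2)))   # ones(2,2) is stabilizing here
residual = A.T @ X + X @ A - X @ B @ B.T @ X + Q
```

For this test problem the known solution is X = [[sqrt(3), 1], [1, sqrt(3)]].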
Research on numerical algorithms for large space structures
NASA Technical Reports Server (NTRS)
Denman, E. D.
1982-01-01
Numerical algorithms for large space structures were investigated, with particular emphasis on decoupling methods for analysis and design. Numerous aspects of the analysis of large systems, ranging from the algebraic theory of lambda matrices to identification algorithms, were considered. A general treatment of the algebraic theory of lambda matrices is presented, and the theory is applied to second-order lambda matrices.
Numerical Simulations and Diagnostics in Astrophysics: A Few Magnetohydrodynamics Examples
NASA Astrophysics Data System (ADS)
Peres, Giovanni; Bonito, Rosaria; Orlando, Salvatore; Reale, Fabio
2007-12-01
We discuss some issues related to numerical simulations in astrophysics and, in particular, to their use both as a theoretical tool and as a diagnostic tool to gain insight into the physical phenomena at work. We make our point by presenting some examples of magnetohydrodynamic (MHD) simulations of astrophysical plasmas and illustrating their use. In particular, we show the need for appropriate tools to interpret, visualize, and present results in an adequate form, and the importance of spectral synthesis for a direct comparison with observations.
A Polynomial Time, Numerically Stable Integer Relation Algorithm
NASA Technical Reports Server (NTRS)
Ferguson, Helaman R. P.; Bailey, David H.; Kutler, Paul (Technical Monitor)
1998-01-01
Let x = (x_1, x_2, ..., x_n) be a vector of real numbers. x is said to possess an integer relation if there exist integers a_i, not all zero, such that a_1 x_1 + a_2 x_2 + ... + a_n x_n = 0. Beginning in 1977, several algorithms (with proofs) have been discovered to recover the a_i given x. The most efficient of these existing integer relation algorithms (in terms of run time and the precision required of the input) has the drawback of being very unstable numerically. It often requires a numeric precision level in the thousands of digits to reliably recover relations in modest-sized test problems. We present here a new algorithm for finding integer relations, which we have named the "PSLQ" algorithm. It is proved in this paper that the PSLQ algorithm terminates with a relation in a number of iterations that is bounded by a polynomial in n. Because this algorithm employs a numerically stable matrix reduction procedure, it is free from the numerical difficulties that plague other integer relation algorithms. Furthermore, its stability admits an efficient implementation with lower run times on average than other algorithms currently in use. Finally, this stability can be used to prove that relation bounds obtained from computer runs using this algorithm are numerically accurate.
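For intuition only, the definition of an integer relation can be checked by exhaustive search over small coefficients (this brute force is exponential in n and in the coefficient size; PSLQ's point is precisely to avoid it):

```python
import itertools
import math

def small_integer_relation(x, max_coeff=4, tol=1e-9):
    """Exhaustive search for integers a (not all zero) with sum(a_i x_i) ~ 0.
    Illustrates only the *definition* of an integer relation; PSLQ finds
    relations via a numerically stable matrix reduction instead."""
    n = len(x)
    for coeffs in itertools.product(range(-max_coeff, max_coeff + 1), repeat=n):
        if any(coeffs) and abs(sum(a * xi for a, xi in zip(coeffs, x))) < tol:
            return coeffs
    return None

# sqrt(8) = 2*sqrt(2), so (0, 2, -1) is a relation (up to scaling)
# for the vector [1, sqrt(2), sqrt(8)].
rel = small_integer_relation([1.0, math.sqrt(2), math.sqrt(8)])
```

The search may return any integer multiple of the primitive relation, so the test checks the residual rather than exact coefficients.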
Digital super-resolution microscopy using example-based algorithm
NASA Astrophysics Data System (ADS)
Ishikawa, Shinji; Hayasaki, Yoshio
2015-05-01
We propose a super-resolution microscopy technique based on a confocal optical setup and an example-based algorithm. The example-based super-resolution algorithm uses an example database constructed by learning many pairs of a high-resolution patch and a low-resolution patch. The high-resolution patch is a part of the high-resolution image of an object model expressed in a computer, and the low-resolution patch is calculated from the high-resolution patch, taking into account the spatial properties of an optical microscope. In the reconstruction process, a low-resolution image observed by the confocal optical setup with an image sensor is converted to the super-resolved high-resolution image selected from the example database by a pattern matching method. We demonstrate that adequate selection of the patch size and of the weighted superposition method achieves super-resolution at a low signal-to-noise ratio.
Numerical comparison of Kalman filter algorithms - Orbit determination case study
NASA Technical Reports Server (NTRS)
Bierman, G. J.; Thornton, C. L.
1977-01-01
Numerical characteristics of various Kalman filter algorithms are illustrated with a realistic orbit determination study. The case study of this paper highlights the numerical deficiencies of the conventional and stabilized Kalman algorithms. Computational errors associated with these algorithms are found to be so large as to obscure important mismodeling effects and thus cause misleading estimates of filter accuracy. The positive result of this study is that the U-D covariance factorization algorithm has excellent numerical properties and is computationally efficient, having CPU costs that differ negligibly from the conventional Kalman costs. Accuracies of the U-D filter using single precision arithmetic consistently match the double precision reference results. Numerical stability of the U-D filter is further demonstrated by its insensitivity to variations in the a priori statistics.
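The U-D factorization underlying the filter writes a covariance as P = U D Uᵀ with U unit upper triangular and D diagonal, so the filter can propagate U and D instead of P itself. A minimal factorization sketch (our own, not Bierman's in-place production code):

```python
import numpy as np

def ud_factorize(P):
    """Factor a symmetric positive-definite P as P = U @ diag(D) @ U.T with
    U unit upper triangular and D diagonal (the U-D form used by Bierman's
    filter mechanization)."""
    P = P.copy().astype(float)
    n = P.shape[0]
    U = np.eye(n)
    D = np.zeros(n)
    for j in range(n - 1, -1, -1):
        D[j] = P[j, j]
        U[:j, j] = P[:j, j] / D[j]
        # deflate: remove the rank-one contribution of column j
        P[:j, :j] -= D[j] * np.outer(U[:j, j], U[:j, j])
    return U, D

P = np.array([[4.0, 2.0], [2.0, 3.0]])
U, D = ud_factorize(P)
# reconstruction: U @ np.diag(D) @ U.T recovers P
```

Because only D carries the magnitudes, the factors stay well scaled in single precision, which is the numerical property the study highlights.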
A Numerical Instability in an ADI Algorithm for Gyrokinetics
E.A. Belli; G.W. Hammett
2004-12-17
We explore the implementation of an Alternating Direction Implicit (ADI) algorithm for a gyrokinetic plasma problem and its resulting numerical stability properties. This algorithm, which uses a standard ADI scheme to divide the field solve from the particle distribution function advance, has previously been found to work well for certain plasma kinetic problems involving one spatial and two velocity dimensions, including collisions and an electric field. However, for the gyrokinetic problem we find a severe stability restriction on the time step. Furthermore, we find that this numerical instability limitation also affects some other algorithms, such as a partially implicit Adams-Bashforth algorithm, where the parallel motion operator v_∥ ∂/∂z is treated implicitly and the field terms are treated with an Adams-Bashforth explicit scheme. Fully explicit algorithms applied to all terms can be better at long wavelengths than these ADI or partially implicit algorithms.
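The basic ADI mechanics referred to above can be sketched on the 2D heat equation (our own generic Peaceman-Rachford example, not the paper's field/particle splitting): each half-step is implicit in one direction and explicit in the other, so only one-dimensional systems are solved.

```python
import numpy as np

def adi_heat_step(u, r):
    """One Peaceman-Rachford ADI step for u_t = u_xx + u_yy with zero
    Dirichlet boundaries, r = dt / (2*dx**2).  Sweep 1 is implicit in x
    and explicit in y; sweep 2 swaps the roles."""
    n = u.shape[0]
    L = (np.diag(np.ones(n - 1), -1) - 2.0 * np.eye(n)
         + np.diag(np.ones(n - 1), 1))        # 1D second-difference stencil
    A = np.eye(n) - r * L                     # implicit half-step operator
    B = np.eye(n) + r * L                     # explicit half-step operator
    half = np.linalg.solve(A, u @ B)          # implicit in x, explicit in y
    return np.linalg.solve(A, (B @ half).T).T # implicit in y, explicit in x

# heat spreading from a hot spot: the peak decays, symmetry is preserved
n = 32
u = np.zeros((n, n))
u[n // 2, n // 2] = 1.0
for _ in range(20):
    u = adi_heat_step(u, r=0.5)
```

For the pure heat equation this splitting is unconditionally stable; the paper's point is that the analogous splitting between field solve and distribution-function advance loses that property in the gyrokinetic setting.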
Research on numerical algorithms for large space structures
NASA Technical Reports Server (NTRS)
Denman, E. D.
1981-01-01
Numerical algorithms for analysis and design of large space structures are investigated. The sign algorithm and its application to decoupling of differential equations are presented. The generalized sign algorithm is given and its application to several problems discussed. The Laplace transforms of matrix functions and the diagonalization procedure for a finite element equation are discussed. The diagonalization of matrix polynomials is considered. The quadrature method and Laplace transforms are discussed, and the identification of linear systems by the quadrature method is investigated.
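The sign algorithm mentioned above is commonly implemented with the Newton iteration S ← (S + S⁻¹)/2; a minimal sketch under that assumption:

```python
import numpy as np

def matrix_sign(A, tol=1e-12, max_iter=50):
    """Newton iteration S <- (S + inv(S))/2 for the matrix sign function.
    Converges quadratically when A has no purely imaginary eigenvalues;
    sign(A)**2 = I, and its spectral projectors (I +/- sign(A))/2 decouple
    the stable and unstable invariant subspaces."""
    S = A.astype(float)
    for _ in range(max_iter):
        S_next = 0.5 * (S + np.linalg.inv(S))
        if np.linalg.norm(S_next - S, 1) < tol * np.linalg.norm(S, 1):
            return S_next
        S = S_next
    return S

A = np.array([[3.0, 1.0],
              [0.0, -2.0]])
S = matrix_sign(A)   # eigenvalues 3 and -2 map to +1 and -1
```

For this triangular example the exact answer is [[1, 0.4], [0, -1]], since the off-diagonal entry scales by the divided difference (1 - (-1))/(3 - (-2)).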
A numerical solution algorithm and its application to studies of pulsed light fields propagation
NASA Astrophysics Data System (ADS)
Banakh, V. A.; Gerasimova, L. O.; Smalikho, I. N.; Falits, A. V.
2016-08-01
A new method for studying pulsed laser beam propagation in a turbulent atmosphere is proposed. The numerical simulation algorithm is based on the solution of the parabolic wave equation for the complex spectral amplitude of the wave field using the method of splitting into physical factors. Examples of the use of the algorithm for the propagation of pulsed Laguerre-Gaussian beams of femtosecond duration in a turbulent atmosphere are shown.
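A minimal sketch of the split-step idea for the parabolic wave equation (free-space diffraction only; our example omits the random phase screens a turbulence simulation would interleave between steps):

```python
import numpy as np

def split_step_propagate(field, dx, wavelength, dz, n_steps):
    """Free-space propagation of a 1D complex field by the split-step
    (angular-spectrum) method: each step applies the Fresnel phase factor
    in Fourier space.  A turbulence simulation would multiply by a random
    phase screen in real space between these diffraction steps."""
    n = field.size
    k = 2 * np.pi / wavelength
    kx = 2 * np.pi * np.fft.fftfreq(n, d=dx)       # transverse wavenumbers
    phase = np.exp(-1j * kx**2 * dz / (2 * k))     # paraxial propagator
    for _ in range(n_steps):
        field = np.fft.ifft(np.fft.fft(field) * phase)
    return field

# a Gaussian beam spreads beyond its Rayleigh range but conserves power
n, dx = 1024, 1e-4
x = (np.arange(n) - n // 2) * dx
beam = np.exp(-(x / 5e-3) ** 2).astype(complex)
out = split_step_propagate(beam, dx, wavelength=1e-6, dz=10.0, n_steps=10)
```

Because the propagator has unit modulus, the total power is conserved exactly while the on-axis intensity drops as the beam diffracts.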
A Numerical Algorithm for the Solution of a Phase-Field Model of Polycrystalline Materials
Dorr, M R; Fattebert, J; Wickett, M E; Belak, J F; Turchi, P A
2008-12-04
We describe an algorithm for the numerical solution of a phase-field model (PFM) of microstructure evolution in polycrystalline materials. The PFM system of equations includes a local order parameter, a quaternion representation of local orientation and a species composition parameter. The algorithm is based on the implicit integration of a semidiscretization of the PFM system using a backward difference formula (BDF) temporal discretization combined with a Newton-Krylov algorithm to solve the nonlinear system at each time step. The BDF algorithm is combined with a coordinate projection method to maintain quaternion unit length, which is related to an important solution invariant. A key element of the Newton-Krylov algorithm is the selection of a preconditioner to accelerate the convergence of the Generalized Minimum Residual algorithm used to solve the Jacobian linear system in each Newton step. Results are presented for the application of the algorithm to 2D and 3D examples.
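As a much-reduced sketch of the time integration described above, here is a first-order BDF (backward Euler) step with a dense Newton solve, plus the coordinate projection that restores the unit-quaternion invariant (our own toy; the paper's solver uses higher-order BDF with a preconditioned Newton-Krylov/GMRES linear solve):

```python
import numpy as np

def backward_euler_step(f, jac, y, dt, newton_iters=10, tol=1e-12):
    """One BDF1 (backward Euler) step solving y_new = y + dt*f(y_new)
    with Newton's method on the residual r(y_new) = y_new - y - dt*f(y_new).
    Here the Jacobian system is solved densely; a Krylov method (GMRES)
    would replace np.linalg.solve for large semidiscretized systems."""
    y_new = y.copy()                        # predictor: previous value
    for _ in range(newton_iters):
        r = y_new - y - dt * f(y_new)       # nonlinear residual
        if np.linalg.norm(r) < tol:
            break
        J = np.eye(len(y)) - dt * jac(y_new)
        y_new = y_new - np.linalg.solve(J, r)
    return y_new

def project_unit(q):
    """Coordinate projection: renormalize a quaternion after each step so
    the unit-length invariant |q| = 1 is maintained."""
    return q / np.linalg.norm(q)

# stiff linear test: y' = -1000*y stays stable at dt far above 1/1000
f = lambda y: -1000.0 * y
jac = lambda y: np.array([[-1000.0]])
y = np.array([1.0])
for _ in range(5):
    y = backward_euler_step(f, jac, y, dt=0.1)
```

Each implicit step damps the stiff mode by a factor of 1/101, which an explicit method at this step size could not do stably.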
A hybrid artificial bee colony algorithm for numerical function optimization
NASA Astrophysics Data System (ADS)
Alqattan, Zakaria N.; Abdullah, Rosni
2015-02-01
Artificial Bee Colony (ABC) algorithm is one of the swarm intelligence algorithms; it was introduced by Karaboga in 2005. It is a meta-heuristic optimization search algorithm inspired by the intelligent foraging behavior of honey bees in nature. Its unique search process has made it one of the most competitive algorithms in the area of optimization, alongside search algorithms such as the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO). However, the performance of the ABC local search process and of its bee-movement (solution improvement) equation still has some weaknesses. The ABC is good at avoiding trapping at a local optimum, but it spends its time searching around unpromising, randomly selected solutions. Inspired by PSO, we propose a Hybrid Particle-movement ABC algorithm, called HPABC, which adapts the particle movement process to improve the exploration of the original ABC algorithm. Numerical benchmark functions were used in order to experimentally test the HPABC algorithm. The results illustrate that the HPABC algorithm can outperform the ABC algorithm in most of the experiments (75% better in accuracy and over 3 times faster).
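A minimal ABC skeleton showing the employed/onlooker/scout phases that HPABC modifies (our own compact sketch, not the paper's algorithm):

```python
import numpy as np

def abc_minimize(f, dim, n_food=20, limit=20, iters=200, seed=0):
    """Minimal Artificial Bee Colony sketch: employed bees perturb their own
    food source toward a random partner, onlookers favor good sources, and
    a scout replaces any source that stagnates for more than `limit` trials."""
    rng = np.random.default_rng(seed)
    lo, hi = -5.0, 5.0
    foods = rng.uniform(lo, hi, (n_food, dim))
    fit = np.array([f(x) for x in foods])
    trials = np.zeros(n_food, dtype=int)

    def try_move(i):
        k = rng.integers(n_food - 1)
        k = k if k < i else k + 1                  # random partner != i
        j = rng.integers(dim)                      # perturb one coordinate
        cand = foods[i].copy()
        cand[j] += rng.uniform(-1, 1) * (foods[i, j] - foods[k, j])
        if f(cand) < fit[i]:                       # greedy selection
            foods[i], fit[i], trials[i] = cand, f(cand), 0
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(n_food):                    # employed bee phase
            try_move(i)
        p = fit.max() - fit + 1e-12                # onlookers prefer low fit
        p = p / p.sum()
        for i in rng.choice(n_food, n_food, p=p):  # onlooker phase
            try_move(i)
        worst = trials.argmax()                    # scout phase
        if trials[worst] > limit:
            foods[worst] = rng.uniform(lo, hi, dim)
            fit[worst] = f(foods[worst])
            trials[worst] = 0
    return foods[fit.argmin()], fit.min()

best_x, best_f = abc_minimize(lambda x: np.sum(x**2), dim=5)
```

The one-coordinate perturbation above is exactly the local move the paper identifies as weak; HPABC replaces it with a PSO-style particle movement.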
An efficient cuckoo search algorithm for numerical function optimization
NASA Astrophysics Data System (ADS)
Ong, Pauline; Zainuddin, Zarita
2013-04-01
The cuckoo search algorithm, which reproduces the breeding strategy of the best known brood parasitic bird, the cuckoo, has demonstrated its superiority in obtaining the global solution for numerical optimization problems. However, the fixed step approach involved in its exploration and exploitation behavior might slow down the search process considerably. In this regard, an improved cuckoo search algorithm with adaptive step size adjustment is introduced and its feasibility on a variety of benchmarks is validated. The obtained results show that the proposed scheme outperforms the standard cuckoo search algorithm in terms of convergence characteristics while preserving the fascinating features of the original method.
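A compact cuckoo-search sketch with Lévy flights (generated via Mantegna's algorithm) and a simple decaying step size standing in for adaptive adjustment (our own assumption, not the authors' exact scheme):

```python
import numpy as np

def levy_step(rng, dim, beta=1.5):
    """Mantegna's algorithm for heavy-tailed Levy-stable step lengths."""
    from math import gamma, sin, pi
    sigma = (gamma(1 + beta) * sin(pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

def cuckoo_minimize(f, dim, n_nests=15, iters=300, pa=0.25, seed=1):
    """Compact cuckoo search: new solutions via Levy flights, random nest
    replacement, and abandonment of a fraction pa of the worst nests.
    The step size alpha decays with iteration as a stand-in for adaptive
    step-size adjustment."""
    rng = np.random.default_rng(seed)
    nests = rng.uniform(-5, 5, (n_nests, dim))
    fit = np.array([f(x) for x in nests])
    for t in range(iters):
        alpha = 1.0 / (1.0 + t)                    # decaying step size
        for i in range(n_nests):
            cand = nests[i] + alpha * levy_step(rng, dim)
            j = rng.integers(n_nests)              # compare with random nest
            if f(cand) < fit[j]:
                nests[j], fit[j] = cand, f(cand)
        n_bad = int(pa * n_nests)                  # abandon the worst nests
        worst = np.argsort(fit)[-n_bad:]
        nests[worst] = rng.uniform(-5, 5, (n_bad, dim))
        fit[worst] = [f(x) for x in nests[worst]]
    return nests[fit.argmin()], fit.min()

best_x, best_f = cuckoo_minimize(lambda x: np.sum(x**2), dim=5)
```

Shrinking the step over time trades early exploration (large heavy-tailed jumps) for late exploitation, which is the trade-off the paper's adaptive scheme tunes.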
Multiresolution representation and numerical algorithms: A brief review
NASA Technical Reports Server (NTRS)
Harten, Amiram
1994-01-01
In this paper we review recent developments in techniques to represent data in terms of its local scale components. These techniques enable us to obtain data compression by eliminating scale-coefficients which are sufficiently small. This capability for data compression can be used to reduce the cost of many numerical solution algorithms by either applying it to the numerical solution operator in order to get an approximate sparse representation, or by applying it to the numerical solution itself in order to reduce the number of quantities that need to be computed.
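A minimal Haar-wavelet version of the scale decomposition and threshold-based compression described above (our illustration; Harten's framework covers far more general multiresolution reconstructions):

```python
import numpy as np

def haar_decompose(u, levels):
    """Multiresolution (Haar) decomposition into coarse averages plus
    detail (local scale) coefficients at each level."""
    details = []
    for _ in range(levels):
        avg = 0.5 * (u[0::2] + u[1::2])
        details.append(0.5 * (u[0::2] - u[1::2]))
        u = avg
    return u, details

def haar_reconstruct(coarse, details, threshold=0.0):
    """Invert the decomposition, zeroing detail coefficients below the
    threshold -- the data-compression step described in the review."""
    u = coarse
    for d in reversed(details):
        d = np.where(np.abs(d) > threshold, d, 0.0)
        up = np.empty(2 * len(u))
        up[0::2] = u + d
        up[1::2] = u - d
        u = up
    return u

x = np.linspace(0, 1, 256)
signal = np.sin(2 * np.pi * x)
coarse, details = haar_decompose(signal, levels=4)
approx = haar_reconstruct(coarse, details, threshold=1e-3)
```

For smooth data the fine-scale details are small, so many coefficients can be discarded at a controlled error, which is the mechanism for sparsifying solution operators or solutions.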
Fast Quantum Algorithms for Numerical Integrals and Stochastic Processes
NASA Technical Reports Server (NTRS)
Abrams, D.; Williams, C.
1999-01-01
We discuss quantum algorithms that calculate numerical integrals and descriptive statistics of stochastic processes. With either of two distinct approaches, one obtains an exponential speed increase in comparison to the fastest known classical deterministic algorithms and a quadratic speed increase in comparison to classical Monte Carlo methods.
A novel bee swarm optimization algorithm for numerical function optimization
NASA Astrophysics Data System (ADS)
Akbari, Reza; Mohammadi, Alireza; Ziarati, Koorush
2010-10-01
The optimization algorithms inspired by the intelligent behavior of honey bees are among the most recently introduced population-based techniques. In this paper, a novel algorithm called bee swarm optimization (BSO) and two extensions for improving its performance are presented. The BSO is a population-based optimization technique inspired by the foraging behavior of honey bees. The proposed approach provides different patterns which are used by the bees to adjust their flying trajectories. As the first extension, the BSO algorithm introduces approaches such as a repulsion factor and penalizing fitness (RP) to mitigate the stagnation problem. Second, to efficiently maintain the balance between exploration and exploitation, time-varying weights (TVW) are introduced into the BSO algorithm. The proposed algorithm (BSO) and its two extensions (BSO-RP and BSO-RPTVW) are compared with existing algorithms based on the intelligent behavior of honey bees on a set of well-known numerical test functions. The experimental results show that the BSO algorithms are effective and robust: they produce excellent results and outperform the other algorithms investigated in this comparison.
Numerical algorithms for the atomistic dopant profiling of semiconductor materials
NASA Astrophysics Data System (ADS)
Aghaei Anvigh, Samira
In this dissertation, we investigate the possibility of using scanning microscopy techniques such as scanning capacitance microscopy (SCM) and scanning spreading resistance microscopy (SSRM) for the "atomistic" dopant profiling of semiconductor materials. For this purpose, we first analyze the discrete effects of random dopant fluctuations (RDF) on SCM and SSRM measurements with nanoscale probes and show that RDF significantly affects the differential capacitance and spreading resistance of the SCM and SSRM measurements if the dimension of the probe is below 50 nm. Then, we develop a mathematical algorithm to compute the spatial coordinates of the ionized impurities in the depletion region using a set of scanning microscopy measurements. The proposed numerical algorithm is then applied to extract the (x, y, z) coordinates of ionized impurities in the depletion region for a few semiconductor materials with different doping configurations. The numerical algorithm developed to solve the above inverse problem is based on the evaluation of doping sensitivity functions of the differential capacitance, which show how sensitive the differential capacitance is to doping variations at different locations. To develop the numerical algorithm, we first express the doping sensitivity functions in terms of the Gâteaux derivative of the differential capacitance, use the Riesz representation theorem, and then apply a gradient optimization approach to compute the locations of the dopants. The algorithm is verified numerically using 2-D simulations, in which the C-V curves are measured at 3 different locations on the surface of the semiconductor. Although the cases studied in this dissertation are highly idealized and, in reality, the C-V measurements are subject to noise and other experimental errors, it is shown that if the differential capacitance is measured precisely, SCM measurements can potentially be used for the "atomistic" profiling of ionized impurities in doped semiconductors.
Numerical Laplace Transform Inversion Employing the Gaver-Stehfest Algorithm.
ERIC Educational Resources Information Center
Jacquot, Raymond G.; And Others
1985-01-01
Presents a technique for the numerical inversion of Laplace Transforms and several examples employing this technique. Limitations of the method in terms of available computer word length and the effects of these limitations on approximate inverse functions are also discussed. (JN)
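A sketch of the Gaver-Stehfest inversion using the standard weight formula; the rapid growth of the weights with the number of terms N is exactly where the finite computer word length limitation discussed in the article bites:

```python
from math import factorial, log

def stehfest_weights(N):
    """Gaver-Stehfest weights V_k for an even number N of terms
    (standard formula; the V_k alternate in sign and grow with N,
    causing catastrophic cancellation in fixed precision)."""
    V = []
    for k in range(1, N + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            s += (j ** (N // 2) * factorial(2 * j)
                  / (factorial(N // 2 - j) * factorial(j) * factorial(j - 1)
                     * factorial(k - j) * factorial(2 * j - k)))
        V.append((-1) ** (k + N // 2) * s)
    return V

def invert_laplace(F, t, N=12):
    """Approximate f(t) from its Laplace transform F(s) by the
    Gaver-Stehfest algorithm (uses only real values of s)."""
    V = stehfest_weights(N)
    a = log(2.0) / t
    return a * sum(V[k - 1] * F(k * a) for k in range(1, N + 1))

# F(s) = 1/(s+1) is the transform of exp(-t); check at t = 1
approx = invert_laplace(lambda s: 1.0 / (s + 1.0), t=1.0)
```

With N = 12 in double precision the result is accurate to several digits; pushing N higher without extra precision makes the answer worse, which is the word-length effect the article examines.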
CARVE--a constructive algorithm for real-valued examples.
Young, S; Downs, T
1998-01-01
A constructive neural-network algorithm is presented. For any consistent classification task on real-valued training vectors, the algorithm constructs a feedforward network with a single hidden layer of threshold units which implements the task. The algorithm, which we call CARVE, extends the "sequential learning" algorithm of Marchand et al. from Boolean inputs to the real-valued input case, and uses convex hull methods for the determination of the network weights. The algorithm is an efficient training scheme for producing near-minimal network solutions for arbitrary classification tasks. The algorithm is applied to a number of benchmark problems including Gorman and Sejnowski's sonar data, the Monks problems and Fisher's iris data. A significant application of the constructive algorithm is in providing an initial network topology and initial weights for other neural-network training schemes and this is demonstrated by application to backpropagation.
An algorithm for the numerical solution of linear differential games
Polovinkin, E S; Ivanov, G E; Balashov, M V; Konstantinov, R V; Khorev, A V
2001-10-31
A numerical algorithm for the construction of stable Krasovskii bridges, Pontryagin alternating sets, and also of piecewise program strategies solving two-person linear differential (pursuit or evasion) games on a fixed time interval is developed on the basis of a general theory. The aim of the first player (the pursuer) is to hit a prescribed target (terminal) set by the phase vector of the control system at the prescribed time. The aim of the second player (the evader) is the opposite. A description of numerical algorithms used in the solution of differential games of the type under consideration is presented and estimates of the errors resulting from the approximation of the game sets by polyhedra are presented.
Computational Fluid Dynamics. [numerical methods and algorithm development
NASA Technical Reports Server (NTRS)
1992-01-01
This collection of papers was presented at the Computational Fluid Dynamics (CFD) Conference held at Ames Research Center in California on March 12 through 14, 1991. It is an overview of CFD activities at NASA Lewis Research Center. The main thrust of computational work at Lewis is aimed at propulsion systems. Specific issues related to propulsion CFD and associated modeling will also be presented. Examples of results obtained with the most recent algorithm development will also be presented.
Algorithms for the Fractional Calculus: A Selection of Numerical Methods
NASA Technical Reports Server (NTRS)
Diethelm, K.; Ford, N. J.; Freed, A. D.; Luchko, Yu.
2003-01-01
Many recently developed models in areas like viscoelasticity, electrochemistry, diffusion processes, etc. are formulated in terms of derivatives (and integrals) of fractional (non-integer) order. In this paper we present a collection of numerical algorithms for the solution of the various problems arising in this context. We believe that this will give the engineer the necessary tools required to work with fractional models in an efficient way.
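As one concrete instance of such algorithms (our example, not a method taken from the paper), the Grünwald-Letnikov definition gives a direct finite-difference-style approximation of a fractional derivative:

```python
from math import sqrt, pi

def gl_fractional_derivative(f, alpha, t, n=2000):
    """Grunwald-Letnikov approximation of the order-alpha derivative of f
    at t, using n steps of size h = t/n.  The recurrence
    w_{k+1} = w_k * (k - alpha) / (k + 1) generates the weights
    (-1)^k * C(alpha, k) without evaluating binomial coefficients."""
    h = t / n
    total, w = 0.0, 1.0
    for k in range(n + 1):
        total += w * f(t - k * h)
        w *= (k - alpha) / (k + 1)   # next weight
    return total / h ** alpha

# the half-derivative of f(t) = t is 2*sqrt(t/pi); check at t = 1
approx = gl_fractional_derivative(lambda t: t, alpha=0.5, t=1.0)
exact = 2.0 / sqrt(pi)
```

Setting alpha = 1 recovers the ordinary backward difference, so the same code interpolates smoothly between integer orders.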
Predictive Lateral Logic for Numerical Entry Guidance Algorithms
NASA Technical Reports Server (NTRS)
Smith, Kelly M.
2016-01-01
Recent entry guidance algorithm development has tended to focus on numerical integration of trajectories onboard in order to evaluate candidate bank profiles. Such methods enjoy benefits such as flexibility with respect to varying mission profiles and improved robustness to large dispersions. A common element across many of these modern entry guidance algorithms is a reliance upon the Apollo-heritage concept of lateral error (or azimuth error) deadbands, in which the number of bank reversals to be performed is non-deterministic. This paper presents a closed-loop bank reversal method that operates with a fixed number of bank reversals defined prior to flight. However, this number of bank reversals can be modified at any point, including in flight, based on contingencies such as fuel leaks where propellant usage must be minimized.
Algorithm-Based Fault Tolerance for Numerical Subroutines
NASA Technical Reports Server (NTRS)
Tumon, Michael; Granat, Robert; Lou, John
2007-01-01
A software library implements a new methodology of detecting faults in numerical subroutines, thus enabling application programs that contain the subroutines to recover transparently from single-event upsets. The software library in question is fault-detecting middleware that is wrapped around the numerical subroutines. Conventional serial versions (based on LAPACK and FFTW) and a parallel version (based on ScaLAPACK) exist. The source code of the application program that contains the numerical subroutines is not modified, and the middleware is transparent to the user. The methodology used is a type of algorithm-based fault tolerance (ABFT). In ABFT, a checksum is computed before a computation and compared with the checksum of the computational result; an error is declared if the difference between the checksums exceeds some threshold. Novel normalization methods are used in the checksum comparison to ensure correct fault detection independent of algorithm inputs. In tests of this software reported in the peer-reviewed literature, this library was shown to enable detection of 99.9 percent of significant faults while generating no false alarms.
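The checksum idea behind ABFT can be illustrated on matrix multiplication: the column sums of C = A*B must equal (1^T A) B, and the comparison can be normalized by the checksum magnitude so the threshold does not depend on input scale, in the spirit of the normalization methods mentioned above. This is a minimal sketch with assumed names and tolerances, not the library's actual interface:

```python
def matmul(A, B):
    """Plain dense matrix product on lists of lists."""
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def abft_check(A, B, C, rel_tol=1e-9):
    """Checksum test in the ABFT spirit: the column sums of C must equal
    (ones^T A) B. The difference is normalized by the magnitude of the
    expected checksum so the threshold is scale-independent."""
    colsum_C = [sum(row[j] for row in C) for j in range(len(C[0]))]
    ones_A = [sum(A[i][k] for i in range(len(A))) for k in range(len(A[0]))]
    expected = [sum(ones_A[k] * B[k][j] for k in range(len(B)))
                for j in range(len(B[0]))]
    scale = max(1.0, max(abs(x) for x in expected))
    return all(abs(c - e) / scale <= rel_tol
               for c, e in zip(colsum_C, expected))
```

A single corrupted entry of C perturbs exactly one column checksum, so a bit flip injected after the product is computed is flagged by the check.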
Understanding disordered systems through numerical simulation and algorithm development
NASA Astrophysics Data System (ADS)
Sweeney, Sean Michael
Disordered systems arise in many physical contexts. Not all matter is uniform, and impurities or heterogeneities can be modeled by fixed random disorder. Numerous complex networks also possess fixed disorder, leading to applications in transportation systems, telecommunications, social networks, and epidemic modeling, to name a few. Due to their random nature and power law critical behavior, disordered systems are difficult to study analytically. Numerical simulation can help overcome this hurdle by allowing for the rapid computation of system states. In order to get precise statistics and extrapolate to the thermodynamic limit, large systems must be studied over many realizations. Thus, innovative algorithm development is essential in order to reduce the memory or running time requirements of simulations. This thesis presents a review of disordered systems, as well as a thorough study of two particular systems through numerical simulation, algorithm development and optimization, and careful statistical analysis of scaling properties. Chapter 1 provides a thorough overview of disordered systems, the history of their study in the physics community, and the development of techniques used to study them. Topics of quenched disorder, phase transitions, the renormalization group, criticality, and scale invariance are discussed. Several prominent models of disordered systems are also explained. Lastly, analysis techniques used in studying disordered systems are covered. In Chapter 2, minimal spanning trees on critical percolation clusters are studied, motivated in part by an analytic perturbation expansion by Jackson and Read that I check against numerical calculations. This system has a direct mapping to the ground state of the strongly disordered spin glass. We compute the path length fractal dimension of these trees in dimensions d = {2, 3, 4, 5} and find our results to be compatible with the analytic results suggested by Jackson and Read. In Chapter 3, the random bond Ising
Improvements in algorithms for phenotype inference: the NAT2 example.
Selinski, Silvia; Blaszkewicz, Meinolf; Ickstadt, Katja; Hengstler, Jan G; Golka, Klaus
2014-02-01
Numerous studies have analyzed the impact of N-acetyltransferase 2 (NAT2) polymorphisms on drug efficacy, side effects as well as cancer risk. Here, we present the state of the art of deriving haplotypes from polymorphisms and discuss the available software. PHASE v2.1 is currently considered a gold standard for NAT2 haplotype assignment. In vitro studies have shown that some slow acetylation genotypes confer reduced protein stability. This has been observed particularly for G191A, T341C and G590A. Substantial ethnic variations of the acetylation status have been described. The advent of agriculture and the resulting change in diet probably created a selection pressure for slow acetylation. In recent years much research has been done to reduce the complexity of NAT2 genotyping. Deriving the haplotype from seven SNPs is still considered a gold standard. However, several studies have meanwhile shown that a two-SNP combination, C282T and T341C, results in a similarly good distinction in Caucasians. Attempts to further reduce complexity to only one 'tagging SNP' (rs1495741), however, may lead to wrong predictions, with phenotypically slow acetylators genotyped as intermediate or rapid. Numerous studies have shown that slow NAT2 haplotypes are associated with increased urinary bladder cancer risk and increased risk of anti-tuberculosis drug-induced hepatotoxicity. A drawback of the current practice of solely discriminating slow, intermediate and rapid genotypes for phenotype inference is limited resolution of differences between slow acetylators. Future developments to differentiate between slow and ultra-slow genotypes may further improve individualized drug dosing and epidemiological studies of cancer risk.
Verifying Algorithms for Autonomous Aircraft by Simulation Generalities and Example
NASA Technical Reports Server (NTRS)
White, Allan L.
2010-01-01
An open question in Air Traffic Management is what procedures can be validated by simulation, where the simulation shows that the probability of undesirable events is below the required level at some confidence level. The problem is including enough realism to be convincing while retaining enough efficiency to run the large number of trials needed for high confidence. The paper first examines the probabilistic interpretation of a typical requirement by a regulatory agency and computes the number of trials needed to establish the requirement at an equivalent confidence level. Since any simulation is likely to consider only one type of event and there are several types of events, the paper examines under what conditions this separate consideration is valid. The paper establishes a separation algorithm at the required confidence level where the aircraft operates under feedback control and is subject to perturbations. There is a discussion where it is shown that a scenario three or four orders of magnitude more complex is feasible. The question of what can be validated by simulation remains open, but there is reason to be optimistic.
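The first step described above, converting a probabilistic requirement into a trial count, can be sketched directly. Assuming zero observed failures and the exact binomial argument (the function name and interface are illustrative, not from the paper), the smallest N with (1 - p_max)^N <= 1 - confidence is:

```python
from math import ceil, log

def trials_needed(p_max, confidence):
    """Number of failure-free Monte Carlo trials needed to claim the true
    failure probability is below p_max at the given confidence level.

    Solves (1 - p_max)**N <= 1 - confidence for the smallest integer N,
    i.e. the exact binomial bound with zero observed failures.
    """
    return ceil(log(1.0 - confidence) / log(1.0 - p_max))
```

For a requirement of the order 10^-9 per event at 99 percent confidence, this gives roughly 4.6 billion failure-free trials, which is exactly why efficiency of the simulation matters so much here.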
Linsen, Sarah; Torbeyns, Joke; Verschaffel, Lieven; Reynvoet, Bert; De Smedt, Bert
2016-03-01
There are two well-known computation methods for solving multi-digit subtraction items, namely mental and algorithmic computation. It has been contended that mental and algorithmic computation differentially rely on numerical magnitude processing, an assumption that has already been examined in children, but not yet in adults. Therefore, in this study, we examined how numerical magnitude processing was associated with mental and algorithmic computation, and whether this association with numerical magnitude processing was different for mental versus algorithmic computation. We also investigated whether the association between numerical magnitude processing and mental and algorithmic computation differed for measures of symbolic versus nonsymbolic numerical magnitude processing. Results showed that symbolic, and not nonsymbolic, numerical magnitude processing was associated with mental computation, but not with algorithmic computation. Additional analyses showed, however, that the size of this association with symbolic numerical magnitude processing was not significantly different for mental and algorithmic computation. We also tried to further clarify the association between numerical magnitude processing and complex calculation by also including relevant arithmetical subskills, i.e. arithmetic facts, needed for complex calculation that are also known to be dependent on numerical magnitude processing. Results showed that the associations between symbolic numerical magnitude processing and mental and algorithmic computation were fully explained by individual differences in elementary arithmetic fact knowledge. PMID:26914586
A stable and efficient numerical algorithm for unconfined aquifer analysis
Keating, Elizabeth; Zyvoloski, George
2008-01-01
The non-linearity of equations governing flow in unconfined aquifers poses challenges for numerical models, particularly in field-scale applications. Existing methods are often unstable, do not converge, or require extremely fine grids and small time steps. Standard modeling procedures such as automated model calibration and Monte Carlo uncertainty analysis typically require thousands of forward model runs. Stable and efficient model performance is essential to these analyses. We propose a new method that offers improvements in stability and efficiency, and is relatively tolerant of coarse grids. It applies a strategy similar to that in the MODFLOW code to the solution of Richards' equation with a grid-dependent pressure/saturation relationship. The method imposes a contrast between horizontal and vertical permeability in gridblocks containing the water table. We establish the accuracy of the method by comparison to an analytical solution for radial flow to a well in an unconfined aquifer with delayed yield. Using a suite of test problems, we demonstrate the efficiencies gained in speed and accuracy over two-phase simulations, and improved stability when compared to MODFLOW. The advantages for applications to transient unconfined aquifer analysis are clearly demonstrated by our examples. We also demonstrate applicability to mixed vadose zone/saturated zone applications, including transport, and find that the method shows great promise for these types of problems as well.
NASA Astrophysics Data System (ADS)
Chernyaev, Yu. A.
2016-03-01
A numerical algorithm for minimizing a convex function on a smooth surface is proposed. The algorithm is based on reducing the original problem to a sequence of convex programming problems. Necessary extremum conditions are examined, and the convergence of the algorithm is analyzed.
Topics in Randomized Algorithms for Numerical Linear Algebra
NASA Astrophysics Data System (ADS)
Holodnak, John T.
In this dissertation, we present results for three topics in randomized algorithms. Each topic is related to random sampling. We begin by studying a randomized algorithm for matrix multiplication that randomly samples outer products. We show that if a set of deterministic conditions is satisfied, then the algorithm can compute the exact product. In addition, we show probabilistic bounds on the two-norm relative error of the algorithm. In the second part, we discuss the sensitivity of leverage scores to perturbations. Leverage scores are scalar quantities that give a notion of importance to the rows of a matrix. They are used as sampling probabilities in many randomized algorithms. We show bounds on the difference between the leverage scores of a matrix and a perturbation of the matrix. In the last part, we approximate functions over an active subspace of parameters. To identify the active subspace, we apply an algorithm that relies on a random sampling scheme. We show bounds on the accuracy of the active subspace identification algorithm and construct an approximation to a function with 3556 parameters using a ten-dimensional active subspace.
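The outer-product sampling idea from the first part can be sketched as follows. Sampling probabilities proportional to the product of column and row norms are the standard choice in this literature, though the dissertation's exact scheme may differ, and the interface here is illustrative:

```python
import random

def sampled_matmul(A, B, c, rng=random):
    """Randomized estimate of A @ B: draw c outer products (column k of A
    times row k of B) with probability p_k proportional to |a_k| * |b_k|,
    rescaling each draw by 1/(c * p_k) so the estimate is unbiased."""
    n = len(A[0])
    norm = lambda v: sum(x * x for x in v) ** 0.5
    weights = [norm([row[k] for row in A]) * norm(B[k]) for k in range(n)]
    total = sum(weights)
    p = [w / total for w in weights]
    est = [[0.0] * len(B[0]) for _ in A]
    for _ in range(c):
        k = rng.choices(range(n), weights=p)[0]
        scale = 1.0 / (c * p[k])
        for i in range(len(A)):
            for j in range(len(B[0])):
                est[i][j] += scale * A[i][k] * B[k][j]
    return est
```

One deterministic condition under which the estimate is exact is easy to see from the rescaling: if every sampled outer product, after scaling, equals the same matrix (for instance when all columns of A and rows of B coincide), the estimate matches the true product for any number of samples.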
Analysis of the numerical effects of parallelism on a parallel genetic algorithm
Hart, W.E.; Belew, R.K.; Kohn, S.; Baden, S.
1995-09-18
This paper examines the effects of relaxed synchronization on both the numerical and parallel efficiency of parallel genetic algorithms (GAs). We describe a coarse-grain geographically structured parallel genetic algorithm. Our experiments show that asynchronous versions of these algorithms have a lower run time than synchronous GAs. Furthermore, we demonstrate that this improvement in performance is partly due to the fact that the numerical efficiency of the asynchronous genetic algorithm is better than that of the synchronous genetic algorithm. Our analysis includes a critique of the utility of traditional parallel performance measures for parallel GAs, and we evaluate the claims made by several researchers that parallel GAs can have superlinear speedup.
A numerical comparison of discrete Kalman filtering algorithms: An orbit determination case study
NASA Technical Reports Server (NTRS)
Thornton, C. L.; Bierman, G. J.
1976-01-01
The numerical stability and accuracy of various Kalman filter algorithms are thoroughly studied. Numerical results and conclusions are based on a realistic planetary approach orbit determination study. The case study results of this report highlight the numerical instability of the conventional and stabilized Kalman algorithms. Numerical errors associated with these algorithms can be so large as to obscure important mismodeling effects and thus give misleading estimates of filter accuracy. The positive result of this study is that the Bierman-Thornton U-D covariance factorization algorithm is computationally efficient, with CPU costs that differ negligibly from the conventional Kalman costs. In addition, accuracy of the U-D filter using single-precision arithmetic consistently matches the double-precision reference results. Numerical stability of the U-D filter is further demonstrated by its insensitivity to variations in the a priori statistics.
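The U-D factorization at the heart of the Bierman-Thornton filter writes a covariance P as U diag(D) U^T with U unit upper triangular and D diagonal, so updates can propagate the factors instead of P itself. A minimal factorization sketch (dense arithmetic, illustrative interface, no filter update step) is:

```python
def udu_factor(P):
    """U-D factorization: returns (U, D) with U unit upper triangular and
    D a list of diagonal entries such that P = U diag(D) U^T.
    P must be symmetric positive definite; columns are processed from
    last to first, as in Bierman's formulation."""
    n = len(P)
    U = [[0.0] * n for _ in range(n)]
    D = [0.0] * n
    for j in reversed(range(n)):
        D[j] = P[j][j] - sum(D[k] * U[j][k] ** 2 for k in range(j + 1, n))
        U[j][j] = 1.0
        for i in range(j):
            U[i][j] = (P[i][j] - sum(D[k] * U[i][k] * U[j][k]
                                     for k in range(j + 1, n))) / D[j]
    return U, D
```

Because the factors never square the entries of P, reconstructing U diag(D) U^T recovers P to working precision, which is the property behind the single-precision accuracy observed in the study.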
Neural net algorithms that learn in polynomial time from examples and queries.
Baum, E B
1991-01-01
An algorithm which trains networks using examples and queries is proposed. In a query, the algorithm supplies a y and is told t(y) by an oracle. Queries appear to be available in practice for most problems of interest, e.g. by appeal to a human expert. The author's algorithm is proved to PAC learn in polynomial time the class of target functions defined by layered, depth-two threshold nets having n inputs connected to k hidden threshold units connected to one or more output units, provided k ≤ 4. While target functions and input distributions can be described for which the algorithm will fail for larger k, it appears likely to work well in practice. Tests of a variant of the algorithm have consistently and rapidly learned random nets of this type. Computational efficiency figures are given. The algorithm can also be proved to learn intersections of k half-spaces in R^n in time polynomial in both n and k. A variant of the algorithm can learn arbitrary depth layered threshold networks with n inputs and k units in the first hidden layer in time polynomial in the larger of n and k but exponential in the smaller of the two.
Three-Dimensional SIP Imaging of Rock Core Sample: Numerical Examples
NASA Astrophysics Data System (ADS)
Son, J.; Kim, J.; Yi, M.
2007-12-01
We apply the developed inversion algorithm to the simulated SIP responses. Although the number of electrodes is limited to 16, we can clearly see the conductive and reactive anomalous zones in the inverted results. Since the development of the numerical modeling and inversion algorithms is complete, we will apply them to laboratory test results in the near future. We hope that the developed algorithm can image the change of physical properties during CO2 injection into a rock sample.
NASA Astrophysics Data System (ADS)
Mattei, D.; Smith, I.; Ferrari, A.; Carbillet, M.
2010-10-01
Post-processing for exoplanet detection using direct imaging requires large data cubes and/or sophisticated signal processing techniques. For alt-azimuthal mounts, a projection effect called field rotation makes a potential planet rotate in a known manner on the set of images. For ground-based telescopes that use extreme adaptive optics and advanced coronagraphy, techniques based on field rotation are already broadly used and still in progress. In most such techniques, for a given initial position of the planet, the planet intensity estimate is a linear function of the set of images. However, due to field rotation the modified instrumental response applied is not shift-invariant like usual linear filters. Testing all possible initial positions is therefore very time-consuming. To reduce the processing time, we propose to deal with each subset of initial positions on a different machine using parallel programming. In particular, the MOODS algorithm dedicated to the VLT-SPHERE instrument, which estimates jointly the light contributions of the star and the potential exoplanet, is parallelized on the Observatoire de la Cote d'Azur cluster. Different parallelization methods (OpenMP, MPI, Jobs Array) have been elaborated for the initial MOODS code and compared to each other. The one finally chosen splits the initial positions over the available processors by accounting at best for the different constraints of the cluster structure: memory, job submission queues, number of available CPUs, and cluster average load. In the end, a standard set of images is satisfactorily processed in a few hours instead of a few days.
Numerical Optimization Algorithms and Software for Systems Biology
Saunders, Michael
2013-02-02
The basic aims of this work are: to develop reliable algorithms for solving optimization problems involving large stoichiometric matrices; to investigate cyclic dependency between metabolic and macromolecular biosynthetic networks; and to quantify the significance of thermodynamic constraints on prokaryotic metabolism.
Chaotic algorithms: A numerical exploration of the dynamics of a stiff photoconductor model
Markus, A.S. de
1997-04-01
The photoconducting property of semiconductors leads, in general, to very complex kinetics for the charge carriers due to the non-equilibrium processes involved. In a semiconductor with one type of trap, the dynamics of the photoconducting process are described by a set of coupled non-linear ordinary differential equations, where n and p are the free electron and hole densities and m is the trapped electron density at time t. So far there is no known closed-form solution for this set of non-linear differential equations, and therefore numerical integration techniques have to be employed, for example the standard Runge-Kutta (RK) procedure. Each of the mechanisms of generation, recombination, and trapping has its own lifetime, which means that different time constants are to be expected in the time-dependent behavior of the photocurrent. Thus, depending on the parameters of the model, the system may become stiff if the time scales of n, m, and p separate considerably. This situation may impose considerable stress on a fixed-step numerical algorithm such as RK, which may then produce unreliable results, and other methods have to be considered. Therefore, the purpose of this note is to examine, for a critical range of parameters, the results of the numerical integration of the stiff system obtained by standard numerical schemes, such as the single-step fourth-order Runge-Kutta method and the multistep Gear method, the latter being appropriate for a stiff system of equations. 7 refs., 2 figs.
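The failure mode described, a fixed-step explicit method applied to a stiff system, is easy to reproduce on the scalar test equation y' = -lambda*y (a stand-in for the photoconductor model, whose equations are not reproduced here). Classical RK4 diverges once lambda*h exceeds its real-axis stability limit of about 2.8, while the implicit backward Euler update stays bounded for any positive step:

```python
def rk4_decay(lam, h, steps, y=1.0):
    """Classical fixed-step RK4 applied to y' = -lam * y."""
    f = lambda y: -lam * y
    for _ in range(steps):
        k1 = f(y)
        k2 = f(y + 0.5 * h * k1)
        k3 = f(y + 0.5 * h * k2)
        k4 = f(y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return y

def backward_euler_decay(lam, h, steps, y=1.0):
    """Implicit (backward) Euler for the same equation; the update
    y_{n+1} = y_n / (1 + lam * h) is stable for any positive step."""
    for _ in range(steps):
        y /= 1.0 + lam * h
    return y
```

With lam = 1000 and h = 0.01 (so lam*h = 10), RK4 amplifies the solution by a factor of about 291 per step while backward Euler damps it, which is the qualitative contrast between the RK and Gear-type methods discussed in the note.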
Numerical comparison of discrete Kalman filter algorithms - Orbit determination case study
NASA Technical Reports Server (NTRS)
Bierman, G. J.; Thornton, C. L.
1976-01-01
Numerical characteristics of various Kalman filter algorithms are illustrated with a realistic orbit determination study. The case study of this paper highlights the numerical deficiencies of the conventional and stabilized Kalman algorithms. Computational errors associated with these algorithms are found to be so large as to obscure important mismodeling effects and thus cause misleading estimates of filter accuracy. The positive result of this study is that the U-D covariance factorization algorithm has excellent numerical properties and is computationally efficient, having CPU costs that differ negligibly from the conventional Kalman costs. Accuracies of the U-D filter using single precision arithmetic consistently match the double precision reference results. Numerical stability of the U-D filter is further demonstrated by its insensitivity to variations in the a priori statistics.
Numerical stability analysis of the pseudo-spectral analytical time-domain PIC algorithm
Godfrey, Brendan B.; Vay, Jean-Luc; Haber, Irving
2014-02-01
The pseudo-spectral analytical time-domain (PSATD) particle-in-cell (PIC) algorithm solves the vacuum Maxwell's equations exactly, has no Courant time-step limit (as conventionally defined), and offers substantial flexibility in plasma and particle beam simulations. It is, however, not free of the usual numerical instabilities, including the numerical Cherenkov instability, when applied to relativistic beam simulations. This paper derives and solves the numerical dispersion relation for the PSATD algorithm and compares the results with corresponding behavior of the more conventional pseudo-spectral time-domain (PSTD) and finite difference time-domain (FDTD) algorithms. In general, PSATD offers superior stability properties over a reasonable range of time steps. More importantly, one version of the PSATD algorithm, when combined with digital filtering, is almost completely free of the numerical Cherenkov instability for time steps (scaled to the speed of light) comparable to or smaller than the axial cell size.
A bibliography on parallel and vector numerical algorithms
NASA Technical Reports Server (NTRS)
Ortega, J. M.; Voigt, R. G.
1987-01-01
This is a bibliography of numerical methods. It also includes a number of other references on machine architecture, programming language, and other topics of interest to scientific computing. Certain conference proceedings and anthologies which have been published in book form are listed also.
A bibliography on parallel and vector numerical algorithms
NASA Technical Reports Server (NTRS)
Ortega, James M.; Voigt, Robert G.; Romine, Charles H.
1988-01-01
This is a bibliography on numerical methods. It also includes a number of other references on machine architecture, programming language, and other topics of interest to scientific computing. Certain conference proceedings and anthologies which have been published in book form are also listed.
On vortex loops and filaments: three examples of numerical predictions of flows containing vortices.
Krause, Egon
2003-01-01
Vortex motion plays a dominant role in many flow problems. This article aims at demonstrating some of the characteristic features of vortices with the aid of numerical solutions of the governing equations of fluid mechanics, the Navier-Stokes equations. Their discretized forms will first be reviewed briefly. Thereafter three problems of fluid flow involving vortex loops and filaments are discussed. In the first, the time-dependent motion and the mutual interaction of two colliding vortex rings are discussed, predicted in good agreement with experimental observations. The second example shows how vortex rings are generated, move, and interact with each other during the suction stroke in the cylinder of an automotive engine. The numerical results, validated with experimental data, suggest that vortex rings can be used to influence the spreading of the fuel droplets prior to ignition and reduce the fuel consumption. In the third example, it is shown that vortices can also occur in aerodynamic flows over delta wings at angle of attack as well as pipe flows: of particular interest for technical applications of these flows is the situation in which the vortex cores are destroyed, usually referred to as vortex breakdown or bursting. Although reliable breakdown criteria could not be established as yet, the numerical predictions obtained so far are found to agree well with the few experimental data available in the recent literature.
Fourier analysis of numerical algorithms for the Maxwell equations
NASA Technical Reports Server (NTRS)
Liu, Yen
1993-01-01
The Fourier method is used to analyze the dispersive, dissipative, and isotropy errors of various spatial and time discretizations applied to the Maxwell equations on multi-dimensional grids. Both Cartesian grids and non-Cartesian grids based on hexagons and tetradecahedra are studied and compared. The numerical errors are quantitatively determined in terms of phase speed, wave number, propagation direction, gridspacings, and CFL number. The study shows that centered schemes are more efficient than upwind schemes. The non-Cartesian grids yield superior isotropy and higher accuracy than the Cartesian ones. For the centered schemes, the staggered grids produce less errors than the unstaggered ones. A new unstaggered scheme which has all the best properties is introduced. The study also demonstrates that a proper choice of time discretization can reduce the overall numerical errors due to the spatial discretization.
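The kind of quantitative error measure used in the study can be illustrated in one dimension: a semi-discrete Fourier analysis of the second-order centered difference for linear advection u_t + c u_x = 0 shows that a mode exp(i k x) propagates at speed c*sin(kh)/(kh) instead of c. This scalar 1D example is an illustration of the technique, not the multidimensional Maxwell analysis of the paper:

```python
from math import sin

def phase_speed_ratio(kh):
    """Ratio of numerical to exact phase speed for the second-order
    centered difference applied to u_t + c u_x = 0, as a function of
    the nondimensional wavenumber kh (grid spacing h)."""
    return sin(kh) / kh if kh else 1.0
```

The ratio approaches 1 - (kh)^2/6 for well-resolved modes and drops to 2/pi at the four-points-per-wavelength mode kh = pi/2, showing how dispersive (phase) error grows with wavenumber.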
An Example-Based Super-Resolution Algorithm for Selfie Images.
William, Jino Hans; Venkateswaran, N; Narayanan, Srinath; Ramachandran, Sandeep
2016-01-01
A selfie is typically a self-portrait captured using the front camera of a smartphone. Most state-of-the-art smartphones are equipped with a high-resolution (HR) rear camera and a low-resolution (LR) front camera. As selfies are captured by the front camera with limited pixel resolution, fine details are inevitably missed. This paper aims to improve the resolution of selfies by exploiting the fine details in HR images captured by the rear camera using an example-based super-resolution (SR) algorithm. HR images captured by the rear camera carry significant fine details and are used as an exemplar to train an optimal matrix-value regression (MVR) operator. The MVR operator serves as an image-pair prior which learns the correspondence between the LR-HR patch pairs and is effectively used to super-resolve LR selfie images. The proposed MVR algorithm avoids vectorization of image patch pairs and preserves image-level information during both the learning and recovery process. The proposed algorithm is evaluated for its efficiency and effectiveness both qualitatively and quantitatively against other state-of-the-art SR algorithms. The results validate that the proposed algorithm is efficient, as it requires less than 3 seconds to super-resolve an LR selfie, and effective, as it preserves sharp details without introducing any counterfeit fine details.
An Example-Based Super-Resolution Algorithm for Selfie Images
William, Jino Hans; Venkateswaran, N.; Narayanan, Srinath; Ramachandran, Sandeep
2016-01-01
A selfie is typically a self-portrait captured using the front camera of a smartphone. Most state-of-the-art smartphones are equipped with a high-resolution (HR) rear camera and a low-resolution (LR) front camera. As selfies are captured by the front camera with limited pixel resolution, fine details are inevitably missed. This paper aims to improve the resolution of selfies by exploiting the fine details in HR images captured by the rear camera using an example-based super-resolution (SR) algorithm. HR images captured by the rear camera carry significant fine details and are used as an exemplar to train an optimal matrix-value regression (MVR) operator. The MVR operator serves as an image-pair prior which learns the correspondence between the LR-HR patch pairs and is effectively used to super-resolve LR selfie images. The proposed MVR algorithm avoids vectorization of image patch pairs and preserves image-level information during both the learning and recovery process. The proposed algorithm is evaluated for its efficiency and effectiveness both qualitatively and quantitatively against other state-of-the-art SR algorithms. The results validate that the proposed algorithm is efficient, as it requires less than 3 seconds to super-resolve an LR selfie, and effective, as it preserves sharp details without introducing any counterfeit fine details. PMID:27064500
Stochastic algorithms for the analysis of numerical flame simulations
Bell, John B.; Day, Marcus S.; Grcar, Joseph F.; Lijewski, Michael J.
2001-12-14
Recent progress in simulation methodologies and new, high-performance parallel architectures have made it possible to perform detailed simulations of multidimensional combustion phenomena using comprehensive kinetics mechanisms. However, as simulation complexity increases, it becomes increasingly difficult to extract detailed quantitative information about the flame from the numerical solution, particularly regarding the details of chemical processes. In this paper we present a new diagnostic tool for analysis of numerical simulations of combustion phenomena. Our approach is based on recasting an Eulerian flow solution in a Lagrangian frame. Unlike a conventional Lagrangian viewpoint in which we follow the evolution of a volume of the fluid, we instead follow specific chemical elements, e.g., carbon, nitrogen, etc., as they move through the system. From this perspective an ''atom'' is part of some molecule that is transported through the domain by advection and diffusion. Reactions cause the atom to shift from one species to another with the subsequent transport given by the movement of the new species. We represent these processes using a stochastic particle formulation that treats advection deterministically and models diffusion as a suitable random-walk process. Within this probabilistic framework, reactions can be viewed as a Markov process transforming molecule to molecule with given probabilities. In this paper, we discuss the numerical issues in more detail and demonstrate that an ensemble of stochastic trajectories can accurately capture key features of the continuum solution. We also illustrate how the method can be applied to studying the role of cyanochemistry on NOx production in a diffusion flame.
Stochastic algorithms for the analysis of numerical flame simulations
Bell, John B.; Day, Marcus S.; Grcar, Joseph F.; Lijewski, Michael J.
2004-04-26
Recent progress in simulation methodologies and high-performance parallel computers have made it possible to perform detailed simulations of multidimensional reacting flow phenomena using comprehensive kinetics mechanisms. As simulations become larger and more complex, it becomes increasingly difficult to extract useful information from the numerical solution, particularly regarding the interactions of the chemical reaction and diffusion processes. In this paper we present a new diagnostic tool for analysis of numerical simulations of reacting flow. Our approach is based on recasting an Eulerian flow solution in a Lagrangian frame. Unlike a conventional Lagrangian viewpoint that follows the evolution of a volume of the fluid, we instead follow specific chemical elements, e.g., carbon, nitrogen, etc., as they move through the system. From this perspective an ''atom'' is part of some molecule of a species that is transported through the domain by advection and diffusion. Reactions cause the atom to shift from one chemical host species to another and the subsequent transport of the atom is given by the movement of the new species. We represent these processes using a stochastic particle formulation that treats advection deterministically and models diffusion and chemistry as stochastic processes. In this paper, we discuss the numerical issues in detail and demonstrate that an ensemble of stochastic trajectories can accurately capture key features of the continuum solution. The capabilities of this diagnostic are then demonstrated by applications to study the modulation of carbon chemistry during a vortex-flame interaction, and the role of cyanochemistry in NOx production for a steady diffusion flame.
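The transport model described above, deterministic advection plus diffusion as a Gaussian random walk, can be sketched for a single coordinate. Reaction hopping between host species (the Markov-process part) is omitted, and all parameter names are illustrative:

```python
import random

def random_walk_ensemble(n_particles, u, D, dt, steps, seed=0):
    """Stochastic-particle sketch of the transport step: each particle is
    advected deterministically by u*dt and diffused by a Gaussian kick
    with variance 2*D*dt per step. Returns final positions, starting
    from x = 0 for all particles."""
    rng = random.Random(seed)
    sigma = (2.0 * D * dt) ** 0.5
    xs = [0.0] * n_particles
    for _ in range(steps):
        xs = [x + u * dt + sigma * rng.gauss(0.0, 1.0) for x in xs]
    return xs
```

The continuum check is that the ensemble mean drifts as u*t while the variance grows as 2*D*t, which is what assertions on a seeded ensemble can verify.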
NASA Astrophysics Data System (ADS)
Kojic, M.; Mijailovic, S.; Zdravkovic, N.
Complex behaviour of connective tissue can be modeled by the fiber-fiber kinetics material model introduced in Mijailovic (1991), Mijailovic et al. (1993). The model is based on the hypothesis of sliding of elastic fibers with Coulomb and viscous friction. The main characteristics of the model were verified experimentally in Mijailovic (1991), and a numerical procedure for one-dimensional tension was developed, considering sliding as a contact problem between bodies. In this paper we propose a new and general numerical procedure for calculating the stress-strain law of the fiber-fiber kinetics model in the case of Coulomb friction. Instead of using a contact algorithm (Mijailovic 1991), which is numerically inefficient and not sufficiently reliable, here the history of sliding along the sliding length is traced numerically through a number of segments along the fiber. The algorithm is simple, efficient and reliable and provides solutions for arbitrary cyclic loading, including tension, shear, and tension and shear simultaneously, giving hysteresis loops typical of soft tissue response. The model is implemented in the finite element framework, providing the possibility of its application to general and real problems. Solved examples illustrate the main characteristics of the model and of the developed numerical method, as well as its applicability to practical problems. The accuracy of some results, for the simple case of uniaxial loading, is verified by comparison with analytical solutions.
Thrombosis modeling in intracranial aneurysms: a lattice Boltzmann numerical algorithm
NASA Astrophysics Data System (ADS)
Ouared, R.; Chopard, B.; Stahl, B.; Rüfenacht, D. A.; Yilmaz, H.; Courbebaisse, G.
2008-07-01
The lattice Boltzmann numerical method is applied to model blood flow (plasma and platelets) and clotting in intracranial aneurysms at a mesoscopic level. The dynamics of blood clotting (thrombosis) is governed by mechanical variations of shear stress near the wall that influence platelet-wall interactions. Thrombosis starts and grows below a shear-rate threshold, and stops above it. Under this assumption, it is possible to account qualitatively for partial, full or no occlusion of the aneurysm, and to explain why spontaneous thrombosis is more likely to occur in giant aneurysms than in small or medium-sized aneurysms.
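A minimal sketch of the shear-rate threshold rule stated above: a wall site begins or continues clotting while the local shear rate is below the threshold, and growth simply halts above it. The list-based wall layout and the irreversibility of deposition are simplifying assumptions, not the paper's lattice Boltzmann implementation.

```python
def update_clot(shear_rate, clotted, threshold):
    """Toy threshold rule: a wall site clots when its local shear rate is
    below the threshold; above it, growth halts (already-clotted sites are
    kept, i.e. deposition is assumed irreversible here)."""
    return [c or (s < threshold) for s, c in zip(shear_rate, clotted)]
```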
Copps, Kevin D.; Carnes, Brian R.
2008-04-01
We examine algorithms for the finite element approximation of thermal contact models. We focus on the implementation of thermal contact algorithms in SIERRA Mechanics. Following the mathematical formulation of models for tied contact and resistance contact, we present three numerical algorithms: (1) the multi-point constraint (MPC) algorithm, (2) a resistance algorithm, and (3) a new generalized algorithm. We compare and contrast both the correctness and performance of the algorithms in three test problems. We tabulate the convergence rates of global norms of the temperature solution on sequentially refined meshes. We present the results of a parameter study of the effect of contact search tolerances. We outline best practices in using the software for predictive simulations, and suggest future improvements to the implementation.
Numerical Algorithms for Precise and Efficient Orbit Propagation and Positioning
NASA Astrophysics Data System (ADS)
Bradley, Ben K.
Motivated by the growing space catalog and the demands for precise orbit determination with shorter latency for science and reconnaissance missions, this research improves the computational performance of orbit propagation through more efficient and precise numerical integration and frame transformation implementations. Propagation of satellite orbits is required for astrodynamics applications including mission design, orbit determination in support of operations and payload data analysis, and conjunction assessment. Each of these applications has somewhat different requirements in terms of accuracy, precision, latency, and computational load. This dissertation develops procedures to achieve various levels of accuracy while minimizing computational cost for diverse orbit determination applications. This is done by addressing two aspects of orbit determination: (1) numerical integration used for orbit propagation and (2) precise frame transformations necessary for force model evaluation and station coordinate rotations. This dissertation describes a recently developed method for numerical integration, dubbed Bandlimited Collocation Implicit Runge-Kutta (BLC-IRK), and compares its efficiency in propagating orbits with that of existing techniques commonly used in astrodynamics. The BLC-IRK scheme uses generalized Gaussian quadratures for bandlimited functions. It requires significantly fewer force function evaluations than explicit Runge-Kutta schemes and approaches the efficiency of the 8th-order Gauss-Jackson multistep method. Converting between the Geocentric Celestial Reference System (GCRS) and International Terrestrial Reference System (ITRS) is necessary for many applications in astrodynamics, such as orbit propagation, orbit determination, and analyzing geoscience data from satellite missions. This dissertation provides simplifications to the Celestial Intermediate Origin (CIO) transformation scheme and Earth orientation parameter (EOP) storage for use in positioning and
Comparison of Fully Numerical Predictor-Corrector and Apollo Skip Entry Guidance Algorithms
NASA Astrophysics Data System (ADS)
Brunner, Christopher W.; Lu, Ping
2012-09-01
The dramatic increase in computational power since the Apollo program has enabled the development of numerical predictor-corrector (NPC) entry guidance algorithms that allow on-board accurate determination of a vehicle's trajectory. These algorithms are sufficiently mature to be flown. They are highly adaptive, especially in the face of extreme dispersion and off-nominal situations compared with reference-trajectory following algorithms. The performance and reliability of entry guidance are critical to mission success. This paper compares the performance of a recently developed fully numerical predictor-corrector entry guidance (FNPEG) algorithm with that of the Apollo skip entry guidance. Through extensive dispersion testing, it is clearly demonstrated that the Apollo skip entry guidance algorithm would be inadequate in meeting the landing precision requirement for missions with medium (4000-7000 km) and long (>7000 km) downrange capability requirements under moderate dispersions chiefly due to poor modeling of atmospheric drag. In the presence of large dispersions, a significant number of failures occur even for short-range missions due to the deviation from planned reference trajectories. The FNPEG algorithm, on the other hand, is able to ensure high landing precision in all cases tested. All factors considered, a strong case is made for adopting fully numerical algorithms for future skip entry missions.
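The predict-correct loop common to NPC entry guidance can be sketched with a toy monotone range model standing in for the full numeric trajectory propagation; `predicted_range` and its cosine form are purely illustrative assumptions, not the FNPEG model.

```python
import math

def predicted_range(bank_angle_deg, scale=8000.0):
    # Hypothetical monotone surrogate: shallower bank -> longer downrange.
    return scale * math.cos(math.radians(bank_angle_deg))

def correct_bank(target_range, lo=0.0, hi=90.0, tol=1.0, predict=predicted_range):
    """Predictor-corrector skeleton: repeatedly 'predict' the downrange for a
    candidate bank angle, then 'correct' the angle by bisection until the
    predicted range matches the target within tol (km in this toy setup)."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if predict(mid) > target_range + tol:
            lo = mid          # too much range: steepen the bank
        elif predict(mid) < target_range - tol:
            hi = mid          # too little range: shallow the bank
        else:
            return mid
    return 0.5 * (lo + hi)
```

In a real NPC algorithm the `predict` step integrates the entry dynamics to the target; only the correct-by-search structure is shown here.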
Structural algorithm to reservoir reconstruction using passive seismic data (synthetic example)
Smaglichenko, Tatyana A.; Volodin, Igor A.; Lukyanitsa, Andrei A.; Smaglichenko, Alexander V.; Sayankina, Maria K.
2012-09-26
Using of passive seismic observations to detect a reservoir is a new direction of prospecting and exploration of hydrocarbons. In order to identify thin reservoir model we applied the modification of Gaussian elimination method in conditions of incomplete synthetic data. Because of the singularity of a matrix conventional method does not work. Therefore structural algorithm has been developed by analyzing the given model as a complex model. Numerical results demonstrate of its advantage compared with usual way of solution. We conclude that the gas reservoir is reconstructed by retrieving of the image of encasing shale beneath it.
Variationally consistent discretization schemes and numerical algorithms for contact problems
NASA Astrophysics Data System (ADS)
Wohlmuth, Barbara
We consider variationally consistent discretization schemes for mechanical contact problems. Most of the results can also be applied to other variational inequalities, such as those for phase transition problems in porous media, for plasticity or for option pricing applications from finance. The starting point is to weakly incorporate the constraint into the setting and to reformulate the inequality in the displacement in terms of a saddle-point problem. Here, the Lagrange multiplier represents the surface forces, and the constraints are restricted to the boundary of the simulation domain. Having a uniform inf-sup bound, one can then establish optimal low-order a priori convergence rates for the discretization error in the primal and dual variables. In addition to the abstract framework of linear saddle-point theory, complementarity terms have to be taken into account. The resulting inequality system is solved by rewriting it equivalently by means of the non-linear complementarity function as a system of equations. Although it is not differentiable in the classical sense, semi-smooth Newton methods, yielding super-linear convergence rates, can be applied and easily implemented in terms of a primal-dual active set strategy. Quite often the solution of contact problems has a low regularity, and the efficiency of the approach can be improved by using adaptive refinement techniques. Different standard types, such as residual- and equilibrated-based a posteriori error estimators, can be designed based on the interpretation of the dual variable as Neumann boundary condition. For the fully dynamic setting it is of interest to apply energy-preserving time-integration schemes. However, the differential algebraic character of the system can result in high oscillations if standard methods are applied. A possible remedy is to modify the fully discretized system by a local redistribution of the mass. Numerical results in two and three dimensions illustrate the wide range of
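The primal-dual active set strategy mentioned above, applied to the simplest contact-type model (a 1D obstacle problem), might look like the following sketch; the load, the flat obstacle and the stopping rule are illustrative choices, not the paper's formulation.

```python
import numpy as np

def obstacle_1d(n=50, c=1.0, iters=30):
    """Primal-dual active set method (a semi-smooth Newton method) for a 1D
    obstacle problem: -u'' = f on (0,1), u >= g, u(0) = u(1) = 0.
    The particular f and g below are illustrative."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)
    A = (np.diag(np.full(n, 2.0)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    f = np.full(n, -8.0)                 # downward load
    g = -0.1 * np.ones(n)                # flat obstacle below
    u, lam = np.zeros(n), np.zeros(n)
    for _ in range(iters):
        active = lam + c * (g - u) > 0   # NCP-based active set guess
        # Solve: on the active set enforce u = g; elsewhere A u = f (lam = 0)
        K, r = A.copy(), f.copy()
        K[active] = 0.0
        K[active, active] = 1.0
        r[active] = g[active]
        u_new = np.linalg.solve(K, r)
        lam = A @ u_new - f              # residual acts as the multiplier
        lam[~active] = 0.0
        if np.array_equal(u_new, u):     # active set stabilized
            break
        u = u_new
    return x, u, lam
```

At the converged state the multiplier is nonnegative and vanishes off the contact zone, which is the complementarity structure the semi-smooth Newton method exploits.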
Numerical Algorithms for Acoustic Integrals - The Devil is in the Details
NASA Technical Reports Server (NTRS)
Brentner, Kenneth S.
1996-01-01
The accurate prediction of the aeroacoustic field generated by aerospace vehicles or nonaerospace machinery is necessary for designers to control and reduce source noise. Powerful computational aeroacoustic methods, based on various acoustic analogies (primarily the Lighthill acoustic analogy) and Kirchhoff methods, have been developed for prediction of noise from complicated sources, such as rotating blades. Both methods ultimately predict the noise through a numerical evaluation of an integral formulation. In this paper, we consider three generic acoustic formulations and several numerical algorithms that have been used to compute the solutions to these formulations. Algorithms for retarded-time formulations are the most efficient and robust, but they are difficult to implement for supersonic-source motion. Collapsing-sphere and emission-surface formulations are good alternatives when supersonic-source motion is present, but the numerical implementations of these formulations are more computationally demanding. New algorithms - which utilize solution adaptation to provide a specified error level - are needed.
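For subsonic source motion, the retarded-time evaluation at the heart of such formulations reduces to solving t - tau = r(tau)/c for the emission time tau; a simple fixed-point iteration (an assumed, not prescribed, solution method) suffices:

```python
import math

def retarded_time(t, x_obs, source_pos, c=340.0, iters=50):
    """Solve the retarded-time equation t - tau = |x_obs - x_s(tau)| / c by
    fixed-point iteration. Converges for subsonic source motion; the source
    trajectory function source_pos(tau) is supplied by the caller."""
    tau = t
    for _ in range(iters):
        r = math.dist(x_obs, source_pos(tau))
        tau = t - r / c
    return tau
```

For supersonic motion this equation can have multiple roots, which is exactly why the abstract notes that retarded-time algorithms become difficult in that regime.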
NASA Astrophysics Data System (ADS)
Kim, J.; Sonnenthal, E. L.; Rutqvist, J.
2011-12-01
Rigorous modeling of coupling between fluid, heat, and geomechanics (thermo-poro-mechanics), in fractured porous media is one of the important and difficult topics in geothermal reservoir simulation, because the physics are highly nonlinear and strongly coupled. Coupled fluid/heat flow and geomechanics are investigated using the multiple interacting continua (MINC) method as applied to naturally fractured media. In this study, we generalize constitutive relations for the isothermal elastic dual porosity model proposed by Berryman (2002) to those for the non-isothermal elastic/elastoplastic multiple porosity model, and derive the coupling coefficients of coupled fluid/heat flow and geomechanics and constraints of the coefficients. When the off-diagonal terms of the total compressibility matrix for the flow problem are zero, the upscaled drained bulk modulus for geomechanics becomes the harmonic average of drained bulk moduli of the multiple continua. In this case, the drained elastic/elastoplastic moduli for mechanics are determined by a combination of the drained moduli and volume fractions in multiple porosity materials. We also determine a relation between local strains of all multiple porosity materials in a gridblock and the global strain of the gridblock, from which we can track local and global elastic/plastic variables. For elastoplasticity, the return mapping is performed for all multiple porosity materials in the gridblock. For numerical implementation, we employ and extend the fixed-stress sequential method of the single porosity model to coupled fluid/heat flow and geomechanics in multiple porosity systems, because it provides numerical stability and high accuracy. This sequential scheme can be easily implemented by using a porosity function and its corresponding porosity correction, making use of the existing robust flow and geomechanics simulators. We implemented the proposed modeling and numerical algorithm to the reaction transport simulator
Chen, Deng-kai; Gu, Rong; Gu, Yu-feng; Yu, Sui-huai
2016-01-01
Consumers' Kansei needs reflect their perception about a product and always consist of a large number of adjectives. Reducing the dimension complexity of these needs to extract primary words not only enables the target product to be explicitly positioned, but also provides a convenient design basis for designers engaging in design work. Accordingly, this study employs a numerical design structure matrix (NDSM) by parameterizing a conventional DSM and integrating genetic algorithms to find optimum Kansei clusters. A four-point scale method is applied to assign link weights of every two Kansei adjectives as values of cells when constructing an NDSM. Genetic algorithms are used to cluster the Kansei NDSM and find optimum clusters. Furthermore, the process of the proposed method is presented. The details of the proposed approach are illustrated using an example of electronic scooter for Kansei needs clustering. The case study reveals that the proposed method is promising for clustering Kansei needs adjectives in product emotional design.
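A minimal version of the GA clustering step could look as follows; the fitness function (intra-cluster link weight minus a small per-pair penalty, so the trivial one-cluster solution loses) and the operator choices are illustrative assumptions, not the paper's exact formulation.

```python
import random

def ga_cluster(weights, k=2, pop=40, gens=60, penalty=0.5, seed=1):
    """Minimal genetic algorithm over an NDSM-style symmetric weight matrix:
    a chromosome assigns each Kansei adjective to one of k clusters; fitness
    rewards link weight kept inside clusters, with a small penalty per
    co-clustered pair. Tournament selection, uniform crossover and point
    mutation are generic choices."""
    n = len(weights)
    rng = random.Random(seed)

    def fitness(labels):
        return sum(weights[i][j] - penalty
                   for i in range(n) for j in range(i + 1, n)
                   if labels[i] == labels[j])

    def tournament():
        a, b = rng.sample(population, 2)
        return a if fitness(a) >= fitness(b) else b

    population = [[rng.randrange(k) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        nxt = []
        for _ in range(pop):
            p, q = tournament(), tournament()
            child = [p[i] if rng.random() < 0.5 else q[i] for i in range(n)]
            if rng.random() < 0.3:                     # point mutation
                child[rng.randrange(n)] = rng.randrange(k)
            nxt.append(child)
        population = nxt
    return max(population, key=fitness)
```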
Yang, Yan-Pu; Chen, Deng-Kai; Gu, Rong; Gu, Yu-Feng; Yu, Sui-Huai
2016-01-01
Consumers' Kansei needs reflect their perception about a product and always consist of a large number of adjectives. Reducing the dimension complexity of these needs to extract primary words not only enables the target product to be explicitly positioned, but also provides a convenient design basis for designers engaging in design work. Accordingly, this study employs a numerical design structure matrix (NDSM) by parameterizing a conventional DSM and integrating genetic algorithms to find optimum Kansei clusters. A four-point scale method is applied to assign link weights of every two Kansei adjectives as values of cells when constructing an NDSM. Genetic algorithms are used to cluster the Kansei NDSM and find optimum clusters. Furthermore, the process of the proposed method is presented. The details of the proposed approach are illustrated using an example of electronic scooter for Kansei needs clustering. The case study reveals that the proposed method is promising for clustering Kansei needs adjectives in product emotional design.
Generalized Diffuse Field Within a 2d Alluvial Basin: a Numerical Example
NASA Astrophysics Data System (ADS)
Molina Villegas, J.; Baena, M.; Piña, J.; Perton, M.; Suarez, M.; Sanchez-Sesma, F. J.
2013-05-01
Since the pioneering work of Aki (1957), seismic noise has been used to infer the wave velocity distribution of soil formations. Later, diffuse-field concepts from room acoustics began to be used in elastodynamics by Weaver (1982) and flourished in many applications thanks to the contributions of Campillo and coworkers. It was established that diffusion-like regimes are obtained when the field is produced by equipartitioned, uniform illumination. Within an elastodynamic diffuse field, the average correlation of the displacement field between two stations is proportional to the Green function of the system for those points. Usually, the surface waves can be interpreted by means of the retrieved Green function, from which very important information about the properties at depth can be obtained. Seismic noise and coda are frequently considered as diffuse fields. This assumption is well supported by ideas of multiple scattering of waves and the resultant energy equipartition. There are few examples of numerically generated diffuse fields. Some are based on randomly distributed forces (e.g. Sánchez-Sesma et al., 2006), while others use a set of plane waves with varying incidence angles and polarizations (e.g. Sánchez-Sesma and Campillo 2006; Kawase et al. 2011). In this work we numerically generate a diffuse field within the Kawase and Aki (1989) 2D model using a random set of independent and uncorrelated incident plane P, SV and Rayleigh waves. For the simulations we use the indirect boundary element method (IBEM). We then obtain the Green function for pairs of receivers by averaging correlations between different stations on the surface. To validate our results we compute the model's Green function as the response to a unit point load using the IBEM. Our numerical experiment provides guidelines for actual calculations of earthquakes in real alluvial basins.
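The correlation-based Green function retrieval can be illustrated in 1D: uncorrelated plane waves from both directions recorded at two receivers yield an averaged cross-correlation peaked at the inter-receiver travel time, as the diffuse-field theorem predicts. All parameters below are arbitrary illustrative values, not the IBEM setup of the abstract.

```python
import numpy as np

def correlation_greens(distance=100.0, c=1000.0, nsrc=200, nt=1024, dt=0.001, seed=0):
    """1D diffuse-field toy experiment: random uncorrelated plane waves from
    both directions are recorded at two receivers a fixed distance apart;
    the averaged cross-correlation peaks at +/- the travel time."""
    rng = np.random.default_rng(seed)
    lag = int(round(distance / c / dt))          # travel time in samples
    corr = np.zeros(2 * nt - 1)
    for _ in range(nsrc):
        s = rng.standard_normal(nt)              # white-noise plane wave
        direction = rng.choice([-1, 1])          # incident from left or right
        u1 = s                                   # record at receiver 1
        u2 = np.roll(s, direction * lag)         # delayed/advanced at receiver 2
        corr += np.correlate(u2, u1, mode="full")
    # indices of the expected causal and anti-causal peaks
    return corr, nt - 1 + lag, nt - 1 - lag
```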
Efficient algorithms for numerical simulation of the motion of earth satellites
NASA Astrophysics Data System (ADS)
Bordovitsyna, T. V.; Bykova, L. E.; Kardash, A. V.; Fedyaev, Yu. A.; Sharkovskii, N. A.
1992-08-01
We briefly present results obtained during the development and evaluation of algorithms for numerical prediction of the motion of Earth satellites (ESs) using computers of different power. High accuracy and efficiency in predicting ES motion are achieved by using higher-order numerical methods, transformations that regularize and stabilize the equations of motion, and a high-precision model of the forces acting on an ES. This approach enables us to construct efficient algorithms of the required accuracy, both for universal computers with a large RAM and for personal computers with very limited capacity.
On the impact of communication complexity in the design of parallel numerical algorithms
NASA Technical Reports Server (NTRS)
Gannon, D.; Vanrosendale, J.
1984-01-01
This paper describes two models of the cost of data movement in parallel numerical algorithms. One model is a generalization of an approach due to Hockney, and is suitable for shared memory multiprocessors where each processor has vector capabilities. The other model is applicable to highly parallel nonshared memory MIMD systems. In the second model, algorithm performance is characterized in terms of the communication network design. Techniques used in VLSI complexity theory are also brought in, and algorithm independent upper bounds on system performance are derived for several problems that are important to scientific computation.
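The Hockney-style model referred to above characterizes a transfer by a fixed start-up latency plus a bandwidth term, equivalently parameterized by the asymptotic rate r_inf and the half-performance message size n_1/2; a sketch with illustrative parameter values:

```python
def hockney_time(n, latency=1e-6, bandwidth=1e9):
    """Hockney-style cost model: moving an n-byte message costs a fixed
    start-up latency plus n over the asymptotic bandwidth (r_inf).
    Parameter values are illustrative, not from the paper."""
    return latency + n / bandwidth

def n_half(latency=1e-6, bandwidth=1e9):
    # Message size at which the achieved rate equals half of r_inf.
    return latency * bandwidth
```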
Wang, Peng; Zhu, Zhouquan; Huang, Shuai
2013-01-01
This paper presents a novel biologically inspired metaheuristic algorithm called seven-spot ladybird optimization (SLO). The SLO is inspired by recent discoveries on the foraging behavior of a seven-spot ladybird. In this paper, the performance of the SLO is compared with that of the genetic algorithm, particle swarm optimization, and artificial bee colony algorithms by using five numerical benchmark functions with multimodality. The results show that SLO has the ability to find the best solution with a comparatively small population size and is suitable for solving optimization problems with lower dimensions.
A Parallel Compact Multi-Dimensional Numerical Algorithm with Aeroacoustics Applications
NASA Technical Reports Server (NTRS)
Povitsky, Alex; Morris, Philip J.
1999-01-01
In this study we propose a novel method to parallelize high-order compact numerical algorithms for the solution of three-dimensional PDEs (Partial Differential Equations) in a space-time domain. For this numerical integration most of the computer time is spent in the computation of spatial derivatives at each stage of the Runge-Kutta temporal update. The most efficient direct method to compute spatial derivatives on a serial computer is a version of Gaussian elimination for narrow linear banded systems known as the Thomas algorithm. In a straightforward pipelined implementation of the Thomas algorithm, processors are idle during the forward and backward recurrences. To utilize processors during this time, we propose to use them for either non-local data-independent computations, solving lines in the next spatial direction, or local data-dependent computations by the Runge-Kutta method. To achieve this goal, control of processor communication and computations by a static schedule is adopted. Thus, our parallel code is driven by a communication and computation schedule instead of the usual creative-programming approach. The parallelization speed-up obtained with the novel algorithm is about twice that of the standard pipelined algorithm and close to that of the explicit DRP algorithm.
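The Thomas algorithm named above is the standard O(n) forward-elimination/back-substitution sweep for tridiagonal systems; a plain-Python sketch (no pivoting, so diagonal dominance is assumed):

```python
def thomas(a, b, c, d):
    """Thomas algorithm for a tridiagonal system with sub-diagonal a
    (a[0] unused), diagonal b, super-diagonal c (c[-1] unused) and
    right-hand side d. Assumes diagonal dominance, so no pivoting."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                       # forward elimination sweep
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):              # backward substitution sweep
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

The two sequential recurrences visible here (the forward and backward sweeps) are exactly the dependencies that leave processors idle in the pipelined parallelization the abstract discusses.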
NASA Astrophysics Data System (ADS)
Kelly, Patrick M.; Cannon, T. Michael; Hush, Donald R.
1995-03-01
CANDID (comparison algorithm for navigating digital image databases) was developed to enable content-based retrieval of digital imagery from large databases using a query-by-example methodology. A user provides an example image to the system, and images in the database that are similar to that example are retrieved. The development of CANDID was inspired by the N-gram approach to document fingerprinting, where a 'global signature' is computed for every document in a database and these signatures are compared to one another to determine the similarity between any two documents. CANDID computes a global signature for every image in a database, where the signature is derived from various image features such as localized texture, shape, or color information. A distance between probability density functions of feature vectors is then used to compare signatures. In this paper, we present CANDID and highlight two results from our current research: subtracting a 'background' signature from every signature in a database in an attempt to improve system performance when using inner-product similarity measures, and visualizing the contribution of individual pixels in the matching process. These ideas are applicable to any histogram-based comparison technique.
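The signature-and-compare pipeline can be sketched with a toy one-feature histogram signature and the inner-product (cosine) similarity mentioned above; binning raw scalar values stands in for CANDID's localized texture/shape/color features, which are not reproduced here.

```python
import math

def signature(values, bins=8, lo=0.0, hi=1.0):
    """Toy 'global signature': a normalized histogram of a per-pixel scalar
    feature in [lo, hi). Stands in for CANDID's real feature extraction."""
    h = [0] * bins
    for v in values:
        h[min(int((v - lo) / (hi - lo) * bins), bins - 1)] += 1
    total = float(len(values))
    return [x / total for x in h]

def inner_product_similarity(p, q):
    """Normalized inner product (cosine similarity) between two signatures."""
    dot = sum(a * b for a, b in zip(p, q))
    return dot / (math.sqrt(sum(a * a for a in p)) *
                  math.sqrt(sum(b * b for b in q)))
```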
Mikesell, T. Dylan; Malcolm, Alison E.; Yang, Di; Haney, Matthew M.
2015-01-01
Time-shift estimation between arrivals in two seismic traces before and after a velocity perturbation is a crucial step in many seismic methods. The accuracy of the estimated velocity perturbation location and amplitude depend on this time shift. Windowed cross correlation and trace stretching are two techniques commonly used to estimate local time shifts in seismic signals. In the work presented here, we implement Dynamic Time Warping (DTW) to estimate the warping function – a vector of local time shifts that globally minimizes the misfit between two seismic traces. We illustrate the differences of all three methods compared to one another using acoustic numerical experiments. We show that DTW is comparable to or better than the other two methods when the velocity perturbation is homogeneous and the signal-to-noise ratio is high. When the signal-to-noise ratio is low, we find that DTW and windowed cross correlation are more accurate than the stretching method. Finally, we show that the DTW algorithm has better time resolution when identifying small differences in the seismic traces for a model with an isolated velocity perturbation. These results impact current methods that utilize not only time shifts between (multiply) scattered waves, but also amplitude and decoherence measurements. DTW is a new tool that may find new applications in seismology and other geophysical methods (e.g., as a waveform inversion misfit function).
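The DTW misfit itself is a short dynamic program; this sketch returns only the minimal cumulative misfit over all monotone alignments (the warping function of local time shifts would be read off the optimal path, omitted here for brevity):

```python
def dtw(x, y):
    """Classic O(len(x)*len(y)) dynamic time warping: minimal cumulative
    absolute misfit over all monotone alignments of two traces."""
    inf = float("inf")
    n, m = len(x), len(y)
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            # extend the cheapest of: insertion, deletion, or match
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]
```

Note that a uniformly stretched copy of a trace aligns at zero cost, which is why DTW subsumes the trace-stretching method as a special case.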
Nguyen, Tam H.; Song, Junho; Paulino, Glaucio H.
2008-02-15
Probabilistic fracture analyses are performed for investigating uncertain fracture response of Functionally Graded Material (FGM) structures. The First-Order-Reliability-Method (FORM) is implemented into an existing Finite Element code for FGM (FE-FGM), which was previously developed at the University of Illinois at Urbana-Champaign. The computational simulation will be used in order to estimate the probability of crack initiation with uncertainties in the material properties only. The two-step probability analysis method proposed in the companion paper is illustrated by a numerical example of a composite strip with an edge crack. First, the reliability index of a crack initiation event is estimated as we vary the mean and standard deviation of the slope and the location of the inflection point of the spatial profile of Young's modulus. Secondly, the reliability index is estimated as we vary the standard deviation and the correlation length of the random field that characterize the random spatial fluctuation of Young's modulus. Also investigated is the relative importance of the uncertainties in the toughness compared to those in Young's modulus.
An adaptive numeric predictor-corrector guidance algorithm for atmospheric entry vehicles
NASA Astrophysics Data System (ADS)
Spratlin, Kenneth Milton
1987-05-01
An adaptive numeric predictor-corrector guidance algorithm is developed for atmospheric entry vehicles that utilize lift to achieve maximum footprint capability. Applicability of the guidance design to vehicles with a wide range of performance capabilities is desired, so as to reduce the need for algorithm redesign with each new vehicle. Adaptability is desired to minimize mission-specific analysis and planning. The guidance algorithm motivation and design are presented. Performance is assessed for application of the algorithm to the NASA Entry Research Vehicle (ERV). The dispersions the guidance must be designed to handle are presented. The achievable operational footprint for expected worst-case dispersions is presented. The algorithm performs excellently for the expected dispersions and captures most of the achievable footprint.
PolyPole-1: An accurate numerical algorithm for intra-granular fission gas release
NASA Astrophysics Data System (ADS)
Pizzocri, D.; Rabiti, C.; Luzzi, L.; Barani, T.; Van Uffelen, P.; Pastore, G.
2016-09-01
The transport of fission gas from within the fuel grains to the grain boundaries (intra-granular fission gas release) is a fundamental controlling mechanism of fission gas release and gaseous swelling in nuclear fuel. Hence, accurate numerical solution of the corresponding mathematical problem needs to be included in fission gas behaviour models used in fuel performance codes. Under the assumption of equilibrium between trapping and resolution, the process can be described mathematically by a single diffusion equation for the gas atom concentration in a grain. In this paper, we propose a new numerical algorithm (PolyPole-1) to efficiently solve the fission gas diffusion equation in time-varying conditions. The PolyPole-1 algorithm is based on the analytic modal solution of the diffusion equation for constant conditions, combined with polynomial corrective terms that embody the information on the deviation from constant conditions. The new algorithm is verified by comparing the results to a finite difference solution over a large number of randomly generated operation histories. Furthermore, comparison to state-of-the-art algorithms used in fuel performance codes demonstrates that the accuracy of PolyPole-1 is superior to other algorithms, with similar computational effort. Finally, the concept of PolyPole-1 may be extended to the solution of the general problem of intra-granular fission gas diffusion during non-equilibrium trapping and resolution, which will be the subject of future work.
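The constant-conditions building block that PolyPole-1 starts from is the classical modal series for diffusion out of a sphere with a perfectly absorbing boundary; the specific normalization below (fractional release from a uniform initial concentration) is a standard textbook form, assumed rather than taken from the paper:

```python
import math

def fractional_release(D, a, t, terms=200):
    """Analytic modal series for diffusion out of a sphere of radius a with an
    absorbing boundary and uniform initial concentration:
        f(t) = 1 - (6 / pi^2) * sum_{n>=1} exp(-n^2 pi^2 D t / a^2) / n^2
    Truncated at 'terms' modes, so f(0) is ~1/terms rather than exactly 0."""
    tau = D * t / a**2                       # nondimensional diffusion time
    s = sum(math.exp(-n * n * math.pi**2 * tau) / (n * n)
            for n in range(1, terms + 1))
    return 1.0 - 6.0 / math.pi**2 * s
```

PolyPole-1's contribution, per the abstract, is the polynomial correction of this modal solution for time-varying conditions; only the constant-conditions kernel is shown here.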
On the impact of communication complexity on the design of parallel numerical algorithms
NASA Technical Reports Server (NTRS)
Gannon, D. B.; Van Rosendale, J.
1984-01-01
This paper describes two models of the cost of data movement in parallel numerical algorithms. One model is a generalization of an approach due to Hockney, and is suitable for shared memory multiprocessors where each processor has vector capabilities. The other model is applicable to highly parallel nonshared memory MIMD systems. In this second model, algorithm performance is characterized in terms of the communication network design. Techniques used in VLSI complexity theory are also brought in, and algorithm-independent upper bounds on system performance are derived for several problems that are important to scientific computation.
NASA Technical Reports Server (NTRS)
Nacozy, P. E.
1984-01-01
The equations of motion are developed for a perfectly flexible, inelastic tether with a satellite at its extremity. The tether is attached to a space vehicle in orbit. The tether is allowed to possess electrical conductivity. A numerical solution algorithm to provide the motion of the tether and satellite system is presented. The resulting differential equations can be solved by various existing standard numerical integration computer programs. The resulting differential equations allow the introduction of approximations that can lead to analytical, approximate general solutions. The differential equations allow more dynamical insight of the motion.
François, Marianne M.
2015-05-28
A review of recent advances made in numerical methods and algorithms within the volume tracking framework is presented. The volume tracking method, also known as the volume-of-fluid method, has become an established numerical approach to model and simulate interfacial flows. Its advantage is its strict mass conservation. However, because the interface is not explicitly tracked but captured via the material volume fraction on a fixed mesh, accurate estimation of the interface position, its geometric properties and modeling of interfacial physics in the volume tracking framework remain difficult. Several improvements have been made over the last decade to address these challenges. In this study, the multimaterial interface reconstruction method via power diagram, curvature estimation via heights and mean values and the balanced-force algorithm for surface tension are highlighted.
Analysis of V-cycle multigrid algorithms for forms defined by numerical quadrature
Bramble, J.H. (Dept. of Mathematics); Goldstein, C.I.; Pasciak, J.E. (Applied Mathematics Dept.)
1994-05-01
The authors describe and analyze certain V-cycle multigrid algorithms with forms defined by numerical quadrature applied to the approximation of symmetric second-order elliptic boundary value problems. This approach can be used for the efficient solution of finite element systems resulting from numerical quadrature as well as systems arising from finite difference discretizations. The results are based on a regularity-free theory and hence apply to meshes with local grid refinement as well as the quasi-uniform case. It is shown that uniform (independent of the number of levels) convergence rates often hold for appropriately defined V-cycle algorithms with as few as one smoothing per grid. These results hold even for applications without full elliptic regularity, e.g., a domain in R^2 with a crack.
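As a concrete illustration of the V-cycle structure analyzed above, here is a minimal textbook sketch for the 1D Poisson problem -u'' = f with weighted-Jacobi smoothing and one pre- and one post-smoothing sweep per grid. This is a generic V-cycle, not the quadrature-defined forms of the paper:

```python
import numpy as np

def residual(u, f, h):
    """r = f - A u for the 1D operator A u = -u'' (Dirichlet boundaries)."""
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2.0*u[1:-1] - u[:-2] - u[2:]) / h**2
    return r

def jacobi(u, f, h, omega=2.0/3.0, sweeps=1):
    """Weighted-Jacobi smoothing sweeps for -u'' = f."""
    for _ in range(sweeps):
        u[1:-1] += omega * (0.5*(u[:-2] + u[2:] + h**2*f[1:-1]) - u[1:-1])
    return u

def restrict(r):
    """Full-weighting restriction onto the coarse grid (fine index 2j)."""
    rc = np.zeros((len(r) - 1)//2 + 1)
    rc[1:-1] = 0.25*(r[1:-2:2] + 2.0*r[2:-1:2] + r[3::2])
    return rc

def prolong(ec, n_fine):
    """Linear-interpolation prolongation back to the fine grid."""
    e = np.zeros(n_fine)
    e[::2] = ec
    e[1::2] = 0.5*(ec[:-1] + ec[1:])
    return e

def v_cycle(u, f, h):
    """One V-cycle with a single pre- and post-smoothing sweep per grid."""
    if len(u) <= 3:                          # coarsest grid: solve exactly
        u[1] = 0.5*(u[0] + u[2] + h**2*f[1])
        return u
    u = jacobi(u, f, h)                      # pre-smooth
    rc = restrict(residual(u, f, h))
    ec = v_cycle(np.zeros_like(rc), rc, 2.0*h)
    u += prolong(ec, len(u))                 # coarse-grid correction
    return jacobi(u, f, h)                   # post-smooth
```

The "uniform convergence rate" claim of the paper corresponds to the error contraction factor per cycle being bounded independently of the number of levels.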
Thickness determination in textile material design: dynamic modeling and numerical algorithms
NASA Astrophysics Data System (ADS)
Xu, Dinghua; Ge, Meibao
2012-03-01
Textile material design is of paramount importance in the study of functional clothing design. It is therefore important to determine the dynamic heat and moisture transfer characteristics of the human body-clothing-environment system, which directly determine the heat-moisture comfort level of the human body. Based on a model of dynamic heat and moisture transfer with condensation in porous fabric at low temperature, this paper presents a new inverse problem of textile thickness determination (IPTTD). Adopting the idea of the least-squares method, we formulate the IPTTD as a function minimization problem. By means of the finite-difference method, the quasi-solution method, and a direct search method for one-dimensional minimization problems, we construct iterative algorithms for the approximate solution of the IPTTD. Numerical simulation results validate the formulation of the IPTTD and demonstrate the effectiveness of the proposed numerical algorithms.
Numerical advection algorithms and their role in atmospheric transport and chemistry models
NASA Technical Reports Server (NTRS)
Rood, Richard B.
1987-01-01
During the last 35 years, well over 100 algorithms for modeling advection processes have been described and tested. This review summarizes the development and improvements that have taken place. The nature of the errors caused by numerical approximation to the advection equation is highlighted, and the particular devices that have been proposed to remedy these errors are discussed. The extensive literature comparing transport algorithms is reviewed. Although there is no clear-cut 'best' algorithm, several conclusions can be drawn. Spectral and pseudospectral techniques consistently provide the highest degree of accuracy, but expense and difficulties assuring positive mixing ratios are serious drawbacks. Schemes which consider fluid slabs bounded by grid points (volume schemes), rather than the simple specification of constituent values at the grid points, provide accurate positive definite results.
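The positivity of the volume-scheme family discussed above is easiest to see in its simplest member, first-order upwind: for Courant number c = u*dt/dx in [0, 1], each updated value is a convex combination of neighboring values, so mixing ratios stay nonnegative and mass is conserved exactly, at the price of strong numerical diffusion. A minimal sketch on a periodic grid:

```python
import numpy as np

def upwind_step(q, c):
    """One step of first-order upwind advection with positive velocity on a
    periodic grid; c = u*dt/dx is the Courant number.  For 0 <= c <= 1 the
    update (1-c)*q[i] + c*q[i-1] is a convex combination, hence positive
    definite and exactly conservative, but numerically diffusive."""
    return (1.0 - c) * q + c * np.roll(q, 1)
```

Running this on a square wave shows exactly the error character the review describes: the total mass is unchanged and no negative values appear, but the sharp profile is progressively smeared.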
NASA Astrophysics Data System (ADS)
Fedoseyev, A.; Kansa, E. J.; Tsynkov, S.; Petropavlovskiy, S.; Osintcev, M.; Shumlak, U.; Henshaw, W. D.
2016-10-01
We present the implementation of the lacunae method, which removes a key difficulty that currently hampers many existing methods for computing unsteady electromagnetic waves on unbounded regions: numerical accuracy and/or stability may deteriorate over long times due to the treatment of artificial outer boundaries. We describe a universal algorithm and software that correct this problem by employing Huygens' principle and the lacunae of Maxwell's equations. The algorithm provides a temporally uniform guaranteed error bound (no deterioration at all), and the software will enable robust electromagnetic simulations in a high-performance computing environment. The methodology applies to any geometry, any scheme, and any boundary condition. It eliminates the long-time deterioration regardless of its origin and how it manifests itself. The lacunae method was first proposed by V. Ryaben'kii and subsequently developed by S. Tsynkov. We have completed development of an innovative numerical methodology for high-fidelity, error-controlled modeling of a broad variety of electromagnetic and other wave phenomena. Proof-of-concept 3D computations have been conducted that convincingly demonstrate the feasibility and efficiency of the proposed approach. Our algorithms are being implemented as robust commercial software tools in a standalone module to be combined with existing numerical schemes in several widely used computational electromagnetics codes.
NASA Astrophysics Data System (ADS)
Alfonso, Lester; Zamora, Jose; Cruz, Pedro
2015-04-01
The stochastic approach to coagulation considers the coalescence process in a system of a finite number of particles enclosed in a finite volume. Within this approach, the full description of the system can be obtained from the solution of the multivariate master equation, which models the evolution of the probability distribution of the state vector for the number of particles of a given mass. Unfortunately, due to its complexity, only limited results have been obtained for certain types of kernels and monodisperse initial conditions. In this work, a novel numerical algorithm for the solution of the multivariate master equation for stochastic coalescence that works for any type of kernel and initial condition is introduced. The performance of the method was checked by comparing the numerically calculated particle mass spectrum with analytical solutions obtained for the constant and sum kernels, with excellent correspondence between the analytical and numerical solutions. To increase the speedup of the algorithm, software parallelization techniques with the OpenMP standard were used, along with an implementation that takes advantage of new accelerator technologies. Simulation results show an important speedup of the parallelized algorithms. This study was funded by a grant from Consejo Nacional de Ciencia y Tecnologia de Mexico SEP-CONACYT CB-131879. The authors also thank LUFAC® Computacion SA de CV for CPU time and all the support provided.
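The paper solves the master equation for the full probability distribution, which is beyond a short sketch, but the underlying finite-volume stochastic coalescence process it describes can be realized with a simple Monte Carlo (Gillespie-style) simulation for the constant kernel and monodisperse initial conditions, the validation case mentioned above. This produces a single realization, not the distribution itself:

```python
import random

def gillespie_coagulation(n0, K, t_end, seed=0):
    """One Monte Carlo realization of stochastic coalescence in a finite
    volume with a constant kernel K: the total pair-merging rate is
    K*n*(n-1)/2, waiting times are exponential, and each event merges a
    uniformly chosen random pair of particles."""
    rng = random.Random(seed)
    masses = [1] * n0            # monodisperse initial condition
    t = 0.0
    while len(masses) > 1:
        n = len(masses)
        t += rng.expovariate(K * n * (n - 1) / 2.0)
        if t > t_end:
            break
        i, j = rng.sample(range(n), 2)
        masses[i] += masses[j]   # merge the pair...
        del masses[j]            # ...and remove the absorbed particle
    return masses
```

Averaging the mass spectrum over many such realizations approximates the expectations encoded in the master equation; the deterministic master-equation solver avoids that sampling noise, which is its advantage.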
Hu, Shaoxing; Xu, Shike; Wang, Duhu; Zhang, Aiwu
2015-01-01
Aiming at addressing the problem of high computational cost of the traditional Kalman filter in SINS/GPS, a practical optimization algorithm with offline-derivation and parallel processing methods based on the numerical characteristics of the system is presented in this paper. The algorithm exploits the sparseness and/or symmetry of matrices to simplify the computational procedure. Thus plenty of invalid operations can be avoided by offline derivation using a block matrix technique. For enhanced efficiency, a new parallel computational mechanism is established by subdividing and restructuring calculation processes after analyzing the extracted “useful” data. As a result, the algorithm saves about 90% of the CPU processing time and 66% of the memory usage needed in a classical Kalman filter. Meanwhile, the method as a numerical approach needs no precise-loss transformation/approximation of system modules and the accuracy suffers little in comparison with the filter before computational optimization. Furthermore, since no complicated matrix theories are needed, the algorithm can be easily transplanted into other modified filters as a secondary optimization method to achieve further efficiency. PMID:26569247
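For reference, these are the dense matrix products that dominate one cycle of the classical Kalman filter, and that the offline block-matrix derivation above prunes when F, H, and Q are sparse or symmetric. This is a generic textbook filter, not the optimized SINS/GPS variant:

```python
import numpy as np

def kf_step(x, P, z, F, H, Q, R):
    """One predict+update cycle of a linear Kalman filter.  The products
    F P F^T and H P H^T are the ones whose sparsity/symmetry the paper's
    offline derivation exploits; here they are written out in full."""
    x = F @ x                        # state prediction
    P = F @ P @ F.T + Q              # covariance prediction
    S = H @ P @ H.T + R              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x = x + K @ (z - H @ x)          # measurement update
    P = (np.eye(len(x)) - K @ H) @ P # covariance update
    return x, P
```

Counting the multiplications in these lines makes the paper's point concrete: for an n-state system the cost is O(n^3) per cycle, so removing operations on known zero or symmetric blocks translates directly into CPU-time savings.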
Rayleigh Wave Numerical Dispersion in a 3D Finite-Difference Algorithm
NASA Astrophysics Data System (ADS)
Preston, L. A.; Aldridge, D. F.
2010-12-01
A Rayleigh wave propagates laterally without dispersion in the vicinity of the plane stress-free surface of a homogeneous and isotropic elastic halfspace. The phase speed is independent of frequency and depends only on the Poisson ratio of the medium. However, after temporal and spatial discretization, a Rayleigh wave simulated by a 3D staggered-grid finite-difference (FD) seismic wave propagation algorithm suffers from frequency- and direction-dependent numerical dispersion. The magnitude of this dispersion depends critically on FD algorithm implementation details. Nevertheless, proper gridding can control numerical dispersion to within an acceptable level, leading to accurate Rayleigh wave simulations. Many investigators have derived dispersion relations appropriate for body wave propagation by various FD algorithms. However, the situation for surface waves is less well-studied. We have devised a numerical search procedure to estimate Rayleigh phase speed and group speed curves for 3D O(2,2) and O(2,4) staggered-grid FD algorithms. In contrast with the continuous time-space situation (where phase speed is obtained by extracting the appropriate root of the Rayleigh cubic), we cannot develop a closed-form mathematical formula governing the phase speed. Rather, we numerically seek the particular phase speed that leads to a solution of the discrete wave propagation equations, while holding medium properties, frequency, horizontal propagation direction, and gridding intervals fixed. Group speed is then obtained by numerically differentiating the phase speed with respect to frequency. The problem is formulated for an explicit stress-free surface positioned at two different levels within the staggered spatial grid. Additionally, an interesting variant involving zero-valued medium properties above the surface is addressed. We refer to the latter as an implicit free surface. Our preliminary conclusion is that an explicit free surface, implemented with O(4) spatial FD
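In the continuous case mentioned above, the phase speed follows from the appropriate root of the Rayleigh cubic. A short numerical sketch (bisection on x = (c_R/c_s)^2, with k = (c_s/c_p)^2 expressed through the Poisson ratio) reproduces the classical value c_R/c_s ≈ 0.9194 for nu = 0.25:

```python
def rayleigh_speed_ratio(nu, tol=1e-12):
    """Return c_R/c_s for Poisson ratio nu by bisecting the Rayleigh cubic
    R(x) = x^3 - 8x^2 + (24 - 16k)x - 16(1 - k) on (0, 1), where
    x = (c_R/c_s)^2 and k = (c_s/c_p)^2 = (1 - 2nu)/(2(1 - nu)).
    R(0) = -16(1 - k) < 0 and R(1) = 1 > 0, so a root is bracketed."""
    k = (1.0 - 2.0 * nu) / (2.0 * (1.0 - nu))
    R = lambda x: x**3 - 8.0*x**2 + (24.0 - 16.0*k)*x - 16.0*(1.0 - k)
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if R(lo) * R(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return (0.5 * (lo + hi)) ** 0.5   # c_R / c_s
```

The discrete search described in the abstract generalizes exactly this idea: instead of a closed-form cubic, the residual being driven to zero is the determinant condition of the discretized wave equations, evaluated numerically for each trial phase speed.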
NASA Astrophysics Data System (ADS)
Eaves, Nick A.; Zhang, Qingan; Liu, Fengshan; Guo, Hongsheng; Dworkin, Seth B.; Thomson, Murray J.
2016-10-01
Mitigation of soot emissions from combustion devices is a global concern. For example, recent EURO 6 regulations for vehicles have placed stringent limits on soot emissions. In order for design engineers to achieve the goal of reduced soot emissions, they must have the tools to do so. Due to the complex nature of soot formation, which includes growth and oxidation, detailed numerical models are required to gain fundamental insights into the mechanisms of soot formation. A detailed description of the CoFlame FORTRAN code, which models sooting laminar coflow diffusion flames, is given. The code solves axial and radial velocity, temperature, species conservation, and soot aggregate and primary particle number density equations. The sectional particle dynamics model includes nucleation, PAH condensation and HACA surface growth, surface oxidation, coagulation, fragmentation, particle diffusion, and thermophoresis. The code utilizes a distributed memory parallelization scheme with strip-domain decomposition. The public release of the CoFlame code, which has been refined in terms of coding structure, to the research community accompanies this paper. CoFlame is validated against experimental data for reattachment length in an axi-symmetric pipe with a sudden expansion, and ethylene-air and methane-air diffusion flames for multiple soot morphological parameters and gas-phase species. Finally, the parallel performance and computational costs of the code are investigated. Catalogue identifier: AFAU_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AFAU_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public License, version 3 No. of lines in distributed program, including test data, etc.: 94964 No. of bytes in distributed program, including test data, etc.: 6242986 Distribution format: tar.gz Programming language: Fortran 90, MPI. (Requires an Intel compiler). Computer: Workstations
Palanichamy, Jegathambal; Schüttrumpf, Holger; Köngeter, Jürgen; Becker, Torsten; Palani, Sundarambal
2009-01-01
The migration of the species of chromium and ammonium in groundwater and their effective remediation depend on the various hydro-geological characteristics of the system. Computational modeling of reactive transport problems is one of the most preferred tools for field engineers in groundwater studies to make decisions on pollution abatement. Analytical models have low computational demand but are less modular in nature, making them difficult to modify when formulating different reactive systems. Numerical models provide more detailed information at higher computational demand. Coupling linear partial differential equations (PDEs) for the transport step with a non-linear system of ordinary differential equations (ODEs) for the reactive step is the usual mode of solving a kinetically controlled reactive transport equation. This assumption is not appropriate for a system with low concentrations of species such as chromium. Such reaction systems can instead be simulated using a stochastic algorithm. In this paper, a finite difference scheme coupled with a stochastic algorithm for the simulation of the transport of ammonium and chromium in subsurface media is detailed.
NASA Astrophysics Data System (ADS)
Press, William H.; Teukolsky, Saul A.; Vetterling, William T.; Flannery, Brian P.
2003-05-01
The two Numerical Recipes books are marvellous. The principal book, The Art of Scientific Computing, contains program listings for almost every conceivable requirement, and it also contains a well written discussion of the algorithms and the numerical methods involved. The Example Book provides a complete driving program, with helpful notes, for nearly all the routines in the principal book. The first edition of Numerical Recipes: The Art of Scientific Computing was published in 1986 in two versions, one with programs in Fortran, the other with programs in Pascal. There were subsequent versions with programs in BASIC and in C. The second, enlarged edition was published in 1992, again in two versions, one with programs in Fortran (NR(F)), the other with programs in C (NR(C)). In 1996 the authors produced Numerical Recipes in Fortran 90: The Art of Parallel Scientific Computing as a supplement, called Volume 2, with the original (Fortran) version referred to as Volume 1. Numerical Recipes in C++ (NR(C++)) is another version of the 1992 edition. The numerical recipes are also available on a CD ROM: if you want to use any of the recipes, I would strongly advise you to buy the CD ROM. The CD ROM contains the programs in all the languages. When the first edition was published I bought it, and have also bought copies of the other editions as they have appeared. Anyone involved in scientific computing ought to have a copy of at least one version of Numerical Recipes, and there also ought to be copies in every library. If you already have NR(F), should you buy the NR(C++) and, if not, which version should you buy? In the preface to Volume 2 of NR(F), the authors say 'C and C++ programmers have not been far from our minds as we have written this volume, and we think that you will find that time spent in absorbing its principal lessons will be amply repaid in the future as C and C++ eventually develop standard parallel extensions'. In the preface and introduction to NR
Godfrey, Brendan B.; Vay, Jean-Luc
2013-09-01
Rapidly growing numerical instabilities routinely occur in multidimensional particle-in-cell computer simulations of plasma-based particle accelerators, astrophysical phenomena, and relativistic charged particle beams. Reducing instability growth to acceptable levels has necessitated higher resolution grids, high-order field solvers, current filtering, etc., except for certain ratios of the time step to the axial cell size, for which numerical growth rates and saturation levels are reduced substantially. This paper derives and solves the cold beam dispersion relation for numerical instabilities in multidimensional, relativistic, electromagnetic particle-in-cell programs employing either the standard or the Cole-Karkkainen finite difference field solver on a staggered mesh and the common Esirkepov current-gathering algorithm. Good overall agreement is achieved with previously reported results of the WARP code. In particular, the existence of select time steps for which instabilities are minimized is explained. Additionally, an alternative field interpolation algorithm is proposed for which instabilities are almost completely eliminated for a particular time step in ultra-relativistic simulations.
Obuchowski, Nancy A; Barnhart, Huiman X; Buckler, Andrew J; Pennello, Gene; Wang, Xiao-Feng; Kalpathy-Cramer, Jayashree; Kim, Hyun J Grace; Reeves, Anthony P
2015-02-01
Quantitative imaging biomarkers are being used increasingly in medicine to diagnose and monitor patients' disease. The computer algorithms that measure quantitative imaging biomarkers have different technical performance characteristics. In this paper we illustrate the appropriate statistical methods for assessing and comparing the bias, precision, and agreement of computer algorithms. We use data from three studies of pulmonary nodules. The first study is a small phantom study used to illustrate metrics for assessing repeatability. The second study is a large phantom study allowing assessment of four algorithms' bias and reproducibility for measuring tumor volume and the change in tumor volume. The third study is a small clinical study of patients whose tumors were measured on two occasions. This study allows a direct assessment of six algorithms' performance for measuring tumor change. With these three examples we compare and contrast study designs and performance metrics, and we illustrate the advantages and limitations of various common statistical methods for quantitative imaging biomarker studies.
Design and Implementation of Numerical Linear Algebra Algorithms on Fixed Point DSPs
NASA Astrophysics Data System (ADS)
Nikolić, Zoran; Nguyen, Ha Thai; Frantz, Gene
2007-12-01
Numerical linear algebra algorithms use the inherent elegance of matrix formulations and are usually implemented using C/C++ floating point representation. The system implementation is faced with practical constraints because these algorithms usually need to run in real time on fixed point digital signal processors (DSPs) to reduce total hardware costs. Converting the simulation model to fixed point arithmetic and then porting it to a target DSP device is a difficult and time-consuming process. In this paper, we analyze the conversion process. We transformed selected linear algebra algorithms from floating point to fixed point arithmetic, and compared real-time requirements and performance between the fixed point DSP and floating point DSP algorithm implementations. We also introduce an advanced code optimization and an implementation by DSP-specific, fixed point C code generation. By using the techniques described in the paper, speed can be increased by a factor of up to 10 compared to floating point emulation on fixed point hardware.
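The float-to-fixed conversion discussed above can be illustrated with the common Q15 format (a 16-bit fractional representation typical of fixed-point DSPs); the helper names and the saturation policy here are illustrative, not taken from the paper:

```python
def to_q15(x):
    """Quantize a float in [-1, 1) to the Q15 fixed-point format,
    saturating at the int16 range (the usual DSP convention)."""
    q = int(round(x * (1 << 15)))
    return max(-32768, min(32767, q))

def q15_mul(a, b):
    """Fixed-point multiply: take the full-width integer product, then
    shift right by 15 to return to Q15 scaling."""
    return (a * b) >> 15

def from_q15(q):
    """Convert a Q15 integer back to a float for inspection."""
    return q / float(1 << 15)
```

The difficulty the paper analyzes lives in exactly these details: every intermediate of a matrix algorithm must be kept inside the representable range, so scaling decisions (which Q format to use at each stage) dominate the conversion effort.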
A Novel Quantum-Behaved Bat Algorithm with Mean Best Position Directed for Numerical Optimization.
Zhu, Binglian; Zhu, Wenyong; Liu, Zijuan; Duan, Qingyan; Cao, Long
2016-01-01
This paper proposes a novel quantum-behaved bat algorithm with the direction of the mean best position (QMBA). In QMBA, the position of each bat is mainly updated by the current optimal solution in the early stage of the search, while in the late stage it also depends on the mean best position, which enhances the convergence speed of the algorithm. During the search, quantum behavior of the bats is introduced, which helps them jump out of local optima rather than become trapped in them, and gives the algorithm a better ability to adapt to complex environments. Meanwhile, QMBA makes good use of the statistical information of the best positions the bats have experienced to generate better-quality solutions. This approach not only inherits the quick convergence, simplicity, and easy implementation of the original bat algorithm, but also increases the diversity of the population and improves the accuracy of the solution. Twenty-four benchmark test functions are tested and compared with other variant bat algorithms for numerical optimization. The simulation results show that this approach is simple and efficient and can achieve a more accurate solution. PMID:27293424
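A generic sketch of a quantum-behaved position update of the kind described above: a QPSO-style local attractor between the personal and global best, plus a log-scaled jump whose width is measured from the mean best position. This is a simplified illustration under stated assumptions, not the exact QMBA update rule:

```python
import math
import random

def quantum_update(x, pbest, gbest, mbest, beta, rng):
    """One quantum-behaved position update (1-D): move to a random convex
    combination of the personal and global best (the local attractor),
    then jump by a distance scaled by |mbest - x| and a log factor, in a
    random direction.  beta is the contraction-expansion coefficient."""
    phi = rng.random()
    p = phi * pbest + (1.0 - phi) * gbest   # local attractor
    u = 1.0 - rng.random()                  # u in (0, 1], avoids log(1/0)
    jump = beta * abs(mbest - x) * math.log(1.0 / u)
    return p + jump if rng.random() < 0.5 else p - jump
```

Because the jump width shrinks as the population's mean best contracts toward the optimum, early iterations explore broadly while late iterations refine, which is the convergence behavior the abstract attributes to the mean-best-position term.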
Burko, Lior M.; Baumgarte, Thomas W.; Beetle, Christopher
2006-01-15
Beetle and Burko recently introduced a background-independent scalar curvature invariant for general relativity that carries information about the gravitational radiation in generic spacetimes, in cases where such radiation is incontrovertibly defined. In this paper we adopt a formalism that only uses spatial data as they are used in numerical relativity and compute the Beetle-Burko radiation scalar for a number of analytical examples, specifically linearized Einstein-Rosen cylindrical waves, linearized quadrupole waves, the Kerr spacetime, Bowen-York initial data, and the Kasner spacetime. These examples illustrate how the Beetle-Burko radiation scalar can be used to examine the gravitational wave content of numerically generated spacetimes, and how it may provide a useful diagnostic for initial data sets.
Bu, Sunyoung; Huang, Jingfang; Boyer, Treavor H.; Miller, Cass T.
2010-01-01
The focus of this work is on the modeling of an ion exchange process that occurs in drinking water treatment applications. The model formulation consists of a two-scale model in which a set of microscale diffusion equations representing ion exchange resin particles that vary in size and age are coupled through a boundary condition with a macroscopic ordinary differential equation (ODE), which represents the concentration of a species in a well-mixed reactor. We introduce a new age-averaged model (AAM) that averages all ion exchange particle ages for a given size particle to avoid the expensive Monte-Carlo simulation associated with previous modeling applications. We discuss two different numerical schemes to approximate both the original Monte Carlo algorithm and the new AAM for this two-scale problem. The first scheme is based on the finite element formulation in space coupled with an existing backward-difference-formula-based ODE solver in time. The second scheme uses an integral equation based Krylov deferred correction (KDC) method and a fast elliptic solver (FES) for the resulting elliptic equations. Numerical results are presented to validate the new AAM algorithm, which is also shown to be more computationally efficient than the original Monte Carlo algorithm. We also demonstrate that the higher order KDC scheme is more efficient than the traditional finite element solution approach and this advantage becomes increasingly important as the desired accuracy of the solution increases. We also discuss issues of smoothness, which affect the efficiency of the KDC-FES approach, and outline additional algorithmic changes that would further improve the efficiency of these developing methods for a wide range of applications. PMID:20577570
NASA Technical Reports Server (NTRS)
Gunzburger, M. D.; Nicolaides, R. A.
1986-01-01
Substructuring methods are in common use in mechanics problems where typically the associated linear systems of algebraic equations are positive definite. Here these methods are extended to problems which lead to nonpositive definite, nonsymmetric matrices. The extension is based on an algorithm which carries out the block Gauss elimination procedure without the need for interchanges even when a pivot matrix is singular. Examples are provided wherein the method is used in connection with finite element solutions of the stationary Stokes equations and the Helmholtz equation, and dual methods for second-order elliptic equations.
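The block Gauss elimination underlying substructuring can be sketched via the Schur complement. Note that this simple version assumes the pivot block A is invertible, whereas the paper's algorithm specifically handles singular pivot blocks without interchanges:

```python
import numpy as np

def block_solve(A, B, C, D, f, g):
    """Solve the block system [[A, B], [C, D]] [x; y] = [f; g] by
    eliminating the first block: form the Schur complement S = D - C A^-1 B,
    solve for y, then back-substitute for x.  The overall matrix may be
    nonsymmetric and indefinite; only A (and S) must be invertible here."""
    Ainv_B = np.linalg.solve(A, B)
    Ainv_f = np.linalg.solve(A, f)
    S = D - C @ Ainv_B                      # Schur complement
    y = np.linalg.solve(S, g - C @ Ainv_f)  # interface/second-block solve
    x = Ainv_f - Ainv_B @ y                 # back-substitution
    return x, y
```

In substructuring, A collects the interior unknowns of the substructures and S couples only the interface unknowns, which is why the elimination pays off: the interface system is much smaller than the original one.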
New Concepts in Breast Cancer Emerge from Analyzing Clinical Data Using Numerical Algorithms
Retsky, Michael
2009-01-01
A small international group has recently challenged fundamental concepts in breast cancer. As a guiding principle in therapy, it has long been assumed that breast cancer growth is continuous. However, this group suggests tumor growth commonly includes extended periods of quasi-stable dormancy. Furthermore, surgery to remove the primary tumor often awakens distant dormant micrometastases. Accordingly, over half of all relapses in breast cancer are accelerated in this manner. This paper describes how a numerical algorithm was used to come to these conclusions. Based on these findings, a dormancy preservation therapy is proposed. PMID:19440287
NASA Technical Reports Server (NTRS)
Shia, Run-Lie; Ha, Yuk Lung; Wen, Jun-Shan; Yung, Yuk L.
1990-01-01
Extensive testing of the advective scheme proposed by Prather (1986) has been carried out in support of the California Institute of Technology-Jet Propulsion Laboratory two-dimensional model of the middle atmosphere. The original scheme is generalized to include higher-order moments. In addition, it is shown how well the scheme works in the presence of chemistry as well as eddy diffusion. Six types of numerical experiments including simple clock motion and pure advection in two dimensions have been investigated in detail. By comparison with analytic solutions, it is shown that the new algorithm can faithfully preserve concentration profiles, has essentially no numerical diffusion, and is superior to a typical fourth-order finite difference scheme.
A Numerical Algorithm for Complex Biological Flow in Irregular Microdevice Geometries
Nonaka, A; Miller, G H; Marshall, T; Liepmann, D; Gulati, S; Trebotich, D; Colella, P
2003-12-15
We present a numerical algorithm to simulate non-Newtonian flow in complex microdevice components. The model consists of continuum viscoelastic incompressible flow in irregular microscale geometries. Our numerical approach is the projection method of Bell, Colella and Glaz (BCG) to impose the incompressibility constraint coupled with the polymeric stress splitting discretization of Trebotich, Colella and Miller (TCM). In this approach we exploit the hyperbolic structure of the equations of motion to achieve higher resolution in the presence of strong gradients and to gain an order of magnitude in the timestep. We also extend BCG and TCM to an embedded boundary method to treat irregular domain geometries which exist in microdevices. Our method allows for particle representation in a continuum fluid. We present preliminary results for incompressible viscous flow with comparison to flow of DNA and simulants in microchannels and other components used in chem/bio microdevices.
Shia, R L; Ha, Y L; Wen, J S; Yung, Y L
1990-05-20
Extensive testing of the advective scheme, proposed by Prather (1986), has been carried out in support of the California Institute of Technology-Jet Propulsion Laboratory two-dimensional model of the middle atmosphere. We generalize the original scheme to include higher-order moments. In addition, we show how well the scheme works in the presence of chemistry as well as eddy diffusion. Six types of numerical experiments including simple clock motion and pure advection in two dimensions have been investigated in detail. By comparison with analytic solutions it is shown that the new algorithm can faithfully preserve concentration profiles, has essentially no numerical diffusion, and is superior to a typical fourth-order finite difference scheme.
NASA Astrophysics Data System (ADS)
García, Hermes A.; Guerrero-Bolaño, Francisco J.; Obregón-Neira, Nelson
2010-05-01
Due to both mathematical tractability and efficiency in the use of computational resources, it is very common in numerical modeling for hydro-engineering to find that regular linearization techniques have been applied to the nonlinear partial differential equations that arise in environmental flow studies. Sometimes this simplification is accompanied by omission of the nonlinear terms in such equations, which in turn diminishes the performance of the implemented approach. This is the case, for example, in contaminant transport modeling in streams. Even today, QUAL2k, a traditional and widely used water quality model, preserves its original algorithm, which omits nonlinear terms through linearization, in spite of continuous algorithmic development and growth in computer power. For that reason, the main objective of this research was to generate a flexible tool for nonlinear water quality modeling. The solution implemented here was based on two genetic algorithms, used in a nested way in order to find two different types of solution sets. The first set is composed of the concentrations of the physical-chemical variables used in the modeling approach (16 variables), which satisfy the nonlinear equation system. The second set is the typical solution of the inverse problem: the parameter and constant values for the model when it is applied to a particular stream. Of the sixteen (16) variables, thirteen (13) were modeled using nonlinear coupled equation systems and three (3) were modeled independently. The model used here required fifty (50) parameters. The nested genetic algorithm used for the numerical solution of the nonlinear equation system proved to be a flexible tool for handling the intrinsic nonlinearity that emerges from the interactions among the multiple variables involved in water quality studies. However, because there is a strong data limitation in
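The nested arrangement described above (an inner GA that solves the nonlinear system for fixed parameters, an outer GA that fits the parameters to observations) can be sketched in miniature. Everything below is hypothetical: a one-variable stand-in for the 16-variable system, a single rate parameter in place of the model's fifty, and invented names (`ga_minimize`, `solve_concentrations`).

```python
import numpy as np

rng = np.random.default_rng(1)

def ga_minimize(f, bounds, pop=20, gens=30):
    """Minimal real-coded GA: truncation selection, blend crossover, Gaussian mutation."""
    lo, hi = np.array(bounds, dtype=float).T
    X = rng.uniform(lo, hi, size=(pop, len(bounds)))
    best_x, best_f = X[0], np.inf
    for _ in range(gens):
        fit = np.array([f(x) for x in X])
        i = int(np.argmin(fit))
        if fit[i] < best_f:
            best_x, best_f = X[i].copy(), fit[i]
        elite = X[np.argsort(fit)[: pop // 2]]
        parents = elite[rng.integers(0, len(elite), size=(pop, 2))]
        w = rng.random((pop, 1))
        X = w * parents[:, 0] + (1 - w) * parents[:, 1]                    # crossover
        X = np.clip(X + rng.normal(0, 0.05 * (hi - lo), X.shape), lo, hi)  # mutation
    return best_x

def residual(c, k):
    return abs(c - np.exp(-k * c) - 0.5)       # toy nonlinear balance f(c; k) = 0

def solve_concentrations(k):                   # inner GA: solve the system for given k
    return ga_minimize(lambda x: residual(x[0], k), [(0.0, 2.0)])[0]

c_obs = solve_concentrations(0.8)              # synthetic "measurement"
k_fit = ga_minimize(lambda p: (solve_concentrations(p[0]) - c_obs) ** 2,
                    [(0.1, 2.0)])[0]           # outer GA: inverse problem for k
```

The outer objective calls the inner solver once per candidate parameter set, which is exactly the cost structure the abstract's nested design implies.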
Mathematical simulation of soil vapor extraction systems: Model development and numerical examples
NASA Astrophysics Data System (ADS)
Rathfelder, Klaus; Yeh, William W.-G.; Mackay, Douglas
1991-12-01
This paper describes the development of a numerical model for prediction of soil vapor extraction processes. The major emphasis is placed on field-scale predictions, with the objective of advancing the development of planning tools for the design and operation of venting systems. The numerical model solves two-dimensional flow and transport equations for general n-component contaminant mixtures. Flow is limited to the gas phase, and local equilibrium partitioning is assumed in tracking contaminants in the immiscible fluid, water, gas, and solid phases. Model predictions compared favorably with analytical solutions and multicomponent column venting experiments. Sensitivity analysis indicates equilibrium phase partitioning is a good assumption in modeling the organic liquid volatilization occurring in field venting operations. Mass transfer rates in volatilization from the water phase and contaminant desorption are potentially rate limiting. Simulations of hypothetical field-scale problems show that the efficiency of venting operations is most sensitive to vapor pressure and to the magnitude and distribution of soil permeability.
Localized vortices in a nonlinear shallow water model: examples and numerical experiments
NASA Astrophysics Data System (ADS)
Beisel, S. A.; Tolchennikov, A. A.
2016-06-01
Exact solutions of the system of nonlinear shallow water equations on a paraboloid are constructed by the method of group analysis. These solutions describe fast wave motion of the fluid layer and slow evolution of symmetric localized vortices. Explicit formulae are obtained for the asymptotic solution related to the linear shallow water approximation. Numerical methods are used to model the trajectory of the vortex center in the case of asymmetric vortices.
Pneumatic pulsator design as an example of numerical simulations in engineering applications
NASA Astrophysics Data System (ADS)
Wołosz, Krzysztof; Wernik, Jacek
2012-03-01
The paper presents the part of the investigation that has been carried out in order to develop a pneumatic pulsator to be employed as an unblocking device at loose material silo outlets. The numerical simulation part is reported. The fluid dynamics issues present during supersonic airflow through the head of the pulsator are outlined. These issues describe the pneumatic impact phenomenon onto the loose material bed in the silo, to whose walls the pulsator is assembled. The investigation presented in the paper is industrially applicable, and its result is a working prototype of an industrial pneumatic pulsator. The numerical simulation has led to a change in the shape of the piston moving inside the head of the pulsator and, therefore, to a reduction of the pressure losses during the airflow. A stress analysis of the pulsator controller body was carried out as part of the numerical simulation stage of the project. This analysis made it possible to change the controller body material from cast iron to aluminium alloy.
Cloud classification from satellite data using a fuzzy sets algorithm: A polar example
NASA Technical Reports Server (NTRS)
Key, J. R.; Maslanik, J. A.; Barry, R. G.
1988-01-01
Where spatial boundaries between phenomena are diffuse, classification methods which construct mutually exclusive clusters seem inappropriate. The Fuzzy c-means (FCM) algorithm assigns each observation to all clusters, with membership values as a function of distance to the cluster center. The FCM algorithm is applied to AVHRR data for the purpose of classifying polar clouds and surfaces. Careful analysis of the fuzzy sets can provide information on which spectral channels are best suited to the classification of particular features, and can help determine likely areas of misclassification. General agreement in the resulting classes and cloud fraction was found between the FCM algorithm, a manual classification, and an unsupervised maximum likelihood classifier.
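The membership rule summarized above (every observation belongs to every cluster, with weight decaying with distance to the cluster center) is the standard FCM update. A minimal NumPy sketch of that update, not the authors' AVHRR processing code:

```python
import numpy as np

def fcm(X, c, m=2.0, iters=100, seed=0):
    """Fuzzy c-means: returns soft memberships U (rows sum to 1) and cluster centers."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        W = U ** m                                    # fuzzified memberships
        centers = (W.T @ X) / W.sum(axis=0)[:, None]  # weighted means
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # u_ik proportional to d_ik^(-2/(m-1)), normalized over the c clusters
        U = d ** (-2 / (m - 1))
        U /= U.sum(axis=1, keepdims=True)
    return U, centers
```

The fuzzifier m controls how quickly membership falls off with distance; diffuse cloud/surface boundaries are exactly the case where these graded memberships carry more information than a hard assignment.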
Cloud classification from satellite data using a fuzzy sets algorithm - A polar example
NASA Technical Reports Server (NTRS)
Key, J. R.; Maslanik, J. A.; Barry, R. G.
1989-01-01
Where spatial boundaries between phenomena are diffuse, classification methods which construct mutually exclusive clusters seem inappropriate. The Fuzzy c-means (FCM) algorithm assigns each observation to all clusters, with membership values as a function of distance to the cluster center. The FCM algorithm is applied to AVHRR data for the purpose of classifying polar clouds and surfaces. Careful analysis of the fuzzy sets can provide information on which spectral channels are best suited to the classification of particular features, and can help determine likely areas of misclassification. General agreement in the resulting classes and cloud fraction was found between the FCM algorithm, a manual classification, and an unsupervised maximum likelihood classifier.
NASA Astrophysics Data System (ADS)
Dong, Suchuan
2015-11-01
This talk focuses on simulating the motion of a mixture of N (N>=2) immiscible incompressible fluids with given densities, dynamic viscosities and pairwise surface tensions. We present an N-phase formulation within the phase field framework that is thermodynamically consistent, in the sense that the formulation satisfies the conservations of mass/momentum, the second law of thermodynamics and Galilean invariance. We also present an efficient algorithm for numerically simulating the N-phase system. The algorithm has overcome the issues caused by the variable coefficient matrices associated with the variable mixture density/viscosity and the couplings among the (N-1) phase field variables and the flow variables. We compare simulation results with the Langmuir-de Gennes theory to demonstrate that the presented method produces physically accurate results for multiple fluid phases. Numerical experiments will be presented for several problems involving multiple fluid phases, large density contrasts and large viscosity contrasts to demonstrate the capabilities of the method for studying the interactions among multiple types of fluid interfaces. Support from NSF and ONR is gratefully acknowledged.
Recent examples of mesoscale numerical forecasts of severe weather events along the east coast
NASA Technical Reports Server (NTRS)
Kocin, P. J.; Uccellini, L. W.; Zack, J. W.; Kaplan, M. L.
1984-01-01
Mesoscale numerical forecasts utilizing the Mesoscale Atmospheric Simulation System (MASS) are documented for two East Coast severe weather events. The two events are the thunderstorm and heavy snow bursts in the Washington, D.C. - Baltimore, MD region on 8 March 1984 and the devastating tornado outbreak across North and South Carolina on 28 March 1984. The forecasts are presented to demonstrate the ability of the model to simulate dynamical interactions and diabatic processes and to note some of the problems encountered when using mesoscale models for day-to-day forecasting.
On the complexity of classical and quantum algorithms for numerical problems in quantum mechanics
NASA Astrophysics Data System (ADS)
Bessen, Arvid J.
Our understanding of complex quantum mechanical processes is limited by our inability to solve the equations that govern them except in simple cases. Numerical simulation of quantum systems appears to be our best option to understand, design and improve quantum systems. It turns out, however, that computational problems in quantum mechanics are notoriously difficult to treat numerically. The computational time that is required often scales exponentially with the size of the problem. One of the most radical approaches for treating quantum problems was proposed by Feynman in 1982 [46]: he suggested that quantum mechanics itself offered a promising way to simulate quantum physics. This idea, the so-called quantum computer, showed its potential convincingly in one important regime with the development of Shor's integer factorization algorithm, which improves exponentially on the best known classical algorithm. In this thesis we explore six different computational problems from quantum mechanics, study their computational complexity, and try to find ways to remedy it. In the first problem we investigate the reasons behind the improved performance of Shor's and similar algorithms. We show that the key quantum part in Shor's algorithm, the quantum phase estimation algorithm, achieves its good performance through the use of power queries, and we give lower bounds for all phase estimation algorithms that use power queries that match the known upper bounds. Our research indicates that problems that allow the use of power queries will achieve similar exponential improvements over classical algorithms. We then apply our lower bound technique for power queries to the Sturm-Liouville eigenvalue problem and show matching lower bounds to the upper bounds of Papageorgiou and Wozniakowski [85]. It seems to be very difficult, though, to find nontrivial instances of the Sturm-Liouville problem for which power queries can be simulated efficiently. A quantum computer differs from a
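A power query applies U^(2^k) as a single oracle call, which is what lets phase estimation read off one phase bit per measurement. As an illustrative classical simulation (not from the thesis), the Kitaev-style iterative procedure below recovers an eigenphase that is exactly representable in `bits` binary digits, one power query per bit:

```python
import math

def iterative_phase_estimation(phi, bits):
    """Classically simulate iterative phase estimation with power queries.

    For an eigenphase phi with exactly `bits` binary digits, the Hadamard-test
    outcome at each step is deterministic, so each power query yields one bit."""
    tail = 0.0                                         # recovered bits 0.b_{j+1}...b_n
    for j in range(bits, 0, -1):
        x = (2 ** (j - 1) * phi) % 1.0                 # phase seen by the query U^(2^(j-1))
        p1 = math.sin(math.pi * (x - tail / 2)) ** 2   # Pr(measure 1) after phase feedback
        bit = 1 if p1 > 0.5 else 0                     # here p1 is exactly 0 or 1
        tail = bit / 2 + tail / 2                      # prepend the new bit
    return tail
```

For example, `iterative_phase_estimation(0.625, 3)` recovers 0.625 (binary 0.101) with three queries, in contrast to the many repetitions a single-power oracle would need for the same precision.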
González, J M; Whitman, W B; Hodson, R E; Moran, M A
1996-01-01
Culturable bacteria that were numerically important members of a marine enrichment community were identified and characterized phylogenetically. Selective and nonselective isolation methods were used to obtain 133 culturable bacterial isolates from model marine communities enriched with the high-molecular-weight (lignin-rich) fraction of pulp mill effluent. The culture collection was screened against community DNA from the lignin enrichments by whole-genome hybridization methods, and three marine bacterial isolates were identified as being numerically important in the communities. One isolate was in the alpha-subclass of Proteobacteria, and the other two were in the gamma-subclass of Proteobacteria. Isolate-specific 16S rRNA oligonucleotide probes designed to precisely quantify the isolates in the lignin enrichment communities indicated contributions ranging from 2 to 32% of enrichment DNA, values nearly identical to those originally obtained by the simpler whole-genome hybridization method. Two 16S rRNA sequences closely related to that of one of the isolates, although not identical, were amplified via PCR from the seawater sample originally used to inoculate the enrichment medium. Partial sequences of 14 other isolates revealed significant phylogenetic diversity and unusual sequences among the culturable lignin enrichment bacteria, with the Proteobacteria, Cytophaga-Flavobacterium, and gram-positive groups represented. PMID:8953714
Brown, Benjamin; Williams, Richard; Sperrin, Matthew; Frank, Timothy; Ainsworth, John; Buchan, Iain
2014-01-01
Despite widespread use of clinical guidelines, actual care often falls short of ideal standards. Electronic health records (EHR) can be analyzed to provide information on how to improve care, but this is seldom done in sufficient detail to guide specific action. We developed an algorithm to provide practical, actionable information for care quality improvement using blood pressure (BP) management in chronic kidney disease (CKD) as an exemplar. We used UK clinical guidelines and EHR data from 440 patients in Salford (UK) to develop the algorithm. We then applied it to 532,409 individual patient records, identifying 11,097 CKD patients, 3,766 (34%) of which showed room for improvement in their care: either through medication optimization or better BP monitoring. Manual record reviews to evaluate accuracy indicated a positive-predictive value of 90%. Such algorithms could help improve the management of chronic conditions by providing the missing link between clinical audit and decision support. PMID:25954337
Studies of numerical algorithms for gyrokinetics and the effects of shaping on plasma turbulence
NASA Astrophysics Data System (ADS)
Belli, Emily Ann
Advanced numerical algorithms for gyrokinetic simulations are explored for more effective studies of plasma turbulent transport. The gyrokinetic equations describe the dynamics of particles in 5-dimensional phase space, averaging over the fast gyromotion, and provide a foundation for studying plasma microturbulence in fusion devices and in astrophysical plasmas. Several algorithms for Eulerian/continuum gyrokinetic solvers are compared. An iterative implicit scheme based on numerical approximations of the plasma response is developed. This method reduces the long time needed to set up implicit arrays, yet still retains the larger time step advantages of a fully implicit method. Various model preconditioners and iteration schemes, including Krylov-based solvers, are explored. An Alternating Direction Implicit algorithm is also studied and is surprisingly found to yield a severe stability restriction on the time step. Overall, an iterative Krylov algorithm might be the best approach for extensions of core tokamak gyrokinetic simulations to edge kinetic formulations and may be particularly useful for studies of large-scale ExB shear effects. The effects of flux surface shape on the gyrokinetic stability and transport of tokamak plasmas are studied using the nonlinear GS2 gyrokinetic code with analytic equilibria based on interpolations of representative JET-like shapes. High shaping is found to be a stabilizing influence on both the linear ITG instability and nonlinear ITG turbulence. A scaling of the heat flux with elongation of χ ∼ κ^-1.5 or κ^-2 (depending on the triangularity) is observed, which is consistent with previous gyrofluid simulations. Thus, the GS2 turbulence simulations are explaining a significant fraction, but not all, of the empirical elongation scaling. The remainder of the scaling may come from (1) the edge boundary conditions for core turbulence, and (2) the larger Dimits nonlinear critical temperature gradient shift due to the
Numerical modelling of agricultural products on the example of bean and yellow lupine seeds
NASA Astrophysics Data System (ADS)
Anders, Andrzej; Kaliniewicz, Zdzisław; Markowski, Piotr
2015-10-01
Numerical models of bean seeds cv. Złota Saxa and yellow lupine seeds cv. Juno were generated with the use of a 3D scanner, the geometric parameters of seeds were determined based on the models developed, and compared with the results of digital image analysis and micrometer measurements. Measurements of seed length, width and thickness performed with the use of a micrometer, 3D scanner and digital image analysis produced similar results that did not differ significantly at α = 0.05. The micrometer delivered the simplest and fastest measurements. The mean surface area of bean seeds cv. Złota Saxa and yellow lupine seeds cv. Juno, calculated with the use of mathematical formulas based on the results of micrometer measurements and digital image analysis, differed significantly from the mean surface area determined with a 3D scanner. No significant differences in seed volume were observed when this parameter was measured with a 3D scanner and determined with the use of mathematical formulas based on the results of digital image analysis and micrometer measurements. The only differences were noted when the volume of yellow lupine seeds cv. Juno was measured in a 25 ml liquid pycnometer.
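The abstract does not state which mathematical formulas were applied to the micrometer measurements. A common choice for seed-shaped bodies, given here as an assumption rather than the authors' actual formulas, treats the measured length, width and thickness as the axes of a triaxial ellipsoid:

```python
import math

def seed_geometry(length, width, thickness):
    """Surface area and volume of a triaxial ellipsoid whose axes are the
    measured seed length, width and thickness (semi-axes a, b, c).

    Surface area uses Thomsen's approximation (p ~ 1.6075, relative error
    about 1% or less); volume is exact for an ellipsoid."""
    a, b, c = length / 2, width / 2, thickness / 2
    p = 1.6075
    S = 4 * math.pi * (((a * b) ** p + (a * c) ** p + (b * c) ** p) / 3) ** (1 / p)
    V = 4 / 3 * math.pi * a * b * c
    return S, V
```

A systematic gap between such formula-based values and 3D-scanner measurements, as reported above for surface area, is expected whenever real seeds depart from the assumed ellipsoidal shape.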
A new free-surface stabilization algorithm for geodynamical modelling: Theory and numerical tests
NASA Astrophysics Data System (ADS)
Andrés-Martínez, Miguel; Morgan, Jason P.; Pérez-Gussinyé, Marta; Rüpke, Lars
2015-09-01
The surface of the solid Earth is effectively stress free in its subaerial portions, and hydrostatic beneath the oceans. Unfortunately, this type of boundary condition is difficult to treat computationally, and for computational convenience, numerical models have often used simpler approximations that do not involve a normal stress-loaded, shear-stress free top surface that is free to move. Viscous flow models with a computational free surface typically confront stability problems when the time step is larger than the viscous relaxation time. The small time step required for stability (< 2 Kyr) makes this type of model computationally intensive, so there remains a need to develop strategies that mitigate the stability problem by making larger (at least ∼10 Kyr) time steps stable and accurate. Here we present a new free-surface stabilization algorithm for finite element codes which solves the stability problem by adding to the Stokes formulation an intrinsic penalization term equivalent to a portion of the future load at the surface nodes. Our algorithm is straightforward to implement and can be used with either Eulerian or Lagrangian grids. It includes α and β parameters to control the vertical and the horizontal slope-dependent penalization terms, respectively, and uses Uzawa-like iterations to solve the resulting system at a cost comparable to a non-stress free surface formulation. Four tests were carried out in order to study the accuracy and the stability of the algorithm: (1) a decaying first-order sinusoidal topography test, (2) a decaying high-order sinusoidal topography test, (3) a Rayleigh-Taylor instability test, and (4) a steep-slope test. For these tests, we investigate which α and β parameters give the best results in terms of both accuracy and stability. We also compare the accuracy and the stability of our algorithm with a similar implicit approach recently developed by Kaus et al. (2010). We find that our algorithm is slightly more accurate
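The penalization idea ("a portion of the future load at the surface nodes") can be written compactly. As a sketch only, following the form of the Kaus et al. (2010) approach compared against above (the paper's own α/β slope-dependent generalization is not reproduced here), the Stokes weak form gains a surface term of the type

```latex
% Free-surface stabilization term added to the Stokes weak form (sketch):
% theta in [0,1] weights the anticipated load, Delta-rho is the density jump
% across the free surface Gamma, g is gravity, u the velocity, v a test function.
\int_{\Gamma} \theta \,\Delta t \,\Delta\rho \, g \,
    (\mathbf{u}\cdot\hat{\mathbf{e}}_z)\,(\mathbf{v}\cdot\hat{\mathbf{e}}_z)
    \,\mathrm{d}\Gamma
```

i.e. the load that surface motion would deposit over a fraction θ of the time step is built into the stiffness matrix, damping the surface-relaxation instability without shrinking Δt.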
NASA Astrophysics Data System (ADS)
Li, Yiming
2007-12-01
This symposium is an open forum for discussion on the current trends and future directions of physical modeling, mathematical theory, and numerical algorithm in electrical and electronic engineering. The goal is for computational scientists and engineers, computer scientists, applied mathematicians, physicists, and researchers to present their recent advances and exchange experience. We welcome contributions from researchers of academia and industry. All papers to be presented in this symposium have carefully been reviewed and selected. They include semiconductor devices, circuit theory, statistical signal processing, design optimization, network design, intelligent transportation system, and wireless communication. Welcome to this interdisciplinary symposium in International Conference of Computational Methods in Sciences and Engineering (ICCMSE 2007). Look forward to seeing you in Corfu, Greece!
A numerical algorithm for optimal feedback gains in high dimensional LQR problems
NASA Technical Reports Server (NTRS)
Banks, H. T.; Ito, K.
1986-01-01
A hybrid method for computing the feedback gains in linear quadratic regulator problems is proposed. The method, which combines the use of a Chandrasekhar type system with an iteration of the Newton-Kleinman form with variable acceleration parameter Smith schemes, is formulated so as to efficiently compute the feedback gains directly rather than solutions of an associated Riccati equation. The hybrid method is particularly appropriate when used with large dimensional systems such as those arising in approximating infinite dimensional (distributed parameter) control systems (e.g., those governed by delay-differential and partial differential equations). Computational advantages of the proposed algorithm over standard eigenvector-based (Potter, Laub-Schur) techniques are discussed, and numerical evidence of the efficacy of our ideas is presented.
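The Newton-Kleinman ingredient of the hybrid method replaces the Riccati equation by a sequence of Lyapunov solves. A minimal SciPy sketch of that ingredient alone (the paper's hybrid additionally uses Chandrasekhar systems and accelerated Smith schemes, omitted here), checked against a direct Riccati solve on a small example:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, solve_continuous_are

def newton_kleinman(A, B, Q, R, K0=None, iters=25):
    """Newton-Kleinman iteration for the CARE  A'P + PA - P B R^{-1} B' P + Q = 0.

    Each step solves one Lyapunov equation for the closed loop A - B K and
    updates the gain K = R^{-1} B' P.  Requires A - B K0 stable (K0 = 0 works
    when A itself is stable)."""
    n = A.shape[0]
    K = np.zeros((B.shape[1], n)) if K0 is None else K0
    Rinv = np.linalg.inv(R)
    for _ in range(iters):
        Ac = A - B @ K
        # Solve  Ac' P + P Ac = -(Q + K' R K)
        P = solve_continuous_lyapunov(Ac.T, -(Q + K.T @ R @ K))
        K = Rinv @ B.T @ P
    return P, K

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # open-loop stable, so K0 = 0 is admissible
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.eye(1)
P, K = newton_kleinman(A, B, Q, R)
assert np.allclose(P, solve_continuous_are(A, B, Q, R), atol=1e-8)
```

For large systems the point of the paper's approach is to avoid forming P at all and iterate on the gains directly; the dense Lyapunov solve above is only viable at small dimension.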
NASA Astrophysics Data System (ADS)
Dong, S.
2015-02-01
We present a family of physical formulations, and a numerical algorithm, based on a class of general order parameters for simulating the motion of a mixture of N (N ⩾ 2) immiscible incompressible fluids with given densities, dynamic viscosities, and pairwise surface tensions. The N-phase formulations stem from a phase field model we developed in a recent work based on the conservations of mass/momentum, and the second law of thermodynamics. The introduction of general order parameters leads to an extremely strongly-coupled system of (N - 1) phase field equations. On the other hand, the general form enables one to compute the N-phase mixing energy density coefficients in an explicit fashion in terms of the pairwise surface tensions. We show that the increased complexity in the form of the phase field equations associated with general order parameters in actuality does not cause essential computational difficulties. Our numerical algorithm reformulates the (N - 1) strongly-coupled phase field equations for general order parameters into 2 (N - 1) Helmholtz-type equations that are completely de-coupled from one another. This leads to a computational complexity comparable to that for the simplified phase field equations associated with certain special choice of the order parameters. We demonstrate the capabilities of the method developed herein using several test problems involving multiple fluid phases and large contrasts in densities and viscosities among the multitude of fluids. In particular, by comparing simulation results with the Langmuir-de Gennes theory of floating liquid lenses we show that the method using general order parameters produces physically accurate results for multiple fluid phases.
Dong, S.
2015-02-15
We present a family of physical formulations, and a numerical algorithm, based on a class of general order parameters for simulating the motion of a mixture of N (N⩾2) immiscible incompressible fluids with given densities, dynamic viscosities, and pairwise surface tensions. The N-phase formulations stem from a phase field model we developed in a recent work based on the conservations of mass/momentum, and the second law of thermodynamics. The introduction of general order parameters leads to an extremely strongly-coupled system of (N−1) phase field equations. On the other hand, the general form enables one to compute the N-phase mixing energy density coefficients in an explicit fashion in terms of the pairwise surface tensions. We show that the increased complexity in the form of the phase field equations associated with general order parameters in actuality does not cause essential computational difficulties. Our numerical algorithm reformulates the (N−1) strongly-coupled phase field equations for general order parameters into 2(N−1) Helmholtz-type equations that are completely de-coupled from one another. This leads to a computational complexity comparable to that for the simplified phase field equations associated with certain special choice of the order parameters. We demonstrate the capabilities of the method developed herein using several test problems involving multiple fluid phases and large contrasts in densities and viscosities among the multitude of fluids. In particular, by comparing simulation results with the Langmuir–de Gennes theory of floating liquid lenses we show that the method using general order parameters produces physically accurate results for multiple fluid phases.
Numerical Arc Segmentation Algorithm for a Radio Conference-NASARC (version 4.0) technical manual
NASA Technical Reports Server (NTRS)
Whyte, Wayne A., Jr.; Heyward, Ann O.; Ponchak, Denise S.; Spence, Rodney L.; Zuzek, John E.
1988-01-01
The information contained in the NASARC (Version 4.0) Technical Manual and NASARC (Version 4.0) User's Manual relates to the Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC) software development through November 1, 1988. The Technical Manual describes the NASARC concept and the algorithms used to implement the concept. The User's Manual provides information on computer system considerations, installation instructions, description of input files, and program operation instructions. Significant revisions were incorporated in the Version 4.0 software over prior versions. These revisions have further enhanced the modeling capabilities of the NASARC procedure and provide improved arrangements of predetermined arcs within the geostationary orbits. Array dimensions within the software were structured to fit within the currently available 12 megabyte memory capacity of the International Frequency Registration Board (IFRB) computer facility. A piecewise approach to predetermined arc generation in NASARC (Version 4.0) allows worldwide planning problem scenarios to be accommodated within computer run time and memory constraints with enhanced likelihood and ease of solution.
Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC), version 4.0: User's manual
NASA Technical Reports Server (NTRS)
Whyte, Wayne A., Jr.; Heyward, Ann O.; Ponchak, Denise S.; Spence, Rodney L.; Zuzek, John E.
1988-01-01
The information in the NASARC (Version 4.0) Technical Manual (NASA-TM-101453) and NASARC (Version 4.0) User's Manual (NASA-TM-101454) relates to the state of Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC) software development through November 1, 1988. The Technical Manual describes the NASARC concept and the algorithms used to implement the concept. The User's Manual provides information on computer system considerations, installation instructions, description of input files, and program operation instructions. Significant revisions were incorporated in the Version 4.0 software over prior versions. These revisions have further enhanced the modeling capabilities of the NASARC procedure and provide improved arrangements of predetermined arcs within the geostationary orbit. Array dimensions within the software were structured to fit within the currently available 12-megabyte memory capacity of the International Frequency Registration Board (IFRB) computer facility. A piecewise approach to predetermined arc generation in NASARC (Version 4.0) allows worldwide planning problem scenarios to be accommodated within computer run time and memory constraints with enhanced likelihood and ease of solution.
Numerical Arc Segmentation Algorithm for a Radio Conference-NASARC, Version 2.0: User's Manual
NASA Technical Reports Server (NTRS)
Whyte, Wayne A., Jr.; Heyward, Ann O.; Ponchak, Denise S.; Spence, Rodney L.; Zuzek, John E.
1987-01-01
The information contained in the NASARC (Version 2.0) Technical Manual (NASA TM-100160) and the NASARC (Version 2.0) User's Manual (NASA TM-100161) relates to the state of the Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC) software development through October 16, 1987. The technical manual describes the NASARC concept and the algorithms which are used to implement it. The User's Manual provides information on computer system considerations, installation instructions, description of input files, and program operation instructions. Significant revisions have been incorporated in the Version 2.0 software over prior versions. These revisions have enhanced the modeling capabilities of the NASARC procedure while greatly reducing the computer run time and memory requirements. Array dimensions within the software have been structured to fit into the currently available 6-megabyte memory capacity of the International Frequency Registration Board (IFRB) computer facility. A piecewise approach to predetermined arc generation in NASARC (Version 2.0) allows worldwide scenarios to be accommodated within these memory constraints while at the same time reducing computer run time.
Numerical arc segmentation algorithm for a radio conference-NASARC (version 2.0) technical manual
NASA Technical Reports Server (NTRS)
Whyte, Wayne A., Jr.; Heyward, Ann O.; Ponchak, Denise S.; Spence, Rodney L.; Zuzek, John E.
1987-01-01
The information contained in the NASARC (Version 2.0) Technical Manual (NASA TM-100160) and NASARC (Version 2.0) User's Manual (NASA TM-100161) relates to the state of NASARC software development through October 16, 1987. The Technical Manual describes the Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC) concept and the algorithms used to implement the concept. The User's Manual provides information on computer system considerations, installation instructions, description of input files, and program operating instructions. Significant revisions have been incorporated in the Version 2.0 software. These revisions have enhanced the modeling capabilities of the NASARC procedure while greatly reducing the computer run time and memory requirements. Array dimensions within the software have been structured to fit within the currently available 6-megabyte memory capacity of the International Frequency Registration Board (IFRB) computer facility. A piecewise approach to predetermined arc generation in NASARC (Version 2.0) allows worldwide scenarios to be accommodated within these memory constraints while at the same time effecting an overall reduction in computer run time.
Transient dynamics of terrestrial carbon storage: Mathematical foundation and numeric examples
Luo, Yiqi; Shi, Zheng; Lu, Xingjie; Xia, Jianyang; Liang, Junyi; Wang, Ying; Smith, Matthew J.; Jiang, Lifen; Ahlstrom, Anders; Chen, Benito; et al
2016-09-16
Terrestrial ecosystems have absorbed roughly 30% of anthropogenic CO2 emissions since the preindustrial era, but it is unclear whether this carbon (C) sink will endure into the future. Despite extensive modeling, experimental, and observational studies, what fundamentally determines the transient dynamics of terrestrial C storage under climate change is still not clear. Here we develop a new framework for understanding transient dynamics of terrestrial C storage through mathematical analysis and numerical experiments. Our analysis indicates that the ultimate force driving ecosystem C storage change is the C storage capacity, which is jointly determined by ecosystem C input (e.g., net primary production, NPP) and residence time. Since both C input and residence time vary with time, the C storage capacity is time-dependent and acts as a moving attractor that actual C storage chases. The rate of change in C storage is proportional to the C storage potential, the difference between the current storage and the storage capacity. The C storage capacity represents instantaneous responses of the land C cycle to external forcing, whereas the C storage potential represents the internal capability of the land C cycle to influence the C change trajectory in the next time step. The influence happens through redistribution of net C pool changes in a network of pools with different residence times. Furthermore, this and our other studies have demonstrated that one matrix equation can exactly replicate simulations of most land C cycle models (i.e., physical emulators). As a result, simulation outputs of those models can be placed into a three-dimensional (3D) parameter space to measure their differences. The latter can be decomposed into traceable components to track the origins of model uncertainty. Moreover, the emulators make data assimilation computationally feasible so that both C flux- and pool-related datasets can be used to better constrain model predictions of land C
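The core relation described above — the rate of change in C storage is proportional to the storage potential, with the capacity NPP × residence time acting as a moving attractor — can be sketched for a single pool. This is a hypothetical minimal case with illustrative names; the paper's framework covers networks of pools with different residence times.

```python
import numpy as np

def simulate_storage(npp, tau, x0, dt=1.0):
    """One-pool sketch: dX/dt = NPP - X/tau = (Xc - X)/tau, where the
    storage capacity Xc = NPP * tau is the moving attractor and Xc - X
    is the storage potential that sets the rate of change."""
    x = np.empty(len(npp) + 1)
    x[0] = x0
    for i, (u, t) in enumerate(zip(npp, tau)):
        capacity = u * t                       # Xc(t), the moving attractor
        potential = capacity - x[i]            # storage potential
        x[i + 1] = x[i] + dt * potential / t   # explicit Euler update
    return x
```

With constant NPP and residence time the simulated storage relaxes toward a fixed capacity; making either input time-varying turns the capacity into the moving attractor that actual storage chases.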
Sharing geoscience algorithms in a Web service-oriented environment (GRASS GIS example)
NASA Astrophysics Data System (ADS)
Li, Xiaoyan; Di, Liping; Han, Weiguo; Zhao, Peisheng; Dadi, Upendra
2010-08-01
Effective use of the large amounts of geospatial data available for geospatial research and applications is needed. In this paper, the emerging SOAP-based Web service technologies have been used to develop a large number of standard compliant, chainable geospatial Web services, using existing geospatial modules in software systems or specific geoscientific algorithms. A prototype for wrapping legacy software modules or geoscientific algorithms into loosely coupled Web services is proposed from an implementation viewpoint. Module development for Web services adheres to the Open GIS Consortium (OGC) geospatial implementation and the World Wide Web consortium (W3C) standards. The Web service interfaces are designed using Web Services Description Language (WSDL) documents. This paper presents how the granularity of an individual existing geospatial service module used by other geoscientific workflows is decided. A treatment of concurrence processes and clustered deployment of Web services is used to overcome multi-user access and network speed limit problems. This endeavor should allow extensive use of geoscientific algorithms and geospatial data.
An Implicit Algorithm for the Numerical Simulation of Shape-Memory Alloys
Becker, R; Stolken, J; Jannetti, C; Bassani, J
2003-10-16
Shape-memory alloys (SMA) have the potential to be used in a variety of interesting applications due to their unique properties of pseudoelasticity and the shape-memory effect. However, in order to design SMA devices efficiently, a physics-based constitutive model is required to accurately simulate the behavior of shape-memory alloys. The scope of this work is to extend the numerical capabilities of the SMA constitutive model developed by Jannetti et al. (2003) to handle large-scale polycrystalline simulations. The constitutive model is implemented within the finite-element software ABAQUS/Standard using a user-defined material subroutine, or UMAT. To improve the efficiency of the numerical simulations, so that polycrystalline specimens of shape-memory alloys can be modeled, a fully implicit algorithm has been implemented to integrate the constitutive equations. Using an implicit integration scheme increases the efficiency of the UMAT over the previously implemented explicit integration method by a factor of more than 100 for single-crystal simulations.
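The abstract does not give the SMA constitutive equations, so as an illustration only, the generic pattern of a fully implicit (backward Euler) integration driven by Newton iteration is sketched below, with a scalar relaxation law standing in for the real model:

```python
def backward_euler_step(f, dfdx, x_old, dt, tol=1e-10, max_iter=50):
    """One implicit (backward Euler) step for dx/dt = f(x): solve
    r(x) = x - x_old - dt*f(x) = 0 by Newton iteration, the same pattern
    used to integrate constitutive equations implicitly inside a UMAT."""
    x = x_old  # initial guess: previous state
    for _ in range(max_iter):
        r = x - x_old - dt * f(x)
        if abs(r) < tol:
            break
        x -= r / (1.0 - dt * dfdx(x))  # Newton update using dr/dx
    return x

# toy "constitutive" law standing in for the SMA model: dx/dt = -k*x
k = 5.0
x = 1.0
for _ in range(10):
    x = backward_euler_step(lambda s: -k * s, lambda s: -k, x, dt=0.1)
```

The implicit step stays stable for stiff rate equations at step sizes where an explicit update would diverge, which is the usual motivation for the efficiency gain the abstract reports.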
NASA Technical Reports Server (NTRS)
Barth, Timothy J.; Lomax, Harvard
1987-01-01
The past decade has seen considerable activity in algorithm development for the Navier-Stokes equations. This has resulted in a wide variety of useful new techniques. Some examples for the numerical solution of the Navier-Stokes equations are presented, divided into two parts. One is devoted to the incompressible Navier-Stokes equations, and the other to the compressible form.
Genetic algorithm as a correlation tool - speleothems stable isotope records example
NASA Astrophysics Data System (ADS)
Pawlak, J.; Hercman, H.
2012-04-01
The isotopic composition of oxygen and carbon in cave speleothems is a valuable source of paleoenvironmental information. The oxygen isotopic composition reflects the mean annual temperature in the cave area and the isotopic composition of the infiltrating water. The carbon isotopic composition reflects the degree of soil development and the vegetation type at the surface. Calcite from cave speleothems can usually be dated by the U-series method, but this method has limitations. One of the most critical is the purity of the analysed calcite: any detrital admixture introduces contamination by initial thorium, and the dating results are not reliable. In such a situation there is a problem with estimating the time scale of the isotopic data. Oxygen isotope stratigraphy of marine carbonate sediments is based on correlating the oxygen isotopic sequence from the studied profile with the global standard curve. A similar solution can be applied to isotopic profiles obtained from cave speleothems: any isotopic record can be correlated with a record whose age is well defined. Such correlations can be made on the basis of arbitrary decisions by the researcher, but that procedure may suffer from subjective evaluation. We therefore decided to develop a tool that enables the correlation of isotopic profiles. Cave speleothems grow at a variable crystallization rate, so similar stretches of time can be represented by sediments of varying thickness. The correlation of isotope curves consists of freely shifting the data points (in accordance with the rule of superposition) of the record with undetermined age relative to the record with a well-defined age. Each generated position is evaluated, and the best position is accepted as the true position. Such a procedure requires an algorithm that can efficiently search a large (almost infinite) set of possible positions. A genetic algorithm is a tool that can find the optimal solution in a set of large number of
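As a rough, hypothetical sketch of the correlation idea (not the authors' genetic algorithm), candidate time-scale mappings of the undated record can be scored against the dated reference by correlation. Here a grid search over a uniform growth rate and offset stands in for the GA's search over non-uniform, superposition-respecting shifts:

```python
import numpy as np

def align_record(undated, reference, stretches, offsets):
    """Score candidate time-scale mappings: undated sample i is placed at
    reference time t = offset + stretch * i, and the fit is the Pearson
    correlation with the dated reference interpolated at those times."""
    undated = np.asarray(undated, float)
    ref_t = np.arange(len(reference))
    best = None
    for a in stretches:
        for b in offsets:
            t = b + a * np.arange(len(undated))
            if t[0] < 0 or t[-1] > len(reference) - 1:
                continue  # mapping would run off the dated record
            resampled = np.interp(t, ref_t, reference)
            r = np.corrcoef(undated, resampled)[0, 1]
            if best is None or r > best[0]:
                best = (r, a, b)
    return best  # (correlation, stretch, offset)
```

A GA becomes attractive precisely when each data point may shift independently (subject only to superposition), which makes the space of positions far too large for this kind of exhaustive grid.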
Learning the behavior of Boolean circuits from examples using cultural algorithms
NASA Astrophysics Data System (ADS)
Reynolds, Robert G.; Sverdlik, William
1993-09-01
In this paper an approach to evolutionary learning based upon principles of cultural evolution is developed. In this dual-inheritance system, there is an evolving population of trait sequences as well as an associated belief space. The belief space is derived from the behavior of individuals and is used to actively constrain the traits acquired in future populations. Shifts in the representation of the belief space and the population are supported. The approach is used to solve several versions of the BOOLE problem: F6, F11, and F20. The results are compared with other approaches, and the advantages of a dual-inheritance approach using cultural algorithms are discussed.
NASA Astrophysics Data System (ADS)
Filo, Ján; Hundertmark-Zaušková, Anna
2016-10-01
The aim of this paper is to design a rescaling algorithm for the numerical solution of a system of two porous medium equations defined on two different components of the real line that are connected by a nonlinear contact condition. The algorithm is based on the self-similarity of solutions on different scales, and it provides a space-time adaptive method that produces a more accurate numerical solution near the interface between the components while the number of grid points stays fixed.
From bicycle chain ring shape to gear ratio: algorithm and examples.
van Soest, A J
2014-01-01
A simple model of the bicycle drive system with a non-circular front chain ring is proposed and an algorithm is devised for calculation of the corresponding Gear Ratio As a Function Of Crank Angle (GRAFOCA). It is shown that the true effective radius of the chain ring is always the perpendicular distance between the crank axis and the line through the chain segment between the chain ring and the cog. It is illustrated that the true effective radius of the chain ring at any crank angle may differ substantially from the maximum vertical distance between the crank axis and the chain ring circumference that is used as a proxy for the effective chain ring radius in several studies; in particular, the crank angle at which the effective chain ring radius is maximal as predicted from the latter approach may deviate by as much as 0.30 rad from the true value. The algorithm proposed may help in designing chain rings that achieve the desired GRAFOCA. PMID:24200338
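The paper's central definition — the true effective chain-ring radius is the perpendicular distance from the crank axis to the line through the taut chain segment — is straightforward to compute once that chain line is known. A minimal sketch follows; the function names and the assumption that two points on the chain line are already known are illustrative, not the paper's:

```python
import math

def perp_distance(p, a, b):
    """Perpendicular distance from point p to the infinite line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    num = abs((by - ay) * (px - ax) - (bx - ax) * (py - ay))
    return num / math.hypot(bx - ax, by - ay)

def gear_ratio(crank_axis, chain_pt_ring, chain_pt_cog, cog_radius):
    """GRAFOCA at one crank angle: the true effective chain-ring radius is
    the perpendicular distance from the crank axis to the taut chain line,
    and the gear ratio is that radius over the (circular) cog radius."""
    r_eff = perp_distance(crank_axis, chain_pt_ring, chain_pt_cog)
    return r_eff / cog_radius

# horizontal chain line 0.11 m above the crank axis, 0.03 m cog radius
ratio = gear_ratio((0.0, 0.0), (0.0, 0.11), (0.40, 0.11), 0.03)
```

Evaluating this over a full crank revolution, with the tangent points recomputed at each angle, yields the GRAFOCA curve for a non-circular ring.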
NASA Technical Reports Server (NTRS)
Platnick, Steven; Wind, Galina; Zhang, Zhibo; Ackerman, Steven A.; Maddux, Brent
2012-01-01
The optical and microphysical structure of warm boundary layer marine clouds is of fundamental importance for understanding a variety of cloud radiation and precipitation processes. With the advent of MODIS (Moderate Resolution Imaging Spectroradiometer) on the NASA EOS Terra and Aqua platforms, simultaneous global/daily 1km retrievals of cloud optical thickness and effective particle size are provided, as well as the derived water path. In addition, the cloud product (MOD06/MYD06 for MODIS Terra and Aqua, respectively) provides separate effective radii results using the 1.6, 2.1, and 3.7 μm spectral channels. Cloud retrieval statistics are highly sensitive to how a pixel identified as being "not-clear" by a cloud mask (e.g., the MOD35/MYD35 product) is determined to be useful for an optical retrieval based on a 1-D cloud model. The Collection 5 MODIS retrieval algorithm removed pixels associated with cloud edges as well as ocean pixels with partly cloudy elements in the 250m MODIS cloud mask - part of the so-called Clear Sky Restoral (CSR) algorithm. Collection 6 attempts retrievals for those two pixel populations, but allows a user to isolate or filter out the populations via CSR pixel-level Quality Assessment (QA) assignments. In this paper, using the preliminary Collection 6 MOD06 product, we present global and regional statistical results of marine warm cloud retrieval sensitivities to the cloud edge and 250m partly cloudy pixel populations. As expected, retrievals for these pixels are generally consistent with a breakdown of the 1-D cloud model. While optical thickness for these suspect pixel populations may have some utility for radiative studies, the retrievals should be used with extreme caution for process and microphysical studies.
NASA Astrophysics Data System (ADS)
Platnick, S.; Wind, G.; Zhang, Z.; Ackerman, S. A.; Maddux, B. C.
2012-12-01
The optical and microphysical structure of warm boundary layer marine clouds is of fundamental importance for understanding a variety of cloud radiation and precipitation processes. With the advent of MODIS (Moderate Resolution Imaging Spectroradiometer) on the NASA EOS Terra and Aqua platforms, simultaneous global/daily 1km retrievals of cloud optical thickness and effective particle size are provided, as well as the derived water path. In addition, the cloud product (MOD06/MYD06 for MODIS Terra and Aqua, respectively) provides separate effective radii results using the 1.6, 2.1, and 3.7 μm spectral channels. Cloud retrieval statistics are highly sensitive to how a pixel identified as being "not-clear" by a cloud mask (e.g., the MOD35/MYD35 product) is determined to be useful for an optical retrieval based on a 1-D cloud model. The Collection 5 MODIS retrieval algorithm removed pixels associated with cloud edges (defined by immediate adjacency to "clear" MOD/MYD35 pixels) as well as ocean pixels with partly cloudy elements in the 250m MODIS cloud mask - part of the so-called Clear Sky Restoral (CSR) algorithm. Collection 6 attempts retrievals for those two pixel populations, but allows a user to isolate or filter out the populations via CSR pixel-level Quality Assessment (QA) assignments. In this paper, using the preliminary Collection 6 MOD06 product, we present global and regional statistical results of marine warm cloud retrieval sensitivities to the cloud edge and 250m partly cloudy pixel populations. As expected, retrievals for these pixels are generally consistent with a breakdown of the 1D cloud model. While optical thickness for these suspect pixel populations may have some utility for radiative studies, the retrievals should be used with extreme caution for process and microphysical studies.
Biphasic indentation of articular cartilage--II. A numerical algorithm and an experimental study.
Mow, V C; Gibbs, M C; Lai, W M; Zhu, W B; Athanasiou, K A
1989-01-01
Part I (Mak et al., 1987, J. Biomechanics 20, 703-714) presented the theoretical solutions for the biphasic indentation of articular cartilage under creep and stress-relaxation conditions. In this study, using the creep solution, we developed an efficient numerical algorithm to compute all three material coefficients of cartilage in situ on the joint surface from the indentation creep experiment. With this method we determined the average values of the aggregate modulus, Poisson's ratio and permeability for young bovine femoral condylar cartilage in situ to be HA = 0.90 MPa, vs = 0.39 and k = 0.44 × 10^-15 m^4/N·s respectively, and those for patellar groove cartilage to be HA = 0.47 MPa, vs = 0.24, k = 1.42 × 10^-15 m^4/N·s. One surprising finding from this study is that the in situ Poisson's ratio of cartilage (0.13-0.45) may be much less than those determined from measurements performed on excised osteochondral plugs (0.40-0.49) reported in the literature. We also found the permeability of patellar groove cartilage to be several times higher than that of femoral condyle cartilage. These findings may have important implications for understanding the functional behavior of cartilage in situ and for methods used to determine the elastic moduli of cartilage using indentation experiments.
NASA Technical Reports Server (NTRS)
Whyte, W. A.; Heyward, A. O.; Ponchak, D. S.; Spence, R. L.; Zuzek, J. E.
1988-01-01
The Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC) provides a method of generating predetermined arc segments for use in the development of an allotment planning procedure to be carried out at the 1988 World Administrative Radio Conference (WARC) on the Use of the Geostationary Satellite Orbit and the Planning of Space Services Utilizing It. Through careful selection of the predetermined arc (PDA) for each administration, flexibility can be increased in terms of choice of system technical characteristics and specific orbit location while reducing the need for coordination among administrations. The NASARC software determines pairwise compatibility between all possible service areas at discrete arc locations. NASARC then exhaustively enumerates groups of administrations whose satellites can be closely located in orbit, and finds the arc segment over which each such compatible group exists. From the set of all possible compatible groupings, groups and their associated arc segments are selected using a heuristic procedure such that a PDA is identified for each administration. Various aspects of the NASARC concept and how the software accomplishes specific features of allotment planning are discussed.
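The grouping step described above — exhaustively enumerating sets of administrations whose members are pairwise compatible — amounts to finding maximal cliques in a compatibility graph. A toy sketch follows; the interface is hypothetical, real NASARC also tracks the arc segment over which each group exists, and exhaustive enumeration is viable only for small inputs:

```python
from itertools import combinations

def compatible_groups(admins, compatible):
    """Enumerate maximal pairwise-compatible groups (maximal cliques in the
    compatibility graph), largest first; NASARC would then select groups and
    their associated arc segments heuristically."""
    groups = []
    for r in range(len(admins), 0, -1):
        for combo in combinations(admins, r):
            # keep combo only if every pair is compatible...
            if all(compatible(a, b) for a, b in combinations(combo, 2)):
                # ...and it is not contained in a group already found
                if not any(set(combo) <= set(g) for g in groups):
                    groups.append(combo)
    return groups
```

Selecting one group per administration from this enumeration, so that every administration receives a predetermined arc, is the heuristic allotment step the abstract describes.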
ERIC Educational Resources Information Center
Gonzalez-Vega, Laureano
1999-01-01
Using a Computer Algebra System (CAS) to help with the teaching of an elementary course in linear algebra can be one way to introduce computer algebra, numerical analysis, data structures, and algorithms. Highlights the advantages and disadvantages of this approach to the teaching of linear algebra. (Author/MM)
2010-01-01
Background Ambulance response time is a crucial factor in patient survival. The number of emergency cases (EMS cases) requiring an ambulance is increasing due to changes in population demographics, and this is increasing ambulance response times to the emergency scene. This paper predicts EMS cases for 5-year intervals from 2020 to 2050 by correlating current EMS cases with demographic factors at the level of the census area and predicted population changes. It then applies a modified grouping genetic algorithm to compare current and future optimal locations and numbers of ambulances. Sets of potential locations were evaluated in terms of the (current and predicted) EMS case distances to those locations. Results Future EMS demand was predicted to increase by 2030 using the model (R2 = 0.71). The optimal locations of ambulances based on future EMS cases were compared with current locations and with optimal locations modelled on current EMS case data. Optimising the ambulance station locations reduced the average response time by 57 seconds. Current and predicted future EMS demand at modelled locations were calculated and compared. Conclusions The reallocation of ambulances to optimal locations improved response times and could contribute to higher survival rates from life-threatening medical events. Modelling EMS case 'demand' over census areas allows the data to be correlated with population characteristics and optimal 'supply' locations to be identified. Comparing current and future optimal scenarios allows more nuanced planning decisions to be made. This is a generic methodology that could be used to provide evidence in support of public health planning and decision making. PMID:20109172
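The objective the grouping genetic algorithm optimises can be sketched as a facility-location fitness: each EMS case is served by its nearest chosen station, and candidate station sets are scored by total case-to-station distance. The toy 1-D instance below is small enough to enumerate exhaustively; the GA replaces this enumeration at realistic scale, and all names and data here are illustrative:

```python
from itertools import combinations

def total_response_dist(stations, cases):
    """Fitness: each EMS case is served by its nearest station; sum distances."""
    return sum(min(abs(c - s) for s in stations) for c in cases)

def best_locations(candidates, cases, k):
    """Exhaustive stand-in for the grouping GA on a tiny 1-D instance:
    pick the k candidate sites minimising the fitness above."""
    return min(combinations(candidates, k),
               key=lambda subset: total_response_dist(subset, cases))

cases = [8, 9, 10, 11, 12, 88, 89, 90, 91, 92]   # two demand clusters
candidates = list(range(0, 101, 5))              # candidate station sites
best = best_locations(candidates, cases, 2)      # stations land near the clusters
```

Running the same search against predicted rather than current case locations is what lets current and future optimal scenarios be compared, as in the paper.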
Hard Data Analytics Problems Make for Better Data Analysis Algorithms: Bioinformatics as an Example
Widera, Paweł; Lazzarini, Nicola; Krasnogor, Natalio
2014-01-01
Data mining and knowledge discovery techniques have greatly progressed in the last decade. They are now able to handle larger and larger datasets, process heterogeneous information, integrate complex metadata, and extract and visualize new knowledge. Often these advances were driven by new challenges arising from real-world domains, with biology and biotechnology a prime source of diverse and hard (e.g., high volume, high throughput, high variety, and high noise) data analytics problems. The aim of this article is to show the broad spectrum of data mining tasks and challenges present in biological data, and how these challenges have driven us over the years to design new data mining and knowledge discovery procedures for biodata. This is illustrated with the help of two kinds of case studies. The first kind is focused on the field of protein structure prediction, where we have contributed in several areas: by designing, through regression, functions that can distinguish between good and bad models of a protein's predicted structure; by creating new measures to characterize aspects of a protein's structure associated with individual positions in a protein's sequence, measures containing information that might be useful for protein structure prediction; and by creating accurate estimators of these structural aspects. The second kind of case study is focused on omics data analytics, a class of biological data characterized by extremely high dimensionality. Our methods were able not only to generate very accurate classification models, but also to discover new biological knowledge that was later ratified by experimentalists. Finally, we describe several strategies to tightly integrate knowledge extraction and data mining in order to create a new class of biodata mining algorithms that can natively embrace the complexity of biological data, efficiently generate accurate information in the form of classification/regression models, and extract valuable
Hard Data Analytics Problems Make for Better Data Analysis Algorithms: Bioinformatics as an Example.
Bacardit, Jaume; Widera, Paweł; Lazzarini, Nicola; Krasnogor, Natalio
2014-09-01
Data mining and knowledge discovery techniques have greatly progressed in the last decade. They are now able to handle larger and larger datasets, process heterogeneous information, integrate complex metadata, and extract and visualize new knowledge. Often these advances were driven by new challenges arising from real-world domains, with biology and biotechnology a prime source of diverse and hard (e.g., high volume, high throughput, high variety, and high noise) data analytics problems. The aim of this article is to show the broad spectrum of data mining tasks and challenges present in biological data, and how these challenges have driven us over the years to design new data mining and knowledge discovery procedures for biodata. This is illustrated with the help of two kinds of case studies. The first kind is focused on the field of protein structure prediction, where we have contributed in several areas: by designing, through regression, functions that can distinguish between good and bad models of a protein's predicted structure; by creating new measures to characterize aspects of a protein's structure associated with individual positions in a protein's sequence, measures containing information that might be useful for protein structure prediction; and by creating accurate estimators of these structural aspects. The second kind of case study is focused on omics data analytics, a class of biological data characterized by extremely high dimensionality. Our methods were able not only to generate very accurate classification models, but also to discover new biological knowledge that was later ratified by experimentalists. Finally, we describe several strategies to tightly integrate knowledge extraction and data mining in order to create a new class of biodata mining algorithms that can natively embrace the complexity of biological data, efficiently generate accurate information in the form of classification/regression models, and extract valuable new
NASA Astrophysics Data System (ADS)
José Vicente, Pérez-Peña; Alicia, Jiménez-Gutiérrez; José Miguel, Azañón; Jorge, Delgado; Guillermo, Booth-Rea
2013-04-01
Studies of the distribution of seismicity are very useful for recognizing active areas, imaging fault geometries, and relating earthquake activity to particular tectonic structures. The identification of straight-line earthquake-epicentre alignments can reveal underlying active tectonic structures such as faults. Nevertheless, these point alignments are difficult to detect in diffuse seismic patterns in areas of low to moderate seismicity. In such cases it is necessary to apply specific methods to detect and analyze preferential earthquake alignments. The Hough Transform (HT) is a method that has been widely used to detect lines in digital images. Although the technique was initially developed to work with pixels from digital images, a generalized algorithm based on the HT can be used to detect specific alignments in a disperse point distribution such as earthquake events. This method focuses on reducing the number of possible lines by analyzing only those with mathematical significance. In this work we present a GIS-integrated methodology to apply a generalized HT to point distributions. To test the algorithm, we present examples from the Betic Cordillera (SE Spain), where the seismicity is low to moderate (< 5 Mb) and geographically disperse
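A point-based Hough transform of the kind described can be sketched directly: each epicentre (x, y) votes for every line rho = x·cos(theta) + y·sin(theta) passing through it, and well-populated accumulator cells mark alignments. The parameter choices below are illustrative, not the authors':

```python
import numpy as np

def hough_alignments(points, n_theta=180, rho_res=1.0, min_votes=4):
    """Point-based Hough transform: each point (x, y) votes for every line
    rho = x*cos(theta) + y*sin(theta) through it; accumulator cells with at
    least min_votes votes correspond to aligned points."""
    pts = np.asarray(points, float)
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    rho_max = np.abs(pts).sum(axis=1).max() + rho_res  # bound on |rho|
    n_rho = int(2 * rho_max / rho_res) + 1
    acc = np.zeros((n_theta, n_rho), dtype=int)
    for x, y in pts:
        rhos = x * np.cos(thetas) + y * np.sin(thetas)
        idx = np.round((rhos + rho_max) / rho_res).astype(int)
        acc[np.arange(n_theta), idx] += 1   # one vote per (theta, rho) cell
    return [(thetas[i], j * rho_res - rho_max)
            for i, j in np.argwhere(acc >= min_votes)]
```

Raising min_votes, or assessing cell counts against what random scatter would produce, is one simple way to keep only the "mathematically significant" lines the abstract refers to.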
NASA Technical Reports Server (NTRS)
Powell, Richard W.
1998-01-01
This paper describes the development and evaluation of a numerical roll reversal predictor-corrector guidance algorithm for the atmospheric flight portion of the Mars Surveyor Program 2001 Orbiter and Lander missions. The Lander mission utilizes direct entry and has a demanding requirement to deploy its parachute within 10 km of the target deployment point. The Orbiter mission utilizes aerocapture to achieve a precise captured orbit with a single atmospheric pass. Detailed descriptions of these predictor-corrector algorithms are given. Also, results of three and six degree-of-freedom Monte Carlo simulations which include navigation, aerodynamics, mass properties and atmospheric density uncertainties are presented.
NASA Astrophysics Data System (ADS)
Tang, Yu-Hang; Karniadakis, George Em
2014-11-01
We present a scalable dissipative particle dynamics simulation code, fully implemented on the Graphics Processing Units (GPUs) using a hybrid CUDA/MPI programming model, which achieves 10-30 times speedup on a single GPU over 16 CPU cores and almost linear weak scaling across a thousand nodes. A unified framework is developed within which the efficient generation of the neighbor list and maintaining particle data locality are addressed. Our algorithm generates strictly ordered neighbor lists in parallel, while the construction is deterministic and makes no use of atomic operations or sorting. Such a neighbor list leads to optimal data loading efficiency when combined with a two-level particle reordering scheme. A faster in situ generation scheme for Gaussian random numbers is proposed using precomputed binary signatures. We designed custom transcendental functions that are fast and accurate for evaluating the pairwise interaction. The correctness and accuracy of the code are verified through a set of test cases simulating Poiseuille flow and spontaneous vesicle formation. Computer benchmarks demonstrate the speedup of our implementation over the CPU implementation as well as strong and weak scalability. A large-scale simulation of spontaneous vesicle formation consisting of 128 million particles was conducted to further illustrate the practicality of our code in real-world applications. Catalogue identifier: AETN_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AETN_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: GNU General Public License, version 3 No. of lines in distributed program, including test data, etc.: 1 602 716 No. of bytes in distributed program, including test data, etc.: 26 489 166 Distribution format: tar.gz Programming language: C/C++, CUDA C/C++, MPI. Computer: Any computers having nVidia GPGPUs with compute capability 3.0. Operating system: Linux. Has the code been
NASA Astrophysics Data System (ADS)
Townley, Lloyd R.; Wilson, John L.
1985-12-01
Finite difference and finite element methods are frequently used to study aquifer flow; however, additional analysis is required when model parameters, and hence predicted heads, are uncertain. Computational algorithms are presented for steady and transient models in which aquifer storage coefficients, transmissivities, distributed inputs, and boundary values may all be simultaneously uncertain. Innovative aspects of these algorithms include a new form of generalized boundary condition; a concise discrete derivation of the adjoint problem for transient models with variable time steps; an efficient technique for calculating the approximate second derivative during line searches in weighted least squares estimation; and a new efficient first-order second-moment algorithm for calculating the covariance of predicted heads due to a large number of uncertain parameter values. The techniques are presented in matrix form, and their efficiency depends on the structure of sparse matrices which occur repeatedly throughout the calculations. Details of matrix structures are provided for a two-dimensional linear triangular finite element model.
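The first-order second-moment step — propagating parameter uncertainty to predicted heads — reduces to the linearised covariance rule Cov_h ≈ J Σ_p Jᵀ, where J holds the sensitivities of heads to parameters. A minimal sketch with made-up numbers (the paper's contribution is computing this efficiently for many parameters via sparse adjoint solves, which is not shown here):

```python
import numpy as np

def fosm_head_covariance(jacobian, param_cov):
    """First-order second-moment propagation: with J = d(head)/d(parameter),
    the covariance of predicted heads is approximately J @ Sigma_p @ J.T."""
    J = np.asarray(jacobian, float)
    return J @ np.asarray(param_cov, float) @ J.T

# two heads sensitive to two uncertain parameters (illustrative sensitivities)
J = np.array([[1.0, 0.0],
              [1.0, 1.0]])
Sigma_p = np.diag([4.0, 9.0])     # independent parameter variances
Sigma_h = fosm_head_covariance(J, Sigma_p)
```

The diagonal of Sigma_h gives the approximate variance of each predicted head; off-diagonal terms capture how shared parameter uncertainty correlates the head predictions.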
Numerical Simulation of Turbulent MHD Flows Using an Iterative PNS Algorithm
NASA Technical Reports Server (NTRS)
Kato, Hiromasa; Tannehill, John C.; Mehta, Unmeel B.
2003-01-01
A new parabolized Navier-Stokes (PNS) algorithm has been developed to efficiently compute magnetohydrodynamic (MHD) flows in the low magnetic Reynolds number regime. In this regime, the electrical conductivity is low and the induced magnetic field is negligible compared to the applied magnetic field. The MHD effects are modeled by introducing source terms into the PNS equation which can then be solved in a very efficient manner. To account for upstream (elliptic) effects, the flowfields are computed using multiple streamwise sweeps with an iterated PNS algorithm. Turbulence has been included by modifying the Baldwin-Lomax turbulence model to account for MHD effects. The new algorithm has been used to compute both laminar and turbulent, supersonic, MHD flows over flat plates and supersonic viscous flows in a rectangular MHD accelerator. The present results are in excellent agreement with previous complete Navier-Stokes calculations.
NASA Astrophysics Data System (ADS)
Bor, E.; Turduev, M.; Kurt, H.
2016-08-01
Photonic structure designs based on optimization algorithms provide superior properties compared to those using intuition-based approaches. In the present study, we numerically and experimentally demonstrate subwavelength focusing of light using wavelength scale absorption-free dielectric scattering objects embedded in an air background. An optimization algorithm based on differential evolution integrated into the finite-difference time-domain method was applied to determine the locations of each circular dielectric object with a constant radius and refractive index. The multiobjective cost function defined inside the algorithm ensures strong focusing of light with low intensity side lobes. The temporal and spectral responses of the designed compact photonic structure provided a beam spot size in air with a full width at half maximum value of 0.19λ, where λ is the wavelength of light. The experiments were carried out in the microwave region to verify numerical findings, and very good agreement between the two approaches was found. The subwavelength light focusing is associated with a strong interference effect due to nonuniformly arranged scatterers and an irregular index gradient. Improving the focusing capability of optical elements by surpassing the diffraction limit of light is of paramount importance in optical imaging, lithography, data storage, and strong light-matter interaction.
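A minimal differential-evolution loop of the DE/rand/1/bin type conveys the optimizer's structure: each candidate is a vector (in the paper, scatterer coordinates), mutated from three random population members and accepted greedily. The cost here is a stand-in sphere function, not the paper's FDTD-based multiobjective cost, and all control parameters are illustrative.

```python
import numpy as np

def differential_evolution(cost, bounds, pop=20, gens=200, F=0.7, CR=0.9, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    X = rng.uniform(lo, hi, (pop, len(bounds)))      # initial population
    f = np.array([cost(x) for x in X])
    for _ in range(gens):
        for i in range(pop):
            # DE/rand/1 mutation from three distinct other members
            a, b, c = rng.choice([j for j in range(pop) if j != i], 3, replace=False)
            mutant = np.clip(X[a] + F * (X[b] - X[c]), lo, hi)
            # binomial crossover, guaranteeing at least one mutant gene
            cross = rng.random(len(bounds)) < CR
            cross[rng.integers(len(bounds))] = True
            trial = np.where(cross, mutant, X[i])
            ft = cost(trial)
            if ft < f[i]:                            # greedy selection
                X[i], f[i] = trial, ft
    return X[np.argmin(f)], f.min()

bounds = np.array([[-5.0, 5.0]] * 4)
best, fbest = differential_evolution(lambda x: np.sum(x**2), bounds)
```

Replacing the lambda with a full-wave solver evaluation is what makes such designs expensive: every trial costs one FDTD run.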
Artificial algae algorithm with multi-light source for numerical optimization and applications.
Uymaz, Sait Ali; Tezel, Gulay; Yel, Esra
2015-12-01
Artificial algae algorithm (AAA), one of the recently developed bio-inspired optimization algorithms, was introduced by inspiration from the living behaviors of microalgae. In AAA, the modification of the algal colonies, i.e. exploration and exploitation, is provided by a helical movement. In this study, AAA was modified by implementing multi-light-source movement, establishing the artificial algae algorithm with multi-light source (AAAML). In this new version, we propose selecting a different light source for each dimension modified by the helical movement, giving a stronger balance between exploration and exploitation. These light sources are selected by the tournament method and are different from each other, which yields different solutions in the search space. The best of these three light sources provides orientation towards the better region of the search space; diversity in the search space is obtained from the worst light source; and the remaining light source improves the balance. To indicate the performance of AAA with the newly proposed operators (AAAML), experiments were performed on two different sets. Firstly, the performance of AAA and AAAML was evaluated on the IEEE-CEC'13 benchmark set. The second set comprised the real-world optimization problems used in IEEE-CEC'11. To verify the effectiveness and efficiency of the proposed algorithm, the results were compared with other state-of-the-art hybrid and modified algorithms. Experimental results showed that the multi-light-source movement (MLS) increases the success of the AAA.
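The per-dimension multi-light-source idea can be illustrated schematically. This is only a hedged sketch of the selection mechanism: AAA's actual helical-movement formulas are not reproduced, and the step rule, function names, and numbers below are hypothetical.

```python
import numpy as np

def tournament(rng, fitness, k=3):
    """Pick k random contenders and return the index of the fittest."""
    contenders = rng.choice(len(fitness), k, replace=False)
    return contenders[np.argmin(fitness[contenders])]

def multi_light_step(rng, X, fitness, i, step=0.5):
    """Move colony i; each dimension is guided by its own tournament-selected
    'light source' from the population, so guides can differ per dimension."""
    x = X[i].copy()
    for d in range(X.shape[1]):
        light = tournament(rng, fitness)
        x[d] += step * (X[light, d] - x[d]) * rng.uniform(-1, 1)
    return x

rng = np.random.default_rng(1)
X = rng.uniform(-5, 5, (10, 3))                 # 10 colonies in 3 dimensions
fitness = np.array([np.sum(x**2) for x in X])
trial = multi_light_step(rng, X, fitness, i=0)
```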
NASA Astrophysics Data System (ADS)
Schroder, Kjell; Olsen, Thomas; Wiener, Richard
2006-11-01
Recursive Proportional Feedback (RPF) is an algorithm for controlling chaotic systems that is both broadly useful and easy to apply. Control coefficients are determined from pre-control sampling of the system dynamics. We have adapted this method, in the spirit of the Extended Time-Delay Autosynchronization (ETDAS) method, to seek minimal change from each previous value. The two methods so derived, Simple Recursive Proportional Feedback (SRPF) and Doubly Recursive Proportional Feedback (DRPF), have been studied in numerical simulations to determine their robustness when system parameters other than the feedback parameter drift over time. We present evidence of the range over which each algorithm remains robust against drift. Rollins et al., Phys. Rev. E 47, R780 (1993). Socolar et al., Phys. Rev. E 50, 3245 (1994).
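A hedged illustration of the recursive-proportional-feedback idea on the logistic map: small parameter kicks dr_n = K (x_n - x*) + R dr_{n-1} stabilize the unstable fixed point x* = 1 - 1/r. The map, the gains K and R, and the clipping window are illustrative choices, not the systems or coefficients studied in the talk.

```python
import numpy as np

def control_logistic(r=3.9, K=10.0, R=0.1, steps=100, window=0.02):
    """Stabilize x* = 1 - 1/r of x -> r x (1 - x) with recursive feedback."""
    xstar = 1.0 - 1.0 / r
    x = xstar + 0.002                       # start near the target orbit
    dr_prev = 0.0
    for _ in range(steps):
        e = x - xstar
        if abs(e) < window:                 # act only when near the target
            dr = float(np.clip(K * e + R * dr_prev, -0.05, 0.05))
        else:
            dr = 0.0                        # otherwise let the system run free
        dr_prev = dr
        x = (r + dr) * x * (1.0 - x)        # iterate the kicked map
    return x, xstar

x, xstar = control_logistic()
```

The linearized error multiplier is lambda + gK with lambda = 2 - r and g = (r-1)/r^2, so K here is chosen to put it near zero.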
ORDMET: A General Algorithm for Constructing All Numerical Solutions to Ordered Metric Data
ERIC Educational Resources Information Center
McClelland, Gary; Coombs, Clyde H.
1975-01-01
ORDMET is applicable to structures obtained from additive conjoint measurement designs, unfolding theory, general Fechnerian scaling, types of multidimensional scaling, and ordinal multiple regression. A description is obtained of the space containing all possible numerical representations which can satisfy the structure, size, and shape of which…
AN ACCURATE AND EFFICIENT ALGORITHM FOR NUMERICAL SIMULATION OF CONDUCTION-TYPE PROBLEMS. (R824801)
A modification of the finite analytic numerical method for conduction-type (diffusion) problems is presented. The finite analytic discretization scheme is derived by means of the Fourier series expansion for the most general case of nonuniform grid and variabl...
NASA Astrophysics Data System (ADS)
Long, Robert Bryan; Thacker, William Carlisle
1989-06-01
Numerical modeling provides a powerful tool for the study of the dynamics of oceans and atmospheres. However, the relevance of modeling results can only be established by reference to observations of the system being modeled. Typical oceanic observation sets are sparse, asynoptic, of mixed type and limited reliability, generally inadequate in some respects, and redundant and inconsistent in others. An optimal procedure for interfacing such data sets with a numerical model is the so-called adjoint method. This procedure effectively assimilates the observations into a run of the numerical model by finding that solution to the model equations that best fits all observations made within some specified space-time interval. The method requires the construction of the adjoint of the numerical model, a process made practical for models of realistic complexity by the work of Thacker and Long. In the present paper, the first of two parts, we illustrate the application of Thacker and Long's approach by constructing a data-assimilating version of an equatorial ocean model incorporating the adjoint method. The model is subsequently run for 5 years to near-steady-state, and exhibits many of the features known to be characteristic of equatorial oceanic flows. Using the last 54 days of the run as a control, a set of simulated sea-level and subsurface-density observations are collected, then successfully assimilated to demonstrate that the procedure can recover the control run, given a generous amount of data. In part II we conduct a sequence of numerical experiments to explore the ability of more limited sets of observations to fix the state of the modeled ocean; in the process, we examine the potential value of sea-level data obtained via satellite altimetry.
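The adjoint idea can be shown on a toy linear model. For x_{n+1} = M x_n with observations y_n, the gradient of the misfit J(x_0) = 1/2 Σ_n |x_n - y_n|² with respect to the initial state is obtained by one backward sweep with M^T (the adjoint). Everything below (the 4-state model, noise level, descent step) is an illustrative stand-in for the far richer ocean model of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, steps, theta = 4, 20, 0.3
M = np.eye(n)
M[:2, :2] = [[np.cos(theta), -np.sin(theta)],
             [np.sin(theta),  np.cos(theta)]]
M *= 0.99                                       # slightly damped dynamics

def forward(x0):
    xs = [x0]
    for _ in range(steps):
        xs.append(M @ xs[-1])
    return xs

x_true = rng.standard_normal(n)
obs = [x + 0.01 * rng.standard_normal(n) for x in forward(x_true)]

def cost_and_grad(x0):
    xs = forward(x0)
    resid = [x - y for x, y in zip(xs, obs)]
    J = 0.5 * sum(r @ r for r in resid)
    lam = resid[-1]                             # adjoint (backward) sweep:
    for k in range(steps - 1, -1, -1):          # lam_k = M^T lam_{k+1} + r_k
        lam = M.T @ lam + resid[k]
    return J, lam                               # lam equals dJ/dx0

x0 = np.zeros(n)
for _ in range(200):                            # plain gradient descent
    J, g = cost_and_grad(x0)
    x0 -= 0.05 * g
```

The key economy is that one forward run plus one backward run yields the full gradient, regardless of the dimension of x_0.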
NASA Astrophysics Data System (ADS)
Wang, Jiong; Steinmann, Paul
2016-05-01
This is part II of this series of papers. The aim of the current paper was to solve the governing PDE system derived in part I numerically, such that the procedure of variant reorientation in a magnetic shape memory alloy (MSMA) sample can be simulated. The sample to be considered in this paper has a 3D cuboid shape and is subject to typical magnetic and mechanical loading conditions. To investigate the demagnetization effect on the sample's response, the surrounding space of the sample is taken into account. By considering the different properties of the independent variables, an iterative numerical algorithm is proposed to solve the governing system. The related mathematical formulas and some techniques facilitating the numerical calculations are introduced. Based on the results of numerical simulations, the distributions of some important physical quantities (e.g., magnetization, demagnetization field, and mechanical stress) in the sample can be determined. Furthermore, the properties of configurational force on the twin interfaces are investigated. By virtue of the twin interface movement criteria derived in part I, the whole procedure of magnetic field- or stress-induced variant reorientations in the MSMA sample can be properly simulated.
NASA Technical Reports Server (NTRS)
Carter, Richard G.
1989-01-01
For optimization problems associated with engineering design, parameter estimation, image reconstruction, and other optimization/simulation applications, low-accuracy function and gradient values are frequently much less expensive to obtain than high-accuracy values. Here, researchers investigate the computational performance of trust region methods for nonlinear optimization when high-accuracy evaluations are unavailable or prohibitively expensive, and confirm earlier theoretical predictions that the algorithm remains convergent even with relative gradient errors of 0.5 or more. The proper choice of the amount of accuracy to use in function and gradient evaluations can result in orders-of-magnitude savings in computational cost.
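A minimal trust-region iteration in the spirit of the report: build a local quadratic model, take its minimizer clipped to the trust radius, and accept or reject on the ratio of actual to predicted reduction. The gradient is deliberately corrupted with up to 30% relative error to mimic cheap low-accuracy evaluations; the quadratic test function, gains, and thresholds are illustrative, not the report's test set.

```python
import numpy as np

D = np.array([1.0, 10.0])                       # Hessian of f (diagonal)

def f(x):
    return 0.5 * np.sum(D * x**2)

def noisy_grad(x, rng, rel_err=0.3):
    """Exact gradient D*x corrupted by componentwise relative error."""
    return D * x * (1 + rel_err * rng.uniform(-1, 1, size=x.size))

rng = np.random.default_rng(3)
x, radius = np.array([3.0, -2.0]), 1.0
for _ in range(200):
    g = noisy_grad(x, rng)
    p = -g / D                                  # model minimizer (B = exact Hessian)
    if np.linalg.norm(p) > radius:
        p *= radius / np.linalg.norm(p)         # clip step to the trust region
    pred = -(g @ p + 0.5 * np.sum(D * p**2))    # predicted reduction (> 0)
    rho = (f(x) - f(x + p)) / pred              # actual / predicted
    if rho > 0.1:
        x = x + p                               # accept the step
    if rho > 0.75:
        radius *= 2                             # model trustworthy: expand
    elif rho < 0.25:
        radius *= 0.5                           # model poor: shrink
```

Despite the large gradient errors, the ratio test keeps every accepted step a genuine descent step, which is the robustness property the report quantifies.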
ERIC Educational Resources Information Center
de Saint Andre, Ph. Durant
1987-01-01
Presents a computer interpretation of the consistencies as well as inadequacies of grammar rules in changing singular forms to plural. A correlative algorithm is discussed for the purpose of enhancing the present logical system of changing singulars to plurals. (TR)
NASA Astrophysics Data System (ADS)
Zhang, Lisha
We present fast and robust numerical algorithms for 3-D scattering from perfectly electrically conducting (PEC) and dielectric random rough surfaces in microwave remote sensing. The Coifman wavelets, or Coiflets, are employed to implement Galerkin's procedure in the method of moments (MoM). Due to the high-precision one-point quadrature, the Coiflets yield fast evaluations of most off-diagonal entries, reducing the matrix fill effort from O(N^2) to O(N). The orthogonality and Riesz basis of the Coiflets generate a well-conditioned impedance matrix, with rapid convergence for the conjugate gradient solver. The resulting impedance matrix is further sparsified by the matrix-formed standard fast wavelet transform (SFWT). By properly selecting multiresolution levels of the total transformation matrix, the solution precision can be enhanced without noticeably sacrificing matrix sparsity or memory consumption. The unified fast scattering algorithm for dielectric random rough surfaces asymptotically reduces to the PEC case when the loss tangent grows extremely large. Numerical results demonstrate that the reduced PEC model does not suffer from ill-posed problems. Compared with previous publications and laboratory measurements, good agreement is observed.
Numerical Algorithm Based on Haar-Sinc Collocation Method for Solving the Hyperbolic PDEs
Pirkhedri, A.; Javadi, H. H. S.; Navidi, H. R.
2014-01-01
The present study investigates the Haar-Sinc collocation method for the solution of hyperbolic partial telegraph equations. The advantages of this technique are that not only is the convergence rate of the Sinc approximation exponential but the computational speed is also high, thanks to the use of the Haar operational matrices. The technique converts the problem to the solution of linear algebraic equations by expanding the required approximation in terms of Sinc functions in space and Haar functions in time with unknown coefficients. To analyze the efficiency, precision, and performance of the proposed method, we present four examples through which our claim is confirmed. PMID:25485295
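The exponential convergence of Sinc approximation invoked above is easy to exhibit: a smooth, rapidly decaying function is reconstructed from samples f(kh) by the cardinal (Whittaker) series C(f,h)(x) = Σ_k f(kh) sinc((x - kh)/h), and the error drops dramatically as h shrinks. The Gaussian test function and step sizes are illustrative; the telegraph-equation discretization itself is not reproduced.

```python
import numpy as np

def sinc_approx(f, x, h, N):
    """Truncated cardinal series: sum_{k=-N}^{N} f(kh) sinc((x - kh)/h).

    np.sinc(t) is the normalized sinc, sin(pi t)/(pi t), which is exactly
    the cardinal basis function needed here.
    """
    k = np.arange(-N, N + 1)
    return np.sum(f(k * h) * np.sinc((x - k * h) / h))

f = lambda t: np.exp(-t**2)
x = 0.3
err_coarse = abs(sinc_approx(f, x, 0.5, 20) - f(x))    # h = 0.5
err_fine = abs(sinc_approx(f, x, 0.25, 40) - f(x))     # h = 0.25
```

Halving h roughly squares the error rather than merely halving it, the hallmark of exponential (rather than algebraic) convergence.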
NASA Technical Reports Server (NTRS)
Lakshminarayana, B.; Ho, Y.; Basson, A.
1993-01-01
The objective of this research is to simulate steady and unsteady viscous flows, including rotor/stator interaction and tip clearance effects in turbomachinery. The numerical formulation for steady flow developed here includes an efficient grid generation scheme, particularly suited to computational grids for the analysis of turbulent turbomachinery flows and tip clearance flows, and a semi-implicit, pressure-based computational fluid dynamics scheme that directly includes artificial dissipation and is applicable to both viscous and inviscid flows. The amount of artificial dissipation is optimized to achieve accuracy and convergence in the solution. The numerical model is used to investigate the structure of tip clearance flows in a turbine nozzle. The structure of the leakage flow is captured accurately, including blade-to-blade variation of all three velocity components, pitch and yaw angles, losses, and blade static pressures in the tip clearance region. The simulation also includes evaluation of such quantities as leakage mass flow, vortex strength, losses, dominant leakage flow regions, and the spanwise extent affected by the leakage flow. It is demonstrated, through optimization of grid size and artificial dissipation, that the tip clearance flow field can be captured accurately. The above numerical formulation was modified to incorporate time-accurate solutions. An inner-loop iteration scheme is used at each time step to account for the non-linear effects. The computation of unsteady flow through a flat-plate cascade subjected to a transverse gust reveals that the choice of grid spacing and the amount of artificial dissipation are critical for accurate prediction of unsteady phenomena. The rotor-stator interaction problem is simulated by starting the computation upstream of the stator, with the upstream rotor wake specified from experimental data. The results show that the stator potential effects have an appreciable influence on the upstream rotor wake.
An efficient algorithm for numerical computations of continuous densities of states
NASA Astrophysics Data System (ADS)
Langfeld, K.; Lucini, B.; Pellegrini, R.; Rago, A.
2016-06-01
In Wang-Landau type algorithms, Monte-Carlo updates are performed with respect to the density of states, which is iteratively refined during simulations. The partition function and thermodynamic observables are then obtained by standard integration. In this work, our recently introduced method in this class (the LLR approach) is analysed and further developed. Our approach is a histogram-free method particularly suited for systems with continuous degrees of freedom giving rise to a continuum density of states, as is commonly found in lattice gauge theories and in some statistical mechanics systems. We show that the method possesses an exponential error suppression that allows us to estimate the density of states over several orders of magnitude with nearly constant relative precision. We explain how ergodicity issues can be avoided and how expectation values of arbitrary observables can be obtained within this framework. We then demonstrate the method using compact U(1) lattice gauge theory as a show case. A thorough study of the algorithm's parameter dependence is performed and compared with the analytically expected behaviour. We obtain high-precision values for the critical coupling of the phase transition and for the peak value of the specific heat for lattice sizes ranging from 8^4 to 20^4. Our results perfectly agree with the reference values reported in the literature, which cover lattice sizes up to 18^4. Robust results for the 20^4 volume are obtained for the first time. This latter investigation, which so far has been out of reach even on supercomputers with importance sampling approaches due to strong metastabilities developed at the pseudo-critical coupling of the system, has been performed to high accuracy with modest computational resources. This shows the potential of the method for studies of first order phase transitions. Other situations where the method is expected to be superior to importance sampling techniques are pointed out.
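A hedged toy version of a Wang-Landau-type density-of-states estimate (the LLR method of the paper is a related but distinct, histogram-free scheme). The "system" here is hypothetical: 8 independent spins with energy E equal to the number of up spins, whose exact density of states is the binomial coefficient C(8, E), so the estimate can be checked directly.

```python
import numpy as np

def wang_landau(n=8, flatness=0.8, ln_f_final=1e-4, seed=4):
    """Estimate ln g(E) for E = number of up spins among n independent spins."""
    rng = np.random.default_rng(seed)
    spins = np.zeros(n, dtype=int)
    E = 0
    ln_g = np.zeros(n + 1)                      # log density of states
    ln_f = 1.0                                  # modification factor
    while ln_f > ln_f_final:
        hist = np.zeros(n + 1)
        while True:
            for _ in range(1000):
                i = rng.integers(n)             # propose one spin flip
                E_new = E + (1 - 2 * spins[i])
                # accept with probability min(1, g(E)/g(E_new))
                if np.log(rng.random()) < ln_g[E] - ln_g[E_new]:
                    spins[i] ^= 1
                    E = E_new
                ln_g[E] += ln_f                 # penalize the visited level
                hist[E] += 1
            if hist.min() > flatness * hist.mean():
                break                           # histogram flat enough
        ln_f /= 2                               # refine the modification factor
    return ln_g - ln_g[0]                       # normalize so g(0) = 1

ln_g = wang_landau()
ratio = np.exp(ln_g[4])                         # estimate of C(8, 4) = 70
```

The flat-histogram random walk visits all energy levels with comparable frequency precisely because moves are weighted by the inverse of the running g(E) estimate.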
NASA Astrophysics Data System (ADS)
Angeli, D.; Stalio, E.; Corticelli, M. A.; Barozzi, G. S.
2015-11-01
A parallel algorithm is presented for the Direct Numerical Simulation of buoyancy-induced flows in open or partially confined periodic domains containing immersed cylindrical bodies of arbitrary cross-section. The governing equations are discretized by means of the Finite Volume method on Cartesian grids. A semi-implicit scheme is employed for the diffusive terms, which are treated implicitly on the periodic plane and explicitly along the homogeneous direction, while all convective terms are explicit, via the second-order Adams-Bashforth scheme. The simultaneous solution of the velocity and pressure fields is achieved by means of a projection method. The numerical resolution of the set of linear equations resulting from discretization is carried out by means of efficient and highly parallel direct solvers. Verification and validation of the numerical procedure are reported in the paper for the case of flow around an array of heated cylindrical rods arranged in a square lattice. Grid independence is assessed in laminar flow conditions, and DNS results in turbulent conditions are presented for two different grids and compared to available literature data, thus confirming the favorable qualities of the method.
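The explicit Adams-Bashforth piece of such a scheme is simple to illustrate on a scalar test equation y' = -y (the actual solver applies it term-wise to the discretized convective fluxes, which is an assumption of this sketch): bootstrap one step with Heun's method, then march with y_{n+1} = y_n + h(3/2 f_n - 1/2 f_{n-1}).

```python
import numpy as np

def ab2(f, y0, h, t_end):
    """Second-order Adams-Bashforth with a Heun (RK2) starting step."""
    n = round(t_end / h)
    k1 = f(y0)
    y = y0 + 0.5 * h * (k1 + f(y0 + h * k1))    # Heun step to t = h
    f_prev = k1
    for _ in range(n - 1):                      # AB2: 3/2 f_n - 1/2 f_{n-1}
        fy = f(y)
        y, f_prev = y + h * (1.5 * fy - 0.5 * f_prev), fy
    return y

f = lambda y: -y
err = lambda h: abs(ab2(f, 1.0, h, 1.0) - np.exp(-1.0))
ratio = err(0.01) / err(0.005)                  # ~4 for a second-order method
```

Halving the step cuts the error by about a factor of four, confirming second-order accuracy.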
Fischer, Martin H.; Brugger, Peter
2011-01-01
Spatial–numerical associations (SNAs) are prevalent yet their origin is poorly understood. We first consider the possible prime role of reading habits in shaping SNAs and list three observations that argue against a prominent influence of this role: (1) directional reading habits for numbers may conflict with those for non-numerical symbols, (2) short-term experimental manipulations can overrule the impact of decades of reading experience, (3) SNAs predate the acquisition of reading. As a promising alternative, we discuss behavioral, neuroscientific, and neuropsychological evidence in support of finger counting as the most likely initial determinant of SNAs. Implications of this “manumerical cognition” stance for the distinction between grounded, embodied, and situated cognition are discussed. PMID:22028696
NASA Technical Reports Server (NTRS)
Weir, Kent A.; Wells, Eugene M.
1990-01-01
The design and operation of a Strapdown Navigation Analysis Program (SNAP) developed to perform covariance analysis on spacecraft inertial-measurement-unit (IMU) navigation errors are described and demonstrated. Consideration is given to the IMU modeling subroutine (with user-specified sensor characteristics), the data input procedures, state updates and the simulation of instrument failures, the determination of the nominal trajectory, the mapping-matrix and Monte Carlo covariance-matrix propagation methods, and aided-navigation simulation. Numerical results are presented in tables for sample applications involving (1) the Galileo/IUS spacecraft from its deployment from the Space Shuttle to a point 10^8 ft from the center of the earth and (2) the TDRS-C/IUS spacecraft from Space Shuttle liftoff to a point about 2 h before IUS deployment. SNAP is shown to give reliable results for both cases, with good general agreement between the mapping-matrix and Monte Carlo predictions.
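The mapping-matrix approach propagates the navigation error covariance with the state-transition matrix: P_{k+1} = Φ P_k Φ^T + Q. A two-state (position, velocity) toy with an illustrative process-noise matrix sketches the idea; the actual SNAP IMU model carries many more states and user-specified sensor characteristics.

```python
import numpy as np

dt = 1.0
Phi = np.array([[1.0, dt],
                [0.0, 1.0]])            # constant-velocity transition matrix
Q = np.diag([0.0, 1e-4])                # assumed process noise on velocity only
P = np.diag([1.0, 0.01])                # initial position/velocity variances

for _ in range(100):                    # propagate covariance for 100 steps
    P = Phi @ P @ Phi.T + Q

sigma_pos = np.sqrt(P[0, 0])            # 1-sigma position uncertainty
```

A Monte Carlo check of the same model would draw many noisy trajectories and compare their sample covariance to P, which is the cross-validation SNAP performs.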
NASA Astrophysics Data System (ADS)
Li, Cong; Lei, Jianshe
2014-10-01
In this paper, we focus on the influences of various parameters in the niching genetic algorithm inversion procedure on the results, such as the choice of objective function, the number of models in each subpopulation, and the critical separation radius. The frequency-wavenumber integration (F-K) method is applied to synthesize three-component waveform data with noise at various epicentral distances and azimuths. Our results show that a zeroth-lag cross-correlation objective function yields faster convergence and higher precision than the other objective functions. The number of models in each subpopulation has a great influence on the rate of convergence and computation time, suggesting that it should be determined through tests in practical problems. The critical separation radius should be chosen carefully because it directly affects the multiple extrema encountered in the inversion. We also compare the inverted results from full-band waveform data and surface-wave frequency-band (0.02-0.1 Hz) data, and find that the latter are somewhat poorer but still of high precision, suggesting that surface-wave frequency-band data can also be used to invert for crustal structure.
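The zeroth-lag cross-correlation objective favored above is simply the normalized inner product of observed and synthetic traces, scoring 1 for a perfect waveform match. The damped-sinusoid "traces" below are hypothetical stand-ins, not F-K synthetics.

```python
import numpy as np

def zero_lag_cc(obs, syn):
    """Normalized zero-lag cross-correlation of two traces (1 = perfect match)."""
    return np.sum(obs * syn) / np.sqrt(np.sum(obs**2) * np.sum(syn**2))

t = np.linspace(0, 10, 500)
obs = np.exp(-0.3 * t) * np.sin(2 * np.pi * 0.8 * t)
syn_good = obs + 0.05 * np.sin(2 * np.pi * 3 * t)     # small high-freq mismatch
syn_bad = np.exp(-0.3 * t) * np.sin(2 * np.pi * 1.2 * t)  # wrong dominant period

cc_good = zero_lag_cc(obs, syn_good)
cc_bad = zero_lag_cc(obs, syn_bad)
```

In the inversion, such a score (or 1 minus it) is evaluated per component and per station and summed into the fitness of each candidate model.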
SOLA-DM: A numerical solution algorithm for transient three-dimensional flows
Wilson, T.L.; Nichols, B.D.; Hirt, C.W.; Stein, L.R.
1988-02-01
SOLA-DM is a three-dimensional, time-explicit, finite-difference, Eulerian, fluid-dynamics computer code for solving the time-dependent incompressible Navier-Stokes equations. The solution algorithm (SOLA) evolved from the marker-and-cell (MAC) method, and the code is highly vectorized for efficient performance on a Cray computer. The computational domain is discretized by a mesh of parallelepiped cells in either Cartesian or cylindrical geometry. The primary hydrodynamic variables for approximating the solution of the momentum equations are cell-face-centered velocity components and cell-centered pressures. Spatial accuracy is selected by the user to be first or second order; the time differencing is first-order accurate. The incompressibility condition results in an elliptic equation for pressure that is solved by a conjugate gradient method. Boundary conditions of five general types may be chosen: free-slip, no-slip, continuative, periodic, and specified pressure. In addition, internal mesh specifications to model obstacles and walls are provided. SOLA-DM also solves the equations for discrete particle dynamics, permitting the transport of marker particles or other solid particles through the fluid to be modeled.
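The incompressibility constraint leads to an elliptic pressure equation, and a conjugate-gradient solve of a small 1D Poisson system illustrates the kind of solver involved. The grid size, boundary treatment, and source term are illustrative, not taken from the code.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Textbook CG for a symmetric positive-definite system A x = b."""
    x = np.zeros_like(b)
    r = b - A @ x                      # residual
    p = r.copy()                       # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p      # conjugate update of the direction
        rs = rs_new
    return x

n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1D Poisson (Dirichlet)
b = np.ones(n)
p = conjugate_gradient(A, b)
```

In a production code the matrix is never formed; the matrix-vector product is applied stencil-wise on the mesh, which is also what makes the method vectorize well.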
NASA Astrophysics Data System (ADS)
Mouton, S.; Ledoux, Y.; Teissandier, D.; Sébastian, P.
2010-06-01
A key challenge for the future is to drastically reduce the human impact on the environment. In the aeronautic field, this challenge translates into optimizing aircraft design to decrease the global mass, which in turn leads to the optimization of every constitutive part of the plane. This operation is even more delicate when the material used is a composite. In this case, it is necessary to find a compromise between the strength, the mass, and the manufacturing cost of the component. Because of these different kinds of design constraints, it is necessary to assist the engineer with a decision support system to determine feasible solutions. In this paper, an approach is proposed based on coupling the key characteristics of the design process and on consideration of the failure risk of the component. The originality of this work is that the manufacturing deviations due to the RTM process are integrated into the simulation of the assembly process. Two kinds of deviations are identified: volume impregnation (injection phase of the RTM process) and geometrical deviations (curing and cooling phases). The quantification of these deviations and the related failure-risk calculation are based on finite element simulations (Pam RTM® and Samcef® software). A genetic algorithm is used to estimate the impact of the design choices and their consequences on the failure risk of the component. The main focus of the paper is the optimization of tool design. In the framework of decision support systems, the failure-risk calculation is used to compare possible industrialization alternatives. It is proposed to apply this method to a particular part of the airplane structure: a spar unit made of carbon fiber/epoxy composite.
NASA Astrophysics Data System (ADS)
Durisen, R. H.; Cramer, N. L.; Murphy, B. W.; Cuzzi, J. N.; Mullikin, T. L.; Cederbloom, S. E.
1989-07-01
Ballistic transport, defined as the net radial transport of mass and angular momentum due to exchanges of meteoroid hypersonic-impact ejecta by neighboring planetary ring regions on time-scales orders-of-magnitude shorter than the age of the solar system, is presently considered as a problem in mathematical physics. The preliminary results of a numerical scheme for following the combined effects of ballistic transport and viscous diffusion demonstrate that ballistic transport generates structure near sharp edges already present in the ring-mass distribution; the entire ring system ultimately develops an undulatory structure whose length scale is typically of the order of the radial excursion of the impact ejecta.
Wan, Hui; Rasch, Philip J.; Zhang, Kai; Kazil, Jan; Leung, Lai-Yung R.
2013-06-26
The purpose of this paper is to draw attention to the need for appropriate numerical techniques to represent process interactions in climate models. In two versions of the ECHAM-HAM model, different time integration methods are used to solve the sulfuric acid (H2SO4) gas evolution equation, which lead to substantially different results in the H2SO4 gas concentration and the aerosol nucleation rate. Using convergence tests and sensitivity simulations performed with various time stepping schemes, it is confirmed that numerical errors in the second model version are significantly smaller than those in version one. The use of sequential operator splitting in combination with a long time step is identified as the main reason for the large systematic biases in the old model. The remaining errors in version two in the nucleation rate, related to the competition between condensation and nucleation, have a clear impact on the simulated concentration of cloud condensation nuclei in the lower troposphere. These errors can be significantly reduced by employing an implicit solver that handles production, condensation and nucleation at the same time. Lessons learned in this work underline the need for more caution when treating multi-time-scale problems involving compensating and competing processes, a common occurrence in current climate models.
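The splitting bias described above can be illustrated with a toy production-loss equation. The sketch below is illustrative only: the rates P and k are made-up placeholders standing in for H2SO4 production and condensation. It compares sequential operator splitting against a coupled backward-Euler solve; the split scheme carries a systematic steady-state bias that grows with the time step, while the coupled implicit scheme does not.

```python
import math

# Toy production-loss model dC/dt = P - k*C (hypothetical stand-in for
# H2SO4 gas produced by oxidation and removed by condensation).
P, k = 1.0, 10.0          # assumed production and loss rates (stiff when k*dt >> 1)
C_exact = P / k           # analytic steady state

def step_split(C, dt):
    """Sequential operator splitting: production first, then exact decay."""
    C = C + P * dt                 # process 1: production over the full dt
    return C * math.exp(-k * dt)   # process 2: condensation loss over the full dt

def step_implicit(C, dt):
    """Backward-Euler solve of the coupled production-loss equation."""
    return (C + P * dt) / (1.0 + k * dt)

for dt in (1.0, 0.1, 0.01):
    Cs = Ci = C_exact
    for _ in range(1000):
        Cs, Ci = step_split(Cs, dt), step_implicit(Ci, dt)
    print(f"dt={dt}: split bias={Cs - C_exact:+.3e}, implicit bias={Ci - C_exact:+.3e}")
```

The split scheme settles on a steady state below P/k (badly so when k*dt is large), whereas the implicit scheme holds the exact steady state for any dt, mirroring the paper's finding that a coupled implicit solver removes the splitting bias.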
NASA Astrophysics Data System (ADS)
Horng, Thin-Lin
The main purpose of this paper is to explore a numerical algorithm for determining the contact stress when a circular crowned roller is compressed between two plates. First, the deformation curve on a plate surface is derived using a contact mechanics model. The contact stress distribution along the roller on the plate surface is then divided into three parts: the region from the center of contact to the edge, the edge itself, and the region away from the contact line. The first part is calculated by the elastic contact theorem for the contact, under nominal stress, between the non-crowned parts of the roller and the plates; the second part is obtained from the classical Hertzian contact solution for the contact between the crowned parts of the roller and the plates; and the third part is modeled as an exponential decay. To overcome the limitation of the half-space theorem, which assumes a plate of infinite thickness, a weighting method is introduced to find the contact stress of a plate with finite thickness. Comparisons with various finite element results indicate that the algorithm derived in this paper for estimating the contact stress of a circular crowned roller compressed between two plates can be reasonably accurate when a heavy displacement load is applied. This is because the contact area is large under a heavy load, so the effect of stress concentration is smaller than in the case of a light load.
Islam, Sk Minhazul; Das, Swagatam; Ghosh, Saurav; Roy, Subhrajit; Suganthan, Ponnuthurai Nagaratnam
2012-04-01
Differential evolution (DE) is one of the most powerful stochastic real parameter optimizers of current interest. In this paper, we propose a new mutation strategy, a fitness-induced parent selection scheme for the binomial crossover of DE, and a simple but effective scheme of adapting two of its most important control parameters with an objective of achieving improved performance. The new mutation operator, which we call DE/current-to-gr_best/1, is a variant of the classical DE/current-to-best/1 scheme. It uses the best of a group (whose size is q% of the population size) of randomly selected solutions from the current generation to perturb the parent (target) vector, unlike DE/current-to-best/1, which always picks the best vector of the entire population to perturb the target vector. In our modified framework of recombination, a biased parent selection scheme has been incorporated by letting each mutant undergo the usual binomial crossover with one of the p top-ranked individuals from the current population and not with the target vector with the same index as used in all variants of DE. A DE variant obtained by integrating the proposed mutation, crossover, and parameter adaptation strategies with the classical DE framework (developed in 1995) is compared with two classical and four state-of-the-art adaptive DE variants over 25 standard numerical benchmarks taken from the IEEE Congress on Evolutionary Computation 2005 competition and special session on real parameter optimization. Our comparative study indicates that the proposed schemes improve the performance of DE by a large margin, such that it becomes capable of enjoying statistical superiority over the state-of-the-art DE variants for a wide variety of test problems. Finally, we experimentally demonstrate that, if one or more of our proposed strategies are integrated with existing powerful DE variants such as jDE and JADE, their performances can also be enhanced.
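For concreteness, the DE/current-to-gr_best/1 donor construction can be sketched as follows. This is a minimal NumPy illustration, not the authors' code; the scale factor F, group ratio q, and the tiny demo population are placeholder values.

```python
import numpy as np

rng = np.random.default_rng(0)

def mutate_current_to_gr_best(pop, fitness, i, F=0.8, q=0.15):
    """DE/current-to-gr_best/1 donor vector for target index i (sketch).

    Instead of the best member of the whole population (as in
    DE/current-to-best/1), the perturbation uses the best of a randomly
    chosen group whose size is q*100% of the population.
    """
    NP, D = pop.shape
    group = rng.choice(NP, size=max(2, int(q * NP)), replace=False)
    gr_best = group[np.argmin(fitness[group])]   # best solution of the group
    r1, r2 = rng.choice([j for j in range(NP) if j != i], size=2, replace=False)
    return pop[i] + F * (pop[gr_best] - pop[i]) + F * (pop[r1] - pop[r2])

# Tiny demo on a 5-member population minimizing the sphere function.
pop = rng.standard_normal((5, 3))
fitness = np.sum(pop**2, axis=1)
donor = mutate_current_to_gr_best(pop, fitness, i=0)
print(donor.shape)  # -> (3,)
```

The donor would then undergo the paper's biased binomial crossover with one of the p top-ranked individuals rather than with the target vector itself.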
Mariño-Ramírez, Leonardo; Sheetlin, Sergey L.
2014-01-01
Background Some biological sequences contain subsequences of unusual composition, e.g., some proteins contain DNA binding domains, transmembrane regions, and charged regions; and some DNA sequences contain repeats. Requiring time linear in the length of an input sequence, the Ruzzo-Tompa (RT) Algorithm finds subsequences of unusual composition, using a sequence of scores as input and the corresponding “maximal segments” as output. (Loosely, maximal segments are the contiguous subsequences having greatest total score.) Just as gaps improved the sensitivity of BLAST, in principle gaps could help tune other tools, to improve sensitivity when searching for subsequences of unusual composition. Results Call a graph whose vertices are totally ordered a “totally ordered graph”. In a totally ordered graph, call a path whose vertices are in increasing order an “increasing path”. The input of the RT Algorithm can be generalized to a finite, totally ordered, weighted graph, so the algorithm then locates maximal segments, corresponding to increasing paths of maximal weight. The generalization permits penalized deletion of unfavorable letters from contiguous subsequences, so the generalized Ruzzo-Tompa algorithm can find subsequences with greatest total gapped scores. The search for inexact simple repeats in DNA exemplifies some of the concepts. For some limited types of repeats, RepWords, a repeat-finding tool based on the principled use of the Ruzzo-Tompa algorithm, performed better than a similar extant tool. Conclusions With minimal programming effort, the generalization of the Ruzzo-Tompa algorithm given in this article could improve the performance of many programs for finding biological subsequences of unusual composition. PMID:24989859
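The ungapped core of the Ruzzo-Tompa algorithm can be sketched as follows. This is a compact Python illustration of the score-sequence version only, not the paper's generalization to totally ordered graphs; the inner backward scan stands in for the pointer bookkeeping Ruzzo and Tompa use to guarantee linear time.

```python
def maximal_segments(scores):
    """Ruzzo-Tompa: all maximal-scoring subsequences (sketch).

    Returns (start, end, score) triples with end exclusive.
    """
    segs = []   # stack of [start, end, L, R]; L/R are cumulative scores
                # just before the segment starts and at its end
    cum = 0.0
    for k, x in enumerate(scores):
        if x <= 0:
            cum += x            # nonpositive scores never start a segment
            continue
        seg = [k, k + 1, cum, cum + x]
        cum += x
        while True:
            # rightmost earlier segment j with L_j < L of the current segment
            j = len(segs) - 1
            while j >= 0 and segs[j][2] >= seg[2]:
                j -= 1
            if j < 0 or segs[j][3] >= seg[3]:
                segs.append(seg)    # current segment is (for now) maximal
                break
            # otherwise merge segs[j..] into the current segment and retest
            seg = [segs[j][0], seg[1], segs[j][2], seg[3]]
            del segs[j:]
    return [(s, e, R - L) for s, e, L, R in segs]

print(maximal_segments([4, -5, 3, -3, 1, 2, -2, 2, -2, 1, 5]))
```

On this classic example sequence the routine reports three maximal segments, (4), (3), and (1, 2, -2, 2, -2, 1, 5), with scores 4, 3 and 7.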
Angus, Simon D; Piotrowska, Monika Joanna
2014-01-01
Multi-dose radiotherapy protocols (fraction dose and timing) currently used in the clinic are the product of human selection based on habit, received wisdom, physician experience and intra-day patient timetabling. However, due to combinatorial considerations, the potential treatment protocol space for a given total dose or treatment length is enormous, even for relatively coarse search; well beyond the capacity of traditional in-vitro methods. In contrast, high fidelity numerical simulation of tumor development is well suited to the challenge. Building on our previous single-dose numerical simulation model of EMT6/Ro spheroids, a multi-dose irradiation response module is added and calibrated to the effective dose arising from 18 independent multi-dose treatment programs available in the experimental literature. With the developed model a constrained, non-linear search for better performing candidate protocols is conducted within the vicinity of two benchmarks by genetic algorithm (GA) techniques. After evaluating less than 0.01% of the potential benchmark protocol space, candidate protocols were identified by the GA which conferred an average of 9.4% (max benefit 16.5%) and 7.1% (13.3%) improvement (reduction) in tumor cell count compared to the two benchmarks, respectively. Noticing that a convergent phenomenon of the top performing protocols was their temporal synchronicity, a further series of numerical experiments was conducted with periodic time-gap protocols (10 h to 23 h), leading to the discovery that the performance of the GA search candidates could be replicated by 17-18 h periodic candidates. Further dynamic irradiation-response cell-phase analysis revealed that such periodicity cohered with latent EMT6/Ro cell-phase temporal patterning. Taken together, this study provides powerful evidence towards the hypothesis that even simple inter-fraction timing variations for a given fractional dose program may present a facile, and highly cost-effective means
NASA Astrophysics Data System (ADS)
Audet, Pascal
2016-06-01
The expanding fleet of broad-band ocean-bottom seismograph (OBS) stations is facilitating the study of the structure and seismicity of oceanic plates at regional scales. For continental studies, an important tool to characterize continental crust and mantle structure is the analysis of teleseismic P receiver functions. In the oceans, however, receiver functions potentially suffer from several limiting factors that are unique to ocean sites and plate structures. In this study, we model receiver functions for a variety of oceanic lithospheric structures to investigate the possibilities and limitations of receiver functions using OBS data. Several potentially contaminating effects are examined, including pressure reverberations from the water column for various ocean-floor depths and the effects of a layer of low-velocity marine sediments. These modelling results indicate that receiver functions from OBS data are difficult to interpret in the presence of marine sediments, but shallow-water sites in subduction zone forearcs may be suitable for constraining various crustal elements around the locked megathrust fault. We propose using a complementary approach based on transfer function modelling combined with a grid search approach that bypasses receiver functions altogether and estimates model properties directly from minimally processed waveforms. Using real data examples from the Cascadia Initiative, we show how receiver and transfer functions can be used to infer seismic properties of the oceanic plate in both shallow (Cascadia forearc) and deep (Juan de Fuca Ridge) ocean settings.
ERIC Educational Resources Information Center
Henle, James M.
This pamphlet consists of 17 brief chapters, each containing a discussion of a numeration system and a set of problems on the use of that system. The numeration systems used include Egyptian fractions, ordinary continued fractions and variants of that method, and systems using positive and negative bases. The book is informal and addressed to…
NASA Astrophysics Data System (ADS)
Gasparini, N. M.; Whipple, K. X.; Willenbring, J.; Crosby, B. T.; Brocard, G. Y.
2013-12-01
Numerical landscape evolution models (LEMs) offer us the unique opportunity to watch a landscape evolve under any set of environmental forcings that we can quantify. The possibilities for using LEMs are infinite, but complications arise when trying to model a real landscape. Specifically, numerical models cannot recreate every aspect of a real landscape because exact initial conditions are unknown, there will always be gaps in the known tectonic and climatic history, and the geomorphic transport laws that govern redistribution of mass due to surface processes will always be a simplified representation of the actual process. Yet, even with these constraints, numerical models remain the only tool that offers us the potential to explore a limitless range of evolutionary scenarios, allowing us to, at the very least, identify possible drivers responsible for the morphology of the current landscape, and just as importantly, rule out others. Here we highlight two examples in which we use a numerical model to explore the signature of different forcings on landscape morphology and erosion patterns. In the first landscape, the Northern Bolivian Andes, the relative imprint of rock uplift and precipitation patterns on landscape morphology is widely contested. We use the CHILD LEM to systematically vary climate and tectonics and quantify their fingerprints on channel profiles across a steep mountain front. We find that rock uplift and precipitation patterns in this landscape and others can be teased out by examining channel profiles of variably sized catchments that drain different parts of the topography. In the second landscape, the South Fork Eel River (SFER), northern California, USA, the tectonic history is relatively well known; a wave of rock uplift swept through the watershed from headwaters to outlet, perturbing the landscape and sending a wave of bedrock incision upstream. Nine millennial-scale erosion rates from along the mainstem of the river illustrate a pattern of
A Food Chain Algorithm for Capacitated Vehicle Routing Problem with Recycling in Reverse Logistics
NASA Astrophysics Data System (ADS)
Song, Qiang; Gao, Xuexia; Santos, Emmanuel T.
2015-12-01
This paper introduces the capacitated vehicle routing problem with recycling in reverse logistics and designs a food chain algorithm for it. Illustrative examples are selected for simulation and comparison. Numerical results show that the food chain algorithm outperforms the genetic algorithm, particle swarm optimization, and the quantum evolutionary algorithm.
NASA Astrophysics Data System (ADS)
Zhou, Lin
In the first part of the work, we developed code for large-scale computation to solve the 3-dimensional microwave scattering problem. The Maxwell integral equations are solved using MoM with RWG basis functions in conjunction with fast computation algorithms. Cost-effective parallel and distributed simulations were implemented on a low-cost PC cluster consisting of 32 processors connected to a fast Ethernet switch. More than a million surface current unknowns were solved at unprecedented speeds. Accurate simulations of emissivities and bistatic coefficients from ocean and soil were achieved. An exponential correlation function and an ocean spectrum are implemented for generating soil and ocean surfaces; these have fine-scale features with large rms slope. The results were validated by comparison with numerical results from the original code, which is based on pulse basis functions, with analytic methods such as SPM, and with experiments. In the second part of the work, fully polarimetric microwave emissions from wind-generated foam-covered ocean surfaces were investigated. The foam is treated as densely packed air bubbles coated with a thin layer of seawater. The absorption, scattering and extinction coefficients were calculated by Monte Carlo simulations of solutions of the Maxwell equations for a collection of coated particles. The effects of boundary roughness of ocean surfaces were included by using the second-order small perturbation method (SPM) describing the reflection coefficients between foam and ocean. An empirical wave-number spectrum was used to represent the small-scale wind-generated sea surfaces. The theoretical results for the four Stokes brightness temperatures with typical foam parameters in passive remote sensing at 10.8 GHz, 19.0 GHz and 36.5 GHz were illustrated. The azimuthal variations of polarimetric brightness temperature were calculated. Emission at various wind speeds and foam layer thicknesses was studied. The results were also compared
NASA Technical Reports Server (NTRS)
Goodrich, John W.
1995-01-01
Two methods for developing high order single step explicit algorithms on symmetric stencils with data on only one time level are presented. Examples are given for the convection and linearized Euler equations with up to eighth-order accuracy in both space and time in one space dimension, and up to sixth order in two space dimensions. The method of characteristics is generalized to nondiagonalizable hyperbolic systems by using exact local polynomial solutions of the system, and the resulting exact propagator methods automatically incorporate the correct multidimensional wave propagation dynamics. Multivariate Taylor or Cauchy-Kowalewski expansions are also used to develop algorithms. Both of these methods can be applied to obtain algorithms of arbitrarily high order for hyperbolic systems in multiple space dimensions. Cross derivatives are included in the local approximations used to develop the algorithms in this paper in order to obtain high order accuracy and improved isotropy and stability. Efficiency in meeting global error bounds is an important criterion for evaluating algorithms, and the higher order algorithms are shown to be up to several orders of magnitude more efficient even though they are more complex. Stable high order boundary conditions for the linearized Euler equations are developed in one space dimension, and demonstrated in two space dimensions.
NASA Technical Reports Server (NTRS)
Bui, Trong T.; Mankbadi, Reda R.
1995-01-01
Numerical simulation of a very small amplitude acoustic wave interacting with a shock wave in a quasi-1D convergent-divergent nozzle is performed using an unstructured finite volume algorithm with a piece-wise linear, least square reconstruction, Roe flux difference splitting, and second-order MacCormack time marching. First, the spatial accuracy of the algorithm is evaluated for steady flows with and without the normal shock by running the simulation with a sequence of successively finer meshes. Then the accuracy of the Roe flux difference splitting near the sonic transition point is examined for different reconstruction schemes. Finally, the unsteady numerical solutions with the acoustic perturbation are presented and compared with linear theory results.
NASA Astrophysics Data System (ADS)
Biamonte, Mason; Idarraga, John
2013-04-01
A classical hybrid alternating-direction implicit difference scheme is used to simulate two-dimensional charge carrier advection-diffusion induced by alpha particles incident upon silicon pixel detectors at room temperature in vacuum. A mapping between the results of the simulation and a projection of the cluster size for each incident alpha is constructed. The error between the simulation and the experimental data diminishes with the increase in the applied voltage for the pixels in the central region of the cluster. Simulated peripheral pixel TOT values do not match the data for any value of applied voltage, suggesting possible modifications to the current algorithm from first principles. Coulomb repulsion between charge carriers is built into the algorithm using the Barnes-Hut tree algorithm. The plasma effect arising from the initial presence of holes in the silicon is incorporated into the simulation. The error between the simulation and the data helps identify physics not accounted for in standard literature simulation techniques.
Projector Method: theory and examples
Dahl, E.D.
1985-01-01
The Projector Method technique for numerically analyzing lattice gauge theories was developed to take advantage of certain simplifying features of gauge theory models. Starting from a very general notion of what the Projector Method is, the techniques are applied to several model problems. After these examples have traced the development of the actual algorithm from the general principles of the Projector Method, a direct comparison between the Projector and the Euclidean Monte Carlo is made, followed by a discussion of the application to Periodic Quantum Electrodynamics in two and three spatial dimensions. Some methods for improving the efficiency of the Projector in various circumstances are outlined. 10 refs., 7 figs. (LEW)
Numerical methods for portfolio selection with bounded constraints
NASA Astrophysics Data System (ADS)
Yin, G.; Jin, Hanqing; Jin, Zhuo
2009-11-01
This work develops an approximation procedure for portfolio selection with bounded constraints. Based on the Markov chain approximation techniques, numerical procedures are constructed for the utility optimization task. Under simple conditions, the convergence of the approximation sequences to the wealth process and the optimal utility function is established. Numerical examples are provided to illustrate the performance of the algorithms.
Lou, X M; Hassebrook, L G; Lhamon, M E; Li, J
1997-01-01
We introduce a new method for determining the number of straight lines, line angles, offsets, widths, and discontinuities in complicated images. In this method, line angles are obtained by searching the peaks of a hybrid discrete Fourier and bilinear transformed line angle spectrum. Numerical advantages and performance are demonstrated.
NASA Astrophysics Data System (ADS)
Gasparini, N. M.; Hobley, D. E. J.; Tucker, G. E.; Istanbulluoglu, E.; Adams, J. M.; Nudurupati, S. S.; Hutton, E. W. H.
2014-12-01
Computational models are important tools that can be used to quantitatively understand the evolution of real landscapes. Commonalities exist among most landscape evolution models, although they are also idiosyncratic, in that they are coded in different languages, require different input values, and are designed to tackle a unique set of questions. These differences can make applying a landscape evolution model challenging, especially for novice programmers. In this study, we compare and contrast two landscape evolution models that are designed to tackle similar questions, but the actual model designs are quite different. The first model, CHILD, is over a decade-old and is relatively well-tested, well-developed and well-used. It is coded in C++, operates on an irregular grid and was designed more with function rather than user-experience in mind. In contrast, the second model, Landlab, is relatively new and was designed to be accessible to a wide range of scientists, including those who have not previously used or developed a numerical model. Landlab is coded in Python, a relatively easy language for the non-proficient programmer, and has the ability to model landscapes described on both regular and irregular grids. We present landscape simulations from both modeling platforms. Our goal is to illustrate best practices for implementing a new process module in a landscape evolution model, and therefore the simulations are applicable regardless of the modeling platform. We contrast differences and highlight similarities between the use of the two models, including setting-up the model and input file for different evolutionary scenarios, computational time, and model output. Whenever possible, we compare model output with analytical solutions and illustrate the effects, or lack thereof, of a uniform vs. non-uniform grid. Our simulations focus on implementing a single process, including detachment-limited or transport-limited fluvial bedrock incision and linear or non
Performance-Based Seismic Design of Steel Frames Utilizing Colliding Bodies Algorithm
Veladi, H.
2014-01-01
A pushover analysis method based on the semirigid connection concept is developed, and the colliding bodies optimization algorithm is employed to find the optimum seismic design of frame structures. Two numerical examples from the literature are studied. The results of the new algorithm are compared with those of conventional design methods to demonstrate the strengths and weaknesses of the algorithm. PMID:25202717
2013-01-01
Background The High-Dimensional Propensity Score (hd-PS) algorithm can select and adjust for baseline confounders of treatment-outcome associations in pharmacoepidemiologic studies that use healthcare claims data. How hd-PS performance is affected by aggregating medications or medical diagnoses has not been assessed. Methods We evaluated the effects of aggregating medications or diagnoses on hd-PS performance in an empirical example using resampled cohorts with small sample size, rare outcome incidence, or low exposure prevalence. In a cohort study comparing the risk of upper gastrointestinal complications in celecoxib or traditional NSAIDs (diclofenac, ibuprofen) initiators with rheumatoid arthritis and osteoarthritis, we (1) aggregated medications and International Classification of Diseases-9 (ICD-9) diagnoses into hierarchies of the Anatomical Therapeutic Chemical classification (ATC) and the Clinical Classification Software (CCS), respectively, and (2) sampled the full cohort using techniques validated by simulations to create 9,600 samples to compare 16 aggregation scenarios across 50% and 20% samples with varying outcome incidence and exposure prevalence. We applied hd-PS to estimate relative risks (RR) using 5 dimensions, predefined confounders, ≤ 500 hd-PS covariates, and propensity score deciles. For each scenario, we calculated: (1) the geometric mean RR; (2) the difference between the scenario mean ln(RR) and the ln(RR) from published randomized controlled trials (RCT); and (3) the proportional difference in the degree of estimated confounding between that scenario and the base scenario (no aggregation). Results Compared with the base scenario, aggregations of medications into ATC level 4 alone or in combination with aggregation of diagnoses into CCS level 1 improved the hd-PS confounding adjustment in most scenarios, reducing residual confounding compared with the RCT findings by up to 19%. Conclusions Aggregation of codes using hierarchical coding
NASA Technical Reports Server (NTRS)
Wehrbein, W. M.; Leovy, C. B.
1981-01-01
A Curtis matrix is used to compute cooling by the 15 micron and 10 micron bands of carbon dioxide. Escape of radiation to space and exchange with the lower boundary are used for the 9.6 micron band of ozone. The Voigt line shape, vibrational relaxation, line overlap, and the temperature dependence of line strength distributions and transmission functions are incorporated into the Curtis matrices. The distributions of the atmospheric constituents included in the algorithm and the method used to compute the Curtis matrices are discussed, as well as cooling or heating by the 9.6 micron band of ozone. The FORTRAN programs and subroutines that were developed are described and listed.
Sheng, I. C.; Kuan, C. K.; Chen, Y. T.; Yang, J. Y.; Hsiung, G. Y.; Chen, J. R.
2010-06-23
The pressure distribution is an important aspect of a UHV subsystem in either a storage ring or a front end. The design of the 3-GeV, 400-mA Taiwan Photon Source (TPS) foresees outgassing induced by photons from a bending magnet and an insertion device. An algorithm to calculate the photon-stimulated desorption (PSD) due to highly energetic radiation from a synchrotron source is presented. Several results using undulator sources such as IU20 are also presented, and the pressure distribution is illustrated.
NASA Astrophysics Data System (ADS)
Saito, Kyosuke; Tanabe, Tadao; Oyama, Yutaka
2016-04-01
We present a numerical analysis describing second harmonic generation (SHG) in the THz regime, taking into account both linear and nonlinear optical susceptibility. We employ a nonlinear finite-difference time-domain (nonlinear FDTD) method to simulate SHG output characteristics in a THz photonic crystal (PC) waveguide based on a semi-insulating gallium phosphide crystal. Unique phase-matching conditions, originating from photonic band dispersions with low group velocity, appear and shape the SHG output characteristics. This numerical study provides spectral information on the SHG output in the THz PC waveguide. THz PC waveguides are among the active nonlinear optical devices in the THz regime, and the nonlinear FDTD method is a powerful tool for designing nonlinear photonic THz devices.
NASA Astrophysics Data System (ADS)
Leblanc, James
In this talk we present numerical results for ground state and excited state properties (energies, double occupancies, and Matsubara-axis self energies) of the single-orbital Hubbard model on a two-dimensional square lattice. In order to provide an assessment of our ability to compute accurate results in the thermodynamic limit we employ numerous methods including auxiliary field quantum Monte Carlo, bare and bold-line diagrammatic Monte Carlo, method of dual fermions, density matrix embedding theory, density matrix renormalization group, dynamical cluster approximation, diffusion Monte Carlo within a fixed node approximation, unrestricted coupled cluster theory, and multireference projected Hartree-Fock. We illustrate cases where agreement between different methods is obtained in order to establish benchmark results that should be useful in the validation of future results.
NASA Technical Reports Server (NTRS)
Cooke, C. H.
1976-01-01
An iterative method for numerically solving the time independent Navier-Stokes equations for viscous compressible flows is presented. The method is based upon partial application of the Gauss-Seidel principle in block form to the systems of nonlinear algebraic equations which arise in construction of finite element (Galerkin) models approximating solutions of fluid dynamic problems. The C0-cubic element on triangles is employed for function approximation. Computational results for a free shear flow at Re = 1,000 indicate significant achievement of economy in iterative convergence rate over finite element and finite difference models which employ the customary time dependent equations and asymptotic time marching procedure to steady solution. Numerical results are in excellent agreement with those obtained for the same test problem employing time marching finite element and finite difference solution techniques.
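The Gauss-Seidel principle the paper applies blockwise to nonlinear Galerkin systems reduces, in its simplest pointwise linear form, to the sketch below. This is illustrative only: the test matrix is a made-up diagonally dominant system, not the paper's finite element equations.

```python
import numpy as np

def gauss_seidel(A, b, x0=None, tol=1e-10, max_iter=500):
    """Pointwise Gauss-Seidel iteration for A x = b (illustrative sketch;
    the paper applies the same principle in block form to nonlinear systems)."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # use the freshly updated components x[:i] immediately
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old, ord=np.inf) < tol:
            break
    return x

# Diagonally dominant test system (guarantees convergence).
A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
b = np.array([2.0, 4.0, 10.0])
x = gauss_seidel(A, b)
print(np.allclose(A @ x, b))
```

Sweeping the unknowns in place, so that each update immediately uses the newest values, is what distinguishes Gauss-Seidel from the Jacobi iteration and typically accelerates convergence.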
NASA Astrophysics Data System (ADS)
Motheau, E.; Abraham, J.
2016-05-01
A novel and efficient algorithm is presented in this paper to deal with DNS of turbulent reacting flows under the low-Mach-number assumption, with detailed chemistry and quasi-spectral accuracy. The temporal integration of the equations relies on an operator-splitting strategy, where chemical reactions are solved implicitly with a stiff solver and the convection-diffusion operators are solved with a Runge-Kutta-Chebyshev method. The spatial discretisation is performed with high-order compact schemes, and an FFT-based constant-coefficient spectral solver is employed to solve a variable-coefficient Poisson equation. The numerical implementation takes advantage of the 2DECOMP&FFT libraries developed by [1], which are based on a pencil decomposition method of the domain and are proven to be computationally very efficient. An enhanced pressure-correction method is proposed to speed up the achievement of machine precision accuracy. It is demonstrated that second-order accuracy is reached in time, while the spatial accuracy ranges from fourth-order to sixth-order depending on the set of imposed boundary conditions. The software developed to implement the present algorithm is called HOLOMAC, and its numerical efficiency opens the way to DNS of reacting flows for understanding complex turbulent and chemical phenomena in flames.
A new SPECT reconstruction algorithm based on the Novikov explicit inversion formula
NASA Astrophysics Data System (ADS)
Kunyansky, Leonid A.
2001-04-01
We present a new reconstruction algorithm for single-photon emission computed tomography. The algorithm is based on the Novikov explicit inversion formula for the attenuated Radon transform with non-uniform attenuation. Our reconstruction technique can be viewed as a generalization of both the filtered backprojection algorithm and the Tretiak-Metz algorithm. We test the performance of the present algorithm in a variety of numerical experiments. Our numerical examples show that the algorithm is capable of accurate image reconstruction even in the case of strongly non-uniform attenuation coefficient, similar to that occurring in a human thorax.
NASA Astrophysics Data System (ADS)
Haney, M. M.; Aldridge, D. F.; Symons, N. P.
2005-12-01
Numerical solution of partial differential equations by explicit, time-domain, finite-difference (FD) methods entails approximating temporal and spatial derivatives by discrete function differences. Thus, the solution of the difference equation will not be identical to the solution of the underlying differential equation. Solution accuracy degrades if temporal and spatial gridding intervals are too large. Overly coarse spatial gridding leads to spurious artifacts in the calculated results referred to as numerical dispersion, whereas coarse temporal sampling may produce numerical instability (manifest as unbounded growth in the calculations as FD timestepping proceeds). Quantitative conditions for minimizing dispersion and avoiding instability are developed by deriving the dispersion relation appropriate for the discrete difference equation (or coupled system of difference equations) under examination. A dispersion relation appropriate for FD solution of the 3D velocity-stress system of isotropic elastodynamics, on staggered temporal and spatial grids, is developed. The relation applies to either compressional or shear wave propagation, and reduces to the proper form for acoustic propagation in the limit of vanishing shear modulus. A stability condition and a plane-wave phase-speed formula follow as consequences of the dispersion relation. The mathematical procedure utilized for the derivation is a modern variant of classical von Neumann analysis, and involves a 4D discrete space/time Fourier transform of the nine, coupled, FD updating formulae for particle velocity vector and stress tensor components. The method is generalized to seismic wave propagation within anelastic and poroelastic media, as well as sound wave propagation within a uniformly-moving atmosphere. A significant extension of the approach yields a stability condition for wave propagation across an interface between dissimilar media with strong material contrast (e.g., the earth's surface, the seabed
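A minimal 1D analogue of such a dispersion analysis (not the paper's full 3D velocity-stress relation) starts from the discrete dispersion relation of the second-order staggered-grid scheme for the 1D wave equation, sin(ω·dt/2) = (c·dt/dx)·sin(k·dx/2), and solves it for the numerical phase speed; the Courant stability condition c·dt/dx ≤ 1 appears as the requirement that the arcsine argument stay bounded by one. Function names are illustrative.

```python
import math

def numerical_phase_speed(c, dx, dt, k):
    """Phase speed of the 1D staggered-grid leapfrog scheme, obtained
    from its discrete dispersion relation
        sin(w*dt/2) = (c*dt/dx) * sin(k*dx/2)."""
    courant = c * dt / dx
    s = courant * math.sin(k * dx / 2)
    if abs(s) > 1.0:
        # no real frequency solves the relation: unbounded growth
        raise ValueError("unstable: Courant condition violated")
    w = (2.0 / dt) * math.asin(s)
    return w / k
```

At ten grid points per wavelength and Courant number 0.5 the numerical wave travels about 1% slow; refining the grid at fixed Courant number drives the ratio toward the true speed, which is the numerical-dispersion behavior the abstract describes.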
NASA Astrophysics Data System (ADS)
Mering, Catherine; Chorowicz, Jean; Vicente, Jean-Claude; Chalah, Cherif; Rafalli, Gaelle
1995-11-01
The analysis of high-resolution satellite images such as radar SAR ERS-1 images is usually undertaken by photo-interpretation techniques in order to reveal geological features. The numerical image processing here is based on a filtering method designed for better identification of geological structures on SAR images. The method leads to a mapping of recent faults on which the vertical offset is quantified. As examples, steeply dipping active faults with abrupt scarps are extracted from SAR ERS-1 images of the Central Andes (Atacama Fault zone, Northern Chile). The fault throws are then evaluated with specific numerical image processing.
Nourgaliev R.; Knoll D.; Mousseau V.; Berry R.
2007-04-01
The state-of-the-art for Direct Numerical Simulation (DNS) of boiling multiphase flows is reviewed, focusing on the potential of available computational techniques and the level of current success in their application to several basic flow regimes (film, pool-nucleate and wall-nucleate boiling -- FB, PNB and WNB, respectively). Then, we discuss the multiphysics and multiscale nature of practical boiling flows in LWRs, requiring high-fidelity treatment of interfacial dynamics, phase change, hydrodynamics, compressibility, heat transfer, and non-equilibrium thermodynamics and chemistry of liquid/vapor and fluid/solid-wall interfaces. Finally, we outline the framework for the Fervent code, being developed at INL for DNS of reactor-relevant boiling multiphase flows, with the purpose of gaining insight into the physics of multiphase flow regimes and generating a basis for effective-field modeling in terms of its formulation and closure laws.
NASA Astrophysics Data System (ADS)
LeBlanc, J. P. F.; Antipov, Andrey E.; Becca, Federico; Bulik, Ireneusz W.; Chan, Garnet Kin-Lic; Chung, Chia-Min; Deng, Youjin; Ferrero, Michel; Henderson, Thomas M.; Jiménez-Hoyos, Carlos A.; Kozik, E.; Liu, Xuan-Wen; Millis, Andrew J.; Prokof'ev, N. V.; Qin, Mingpu; Scuseria, Gustavo E.; Shi, Hao; Svistunov, B. V.; Tocchio, Luca F.; Tupitsyn, I. S.; White, Steven R.; Zhang, Shiwei; Zheng, Bo-Xiao; Zhu, Zhenyue; Gull, Emanuel; Simons Collaboration on the Many-Electron Problem
2015-10-01
Numerical results for ground-state and excited-state properties (energies, double occupancies, and Matsubara-axis self-energies) of the single-orbital Hubbard model on a two-dimensional square lattice are presented, in order to provide an assessment of our ability to compute accurate results in the thermodynamic limit. Many methods are employed, including auxiliary-field quantum Monte Carlo, bare and bold-line diagrammatic Monte Carlo, method of dual fermions, density matrix embedding theory, density matrix renormalization group, dynamical cluster approximation, diffusion Monte Carlo within a fixed-node approximation, unrestricted coupled cluster theory, and multireference projected Hartree-Fock methods. Comparison of results obtained by different methods allows for the identification of uncertainties and systematic errors. The importance of extrapolation to converged thermodynamic-limit values is emphasized. Cases where agreement between different methods is obtained establish benchmark results that may be useful in the validation of new approaches and the improvement of existing methods.
Grover's algorithm and the secant varieties
NASA Astrophysics Data System (ADS)
Holweck, Frédéric; Jaffali, Hamza; Nounouh, Ismaël
2016-09-01
In this paper we investigate the entanglement nature of quantum states generated by Grover's search algorithm by means of algebraic geometry. More precisely, we establish a link between the entanglement of states generated by the algorithm and auxiliary algebraic varieties built from the set of separable states. This new perspective enables us to propose qualitative interpretations of earlier numerical results obtained by M. Rossi et al. We also illustrate our approach with a couple of examples investigated in detail.
Simple algorithm for computing the geometric measure of entanglement
Streltsov, Alexander; Kampermann, Hermann; Bruss, Dagmar
2011-08-15
We present an easily implementable algorithm for approximating the geometric measure of entanglement from above. The algorithm can be applied to any multipartite mixed state. It involves only the solution of an eigenproblem and the computation of a singular value decomposition; no further numerical techniques are needed. To provide examples, the algorithm was applied to the isotropic states of three qubits and the three-qubit XX model with an external magnetic field.
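For the special case of a pure bipartite state, the geometric measure reduces to one minus the largest squared Schmidt coefficient, which a single SVD of the coefficient matrix delivers. The sketch below shows only that special case (illustrative, not the authors' general mixed-state algorithm), with hypothetical function names.

```python
import numpy as np

def geometric_measure_pure_bipartite(psi, dims):
    """Geometric measure of entanglement of a pure bipartite state:
    1 - (largest Schmidt coefficient)^2, via an SVD of the state's
    coefficient matrix reshaped to dims = (d_A, d_B)."""
    mat = np.asarray(psi, dtype=complex).reshape(dims)
    s = np.linalg.svd(mat, compute_uv=False)  # Schmidt coefficients
    return 1.0 - s[0] ** 2
```

A Bell state has two equal Schmidt coefficients 1/√2, giving geometric measure 1/2, while any product state gives 0.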
Parallel projected variable metric algorithms for unconstrained optimization
NASA Technical Reports Server (NTRS)
Freeman, T. L.
1989-01-01
The parallel variable metric optimization algorithms of Straeter (1973) and van Laarhoven (1985) are reviewed, and the possible drawbacks of the algorithms are noted. By including Davidon (1975) projections in the variable metric updating, researchers can generalize Straeter's algorithm to a family of parallel projected variable metric algorithms which do not suffer from these drawbacks and which retain quadratic termination. Finally, researchers consider the numerical performance of one member of the family on several standard example problems and illustrate how the choice of the displacement vectors affects the performance of the algorithm.
Farquhar, J.; Chacko, T.; Frost, B.R.
1992-01-01
The Sybille Pit is a late-stage magnetite-ilmenite-plagioclase-bearing differentiate of the Laramie Anorthosite with a wide range of grain sizes and modal mineralogy. This variability makes Sybille an ideal locality in which to study the factors that affect isotopic thermometry in plutonic environments. The authors have developed a numerical model based on isotope exchange trajectories that retrieves close-to-magmatic temperatures for samples from Sybille. This method is based on the premise that hand-sample-scale sub-systems close to exchange with each other at temperatures that exceed those of the constituent minerals. The temperature of hand-sample-scale closure is retrieved by back-calculating the isotope exchange trajectories to the temperature at which two samples with widely different modal compositions are in isotopic equilibrium. Application of these methods to samples from Sybille provides promising results. Whereas conventional isotopic thermometry of individual samples yields a wide range of temperatures (~600 to >1000°C) depending on the mineral pair chosen, application of this numerical model to multiple samples yields temperatures of 1,070 ± 100°C, which correspond closely to the inferred solidus for these rocks.
NASA Astrophysics Data System (ADS)
Yang, Jianwen; Bull, Stuart; Large, Ross
2004-10-01
This paper presents the first hydrogeological model that fully couples transient fluid flow, heat and solute transport associated with the formation of the HYC SEDEX deposit in the McArthur Basin, northern Australia. Numerical results reveal that salinity plays an important role in controlling hydrothermal fluid migration. In particular, it appears that it is the distribution of evaporitic units within a given basin, rather than their absolute abundance, that controls the development of free convection. Relatively saline conditions at the seafloor strengthen the thermally-induced buoyancy force and hence promote free convection of basinal solutions; whereas high salinities at the bottom counteract the thermal function of natural geothermal gradient and suppress the development of convective hydrothermal fluid circulation. In the latter case, higher thermal gradients are required to initiate substantial free convective fluid flow. Numerical experiments also suggest the position of an ore body with respect to its vent system may be controlled by the spatial and temporal salinity distributions in the basin. Vent-distal ore formation, a result of exhalation of brines that are denser than seawater and hence can flow away from the vent region, is promoted by moderate salinity at the seafloor and higher salinity in the aquifer. Vent-proximal ore accumulation, a result of pluming upon exhalation of brines less dense than seawater, is favored by the highest salinity conditions occurring near the level of the seafloor.
Numerical Asymptotic Solutions Of Differential Equations
NASA Technical Reports Server (NTRS)
Thurston, Gaylen A.
1992-01-01
Numerical algorithms are derived and compared with classical analytical methods. In the method, asymptotic expansions are replaced with integrals evaluated numerically. The resulting numerical solutions retain linear independence, the main advantage of asymptotic solutions.
NASA Astrophysics Data System (ADS)
Cebriá, J. M.; Martín-Escorza, C.; López-Ruiz, J.; Morán-Zenteno, D. J.; Martiny, B. M.
2011-04-01
Identification of geological lineaments using numerical methods is a useful tool to reveal structures that may not be evident to the naked eye. In this sense, monogenetic volcanic fields represent an especially suitable case for the application of such techniques, since eruptive vents can be considered as point-like features. Application of a two-point azimuth method to the Michoacán-Guanajuato Volcanic Field (Mexico) and the Calatrava Volcanic Province (Spain) demonstrates that the main lineaments controlling the distributions of volcanic vents (~ 322° in Calatrava and ~ 30° in Michoacán) approach the respective main compressional axes that dominate in the area (i.e. the Cocos-North America plates convergence and the main Betics compressional direction, respectively). Considering the stress fields that are present in each volcanic area and their respective geodynamic history, it seems that although volcanism may be a consequence of contemporaneous extensional regimes, the distribution of the volcanic vents in these kinds of monogenetic fields is actually controlled by reactivation of older fractures which then become more favourable for producing space for magma ascent at near-surface levels.
NASA Astrophysics Data System (ADS)
Hu, Xuanyu; Jekeli, Christopher
2015-02-01
We present a comprehensive numerical analysis of spherical, spheroidal, and ellipsoidal harmonic series for gravitational field modeling near small moderately irregular bodies, such as the Martian moons. The comparison of model performances for these bodies is less intuitive and distinct than for a highly irregular object, such as Eros. The harmonic series models are each associated with a distinct surface, i.e., the Brillouin sphere, spheroid, or ellipsoid, which separates the regions of convergence and possible divergence for the parent infinite series. In their convergence regions, the models are subject only to omission errors representing the residual field variations not accounted for by the finite degree expansions. In the regions inside their respective Brillouin surfaces, the models are susceptible to amplification of omission errors and possible divergence effects, where the latter can be discerned if the error increases with an increase in the maximum degree of the model. We test the harmonic series models on the Martian moons, Phobos and Deimos, with moderate oblateness of 0.4. The possible divergence effects and amplified omission errors of the models are illustrated and quantified. The three models yield consistent results on a bounding sphere of Phobos in their common convergence region, with relative errors in potential of 0.01 and 0.001 % for expansions up to degree 10 and degree 20 respectively. On the surface of Phobos, the spherical and spheroidal models up to degree 10 both have maximum relative errors of 1 % in potential and 100 % in acceleration due ostensibly to divergence effect. Their performances deteriorate more severely on the more irregular Deimos. The ellipsoidal model exhibits much less distinct divergence behavior and proves more reliable in modeling both potential and acceleration, with respective maximum relative errors of 1 and 10 %, on both bodies. Our results show that for the Martian moons and other such moderately irregular
NASA Astrophysics Data System (ADS)
Benaïchouche, Abed; Stab, Olivier; Tessier, Bruno; Cojan, Isabelle
2016-01-01
In landscapes dominated by fluvial erosion, the landscape morphology is closely related to the hydrographic network system. In this paper, we investigate the hydrographic network reorganization caused by a headward piracy mechanism between two drainage basins in France, the Meuse and the Moselle. Several piracies occurred in the Meuse basin during the past one million years, and the basin's current characteristics are favorable to new piracies by the Moselle river network. This study evaluates the consequences over the next several million years of a relative lowering of the Moselle River (and thus of its basin) with respect to the Meuse River. The problem is addressed with a numerical modeling approach (landscape evolution model, hereafter LEM) that requires empirical determinations of parameters and threshold values. Classically, fitting of the parameters is based on analysis of the relationship between the slope and the drainage area and is conducted under the hypothesis of equilibrium. Application of this conventional approach to the capture issue yields incomplete results that have been consolidated by a parametric sensitivity analysis. The LEM equations give a six-dimensional parameter space that was explored with over 15,000 simulations using the landscape evolution model GOLEM. The results demonstrate that stream piracies occur in only four locations in the studied reach near the city of Toul. The locations are mainly controlled by the local topography and are model-independent. Nevertheless, the chronology of the captures depends on two parameters: the river concavity (given by the fluvial advection equation) and the hillslope erosion factor. Thus, the simulations lead to three different scenarios that are explained by a phenomenon of exclusion or a string of events.
NASA Astrophysics Data System (ADS)
Hässig, Marc; Duretz, Thibault; Rolland, Yann; Sosson, Marc
2016-05-01
The ophiolites of NE Anatolia and of the Lesser Caucasus (NALC) evidence an obduction over ∼200 km of oceanic lithosphere of Middle Jurassic age (c. 175-165 Ma) along an entire tectonic boundary (>1000 km) at around 90 Ma. The obduction process is characterized by four first order geological constraints: Ophiolites represent remnants of a single ophiolite nappe currently of only a few kilometres thick and 200 km long. The oceanic crust was old (∼80 Ma) at the time of its obduction. The presence of OIB-type magmatism emplaced up to 10 Ma prior to obduction preserved on top of the ophiolites is indicative of mantle upwelling processes (hotspot). The leading edge of the Taurides-Anatolides, represented by the South Armenian Block, did not experience pressures exceeding 0.8 GPa nor temperatures greater than ∼300 °C during underthrusting below the obducting oceanic lithosphere. An oceanic domain of a maximum 1000 km (from north to south) remained between Taurides-Anatolides and Pontides-Southern Eurasian Margin after the obduction. We employ two-dimensional thermo-mechanical numerical modelling in order to investigate obduction dynamics of a re-heated oceanic lithosphere. Our results suggest that thermal rejuvenation (i.e. reheating) of the oceanic domain, tectonic compression, and the structure of the passive margin are essential ingredients for enabling obduction. Afterwards, extension induced by far-field plate kinematics (subduction below Southern Eurasian Margin), facilitates the thinning of the ophiolite, the transport of the ophiolite on the continental domain, and the exhumation of continental basement through the ophiolite. The combined action of thermal rejuvenation and compression are ascribed to a major change in tectonic motions occurring at 110-90 Ma, which led to simultaneous obductions in the Oman (Arabia) and NALC regions.
Numerical solution of hybrid fuzzy differential equations using improved predictor-corrector method
NASA Astrophysics Data System (ADS)
Kim, Hyunsoo; Sakthivel, Rathinasamy
2012-10-01
The hybrid fuzzy differential equations have a wide range of applications in science and engineering. This paper considers the numerical solution of hybrid fuzzy differential equations. The improved predictor-corrector method is adapted and modified for solving these equations. The proposed algorithm is illustrated by numerical examples, and the results obtained using the scheme presented here agree well with the analytical solutions. Computer symbolic systems such as Maple and Mathematica allow us to perform the complicated calculations of the algorithm.
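As a sketch of the crisp predictor-corrector machinery that such fuzzy methods build on (not the paper's hybrid fuzzy scheme itself), here is a two-step Adams-Bashforth predictor paired with a trapezoidal Adams-Moulton corrector; function names are illustrative.

```python
import math

def abm2(f, t0, y0, t_end, n):
    """Two-step Adams-Bashforth predictor with trapezoidal
    (Adams-Moulton) corrector for y' = f(t, y); the first step
    is bootstrapped with Heun's method."""
    h = (t_end - t0) / n
    t, y = t0, y0
    # bootstrap one step with Heun's method
    k1 = f(t, y)
    k2 = f(t + h, y + h * k1)
    f_prev = k1
    y = y + h * (k1 + k2) / 2
    t += h
    for _ in range(n - 1):
        fn = f(t, y)
        # predictor: second-order Adams-Bashforth
        y_pred = y + h * (3 * fn - f_prev) / 2
        # corrector: trapezoidal rule evaluated at the predicted value
        y = y + h * (fn + f(t + h, y_pred)) / 2
        f_prev = fn
        t += h
    return y
```

The predictor supplies a cheap explicit estimate; the implicit corrector is then evaluated at that estimate instead of being solved iteratively, keeping the scheme explicit overall.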
NASA Astrophysics Data System (ADS)
Carlino, Stefano; Troiano, Antonio; Giulia Di Giuseppe, Maria; Tramelli, Anna; Troise, Claudia; Somma, Renato; De Natale, Giuseppe
2015-04-01
The active volcanic area of Campi Flegrei caldera has been the site of many geothermal investigations since the early 20th century. This caldera is characterised by high heat flow, with maximum values > 150 mWm-2, geothermal gradients larger than 200°Ckm-1 and diffuse magmatic gas discharge at the surface. These features encouraged an extensive campaign of geothermal investigation, started in 1939, with many drillings performed at the Campanian volcanoes (Campi Flegrei and Ischia) and later at Vesuvius. Several wells aimed at the exploitation of high-enthalpy geothermal energy were drilled in the Campi Flegrei caldera, down to a maximum depth of ~3 km, involving mainly two sites (the Mofete and S. Vito geothermal fields) located in the western and northern sectors of the caldera respectively. The most interesting site for geothermal exploitation was the Mofete zone, where four productive wells were drilled and tested to produce electrical power. Based on data inferred from the production tests, a potential extractable electrical power of at least 10 MWe was established for the Mofete field. More recently, an empirical evaluation of the whole geothermal potential of the caldera provided a value of more than 1 GWe. The results of AGIP-ENEL exploration at Campi Flegrei highlighted the feasibility of geothermal exploitation. Here, we show for the first time the results of numerical simulations (TOUGH2® code) of fluid extraction and reinjection from the Mofete geothermal field, in order to produce at least 5 MWe from a zero-emission power plant (Organic Rankine Cycle type). The simulation is aimed at understanding the perturbation of the geothermal reservoir in terms of temperature, pressure change, and possible related seismicity, after different simulated times of exploitation. The modeling is mainly constrained by data derived from geothermal exploration and production tests performed since 1979 by the AGIP-ENEL companies. A general assessment of the maximum potential magnitude
Local multiplicative Schwarz algorithms for convection-diffusion equations
NASA Technical Reports Server (NTRS)
Cai, Xiao-Chuan; Sarkis, Marcus
1995-01-01
We develop a new class of overlapping Schwarz type algorithms for solving scalar convection-diffusion equations discretized by finite element or finite difference methods. The preconditioners consist of two components, namely, the usual two-level additive Schwarz preconditioner and the sum of some quadratic terms constructed by using products of ordered neighboring subdomain preconditioners. The ordering of the subdomain preconditioners is determined by considering the direction of the flow. We prove that the algorithms are optimal in the sense that the convergence rates are independent of the mesh size, as well as the number of subdomains. We show by numerical examples that the new algorithms are less sensitive to the direction of the flow than the classical multiplicative Schwarz algorithms, and converge faster than the additive Schwarz algorithms. Thus, the new algorithms are more suitable for fluid flow applications than the classical additive or multiplicative Schwarz algorithms.
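The flavor of a multiplicative (alternating) Schwarz iteration can be conveyed by a minimal 1D Poisson example: two overlapping subdomains are solved exactly in turn, each using Dirichlet data taken from the current global iterate. This is a sketch of the classical method on a toy problem, not the authors' ordered convection-aware preconditioner; all names are illustrative.

```python
import numpy as np

def multiplicative_schwarz_1d(n=101, overlap=10, sweeps=50):
    """Alternating Schwarz for -u'' = f on [0,1], u(0)=u(1)=0,
    with two overlapping subdomains, each solved exactly."""
    x = np.linspace(0.0, 1.0, n)
    h = x[1] - x[0]
    f = np.pi**2 * np.sin(np.pi * x)      # exact solution: sin(pi x)
    u = np.zeros(n)
    mid = n // 2
    # half-open interior index ranges of the two subdomains
    doms = [(1, mid + overlap), (mid - overlap, n - 1)]
    for _ in range(sweeps):
        for lo, hi in doms:
            m = hi - lo
            # standard 3-point Laplacian on the subdomain
            A = (np.diag(2.0 * np.ones(m)) - np.diag(np.ones(m - 1), 1)
                 - np.diag(np.ones(m - 1), -1)) / h**2
            rhs = f[lo:hi].copy()
            rhs[0] += u[lo - 1] / h**2    # Dirichlet data from current iterate
            rhs[-1] += u[hi] / h**2
            u[lo:hi] = np.linalg.solve(A, rhs)
    return x, u
```

Because each subdomain solve immediately uses the other's freshest values, the iteration is multiplicative; solving both with stale values would give the additive variant.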
Algorithmic Differentiation for Calculus-based Optimization
NASA Astrophysics Data System (ADS)
Walther, Andrea
2010-10-01
For numerous applications, the computation and provision of exact derivative information plays an important role for optimizing the considered system but quite often also for its simulation. This presentation introduces the technique of Algorithmic Differentiation (AD), a method to compute derivatives of arbitrary order within working precision. Quite often an additional structure exploitation is indispensable for a successful coupling of these derivatives with state-of-the-art optimization algorithms. The talk will discuss two important situations where the problem-inherent structure allows a calculus-based optimization. Examples from aerodynamics and nano optics illustrate these advanced optimization approaches.
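A minimal sketch of forward-mode AD using dual numbers (illustrative only; production AD tools are far more general): each value carries its derivative alongside it, and the chain rule is applied operation by operation, yielding derivatives to working precision rather than the truncation error of finite differences.

```python
import math

class Dual:
    """Forward-mode AD value: carries f(x) and f'(x) together."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule propagates the derivative
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)
    __rmul__ = __mul__

def sin(x):
    """sin overload that propagates derivatives through Dual values."""
    if isinstance(x, Dual):
        return Dual(math.sin(x.val), math.cos(x.val) * x.dot)
    return math.sin(x)

def derivative(f, x):
    """Evaluate f'(x) by seeding the dual part with 1."""
    return f(Dual(x, 1.0)).dot
```

For f(x) = x² + sin(x) this returns 2x + cos(x) exactly (to floating-point precision), with no step-size tuning.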
Algorithms and Algorithmic Languages.
ERIC Educational Resources Information Center
Veselov, V. M.; Koprov, V. M.
This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…
Symbalisty, E.M.D.; Zinn, J.; Whitaker, R.W.
1995-09-01
This paper describes the history, physics, and algorithms of the computer code RADFLO and its extension HYCHEM. RADFLO is a one-dimensional, radiation-transport hydrodynamics code that is used to compute early-time fireball behavior for low-altitude nuclear bursts. The primary use of the code is the prediction of optical signals produced by nuclear explosions. It has also been used to predict thermal and hydrodynamic effects that are used for vulnerability and lethality applications. Another closely related code, HYCHEM, is an extension of RADFLO which includes the effects of nonequilibrium chemistry. Some examples of numerical results will be shown, along with scaling expressions derived from those results. We describe new computations of the structures and luminosities of steady-state shock waves and radiative thermal waves, which have been extended to cover a range of ambient air densities for high-altitude applications. We also describe recent modifications of the codes to use a one-dimensional analog of the CAVEAT fluid-dynamics algorithm in place of the former standard Richtmyer-von Neumann algorithm.
Analysis of coaxial spray combustion flames and related numerical issues
NASA Technical Reports Server (NTRS)
Liang, P. Y.
1986-01-01
An approach to the simulation of strongly coupled multiphase flows in combustion hardware is sketched and its unique requirements highlighted. An example of a successful application to a coaxial injector flame is presented. Furthermore, several numerical issues that tend to interact with the physics of the problem are discussed with special regard to their potential impact on the choices of numerical parameters by the analyst. These include the issues of stability, numerical diffusivity, stiffness, and boundary conditions. The theme of this paper focuses on the intriguing relationships among the grid, the solution algorithm, and the actual physical mechanisms themselves.
Fast algorithm for relaxation processes in big-data systems
NASA Astrophysics Data System (ADS)
Hwang, S.; Lee, D.-S.; Kahng, B.
2014-10-01
Relaxation processes driven by a Laplacian matrix can be found in many real-world big-data systems, for example, in search engines on the World Wide Web and the dynamic load-balancing protocols in mesh networks. To numerically implement such processes, a fast-running algorithm for the calculation of the pseudoinverse of the Laplacian matrix is essential. Here we propose an algorithm which quickly and efficiently computes the pseudoinverse of Markov chain generator matrices satisfying the detailed-balance condition, a general class of matrices that includes the Laplacian. The algorithm utilizes the renormalization of the Gaussian integral. In addition to its applicability to a wide range of problems, the algorithm outperforms other algorithms in its ability to compute, within a manageable computing time, arbitrary elements of the pseudoinverse of a matrix of size millions by millions. Our algorithm can therefore be used very widely in analyzing the relaxation processes occurring on large-scale networked systems.
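For small matrices, the object the authors accelerate can be computed directly: the Moore-Penrose pseudoinverse of a symmetric Laplacian follows from its eigendecomposition by inverting only the nonzero eigenvalues. This reference sketch (not their renormalization algorithm, which targets matrices far too large for a dense eigendecomposition) shows what is being computed; names are illustrative.

```python
import numpy as np

def laplacian_pseudoinverse(L, tol=1e-10):
    """Moore-Penrose pseudoinverse of a symmetric (e.g. graph
    Laplacian) matrix via eigendecomposition: invert nonzero
    eigenvalues, zero out the null space (the constant mode)."""
    w, V = np.linalg.eigh(L)
    inv = np.array([1.0 / x if abs(x) > tol else 0.0 for x in w])
    return (V * inv) @ V.T    # V diag(inv) V^T
```

On a path-graph Laplacian this reproduces `np.linalg.pinv`, while the eigen-route makes explicit why the all-ones null vector is projected out.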
A Stochastic Collocation Algorithm for Uncertainty Analysis
NASA Technical Reports Server (NTRS)
Mathelin, Lionel; Hussaini, M. Yousuff; Zang, Thomas A. (Technical Monitor)
2003-01-01
This report describes a stochastic collocation method to adequately handle a physically intrinsic uncertainty in the variables of a numerical simulation. For instance, while the standard Galerkin approach to Polynomial Chaos requires multi-dimensional summations over the stochastic basis functions, the stochastic collocation method makes it possible to collapse those summations to a single one-dimensional summation. This report furnishes the essential algorithmic details of the new stochastic collocation method and provides, as a numerical example, the solution of the Riemann problem with the stochastic collocation method used for the discretization of the stochastic parameters.
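The collapse to a one-dimensional summation can be illustrated for a single Gaussian random parameter: the deterministic model is evaluated only at Gauss-Hermite collocation nodes, and statistics are recovered as weighted sums over those nodes. A minimal sketch with illustrative names, assuming a standard normal parameter:

```python
import numpy as np

def collocation_mean(f, n_nodes=8):
    """Mean of f(xi), xi ~ N(0,1), by Gauss-Hermite collocation:
    the model f is evaluated only at the collocation nodes."""
    # probabilists' Hermite nodes/weights (weight exp(-x^2/2))
    x, w = np.polynomial.hermite_e.hermegauss(n_nodes)
    w = w / w.sum()            # normalize to the standard normal measure
    return np.dot(w, f(x))
```

With 8 nodes the rule is exact for polynomials up to degree 15, so E[ξ²] = 1 is recovered exactly; smooth nonpolynomial integrands such as cos converge extremely fast, mirroring the efficiency advantage over multi-dimensional Galerkin summations described above.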
NASA Astrophysics Data System (ADS)
Razali, Azhani Mohd; Abdullah, Jaafar
2015-04-01
Single Photon Emission Computed Tomography (SPECT) is a well-known imaging technique used in medical applications, one of the medical imaging modalities that make the diagnosis and treatment of disease possible. However, the SPECT technique is not limited to the medical sector. Much work has been carried out to adapt the same concept, using high-energy photon emission, to diagnose process malfunctions in critical industrial systems such as chemical reaction engineering research laboratories, as well as the oil and gas, petrochemical and petrochemical refining industries. Motivated by the vast applications of the SPECT technique, this work attempts to study the application of SPECT to a Pebble Bed Reactor (PBR) using a numerical phantom of pebbles inside the PBR core. From the cross-sectional images obtained from SPECT, the behavior of pebbles inside the core can be analyzed for further improvement of the PBR design. As the quality of the reconstructed image is largely dependent on the algorithm used, this work aims to compare two image reconstruction algorithms for SPECT, namely the Expectation Maximization Algorithm and the Exact Inversion Formula. The results obtained from the Exact Inversion Formula showed better image contrast and sharpness, and shorter computational time, compared to the Expectation Maximization Algorithm.
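Of the two reconstruction algorithms compared, the Expectation Maximization approach admits a compact sketch: the classical MLEM update multiplies the current image estimate by back-projected ratios of measured to predicted counts. The sketch below uses a toy system matrix; names and dimensions are illustrative, not from this work.

```python
import numpy as np

def mlem(A, counts, n_iters=50):
    """Maximum-likelihood expectation-maximization (MLEM)
    reconstruction: A is the system matrix (detector bins x image
    pixels), counts the measured projection data."""
    x = np.ones(A.shape[1])            # flat initial image
    sens = A.sum(axis=0)               # sensitivity of each pixel
    for _ in range(n_iters):
        proj = A @ x                   # forward projection
        ratio = np.where(proj > 0, counts / proj, 0.0)
        x = x / sens * (A.T @ ratio)   # multiplicative EM update
    return x
```

The multiplicative form keeps the image nonnegative at every iteration, which is one reason EM-type methods are standard in emission tomography despite being slower than direct inversion formulas.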
Accurate Finite Difference Algorithms
NASA Technical Reports Server (NTRS)
Goodrich, John W.
1996-01-01
Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods with the same order of accuracy in both space and time, with examples up to eleventh order, and all have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high-order and high-resolution algorithms can produce accurate results after O(10^6) periods of propagation with eight grid points per wavelength.
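The formal order of accuracy quoted for such schemes can be checked empirically by halving the step size and measuring how fast the error shrinks. The sketch below is illustrative only (it uses a standard fourth-order central difference, not one of the paper's schemes): since the error behaves like C·h^p, the observed order is p ≈ log2(err(h)/err(h/2)).

```python
import math

def central_diff(f, x, h):
    # Fourth-order central difference approximation to f'(x):
    # f'(x) ≈ (-f(x+2h) + 8f(x+h) - 8f(x-h) + f(x-2h)) / (12h)
    return (-f(x + 2*h) + 8*f(x + h) - 8*f(x - h) + f(x - 2*h)) / (12*h)

def observed_order(f, df, x, h):
    # Halve h and compare errors: err ~ C*h^p implies p ≈ log2(err(h)/err(h/2)).
    e1 = abs(central_diff(f, x, h) - df(x))
    e2 = abs(central_diff(f, x, h / 2) - df(x))
    return math.log2(e1 / e2)

p = observed_order(math.sin, math.cos, 1.0, 0.1)   # ≈ 4 for this scheme
```

The same halving test applies to any of the single-step explicit schemes, provided h is small enough for the leading error term to dominate but large enough to avoid round-off.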
Markov chain Monte Carlo methods: an introductory example
NASA Astrophysics Data System (ADS)
Klauenberg, Katy; Elster, Clemens
2016-02-01
When the Guide to the Expression of Uncertainty in Measurement (GUM) and methods from its supplements are not applicable, the Bayesian approach may be a valid and welcome alternative. Evaluating the posterior distribution, estimates or uncertainties involved in Bayesian inferences often requires numerical methods to avoid high-dimensional integrations. Markov chain Monte Carlo (MCMC) sampling is such a method—powerful, flexible and widely applied. Here, a concise introduction is given, illustrated by a simple, typical example from metrology. The Metropolis-Hastings algorithm is the most basic and yet flexible MCMC method. Its underlying concepts are explained and the algorithm is given step by step. The few lines of software code required for its implementation invite interested readers to get started. Diagnostics to evaluate the performance and common algorithmic choices are illustrated to calibrate the Metropolis-Hastings algorithm for efficiency. Routine application of MCMC algorithms is currently hindered by the difficulty of assessing the convergence of MCMC output and thus of assuring the validity of results. An example points to the importance of convergence and initiates discussion about advantages as well as areas of research. Available software tools are mentioned throughout.
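The Metropolis-Hastings algorithm described above really does fit in a few lines of code. A minimal random-walk sketch follows; the standard-normal target is a placeholder, not the metrology example from the paper.

```python
import math
import random

def metropolis_hastings(log_target, x0, n_samples, step=1.0, seed=0):
    # Random-walk Metropolis-Hastings with a symmetric Gaussian proposal.
    rng = random.Random(seed)
    x, logp = x0, log_target(x0)
    samples = []
    for _ in range(n_samples):
        x_prop = x + rng.gauss(0.0, step)
        logp_prop = log_target(x_prop)
        # Accept with probability min(1, target(x_prop) / target(x));
        # comparing log densities avoids numerical under/overflow.
        if math.log(rng.random() + 1e-300) < logp_prop - logp:
            x, logp = x_prop, logp_prop
        samples.append(x)
    return samples

# Target: standard normal, known only up to a normalizing constant.
chain = metropolis_hastings(lambda x: -0.5 * x * x, 0.0, 50_000)
mean = sum(chain) / len(chain)
var = sum((s - mean) ** 2 for s in chain) / len(chain)
```

Note that the target only needs to be known up to a constant, which is exactly the situation in Bayesian posterior evaluation; convergence diagnostics (trace plots, multiple chains) are still needed before trusting the estimates.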
Flavin, C.; O'Meara, M.
1997-05-01
Creative financing for setting up individual solar power systems and energy-efficient appliances is beginning to come of age in developing countries. This article describes the practical implementation of such solar energy financing as well as the broader implications, using India, Indonesia and the Dominican Republic as examples. Also included is a discussion of the government and publicly supported organizations that are encouraging solar energy use and realistic financing.
Some numerical aspects of the training problem for feed-forward neural nets.
Hall, Gary; Stella, Fabio; McKeown, John J.
1997-11-01
This paper considers the feed-forward training problem from the numerical point of view, in particular the conditioning of the problem. It is well known that the feed-forward training problem is often ill-conditioned; this affects the behaviour of training algorithms, the choice of such algorithms and the quality of the solutions achieved. A geometric interpretation of ill-conditioning is explored and an example of function approximation is analysed in detail.
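The effect of ill-conditioning can be seen already in a linear toy problem: when two input features are nearly collinear, the Gram matrix of the least-squares training problem has a huge condition number, so training stalls along the nearly flat direction. The sketch below is illustrative only and is not the paper's function-approximation example.

```python
import math

def cond_2x2_sym(a, b, c):
    # Condition number of the symmetric matrix [[a, b], [b, c]],
    # computed from its two eigenvalues.
    tr, det = a + c, a * c - b * b
    disc = math.sqrt(tr * tr - 4.0 * det)
    lam_max, lam_min = (tr + disc) / 2.0, (tr - disc) / 2.0
    return lam_max / lam_min

def gram_cond(eps):
    # Condition number of the Gram matrix X^T X for two input
    # features that become collinear as eps -> 0.
    x1 = [1.0, 1.0, 1.0]
    x2 = [1.0 + eps, 1.0, 1.0 - eps]
    a = sum(v * v for v in x1)
    b = sum(u * v for u, v in zip(x1, x2))
    c = sum(v * v for v in x2)
    return cond_2x2_sym(a, b, c)

well = gram_cond(1.0)    # clearly distinct features: modest conditioning
ill = gram_cond(1e-3)    # nearly collinear features: conditioning explodes
```

For a sigmoidal network the same geometry appears locally through the Jacobian of the residuals, which is why saturated or redundant hidden units degrade the conditioning of the training problem.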
An implicit algorithm for a rate-dependent ductile failure model
NASA Astrophysics Data System (ADS)
Zuo, Q. H.; Rice, Jeremy R.
2008-10-01
An implicit numerical algorithm has been developed for a rate-dependent model for damage and failure of ductile materials under high-rate dynamic loading [F. L. Addessio and J. N. Johnson, J. Appl. Phys. 74, 1640 (1993)]. Over each time step, the algorithm first implicitly determines the equilibrium state on a Gurson surface, and then calculates the final state by solving viscous relaxation equations, also implicitly. Numerical examples are given to demonstrate the key features of the algorithm. Compared to the explicit algorithm used previously, the current algorithm allows significantly larger time steps to be used in the analysis. As the viscosity of the material vanishes, the results of the rate-dependent model are shown here to converge to those of the corresponding rate-independent model, a result not achieved with the explicit algorithm.
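The step-size advantage of implicit integration for stiff relaxation equations can be illustrated on the scalar test equation y' = λy with λ large and negative; this is a toy stand-in, not the Gurson-model equations. Backward Euler remains stable at step sizes where forward Euler blows up.

```python
def explicit_euler(lam, y0, dt, steps):
    # Forward Euler for y' = lam * y: stable only if |1 + dt*lam| <= 1.
    y = y0
    for _ in range(steps):
        y = y + dt * lam * y
    return y

def implicit_euler(lam, y0, dt, steps):
    # Backward Euler: solve y_new = y + dt * lam * y_new at each step;
    # unconditionally stable for lam < 0.
    y = y0
    for _ in range(steps):
        y = y / (1.0 - dt * lam)
    return y

lam = -1000.0                                 # stiff relaxation rate
y_exp = explicit_euler(lam, 1.0, 0.01, 100)   # dt far beyond the explicit limit
y_imp = implicit_euler(lam, 1.0, 0.01, 100)   # decays smoothly toward 0
```

For the forward scheme, dt·λ = -10 gives an amplification factor of -9 per step, so the iterates grow without bound, while the backward scheme damps the solution at every step regardless of dt.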
Boundary acquisition for setup of numerical simulation
Diegert, C.
1997-12-31
The author presents a work flow diagram that includes a path that begins with taking experimental measurements and ends with obtaining insight from results produced by numerical simulation. Two examples illustrate this path: (1) Three-dimensional imaging measurement at micron scale, using X-ray tomography, provides information on the boundaries of irregularly shaped alumina oxide particles held in an epoxy matrix. A subsequent numerical simulation predicts the electrical field concentrations that would occur in the observed particle configurations. (2) Three-dimensional imaging measurement at meter scale, again using X-ray tomography, provides information on the boundaries of fossilized bone fragments in a Parasaurolophus crest recently discovered in New Mexico. A subsequent numerical simulation predicts the acoustic response of the elaborate internal structure of nasal passageways defined by the fossil record. The author must both add value and change the format of the three-dimensional imaging measurements before defining the geometric boundary initial conditions for automatic mesh generation and subsequent numerical simulation. The author applies a variety of filters and statistical classification algorithms to estimate the extents of the structures relevant to the subsequent numerical simulation, and captures these extents as faceted geometries. The author will describe the particular combination of manual and automatic methods used in the above two examples.
NASA Astrophysics Data System (ADS)
Pei, Chengquan; Tian, Jinshou; Wu, Shengli; He, Jiai; Liu, Zhen
2016-10-01
The transient response has a great influence on the electromagnetic compatibility of synchronous scanning streak cameras (SSSCs). In this paper we propose a numerical method to evaluate the transient response of the scanning deflection plate (SDP). First, we created a simplified circuit model for the SDP used in an SSSC, and then derived the Baum-Liu-Tesche (BLT) equation in the frequency domain. From the frequency-domain BLT equation, its transient counterpart was derived. These parameters, together with the transient BLT equation, were used to compute the transient load voltage and load current, and a novel numerical method was then used to fulfill the continuity equation. Several numerical simulations were conducted to verify the proposed method. The computed results were then compared with transient responses obtained by a frequency-domain/fast Fourier transform (FFT) method, and the agreement was excellent for highly conducting cables. The benefit of deriving the BLT equation in the time domain is that it may be used with slight modifications to calculate the transient response, and the error can be controlled by a computer program. The results showed that the transient voltage was up to 1000 V and the transient current was approximately 10 A, so some protective measures should be taken to improve the electromagnetic compatibility.
Distributed parameter estimation in unreliable sensor networks via broadcast gossip algorithms.
Wang, Huiwei; Liao, Xiaofeng; Wang, Zidong; Huang, Tingwen; Chen, Guo
2016-01-01
In this paper, we present an asynchronous algorithm to estimate the unknown parameter over an unreliable network that allows new sensors to join and old sensors to leave, and that can tolerate link failures. Each sensor has access to partially informative measurements when it is awakened. In addition, the proposed algorithm can avoid the interference among messages and effectively reduce the accumulated measurement and quantization errors. Based on the theory of stochastic approximation, we prove that our proposed algorithm almost surely converges to the unknown parameter. Finally, we present a numerical example to assess the performance and the communication cost of the algorithm.
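The broadcast-style update behind such schemes can be sketched in a few lines; the toy below is a generic broadcast gossip consensus, not the paper's parameter estimator with quantization and link failures. One sensor wakes up, broadcasts its state, and its neighbors move a fraction of the way toward it.

```python
import random

def broadcast_gossip(values, neighbors, rounds, gamma=0.5, seed=0):
    # Each round, one randomly awakened node broadcasts its state and
    # every neighbor moves a fraction gamma toward the broadcast value.
    rng = random.Random(seed)
    x = list(values)
    for _ in range(rounds):
        i = rng.randrange(len(x))
        for j in neighbors[i]:
            x[j] = (1.0 - gamma) * x[j] + gamma * x[i]
    return x

# Five sensors on a ring, each hearing only its two ring neighbors.
nbrs = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
final = broadcast_gossip([10.0, 0.0, 5.0, 3.0, 2.0], nbrs, 2000)
spread = max(final) - min(final)   # shrinks toward 0 as consensus is reached
```

Each update is a convex combination, so the states stay within the initial range; on a connected graph the spread contracts to a consensus value close to (but, for plain broadcast gossip, not exactly equal to) the initial average.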
Advances in Numerical Boundary Conditions for Computational Aeroacoustics
NASA Technical Reports Server (NTRS)
Tam, Christopher K. W.
1997-01-01
Advances in Computational Aeroacoustics (CAA) depend critically on the availability of accurate, nondispersive, minimally dissipative computation algorithms, as well as high-quality numerical boundary treatments. This paper focuses on recent developments in numerical boundary conditions. In a typical CAA problem, one often encounters two types of boundaries. Because a finite computation domain is used, there are external boundaries. On the external boundaries, boundary conditions simulating the solution outside the computation domain are to be imposed. Inside the computation domain, there may be internal boundaries. On these internal boundaries, boundary conditions simulating the presence of an object or surface with specific acoustic characteristics are to be applied. Numerical boundary conditions, both external and internal, developed for simple model problems are reviewed and examined. Numerical boundary conditions for real aeroacoustic problems are also discussed through specific examples. The paper concludes with a description of some much-needed research in numerical boundary conditions for CAA.
Randomized approximate nearest neighbors algorithm.
Jones, Peter Wilcox; Osipov, Andrei; Rokhlin, Vladimir
2011-09-20
We present a randomized algorithm for the approximate nearest neighbor problem in d-dimensional Euclidean space. Given N points {x_j} in R^d, the algorithm attempts to find k nearest neighbors for each of x_j, where k is a user-specified integer parameter. The algorithm is iterative, and its running time requirements are proportional to T·N·(d·(log d) + k·(d + log k)·(log N)) + N·k^2·(d + log k), with T the number of iterations performed. The memory requirements of the procedure are of the order N·(d + k). A by-product of the scheme is a data structure, permitting a rapid search for the k nearest neighbors among {x_j} for an arbitrary point x ∈ R^d. The cost of each such query is proportional to T·(d·(log d) + log(N/k)·k·(d + log k)), and the memory requirements for the requisite data structure are of the order N·(d + k) + T·(d + N). The algorithm utilizes random rotations and a basic divide-and-conquer scheme, followed by a local graph search. We analyze the scheme's behavior for certain types of distributions of {x_j} and illustrate its performance via several numerical examples.
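The divide-and-conquer ingredient can be sketched with a randomized-projection tree; this is illustrative only and omits the paper's T iterations, random rotations, and local graph search. The points are split at the median of a random projection, the search descends toward the query's side, and the leaf is handled by brute force, so the result is approximate.

```python
import random

def brute_knn(points, q, k):
    # Exact k nearest neighbors by brute force (used at the leaves).
    dist2 = lambda p: sum((a - b) ** 2 for a, b in zip(p, q))
    return sorted(points, key=dist2)[:k]

def rp_tree_knn(points, q, k, leaf_size=16, rng=None):
    # Randomized-projection tree: split at the median of a random
    # projection and descend toward the query. Approximate, because
    # true neighbors may lie on the other side of a cut.
    rng = rng or random.Random(0)
    if len(points) <= leaf_size:
        return brute_knn(points, q, k)
    d = len(points[0])
    u = [rng.gauss(0.0, 1.0) for _ in range(d)]              # random direction
    proj = lambda p: sum(a * b for a, b in zip(p, u))
    cut = sorted(proj(p) for p in points)[len(points) // 2]  # median split
    if proj(q) <= cut:
        side = [p for p in points if proj(p) <= cut]
    else:
        side = [p for p in points if proj(p) > cut]
    return rp_tree_knn(side, q, k, leaf_size, rng)

data_rng = random.Random(1)
pts = [[data_rng.uniform(0.0, 1.0) for _ in range(5)] for _ in range(200)]
query = [0.5] * 5
approx = rp_tree_knn(pts, query, k=3)
```

Running several such trees (the role of T in the abstract) and merging candidate sets is what drives the recall up in practice.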
A frictional sliding algorithm for liquid droplets
NASA Astrophysics Data System (ADS)
Sauer, Roger A.
2016-08-01
This work presents a new frictional sliding algorithm for liquid menisci in contact with solid substrates. In contrast to solid-solid contact, the liquid-solid contact behavior is governed by the contact line, where a contact angle forms and undergoes hysteresis. The new algorithm admits arbitrary meniscus shapes and arbitrary substrate roughness, heterogeneity and compliance. It is discussed and analyzed in the context of droplet contact, but it also applies to liquid films and solids with surface tension. The droplet is modeled as a stabilized membrane enclosing an incompressible medium. The contact formulation is considered rate-independent such that hydrostatic conditions apply. Three distinct contact algorithms are needed to describe the cases of frictionless surface contact, frictionless line contact and frictional line contact. For the latter, a predictor-corrector algorithm is proposed in order to enforce the contact conditions at the contact line and thus distinguish between the cases of advancing, pinning and receding. The algorithms are discretized within a monolithic finite element formulation. Several numerical examples are presented to illustrate the numerical and physical behavior of sliding droplets.
Annealed Importance Sampling Reversible Jump MCMC algorithms
Karagiannis, Georgios; Andrieu, Christophe
2013-03-20
It will soon be 20 years since reversible jump Markov chain Monte Carlo (RJ-MCMC) algorithms were proposed. They have significantly extended the scope of Markov chain Monte Carlo simulation methods, offering the promise of routinely tackling transdimensional sampling problems, as encountered in Bayesian model selection problems for example, in a principled and flexible fashion. Their practical efficient implementation, however, still remains a challenge. A particular difficulty encountered in practice is in the choice of the dimension matching variables (both their nature and their distribution) and the reversible transformations which allow one to define the one-to-one mappings underpinning the design of these algorithms. Indeed, even seemingly sensible choices can lead to algorithms with very poor performance. The focus of this paper is the development and performance evaluation of a method, annealed importance sampling RJ-MCMC (aisRJ), which addresses this problem by mitigating the sensitivity of RJ-MCMC algorithms to the aforementioned poor design. As we shall see, the algorithm can be understood as being an “exact approximation” of an idealized MCMC algorithm that would sample from the model probabilities directly in a model selection set-up. Such an idealized algorithm may have good theoretical convergence properties, but typically cannot be implemented, and our algorithms can approximate the performance of such idealized algorithms to an arbitrary degree while not introducing any bias for any degree of approximation. Our approach combines the dimension matching ideas of RJ-MCMC with annealed importance sampling and its Markov chain Monte Carlo implementation. We illustrate the performance of the algorithm with numerical simulations which indicate that, although the approach may at first appear computationally involved, it is in fact competitive.
Numerical integration of systems of delay differential-algebraic equations
NASA Astrophysics Data System (ADS)
Kuznetsov, E. B.; Mikryukov, V. N.
2007-01-01
The numerical solution of the initial value problem for a system of delay differential-algebraic equations is examined in the framework of the parametric continuation method. Necessary and sufficient conditions are obtained for transforming this problem to the best argument, which ensures the best condition for the corresponding system of continuation equations. The best argument is the arc length along the integral curve of the problem. Algorithms and programs based on the continuous and discrete continuation methods are developed for the numerical integration of this problem. The efficiency of the suggested transformation is demonstrated using test examples.
Reasoning about systolic algorithms
Purushothaman, S.; Subrahmanyam, P.A.
1988-12-01
The authors present a methodology for verifying correctness of systolic algorithms. The methodology is based on solving a set of Uniform Recurrence Equations obtained from a description of systolic algorithms as a set of recursive equations. They present an approach to mechanically verify correctness of systolic algorithms, using the Boyer-Moore theorem prover. A mechanical correctness proof of an example from the literature is also presented.
Xiao, Jianyuan; Liu, Jian; Qin, Hong; Yu, Zhi
2013-10-15
Smoothing functions are commonly used to reduce numerical noise arising from coarse sampling of particles in particle-in-cell (PIC) plasma simulations. When applying smoothing functions to symplectic algorithms, the conservation of symplectic structure should be guaranteed to preserve good conservation properties. In this paper, we show how to construct a variational multi-symplectic PIC algorithm with smoothing functions for the Vlasov-Maxwell system. The conservation of the multi-symplectic structure and the reduction of numerical noise make this algorithm specifically suitable for simulating long-term dynamics of plasmas, such as those in the steady-state operation or long-pulse discharge of a super-conducting tokamak. The algorithm has been implemented in a 6D large scale PIC code. Numerical examples are given to demonstrate the good conservation properties of the multi-symplectic algorithm and the reduction of the noise due to the application of smoothing function.
A numerical method for DNS/LES of turbulent reacting flows
Doom, Jeff; Hou, Yucheng; Mahesh, Krishnan
2007-09-10
A spatially non-dissipative, implicit numerical method to simulate turbulent reacting flows over a range of Mach numbers is described. The compressible Navier-Stokes equations are rescaled so that the zero Mach number equations are discretely recovered in the limit of zero Mach number. The dependent variables are co-located in space, and thermodynamic variables are staggered from velocity in time. The algorithm discretely conserves kinetic energy in the incompressible, inviscid, non-reacting limit. The chemical source terms are implicit in time to allow for stiff chemical mechanisms. The algorithm is readily extended to complex chemical mechanisms. Numerical examples using both simple and complex chemical mechanisms are presented.
Di Pierro, Michele; Elber, Ron; Leimkuhler, Benedict
2015-12-01
We present an algorithm termed COMPEL (COnstant Molecular Pressure with Ewald sum for Long range forces) to conduct simulations in the NPT ensemble. The algorithm combines novel features recently proposed in the literature to obtain a highly efficient and accurate numerical integrator. COMPEL exploits the concepts of molecular pressure, rapid stochastic relaxation to equilibrium, exact calculation of the contribution to the pressure of long-range nonbonded forces with Ewald summation, and the use of Trotter expansion to generate a robust, highly stable, symmetric, and accurate algorithm. Explicit implementation in the MOIL program and illustrative numerical examples are discussed. PMID:26616351
NASA Astrophysics Data System (ADS)
Ku, B.; Nam, M.
2012-12-01
Neutron logging has been widely used to estimate neutron porosity in order to evaluate formation properties in the oil industry. More recently, neutron logging has been highlighted for monitoring the behavior of CO2 injected into reservoirs for geological CO2 sequestration. For a better understanding of neutron log interpretation, the Monte Carlo N-Particle (MCNP) algorithm is used to illustrate the response of a neutron tool. In order to obtain calibration curves for the neutron tool, neutron responses are simulated in water-filled limestone, sandstone and dolomite formations of various porosities. Since the salinities (concentration of NaCl) of the borehole fluid and formation water are important factors for estimating formation porosity, we first compute and analyze neutron responses for brine-filled formations with different porosities. Further, we consider changes in the brine saturation of a reservoir due to hydrocarbon production or geological CO2 sequestration to simulate corresponding neutron logging data. As gas saturation decreases, measured neutron porosity confirms gas effects on neutron logging, which is attributed to the fact that gas contains a slightly smaller number of hydrogen atoms than brine water. In the meantime, an increase in CO2 saturation due to CO2 injection reduces measured neutron porosity, giving a clue to estimating the CO2 saturation, since the injected CO2 substitutes for the brine water. A further analysis of the reduction gives a strategy for estimating CO2 saturation based on time-lapse neutron logging. This strategy can help in monitoring not only geological CO2 sequestration but also CO2 flooding for enhanced oil recovery. Acknowledgements: This work was supported by the Energy Efficiency & Resources program of the Korea Institute of Energy Technology Evaluation and Planning (KETEP) grant funded by the Korea government Ministry of Knowledge Economy (No. 2012T100201588). Myung Jin Nam was partially supported by the National Research Foundation of Korea (NRF) grant funded by the Korea
NASA Technical Reports Server (NTRS)
Farhat, C.; Park, K. C.; Dubois-Pelerin, Y.
1991-01-01
An unconditionally stable second order accurate implicit-implicit staggered procedure for the finite element solution of fully coupled thermoelasticity transient problems is proposed. The procedure is stabilized with a semi-algebraic augmentation technique. A comparative cost analysis reveals the superiority of the proposed computational strategy to other conventional staggered procedures. Numerical examples of one- and two-dimensional thermomechanical coupled problems demonstrate the accuracy of the proposed numerical solution algorithm.
A Hybrid Shortest Path Algorithm for Navigation System
NASA Astrophysics Data System (ADS)
Cho, Hsun-Jung; Lan, Chien-Lun
2007-12-01
Combined with Geographic Information System (GIS) and Global Positioning System (GPS) technologies, the vehicle navigation system has become a quite popular product in daily life. A key component of the navigation system is the shortest path algorithm. Navigation in the real world must handle a network consisting of tens of thousands of nodes and links, or even more. Under the limited computation capability of vehicle navigation equipment, it is difficult to satisfy the real-time response requirement that users expect. Hence, this study focused on shortest path algorithms that enhance computation speed with less memory requirement. Several well-known algorithms such as Dijkstra, A* and hierarchical concepts were integrated to build hybrid algorithms that reduce the searching space and improve the searching speed. Numerical examples were conducted on the Taiwan highway network, which consists of more than four hundred thousand links and nearly three hundred thousand nodes. This real network was divided into two connected sub-networks (layers). The upper layer is constructed from freeways and expressways; the lower layer is constructed from local networks. Test origin-destination pairs were chosen randomly and divided into three distance categories: short, medium and long distances. The outcome is evaluated by actual length and travel time. The numerical examples reveal that the hybrid algorithm proposed by this research can be tens of thousands of times faster than the traditional Dijkstra algorithm; the memory requirement of the hybrid algorithm is also much smaller than that of the traditional algorithm. This outcome shows that the proposed algorithm would have an advantage in vehicle navigation systems.
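The baseline that such hybrid methods accelerate is plain Dijkstra with a binary heap. A compact sketch on a toy graph follows (the Taiwan network itself is obviously not reproduced here); hierarchical and A* variants prune exactly this search.

```python
import heapq

def dijkstra(graph, source, target):
    # Classic Dijkstra over an adjacency dict {node: [(nbr, weight), ...]}.
    # Returns (cost, path) for the cheapest source -> target route.
    dist = {source: 0.0}
    prev = {}
    pq = [(0.0, source)]
    done = set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in done:
            continue
        done.add(u)
        if u == target:                 # reconstruct path by backtracking
            path = [u]
            while path[-1] != source:
                path.append(prev[path[-1]])
            return d, path[::-1]
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    return float("inf"), []

g = {"A": [("B", 1), ("C", 4)], "B": [("C", 2), ("D", 6)], "C": [("D", 3)]}
cost, path = dijkstra(g, "A", "D")      # → (6, ['A', 'B', 'C', 'D'])
```

On a continental-scale road network this label-setting search visits far too many nodes; restricting long-distance search to an upper freeway layer, as in the paper, is what makes real-time queries feasible.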
Newton algorithm for fitting transfer functions to frequency response measurements
NASA Technical Reports Server (NTRS)
Spanos, J. T.; Mingori, D. L.
1993-01-01
In this paper the problem of synthesizing transfer functions from frequency response measurements is considered. Given a complex vector representing the measured frequency response of a physical system, a transfer function of specified order is determined that minimizes the sum of the magnitude-squared of the frequency response errors. This nonlinear least squares minimization problem is solved by an iterative global descent algorithm of the Newton type that converges quadratically near the minimum. The unknown transfer function is expressed as a sum of second-order rational polynomials, a parameterization that facilitates a numerically robust computer implementation. The algorithm is developed for single-input, single-output, causal, stable transfer functions. Two numerical examples demonstrate the effectiveness of the algorithm.
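The Newton-type descent can be illustrated on a drastically simplified one-parameter fit; the one-pole low-pass magnitude model below is a hypothetical stand-in, not the paper's sum of second-order rational polynomials, and the derivatives are approximated by central differences rather than computed analytically.

```python
import math

def newton_min(g, p0, h=1e-5, tol=1e-10, iters=50):
    # 1-D Newton descent on a scalar objective g, with central-difference
    # approximations to g' and g''; converges quadratically near the minimum.
    p = p0
    for _ in range(iters):
        g1 = (g(p + h) - g(p - h)) / (2.0 * h)
        g2 = (g(p + h) - 2.0 * g(p) + g(p - h)) / (h * h)
        step = g1 / g2
        p -= step
        if abs(step) < tol:
            break
    return p

# Toy data: magnitude response of a one-pole low-pass with corner a_true.
freqs = [0.5, 1.0, 2.0, 4.0]
a_true = 2.0
meas = [1.0 / math.sqrt(1.0 + (w / a_true) ** 2) for w in freqs]

def sse(a):
    # Sum of squared magnitude errors between model and "measurement".
    return sum((1.0 / math.sqrt(1.0 + (w / a) ** 2) - m) ** 2
               for w, m in zip(freqs, meas))

a_fit = newton_min(sse, 1.0)   # recovers the corner frequency a_true
```

The full problem replaces the scalar p by the coefficient vector of the second-order sections and the scalar second derivative by the Hessian, but the quadratic local convergence argument is the same.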
Numerical algebraic geometry and algebraic kinematics
NASA Astrophysics Data System (ADS)
Wampler, Charles W.; Sommese, Andrew J.
In this article, the basic constructs of algebraic kinematics (links, joints, and mechanism spaces) are introduced. This provides a common schema for many kinds of problems that are of interest in kinematic studies. Once the problems are cast in this algebraic framework, they can be attacked by tools from algebraic geometry. In particular, we review the techniques of numerical algebraic geometry, which are primarily based on homotopy methods. We include a review of the main developments of recent years and outline some of the frontiers where further research is occurring. While numerical algebraic geometry applies broadly to any system of polynomial equations, algebraic kinematics provides a body of interesting examples for testing algorithms and for inspiring new avenues of work.
Dynamics of Numerics & Spurious Behaviors in CFD Computations. Revised
NASA Technical Reports Server (NTRS)
Yee, Helen C.; Sweby, Peter K.
1997-01-01
The global nonlinear behavior of finite discretizations for constant time steps and fixed or adaptive grid spacings is studied using tools from dynamical systems theory. Detailed analysis of commonly used temporal and spatial discretizations for simple model problems is presented. The role of dynamics in the understanding of long time behavior of numerical integration and the nonlinear stability, convergence, and reliability of using time-marching approaches for obtaining steady-state numerical solutions in computational fluid dynamics (CFD) is explored. The study is complemented with examples of spurious behavior observed in steady and unsteady CFD computations. The CFD examples were chosen to illustrate non-apparent spurious behavior that was difficult to detect without extensive grid and temporal refinement studies and some knowledge from dynamical systems theory. Studies revealed the various possible dangers of misinterpreting numerical simulation of realistic complex flows that are constrained by available computing power. In large scale computations where the physics of the problem under study is not well understood and numerical simulations are the only viable means of solution, extreme care must be taken in both computation and interpretation of the numerical data. The goal of this paper is to explore the important role that dynamical systems theory can play in the understanding of the global nonlinear behavior of numerical algorithms and to aid the identification of the sources of numerical uncertainties in CFD.
Artificial bee colony algorithm for constrained possibilistic portfolio optimization problem
NASA Astrophysics Data System (ADS)
Chen, Wei
2015-07-01
In this paper, we discuss the portfolio optimization problem with real-world constraints under the assumption that the returns of risky assets are fuzzy numbers. A new possibilistic mean-semiabsolute deviation model is proposed, in which transaction costs, cardinality and quantity constraints are considered. Due to such constraints the proposed model becomes a mixed integer nonlinear programming problem and traditional optimization methods fail to find the optimal solution efficiently. Thus, a modified artificial bee colony (MABC) algorithm is developed to solve the corresponding optimization problem. Finally, a numerical example is given to illustrate the effectiveness of the proposed model and the corresponding algorithm.
Algorithms for image reconstruction from projections in optical tomography
NASA Astrophysics Data System (ADS)
Zhu, Lin-Sheng; Huang, Su-Yi
1993-09-01
It is well known that the determination of the temperature field by holographic interferometry is a successful method in thermophysics measurement. In this paper some practical algorithms for image reconstruction from projections are presented to produce the temperature field. The algorithms developed directly solve the Radon transform integral equation by a grid method and numerically evaluate the Radon inversion formula by a two-dimensional Fourier transform technique. Some examples are given to verify the validity of the above methods in practice.
Thermoluminescence curves simulation using genetic algorithm with factorial design
NASA Astrophysics Data System (ADS)
Popko, E. A.; Weinstein, I. A.
2016-05-01
The evolutionary approach is an effective optimization tool for the numerical analysis of thermoluminescence (TL) processes, used to assess the microparameters of kinetic models and to determine their effects on the shape of TL peaks. In this paper, a procedure for tuning a genetic algorithm (GA) is presented. The approach is based on a multifactorial experiment and allows choosing the intrinsic mechanisms of evolutionary operators that provide the most efficient algorithm performance. The proposed method is tested by considering the “one trap-one recombination center” (OTOR) model as an example, and its advantages for the approximation of experimental TL curves are shown.
Active Learning with Irrelevant Examples
NASA Technical Reports Server (NTRS)
Wagstaff, Kiri; Mazzoni, Dominic
2009-01-01
An improved active learning method has been devised for training data classifiers. One example of a data classifier is the algorithm used by the United States Postal Service since the 1960s to recognize scans of handwritten digits for processing zip codes. Active learning algorithms enable rapid training with minimal investment of time on the part of human experts to provide training examples consisting of correctly classified (labeled) input data. They function by identifying which examples would be most profitable for a human expert to label. The goal is to maximize classifier accuracy while minimizing the number of examples the expert must label. Although there are several well-established methods for active learning, they may not operate well when irrelevant examples are present in the data set. That is, they may select an item for labeling that the expert simply cannot assign to any of the valid classes. In the context of classifying handwritten digits, the irrelevant items may include stray marks, smudges, and mis-scans. Querying the expert about these items results in wasted time or erroneous labels, if the expert is forced to assign the item to one of the valid classes. In contrast, the new algorithm provides a specific mechanism for avoiding querying the irrelevant items. This algorithm has two components: an active learner (which could be a conventional active learning algorithm) and a relevance classifier. The combination of these components yields a method, denoted Relevance Bias, that enables the active learner to avoid querying irrelevant data so as to increase its learning rate and efficiency when irrelevant items are present. The algorithm collects irrelevant data in a set of rejected examples, then trains the relevance classifier to distinguish between labeled (relevant) training examples and the rejected ones. The active learner combines its ranking of the items with the probability that they are relevant to yield a final decision about which item to query next.
Partially linearized algorithms in gyrokinetic particle simulation
Dimits, A.M.; Lee, W.W.
1990-10-01
In this paper, particle simulation algorithms with time-varying weights for the gyrokinetic Vlasov-Poisson system have been developed. The primary purpose is to use them for the removal of the selected nonlinearities in the simulation of gradient-driven microturbulence so that the relative importance of the various nonlinear effects can be assessed. It is hoped that the use of these procedures will result in a better understanding of the transport mechanisms and scaling in tokamaks. Another application of these algorithms is for the improvement of the numerical properties of the simulation plasma. For instance, implementations of such algorithms (1) enable us to suppress the intrinsic numerical noise in the simulation, and (2) also make it possible to regulate the weights of the fast-moving particles and, in turn, to eliminate the associated high frequency oscillations. Examples of their application to drift-type instabilities in slab geometry are given. We note that the work reported here represents the first successful use of the weighted algorithms in particle codes for the nonlinear simulation of plasmas.
Efficient sequential and parallel algorithms for record linkage
Mamun, Abdullah-Al; Mi, Tian; Aseltine, Robert; Rajasekaran, Sanguthevar
2014-01-01
Background and objective Integrating data from multiple sources is a crucial and challenging problem. Even though there exist numerous algorithms for record linkage or deduplication, they suffer from either large time needs or restrictions on the number of datasets that they can integrate. In this paper we report efficient sequential and parallel algorithms for record linkage which handle any number of datasets and outperform previous algorithms. Methods Our algorithms employ hierarchical clustering algorithms as the basis. A key idea that we use is radix sorting on certain attributes to eliminate identical records before any further processing. Another novel idea is to form a graph that links similar records and find the connected components. Results Our sequential and parallel algorithms have been tested on a real dataset of 1 083 878 records and synthetic datasets ranging in size from 50 000 to 9 000 000 records. Our sequential algorithm runs at least two times faster, for any dataset, than the previous best-known algorithm, the two-phase algorithm using faster computation of the edit distance (TPA (FCED)). The speedups obtained by our parallel algorithm are almost linear. For example, we get a speedup of 7.5 with 8 cores (residing in a single node), 14.1 with 16 cores (residing in two nodes), and 26.4 with 32 cores (residing in four nodes). Conclusions We have compared the performance of our sequential algorithm with TPA (FCED) and found that our algorithm outperforms the previous one. The accuracy is the same as that of this previous best-known algorithm. PMID:24154837
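The graph idea in the abstract (link similar records, then take connected components as the linked clusters) can be sketched with a union-find structure. The toy similarity predicate below stands in for the paper's edit-distance comparison and is purely illustrative; the radix-sort deduplication step is omitted.

```python
class DisjointSet:
    """Union-find with path halving, used to accumulate connected components."""
    def __init__(self, n):
        self.parent = list(range(n))
    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[rb] = ra

def link_records(records, similar):
    """Link every similar pair of records, then return the connected
    components -- each component is a cluster of records judged to refer
    to the same real-world entity."""
    n = len(records)
    ds = DisjointSet(n)
    for i in range(n):
        for j in range(i + 1, n):
            if similar(records[i], records[j]):
                ds.union(i, j)
    clusters = {}
    for i in range(n):
        clusters.setdefault(ds.find(i), []).append(i)
    return list(clusters.values())

records = ["jon smith", "john smith", "mary jones", "jon  smith"]
# Toy stand-in for an edit-distance test: similar length, same first letter.
similar = lambda a, b: abs(len(a) - len(b)) <= 1 and a[0] == b[0]
print(link_records(records, similar))  # [[0, 1, 3], [2]]
```

Note that connected components give transitive linkage: records 0 and 3 end up together even if only indirectly similar through record 1.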
Coupling Algorithms for Calculating Sensitivities of Population Balances
Man, P. L. W.; Kraft, M.; Norris, J. R.
2008-09-01
We introduce a new class of stochastic algorithms for calculating parametric derivatives of the solution of the space-homogeneous Smoluchowski coagulation equation. Currently, it is very difficult to produce low-variance estimates of these derivatives in reasonable amounts of computational time through the use of stochastic methods. These new algorithms consider a central difference estimator of the parametric derivative, which is calculated by evaluating the coagulation equation at two different parameter values simultaneously and achieving variance reduction by maximising the covariance between the two evaluations. The two different coupling strategies ('Single' and 'Double') have been compared to the case when there is no coupling ('Independent'). Both coupling algorithms converge, and the Double coupling is the most 'efficient' algorithm. For the numerical example chosen we obtain a factor of about 100 in efficiency in the best case (small system evolution time and small parameter perturbation).
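The variance-reduction mechanism (maximising the covariance between the two parameter evaluations of a central difference estimator) can be illustrated with common random numbers on a toy noisy model; the quadratic model below is a stand-in for the coagulation solver, not the paper's algorithm.

```python
import random
import statistics

def estimate(theta, rng):
    """Toy stochastic evaluation whose mean is theta**2 -- a stand-in for a
    single stochastic run of the coagulation solver at parameter theta."""
    return theta ** 2 + rng.gauss(0, 1)

def central_diff(theta, h, n, coupled):
    """Central-difference derivative estimates (up - dn) / (2h). With
    coupled=True the same random draws are reused at theta+h and theta-h,
    maximising the covariance between the two evaluations."""
    diffs = []
    for i in range(n):
        if coupled:
            up = estimate(theta + h, random.Random(i))
            dn = estimate(theta - h, random.Random(i))
        else:
            up = estimate(theta + h, random.Random(2 * i))
            dn = estimate(theta - h, random.Random(2 * i + 1))
        diffs.append((up - dn) / (2 * h))
    return diffs

vc = statistics.pvariance(central_diff(1.0, 0.1, 200, True))
vu = statistics.pvariance(central_diff(1.0, 0.1, 200, False))
print(vc < vu)  # True: coupling slashes the estimator variance
```

In this linear-noise toy the coupled noise cancels exactly; in the coagulation setting the cancellation is only partial, which is why the 'Single' and 'Double' strategies differ in efficiency.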
Randomized Algorithms for Matrices and Data
NASA Astrophysics Data System (ADS)
Mahoney, Michael W.
2012-03-01
This chapter reviews recent work on randomized matrix algorithms. By “randomized matrix algorithms,” we refer to a class of recently developed random sampling and random projection algorithms for ubiquitous linear algebra problems such as least-squares (LS) regression and low-rank matrix approximation. These developments have been driven by applications in large-scale data analysis—applications which place very different demands on matrices than traditional scientific computing applications. Thus, in this review, we will focus on highlighting the simplicity and generality of several core ideas that underlie the usefulness of these randomized algorithms in scientific applications such as genetics (where these algorithms have already been applied) and astronomy (where, hopefully, in part due to this review they will soon be applied). The work we will review here had its origins within theoretical computer science (TCS). An important feature in the use of randomized algorithms in TCS more generally is that one must identify and then algorithmically deal with relevant “nonuniformity structure” in the data. For the randomized matrix algorithms to be reviewed here and that have proven useful recently in numerical linear algebra (NLA) and large-scale data analysis applications, the relevant nonuniformity structure is defined by the so-called statistical leverage scores. Defined more precisely below, these leverage scores are basically the diagonal elements of the projection matrix onto the dominant part of the spectrum of the input matrix. As such, they have a long history in statistical data analysis, where they have been used for outlier detection in regression diagnostics. More generally, these scores often have a very natural interpretation in terms of the data and processes generating the data. For example, they can be interpreted in terms of the leverage or influence that a given data point has on, say, the best low-rank matrix approximation; and this
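The leverage scores defined in this review (diagonal elements of the projection onto the dominant part of the spectrum) can be computed directly from the SVD; a minimal sketch, with the rank-k truncation chosen by the caller:

```python
import numpy as np

def leverage_scores(A, k=None):
    """Statistical leverage scores of the rows of A: diagonal entries of the
    projection onto the dominant k-dimensional left singular subspace,
    i.e. the squared row norms of the first k left singular vectors."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    if k is None:
        k = int(np.linalg.matrix_rank(A))
    return np.sum(U[:, :k] ** 2, axis=1)

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 3))
scores = leverage_scores(A)
print(scores.sum())  # leverage scores always sum to k = rank(A) = 3
```

Each score lies in [0, 1], and rows with scores near 1 are exactly the influential observations flagged in regression diagnostics.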
Wohak, M.G.; Beer, H.
1998-05-08
A contribution toward the full numerical simulation of direct-contact evaporation of a drop rising in a hot, immiscible and less volatile liquid of higher density is presented. Based on a fixed-grid Eulerian description, the classical SOLA-VOF method is substantially extended to incorporate, for example, three incompressible fluids and liquid-vapor phase change. The thorough validation and assessment process covers several benchmark simulations, some of which are presented, documenting the multipurpose value of the new code. The direct-contact evaporation simulations reveal severe numerical problems that are closely related to the fixed-grid Eulerian formulation. As a consequence, the comparison to experiments has to be limited to the initial stage. Potential applications using several design variations can be found in waste heat recovery and reactor cooling. Furthermore, direct-contact evaporators may be used in geothermal power plants where the brines cannot be fed directly into a turbine, either because of a high salt load causing severe fouling and corrosion or because of a low steam fraction.
A general-purpose contact detection algorithm for nonlinear structural analysis codes
Heinstein, M.W.; Attaway, S.W.; Swegle, J.W.; Mello, F.J.
1993-05-01
A new contact detection algorithm has been developed to address difficulties associated with the numerical simulation of contact in nonlinear finite element structural analysis codes. Problems including accurate and efficient detection of contact for self-contacting surfaces, tearing and eroding surfaces, and multi-body impact are addressed. The proposed algorithm is portable between dynamic and quasi-static codes and can efficiently model contact between a variety of finite element types including shells, bricks, beams and particles. The algorithm is composed of (1) a location strategy that uses a global search to decide which slave nodes are in proximity to a master surface and (2) an accurate detailed contact check that uses the projected motions of both master surface and slave node. In this report, currently used contact detection algorithms and their associated difficulties are discussed. Then the proposed algorithm and how it addresses these problems is described. Finally, the capability of the new algorithm is illustrated with several example problems.
The hierarchical algorithms--theory and applications
NASA Astrophysics Data System (ADS)
Su, Zheng-Yao
Monte Carlo simulations are one of the most important numerical techniques for investigating statistical physical systems. Among these systems, spin models are a typical example, and they also play an essential role in constructing abstract mechanisms for various complex systems. Unfortunately, traditional Monte Carlo algorithms are afflicted with "critical slowing down" near continuous phase transitions, and the efficiency of the Monte Carlo simulation goes to zero as the size of the lattice is increased. To combat critical slowing down, a very different type of collective-mode algorithm, in contrast to the traditional single-spin-flip mode, was proposed by Swendsen and Wang in 1987 for Potts spin models. Since then, there has been an explosion of work attempting to understand, improve, or generalize it. In these so-called "cluster" algorithms, clusters of spins are regarded as single units and are updated at each step of the Monte Carlo procedure. In implementing these algorithms, the cluster labeling is a major time-consuming bottleneck; it is also isomorphic to the problem of computing connected components of an undirected graph seen in other application areas, such as pattern recognition. A number of cluster labeling algorithms for sequential computers have long existed. However, the dynamic irregular nature of clusters complicates the task of finding good parallel algorithms, and this is particularly true on SIMD (single-instruction-multiple-data) machines. Our design of the Hierarchical Cluster Labeling Algorithm aims at alleviating this problem by building a hierarchical structure on the problem domain and by incorporating local and nonlocal communication schemes. We present an estimate for the computational complexity of cluster labeling and prove the key features of this algorithm (such as lower computational complexity, data locality, and easy implementation) compared with the methods formerly known. In particular, this algorithm can be viewed as a generalized
Numerical simulation of laminar reacting flows with complex chemistry
Day, Marcus S.; Bell, John B.
1999-12-01
We present an adaptive algorithm for low Mach number reacting flows with complex chemistry. Our approach uses a form of the low Mach number equations that discretely conserves both mass and energy. The discretization methodology is based on a robust projection formulation that accommodates large density contrasts. The algorithm uses an operator-split treatment of stiff reaction terms and includes effects of differential diffusion. The basic computational approach is embedded in an adaptive projection framework that uses structured hierarchical grids with subcycling in time that preserves the discrete conservation properties of the underlying single-grid algorithm. We present numerical examples illustrating the performance of the method on both premixed and non-premixed flames.
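The operator-split treatment of stiff reaction terms can be illustrated with Strang splitting on a linear toy equation u_t = D u_xx - k u (an exact reaction half-step, an explicit periodic diffusion full step, then another reaction half-step). This is a sketch of the splitting idea only, not the low Mach number algorithm itself; the grid sizes and coefficients are assumed for illustration.

```python
import numpy as np

def strang_step(u, dt, dx, D, k):
    """One Strang operator-split step for u_t = D u_xx - k u on a periodic
    grid: reaction half-step (solved exactly), explicit diffusion full step,
    reaction half-step."""
    u = u * np.exp(-k * dt / 2)                   # reaction half-step (exact)
    lap = np.roll(u, 1) - 2 * u + np.roll(u, -1)  # periodic Laplacian stencil
    u = u + dt * D * lap / dx ** 2                # diffusion full step (explicit)
    return u * np.exp(-k * dt / 2)                # reaction half-step (exact)

# Periodic diffusion conserves total mass, so one step must decay the total
# mass by exactly exp(-k*dt) -- a quick conservation check on the splitting.
n, dt, D, k = 64, 0.01, 1e-3, 1.0
dx = 1.0 / n
x = np.arange(n) * dx
u = np.exp(-((x - 0.5) ** 2) / 0.01)
print(np.isclose(strang_step(u, dt, dx, D, k).sum(), u.sum() * np.exp(-k * dt)))
```

The explicit diffusion step requires dt*D/dx**2 <= 1/2 for stability; the parameters above satisfy it comfortably.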
REQUEST: A Recursive QUEST Algorithm for Sequential Attitude Determination
NASA Technical Reports Server (NTRS)
Bar-Itzhack, Itzhack Y.
1996-01-01
In order to find the attitude of a spacecraft with respect to a reference coordinate system, vector measurements are taken. The vectors are pairs of measurements of the same generalized vector, taken in the spacecraft body coordinates as well as in the reference coordinate system. We are interested in finding the best estimate of the transformation between these coordinate systems. The algorithm called QUEST yields that estimate, where the attitude is expressed by a quaternion. QUEST is an efficient algorithm which provides a least-squares fit of the quaternion of rotation to the vector measurements. QUEST, however, is a single-time-point (single-frame) batch algorithm, thus measurements that were taken at previous time points are discarded. The algorithm presented in this work provides a recursive routine which considers all past measurements. The algorithm is based on the fact that the so-called K matrix, one of whose eigenvectors is the sought quaternion, is linearly related to the measured pairs, and on the ability to propagate K. The extraction of the appropriate eigenvector is done according to the classical QUEST algorithm. This stage, however, can be eliminated, and the computation simplified, if a standard eigenvalue-eigenvector solver algorithm is used. The development of the recursive algorithm is presented and illustrated via a numerical example.
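The K matrix mentioned here is the one from Davenport's q-method, which QUEST approximates; the abstract's remark that a standard eigensolver can replace the QUEST extraction step is sketched below: build K from the weighted vector pairs and take the eigenvector of its largest eigenvalue. The quaternion convention (x, y, z, w) and the function name are choices made for this sketch.

```python
import numpy as np

def davenport_q(body_vecs, ref_vecs, weights):
    """Optimal attitude quaternion from weighted vector-pair measurements via
    the Davenport K matrix: the sought quaternion is the eigenvector of K
    belonging to its largest eigenvalue (here extracted with a standard
    symmetric eigensolver rather than QUEST's fast approximation)."""
    B = sum(w * np.outer(b, r) for w, b, r in zip(weights, body_vecs, ref_vecs))
    S = B + B.T
    z = np.array([B[1, 2] - B[2, 1], B[2, 0] - B[0, 2], B[0, 1] - B[1, 0]])
    sigma = np.trace(B)
    K = np.zeros((4, 4))
    K[:3, :3] = S - sigma * np.eye(3)
    K[:3, 3] = z
    K[3, :3] = z
    K[3, 3] = sigma
    vals, vecs = np.linalg.eigh(K)       # ascending eigenvalues
    q = vecs[:, np.argmax(vals)]         # eigenvector of the largest eigenvalue
    return q / np.linalg.norm(q)         # quaternion in (x, y, z, w) convention

# Sanity check: identical body and reference vectors give the identity
# rotation, i.e. quaternion (0, 0, 0, 1) up to an overall sign.
b = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]
print(np.abs(davenport_q(b, b, [1.0, 1.0])))
```

The recursive REQUEST idea is then to propagate and update K itself over time before this eigenvector extraction.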
Parallel algorithms for boundary value problems
NASA Technical Reports Server (NTRS)
Lin, Avi
1990-01-01
A general approach to solve boundary value problems numerically in a parallel environment is discussed. The basic algorithm consists of two steps: the local step where all the P available processors work in parallel, and the global step where one processor solves a tridiagonal linear system of the order P. The main advantages of this approach are two fold. First, this suggested approach is very flexible, especially in the local step and thus the algorithm can be used with any number of processors and with any of the SIMD or MIMD machines. Secondly, the communication complexity is very small and thus can be used as easily with shared memory machines. Several examples for using this strategy are discussed.
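The global step described here (one processor solving a tridiagonal system of order P) is classically done with the Thomas algorithm; a serial sketch, with the helper name and test system chosen for illustration:

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system with sub-diagonal a (a[0] unused), diagonal
    b, super-diagonal c (c[-1] unused) and right-hand side d, by forward
    elimination followed by back substitution -- O(P) work for order P."""
    n = len(d)
    cp = np.empty(n)
    dp = np.empty(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]      # eliminated pivot
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):       # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Check against a dense solve on a small diagonally dominant system.
a = np.array([0.0, 1.0, 1.0])
b = np.array([3.0, 3.0, 3.0])
c = np.array([1.0, 1.0, 0.0])
d = np.array([5.0, 10.0, 9.0])
A = np.diag(b) + np.diag(c[:-1], 1) + np.diag(a[1:], -1)
print(np.allclose(thomas(a, b, c, d), np.linalg.solve(A, d)))
```

In the parallel scheme of the abstract, each processor's local elimination produces one equation of this reduced system, which a single processor then solves.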
Numerical quadrature for slab geometry transport algorithms
Hennart, J.P.; Valle, E. del
1995-12-31
In recent papers, a generalized nodal finite element formalism has been presented for virtually all known linear finite difference approximations to the discrete ordinates equations in slab geometry. For a particular angular direction {mu}, the neutron flux {Phi} is approximated by a piecewise function {Phi}h, which over each space interval can be polynomial or quasipolynomial. Here we restrict ourselves to the polynomial case. Over each space interval, {Phi} is a polynomial of degree k whose interpolating parameters, in the continuous and discontinuous cases respectively, are the angular flux at the left and right ends and the k-th Legendre moments of {Phi} over the cell considered.
A split finite element algorithm for the compressible Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Baker, A. J.
1979-01-01
An accurate and efficient numerical solution algorithm is established for solution of the high Reynolds number limit of the Navier-Stokes equations governing the multidimensional flow of a compressible essentially inviscid fluid. Finite element interpolation theory is used within a dissipative formulation established using Galerkin criteria within the Method of Weighted Residuals. An implicit iterative solution algorithm is developed, employing tensor product bases within a fractional steps integration procedure, that significantly enhances solution economy concurrent with sharply reduced computer hardware demands. The algorithm is evaluated for resolution of steep field gradients and coarse grid accuracy using both linear and quadratic tensor product interpolation bases. Numerical solutions for linear and nonlinear, one, two and three dimensional examples confirm and extend the linearized theoretical analyses, and results are compared to competitive finite difference derived algorithms.
Note on symmetric BCJ numerator
NASA Astrophysics Data System (ADS)
Fu, Chih-Hao; Du, Yi-Jian; Feng, Bo
2014-08-01
We present an algorithm that leads to BCJ numerators satisfying manifestly the three properties proposed by Broedel and Carrasco in [42]. We explicitly calculate the numerators at 4, 5 and 6 points and show that the relabeling property is generically satisfied.
NASA Astrophysics Data System (ADS)
Mehanee, Salah A.; Essa, Khalid S.
2015-08-01
A new two-and-a-half dimensional (2.5D) regularized inversion scheme has been developed for the interpretation of residual gravity data by a dipping thin-sheet model. This scheme solves for the characteristic inverse parameters (depth to top z, dip angle θ, extension in depth L, strike length 2Y, and amplitude coefficient A) of a model in the space of logarithms of these parameters (log(z), log(θ), log(L), log(Y), and log(|A|)). The developed method has been successfully verified on synthetic examples without noise. The method is found to be stable and can estimate the inverse parameters of the buried target with acceptable accuracy when applied to data contaminated with various noise levels. However, some of the inverse parameters lost accuracy when the method was applied to synthetic data distorted by significant neighboring gravity effects/interferences. The validity of this method for practical applications has been successfully illustrated on two field examples with diverse geologic settings from mineral exploration. The estimated inverse parameters of the real data investigated are found to agree well with those obtained from drilling. The method is shown to be highly applicable for mineral prospecting and reconnaissance studies. It is capable of extracting the various characteristic inverse parameters that are of geologic and economic significance, and is of particular value in cases where the residual gravity data set is due to an isolated thin-sheet type buried target. The sensitivity analysis carried out on the Jacobian matrices of the field examples investigated here has shown that the parameter that can be determined with superior accuracy is θ (as confirmed from drilling information). The parameters z, L, Y, and A can be estimated with acceptable accuracy, especially the parameters z and A. This inverse problem is non-unique. The non-uniqueness analysis and the tabulated inverse results presented here have shown that the
Algorithmic complexity and entanglement of quantum states.
Mora, Caterina E; Briegel, Hans J
2005-11-11
We define the algorithmic complexity of a quantum state relative to a given precision parameter, and give upper bounds for various examples of states. We also establish a connection between the entanglement of a quantum state and its algorithmic complexity.
Algorithms for verbal autopsies: a validation study in Kenyan children.
Quigley, M. A.; Armstrong Schellenberg, J. R.; Snow, R. W.
1996-01-01
The verbal autopsy (VA) questionnaire is a widely used method for collecting information on cause-specific mortality where the medical certification of deaths in childhood is incomplete. This paper discusses review by physicians and expert algorithms as approaches to ascribing causes of death from the VA questionnaire and proposes an alternative, data-derived approach. In this validation study, the relatives of 295 children who had died in hospital were interviewed using a VA questionnaire. The children were assigned causes of death using data-derived algorithms obtained under logistic regression and using expert algorithms. For most causes of death, the data-derived algorithms and expert algorithms yielded similar levels of diagnostic accuracy. However, a data-derived algorithm for malaria gave a sensitivity of 71% (95% CI: 58-84%), which was significantly higher than the sensitivity of 47% obtained under an expert algorithm. The need to explore this and other ways in which the VA technique can be improved is discussed. The implications of less-than-perfect sensitivity and specificity are explored using numerical examples. Misclassification bias should be taken into consideration when planning and evaluating epidemiological studies. PMID:8706229
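The misclassification arithmetic behind such numerical examples is simple to work through: with sensitivity `sens` and specificity `spec`, the expected observed cause-specific fraction is sens*p + (1-spec)*(1-p) for a true fraction p. The sketch below uses the abstract's 71% malaria sensitivity with an assumed specificity of 0.90 (not a figure from the paper).

```python
def observed_fraction(p, sens, spec):
    """Expected fraction of deaths *classified* as due to a cause when the
    true fraction is p: true positives plus false positives."""
    return sens * p + (1 - spec) * (1 - p)

# Hypothetical illustration (specificity 0.90 is assumed): rare causes are
# overestimated (0.10 -> ~0.16) while common ones are underestimated
# (0.50 -> ~0.41), which is the misclassification bias the paper warns about.
for p in (0.1, 0.3, 0.5):
    print(p, round(observed_fraction(p, 0.71, 0.90), 3))
```

Inverting this relation (back-correcting the observed fraction) requires knowing both sensitivity and specificity, which is exactly what validation studies like this one estimate.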
Methods of information theory and algorithmic complexity for network biology.
Zenil, Hector; Kiani, Narsis A; Tegnér, Jesper
2016-03-01
We survey and introduce concepts and tools located at the intersection of information theory and network biology. We show that Shannon's information entropy, compressibility and algorithmic complexity quantify different local and global aspects of synthetic and biological data. We show examples such as the emergence of giant components in Erdös-Rényi random graphs, and the recovery of topological properties from numerical kinetic properties simulating gene expression data. We provide exact theoretical calculations, numerical approximations and error estimations of entropy, algorithmic probability and Kolmogorov complexity for different types of graphs, characterizing their variant and invariant properties. We introduce formal definitions of complexity for both labeled and unlabeled graphs and prove that the Kolmogorov complexity of a labeled graph is a good approximation of its unlabeled Kolmogorov complexity and thus a robust definition of graph complexity.
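The compressibility-based approximations surveyed here can be illustrated with a crude proxy: the zlib-compressed size of a graph's adjacency bit string as an upper-bound stand-in for its Kolmogorov complexity. The ring-lattice versus random-graph comparison below is a toy example of this document's theme (structured graphs compress better), not one of the paper's calculations.

```python
import zlib
import random

def compress_ratio(bits):
    """Crude compressibility proxy for algorithmic complexity: compressed
    size of an adjacency bit string divided by its raw size (smaller ratio
    means more regularity, hence lower estimated complexity)."""
    raw = "".join(bits).encode()
    return len(zlib.compress(raw, 9)) / len(raw)

n = 64
random.seed(1)
# Ring lattice: each vertex linked to its two cyclic neighbours (regular).
regular = ["1" if (i - j) % n in (1, n - 1) else "0"
           for i in range(n) for j in range(n)]
# Erdos-Renyi-style noise: each potential edge present with probability 1/2.
noisy = [random.choice("01") for _ in range(n * n)]
print(compress_ratio(regular) < compress_ratio(noisy))  # structure compresses better
```

Real lossless compressors only bound Kolmogorov complexity from above; the paper's block-decomposition and algorithmic-probability methods aim at tighter estimates.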
NASA Technical Reports Server (NTRS)
Wang, Lui; Bayer, Steven E.
1991-01-01
Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithms concepts are introduced, genetic algorithm applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
Compiling quantum algorithms for architectures with multi-qubit gates
NASA Astrophysics Data System (ADS)
Martinez, Esteban A.; Monz, Thomas; Nigg, Daniel; Schindler, Philipp; Blatt, Rainer
2016-06-01
In recent years, small-scale quantum information processors have been realized in multiple physical architectures. These systems provide a universal set of gates that allow one to implement any given unitary operation. The decomposition of a particular algorithm into a sequence of these available gates is not unique. Thus, the fidelity of the implementation of an algorithm can be increased by choosing an optimized decomposition into available gates. Here, we present a method to find such a decomposition, where a small-scale ion trap quantum information processor is used as an example. We demonstrate a numerical optimization protocol that minimizes the number of required multi-qubit entangling gates by design. Furthermore, we adapt the method for state preparation, and quantum algorithms including in-sequence measurements.
Worldline numerics for energy-momentum tensors in Casimir geometries
NASA Astrophysics Data System (ADS)
Schäfer, Marco; Huet, Idrish; Gies, Holger
2016-04-01
We develop the worldline formalism for computations of composite operators such as the fluctuation induced energy-momentum tensor. As an example, we use a fluctuating real scalar field subject to Dirichlet boundary conditions. The resulting worldline representation can be evaluated by worldline Monte-Carlo methods in continuous spacetime. We benchmark this worldline numerical algorithm with the aid of analytically accessible single-plate and parallel-plate Casimir configurations, providing a detailed analysis of statistical and systematic errors. The method generalizes straightforwardly to arbitrary Casimir geometries and general background potentials.
Numerical methods for high-dimensional probability density function equations
NASA Astrophysics Data System (ADS)
Cho, H.; Venturi, D.; Karniadakis, G. E.
2016-01-01
In this paper we address the problem of computing the numerical solution to kinetic partial differential equations involving many phase variables. These types of equations arise naturally in many different areas of mathematical physics, e.g., in particle systems (Liouville and Boltzmann equations), stochastic dynamical systems (Fokker-Planck and Dostupov-Pugachev equations), random wave theory (Malakhov-Saichev equations) and coarse-grained stochastic systems (Mori-Zwanzig equations). We propose three different classes of new algorithms addressing high dimensionality: the first is based on separated series expansions resulting in a sequence of low-dimensional problems that can be solved recursively and in parallel by using alternating direction methods. The second class of algorithms relies on truncation of interactions at low orders, resembling the Bogoliubov-Born-Green-Kirkwood-Yvon (BBGKY) framework of kinetic gas theory, and yields a hierarchy of coupled probability density function equations. The third class of algorithms is based on high-dimensional model representations, e.g., the ANOVA method and probabilistic collocation methods. A common feature of all these approaches is that they are reducible to the problem of computing the solution to high-dimensional equations via a sequence of low-dimensional problems. The effectiveness of the new algorithms is demonstrated in numerical examples involving nonlinear stochastic dynamical systems and partial differential equations, with up to 120 variables.
NASA Astrophysics Data System (ADS)
Amisigo, B. A.; van de Giesen, N. C.
2005-04-01
A spatio-temporal linear dynamic model has been developed for patching short gaps in daily river runoff series. The model was cast in a state-space form in which the state variable was estimated using the Kalman smoother (RTS smoother). The EM algorithm was used to concurrently estimate both parameter and missing runoff values. Application of the model to daily runoff series in the Volta Basin of West Africa showed that the model was capable of providing good estimates of missing runoff values at a gauging station from the remaining series at the station and at spatially correlated stations in the same sub-basin.
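The gap-patching machinery can be sketched in one dimension: a Kalman filter on a random-walk state followed by the RTS backward smoothing pass, filling NaN gaps in a series. This is a stripped-down single-station stand-in for the spatio-temporal model, with fixed noise variances instead of the paper's EM parameter estimation; the function name and variances are assumptions.

```python
import numpy as np

def rts_fill(y, q=1.0, r=0.01):
    """Fill NaN gaps in a scalar series with a random-walk Kalman filter
    (process variance q, observation variance r) followed by an RTS
    smoother. Missing observations are skipped in the update step."""
    n = len(y)
    m = np.zeros(n); p = np.zeros(n)          # filtered mean / variance
    if np.isnan(y[0]):
        m[0], p[0] = 0.0, 1e6                 # diffuse start if first value missing
    else:
        m[0], p[0] = y[0], r
    for t in range(1, n):
        mp, pp = m[t - 1], p[t - 1] + q       # one-step prediction
        if np.isnan(y[t]):
            m[t], p[t] = mp, pp               # no measurement: keep prediction
        else:
            k = pp / (pp + r)                 # Kalman gain
            m[t] = mp + k * (y[t] - mp)
            p[t] = (1 - k) * pp
    ms = m.copy()
    for t in range(n - 2, -1, -1):            # RTS backward pass
        g = p[t] / (p[t] + q)
        ms[t] = m[t] + g * (ms[t + 1] - m[t])
    return ms

y = np.array([1.0, 2.0, np.nan, 4.0, 5.0])
print(np.round(rts_fill(y), 2))  # the gap is smoothed to roughly 3.0
```

The smoother uses information on both sides of the gap, which is why it outperforms forward-only filtering for patching; the spatial part of the paper's model additionally borrows strength from correlated stations.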
Numerical Solution of the Variational Data Assimilation Problem Using Satellite Data
NASA Astrophysics Data System (ADS)
Agoshkov, V. I.; Lebedv, S. A.; Parmuzin, E. I.
2010-12-01
The problem of variational assimilation of satellite observational data on the ocean surface temperature is formulated and numerically investigated in order to reconstruct surface heat fluxes with the use of the global three-dimensional model of ocean hydrothermodynamics developed at the Institute of Numerical Mathematics, Russian Academy of Sciences (INM RAS), and observational data on the ocean surface temperature over the year 2004. The algorithms of the numerical solution to the problem are elaborated and substantiated, and the data assimilation block is developed and incorporated into the global three-dimensional model. Numerical experiments are carried out with the use of the Indian Ocean water area as an example. Numerical experiments confirm the theoretical conclusions obtained and demonstrate the expediency of combining the model with a block of assimilating operational observational data on the surface temperature.
NASA Astrophysics Data System (ADS)
Jaunat, J.; Dupuy, A.; Huneau, F.; Celle-Jeanton, H.; Le Coustumer, P.
2016-09-01
A numerical groundwater model of the weathered crystalline aquifer of Ursuya (a major water source for the north-western Pyrenees region, south-western France) has been computed based on monitoring of hydrological, hydrodynamic and meteorological parameters over 3 years. The equivalent porous media model was used to simulate groundwater flow in the different layers of the weathered profile: from surface to depth, the weathered layer (5 · 10-8 ≤ K ≤ 5 · 10-7 m s-1), the transition layer (7 · 10-8 ≤ K ≤ 1 · 10-5 m s-1, the highest values being along major discontinuities), two fissured layers (3.5 · 10-8 ≤ K ≤ 5 · 10-4 m s-1, depending on weathering profile conditions and on the existence of active fractures), and the hard-rock basement simulated with a negligible hydraulic conductivity (K = 1 · 10-9 m s-1). Hydrodynamic properties of these five calculation layers demonstrate the impact of both the weathering degree and the discontinuities on groundwater flow. The close agreement between simulated and observed hydraulic conditions allowed validation of the methodology and supports its application to analogous aquifers. With the aim of long-term management of this strategic aquifer, the model was then used to evaluate the impact of climate change on the groundwater resource. The simulations performed according to the most pessimistic climatic scenario until 2050 show a low sensitivity of the aquifer. The decreasing trend of the natural discharge is estimated at about -360 m3 y-1 for recharge decreasing at about -5.6 mm y-1 (0.8 % of annual recharge).
ERIC Educational Resources Information Center
Siegler, Robert S.; Braithwaite, David W.
2016-01-01
In this review, we attempt to integrate two crucial aspects of numerical development: learning the magnitudes of individual numbers and learning arithmetic. Numerical magnitude development involves gaining increasingly precise knowledge of increasing ranges and types of numbers: from non-symbolic to small symbolic numbers, from smaller to larger…
Aerodynamic design using numerical optimization
NASA Technical Reports Server (NTRS)
Murman, E. M.; Chapman, G. T.
1983-01-01
The procedure of using numerical optimization methods coupled with computational fluid dynamic (CFD) codes for the development of an aerodynamic design is examined. Several approaches that replace wind tunnel tests, develop pressure distributions and derive designs, or fulfill preset design criteria are presented. The method of Aerodynamic Design by Numerical Optimization (ADNO) is described and illustrated with examples.
Numerical simulation of dusty plasmas
Winske, D.
1995-09-01
The numerical simulation of physical processes in dusty plasmas is reviewed, with emphasis on recent results and unresolved issues. Three areas of research are discussed: grain charging, weak dust-plasma interactions, and strong dust-plasma interactions. For each area, we review the basic concepts that are tested by simulations, present some appropriate examples, and examine numerical issues associated with extending present work.
NASA Astrophysics Data System (ADS)
Chierici, Francesco; Beranzoli, Laura; Best, Mairi; Embriaco, Davide; Favali, Paolo; Galbraith, Nan; Heeseman, Martin; Kelly, Deborah; Pignagnoli, Luca; Pirenne, Benoit; Scofield, Oscar; Weller, Robert; Zitellini, Nevio
2015-04-01
The development of tsunami modeling and Tsunami Early Warning Systems able to operate in near-source areas is a common need for many coastal regions such as the Mediterranean, the Juan de Fuca/NE Pacific coast of North America, Indian Ocean archipelagos and Japan. These regions, with the important exception of the Mediterranean and the North-East Atlantic, are presently covered by Tsunami Warning Systems and Ocean Bottom Observatories in the frame of the EMSO, OOI and ONC ocean networks, equipped with a variety of sensors using different technologies, data formats and detection procedures. A significant improvement in efficiency, cost saving and detection reliability can be achieved by exchanging technologies and data and by harmonizing sensor metadata and instrument settings. To take a step in this direction, we propose to apply the Tsunami Detection Algorithm, which was developed in the framework of the NEAREST EU project for open-ocean data in near-source areas and is presently used by the NEMO-SN1 EMSO abyssal observatory, to the tide-gauge data of Arena Cove, CA and Cordova, AK. We show the first results of the application of the algorithm.
NASA Astrophysics Data System (ADS)
FENG, Xiaojun; Gerbault, Muriel; Martin, Roland; Ganne, Jérôme; Jessell, Mark
2015-04-01
High strain zones appear to play a significant role in feeding the upper crust with fluids and partially molten material from lower crust sources. The Bole-Bulenga terrain (North-Western Ghana) is located in between two subvertical shear zones, and mainly consists of high-grade orthogneisses, paragneisses and metabasites intruded by partially molten lower crustal material with monzogranites and orthogneisses (Eburnean orogeny, around 2.1 Ga). In order to understand the location of these high grade rocks at the edges of and in between these two shear zones, a three dimensional numerical model was built to test the influence of different orientations of a system of branched strike-slip faults on visco-plastic deformation, under compressional and simple shear boundary conditions. Our models indicate domains of tensile vs. compressional strain as well as shear zones, and show that not only the internal fault zones but also the host rock between the faults deform more readily than the external regions. Under both applied compressive and simple shear boundary conditions, these softened domains constitute preferential zones of tensile strain accommodation (dilation) in the upper crust, which may favor infilling by deeper partially molten rocks. Our modeled pre-existing fault zones are assumed to have formed during an early D1 stage of deformation, and they are shown to passively migrate and rotate together with the solid matrix under applied external boundary conditions (corresponding to a post D1 - early D2 phase of deformation). We suggest that in the Bole-Bulenga terrain, fluids or partially molten material stored in deeper crustal domains preferentially intruded the upper crust within these highly (shear and tensile) strained domains during this D2 shearing deformation phase. Building relief at the surface is primarily controlled by fault orientations, together with mechanical parameters and external boundary conditions. In particular, greatest magnitudes of relief
Frontiers in Numerical Relativity
NASA Astrophysics Data System (ADS)
Evans, Charles R.; Finn, Lee S.; Hobill, David W.
2011-06-01
Preface; Participants; Introduction; 1. Supercomputing and numerical relativity: a look at the past, present and future David W. Hobill and Larry L. Smarr; 2. Computational relativity in two and three dimensions Stuart L. Shapiro and Saul A. Teukolsky; 3. Slowly moving maximally charged black holes Robert C. Ferrell and Douglas M. Eardley; 4. Kepler's third law in general relativity Steven Detweiler; 5. Black hole spacetimes: testing numerical relativity David H. Bernstein, David W. Hobill and Larry L. Smarr; 6. Three dimensional initial data of numerical relativity Ken-ichi Oohara and Takashi Nakamura; 7. Initial data for collisions of black holes and other gravitational miscellany James W. York, Jr.; 8. Analytic-numerical matching for gravitational waveform extraction Andrew M. Abrahams; 9. Supernovae, gravitational radiation and the quadrupole formula L. S. Finn; 10. Gravitational radiation from perturbations of stellar core collapse models Edward Seidel and Thomas Moore; 11. General relativistic implicit radiation hydrodynamics in polar sliced space-time Paul J. Schinder; 12. General relativistic radiation hydrodynamics in spherically symmetric spacetimes A. Mezzacappa and R. A. Matzner; 13. Constraint preserving transport for magnetohydrodynamics John F. Hawley and Charles R. Evans; 14. Enforcing the momentum constraints during axisymmetric spacelike simulations Charles R. Evans; 15. Experiences with an adaptive mesh refinement algorithm in numerical relativity Matthew W. Choptuik; 16. The multigrid technique Gregory B. Cook; 17. Finite element methods in numerical relativity P. J. Mann; 18. Pseudo-spectral methods applied to gravitational collapse Silvano Bonazzola and Jean-Alain Marck; 19. Methods in 3D numerical relativity Takashi Nakamura and Ken-ichi Oohara; 20. Nonaxisymmetric rotating gravitational collapse and gravitational radiation Richard F. Stark; 21. Nonaxisymmetric neutron star collisions: initial results using smooth particle hydrodynamics
Interpolation algorithms for machine tools
Burleson, R.R.
1981-08-01
There are three types of interpolation algorithms presently used in most numerical control systems: the digital differential analyzer, the pulse-rate multiplier, and the binary-rate multiplier. A method for higher-order interpolation is in the experimental stages. The trend points toward the use of high-speed microprocessors to perform these interpolation algorithms.
Parallel algorithms for unconstrained optimizations by multisplitting
He, Qing
1994-12-31
In this paper a new parallel iterative algorithm for unconstrained optimization based on the idea of multisplitting is proposed. This algorithm uses existing sequential algorithms without any internal parallelization. Some convergence and numerical results for this algorithm are presented. The experiments are performed on an Intel iPSC/860 hypercube with 64 nodes. Interestingly, the sequential implementation on one node shows that, if the problem is split properly, the algorithm converges much faster than one without splitting.
Numerical infinities and infinitesimals in a new supercomputing framework
NASA Astrophysics Data System (ADS)
Sergeyev, Yaroslav D.
2016-06-01
Traditional computers are able to work numerically with finite numbers only. The Infinity Computer, patented recently in the USA and EU, overcomes this limitation. In fact, it is a computational device of a new kind able to work numerically not only with finite quantities but with infinities and infinitesimals as well. The new supercomputing methodology is not related to non-standard analysis and does not use either Cantor's infinite cardinals or ordinals. It is founded on Euclid's Common Notion 5, which says `The whole is greater than the part'. This postulate is applied to all numbers (finite, infinite, and infinitesimal) and to all sets and processes (finite and infinite). It is shown that it becomes possible to write down finite, infinite, and infinitesimal numbers with a finite number of symbols, as numerals belonging to a positional numeral system with an infinite radix described by a specific axiom introduced ad hoc. Numerous examples of the usage of the introduced computational tools are given during the lecture. In particular, algorithms for solving optimization problems and ODEs are considered among the computational applications of the Infinity Computer. Numerical experiments executed on a software prototype of the Infinity Computer are discussed.
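To make the positional-radix idea concrete, here is a small toy sketch of our own (the names `Gross`, `G`, and `EPS` are our inventions, not the Infinity Computer's API): a numeral is stored as a finite sum of coefficients times powers of the infinite radix, so finite, infinite, and infinitesimal quantities share one representation and obey ordinary polynomial-style arithmetic.

```python
# Toy model of grossone-style numerals: each number is a dict mapping
# an exponent of the infinite radix to its finite coefficient.
# Illustrative only -- not Sergeyev's patented implementation.

class Gross:
    """A number as {exponent: coefficient} over the infinite radix."""
    def __init__(self, terms=None):
        self.terms = {p: c for p, c in (terms or {}).items() if c != 0}

    def __add__(self, other):
        out = dict(self.terms)
        for p, c in other.terms.items():
            out[p] = out.get(p, 0) + c
        return Gross(out)

    def __mul__(self, other):
        # Multiply like polynomials: exponents add, coefficients multiply.
        out = {}
        for p1, c1 in self.terms.items():
            for p2, c2 in other.terms.items():
                out[p1 + p2] = out.get(p1 + p2, 0) + c1 * c2
        return Gross(out)

ONE = Gross({0: 1})   # the finite unit
G = Gross({1: 1})     # the infinite radix itself
EPS = Gross({-1: 1})  # its reciprocal: an infinitesimal

x = G + ONE + ONE     # an infinite number: radix + 2
y = x * EPS           # finite part plus an infinitesimal tail
print(y.terms)        # {0: 1, -1: 2}
```

Note how multiplying an infinite quantity by an infinitesimal yields a finite part plus an infinitesimal remainder, all within one numeral system.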
A quantum search algorithm for future spacecraft attitude determination
NASA Astrophysics Data System (ADS)
Tsai, Jack; Hsiao, Fu-Yuen; Li, Yi-Ju; Shen, Jen-Fu
2011-04-01
In this paper we study the potential application of a quantum search algorithm to spacecraft navigation with a focus on attitude determination. Traditionally, attitude determination is achieved by recognizing the relative position/attitude with respect to the background stars using sun sensors, earth limb sensors, or star trackers. However, due to the massive celestial database, star pattern recognition is a complicated and power-consuming job. We propose a new method of attitude determination by applying the quantum search algorithm to the search for a specific star or star pattern. The quantum search algorithm, proposed by Grover in 1996, can locate a specific item in an unstructured database of N entries in only O(√{N}) steps, compared to an average of N/2 steps on conventional computers. As a result, by taking advantage of matching a particular star in a vast celestial database in very few steps, we derive a new algorithm for attitude determination that combines Grover's search algorithm with star catalogues of apparent magnitude and absorption spectra. Numerical simulations and examples are also provided to demonstrate the feasibility and robustness of our new algorithm.
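The O(√N) iteration count cited above can be illustrated with a minimal classical state-vector simulation of Grover's iteration (this sketch is ours, for illustration; it is not the paper's attitude-determination algorithm):

```python
import math

def grover_success_prob(n_items, marked, iterations):
    """Classically simulate Grover's algorithm on a real state vector.
    Returns the probability of measuring the marked index."""
    amp = [1.0 / math.sqrt(n_items)] * n_items   # uniform superposition
    for _ in range(iterations):
        amp[marked] = -amp[marked]               # oracle: flip marked phase
        mean = sum(amp) / n_items                # diffusion operator:
        amp = [2 * mean - a for a in amp]        # inversion about the mean
    return amp[marked] ** 2

N = 64
k = round(math.pi / 4 * math.sqrt(N))            # ~O(sqrt(N)) iterations
print(k, grover_success_prob(N, marked=17, iterations=k))
```

With N = 64 the optimal iteration count is k = 6, and the success probability after those 6 queries exceeds 0.99, versus an expected N/2 = 32 classical lookups.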
NASA Astrophysics Data System (ADS)
Roussis, Panayiotis C.; Tsopelas, Panos C.; Constantinou, Michael C.
2010-03-01
The work presented in this paper serves as numerical verification of the analytical model developed in the companion paper for nonlinear dynamic analysis of multi-base seismically isolated structures. To this end, two numerical examples have been analyzed using the computational algorithm incorporated into program 3D-BASIS-ME-MB, developed on the basis of the newly-formulated analytical model. The first example concerns a seven-story model structure that was tested on the earthquake simulator at the University at Buffalo and was also used as a verification example for program SAP2000. The second example concerns a two-tower, multi-story structure with a split-level seismic-isolation system. For purposes of verification, key results produced by 3D-BASIS-ME-MB are compared to experimental results, or results obtained from other structural/finite element programs. In both examples, the analyzed structure is excited under conditions of bearing uplift, thus yielding a case of much interest in verifying the capabilities of the developed analysis tool.
The Superior Lambert Algorithm
NASA Astrophysics Data System (ADS)
Der, G.
2011-09-01
Lambert algorithms are used extensively for initial orbit determination, mission planning, space debris correlation, and missile targeting, just to name a few applications. Due to the significance of the Lambert problem in Astrodynamics, Gauss, Battin, Godal, Lancaster, Gooding, Sun and many others (References 1 to 15) have provided numerous formulations leading to various analytic solutions and iterative methods. Most Lambert algorithms and their computer programs can only work within one revolution, break down or converge slowly when the transfer angle is near zero or 180 degrees, and their multi-revolution limitations are either ignored or barely addressed. Despite claims of robustness, many Lambert algorithms fail without notice, and the users seldom have a clue why. The DerAstrodynamics lambert2 algorithm, which is based on the analytic solution formulated by Sun, works for any number of revolutions and converges rapidly at any transfer angle. It provides significant capability enhancements over every other Lambert algorithm in use today. These include improved speed, accuracy, robustness, and multirevolution capabilities as well as implementation simplicity. Additionally, the lambert2 algorithm provides a powerful tool for solving the angles-only problem without artificial singularities (pointed out by Gooding in Reference 16), which involves 3 lines of sight captured by optical sensors, or systems such as the Air Force Space Surveillance System (AFSSS). The analytic solution is derived from the extended Godal’s time equation by Sun, while the iterative method of solution is that of Laguerre, modified for robustness. The Keplerian solution of a Lambert algorithm can be extended to include the non-Keplerian terms of the Vinti algorithm via a simple targeting technique (References 17 to 19). Accurate analytic non-Keplerian trajectories can be predicted for satellites and ballistic missiles, while performing at least 100 times faster in speed than most
Implementation of a partitioned algorithm for simulation of large CSI problems
NASA Technical Reports Server (NTRS)
Alvin, Kenneth F.; Park, K. C.
1991-01-01
The implementation of a partitioned numerical algorithm for determining the dynamic response of coupled structure/controller/estimator finite-dimensional systems is reviewed. The partitioned approach leads to a set of coupled first and second-order linear differential equations which are numerically integrated with extrapolation and implicit step methods. The present software implementation, ACSIS, utilizes parallel processing techniques at various levels to optimize performance on a shared-memory concurrent/vector processing system. A general procedure for the design of controller and filter gains is also implemented, which utilizes the vibration characteristics of the structure to be solved. Also presented are: example problems; a user's guide to the software; the procedures and algorithm scripts; a stability analysis for the algorithm; and the source code for the parallel implementation.
NASA Astrophysics Data System (ADS)
Rijkhorst, Erik-Jan
2005-12-01
The late stages of evolution of stars like our Sun are dominated by several episodes of violent mass loss. Space based observations of the resulting objects, known as Planetary Nebulae, show a bewildering array of highly symmetric shapes. The interplay between gasdynamics and radiative processes determines the morphological outcome of these objects, and numerical models for astrophysical gasdynamics have to incorporate these effects. This thesis presents new numerical techniques for carrying out high-resolution three-dimensional radiation hydrodynamical simulations. Such calculations require parallelization of computer codes, and the use of state-of-the-art supercomputer technology. Numerical models in the context of the shaping of Planetary Nebulae are presented, providing insight into their origin and fate.
Parallel Algorithm Solves Coupled Differential Equations
NASA Technical Reports Server (NTRS)
Hayashi, A.
1987-01-01
Numerical methods adapted to concurrent processing. Algorithm solves set of coupled partial differential equations by numerical integration. Adapted to run on hypercube computer, algorithm separates problem into smaller problems solved concurrently. Increase in computing speed with concurrent processing over that achievable with conventional sequential processing appreciable, especially for large problems.
NASA Astrophysics Data System (ADS)
Wang, Youming; Chen, Xuefeng; He, Zhengjia
2011-02-01
Structural eigenvalues have been broadly applied in modal analysis, damage detection, vibration control, etc. In this paper, the interpolating multiwavelets are custom designed based on stable completion method to solve structural eigenvalue problems. The operator-orthogonality of interpolating multiwavelets gives rise to highly sparse multilevel stiffness and mass matrices of structural eigenvalue problems and permits the incremental computation of the eigenvalue solution in an efficient manner. An adaptive inverse iteration algorithm using the interpolating multiwavelets is presented to solve structural eigenvalue problems. Numerical examples validate the accuracy and efficiency of the proposed algorithm.
Optimal control of switched linear systems based on Migrant Particle Swarm Optimization algorithm
NASA Astrophysics Data System (ADS)
Xie, Fuqiang; Wang, Yongji; Zheng, Zongzhun; Li, Chuanfeng
2009-10-01
The optimal control problem for switched linear systems with internally forced switching has more constraints than with externally forced switching. Heavy computations and slow convergence in solving this problem are major obstacles. In this paper we describe a new approach for solving this problem, called Migrant Particle Swarm Optimization (Migrant PSO). Imitating the behavior of a flock of migrant birds, the Migrant PSO applies naturally to both continuous and discrete spaces, in which a deterministic optimization algorithm and a stochastic search method are combined. The efficacy of the proposed algorithm is illustrated via a numerical example.
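The baseline swarm update that the Migrant PSO builds on can be sketched as follows. This is a standard global-best PSO minimizing a test function; the migration step and the switched-systems cost of the paper are not modeled, and all parameter values are illustrative assumptions.

```python
import random

def pso(f, dim, n_particles=20, iters=200, seed=1):
    """Minimal global-best particle swarm optimizer (baseline sketch)."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                  # personal bests
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g] # global best
    w, c1, c2 = 0.7, 1.5, 1.5                    # inertia, cognitive, social
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = f(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val

sphere = lambda x: sum(xi * xi for xi in x)
best, val = pso(sphere, dim=3)
print(val)   # near 0
```

A migrant variant would periodically replace the worst-performing particles with fresh ones, trading some exploitation for exploration.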
The minimal residual QR-factorization algorithm for reliably solving subset regression problems
NASA Technical Reports Server (NTRS)
Verhaegen, M. H.
1987-01-01
A new algorithm for solving subset regression problems is described, called the minimal residual QR factorization algorithm (MRQR). This scheme performs a QR factorization with a new column-pivoting strategy based on the change in the residual of the least squares problem. Furthermore, it is demonstrated that this basic scheme can be extended in a numerically efficient way to combine the advantages of existing numerical procedures, such as the singular value decomposition, with those of more classical statistical procedures, such as stepwise regression. This extension is presented as an advisory expert system that guides the user in solving the subset regression problem. The advantages of the new procedure are highlighted by a numerical example.
Probabilistic numerics and uncertainty in computations
Hennig, Philipp; Osborne, Michael A.; Girolami, Mark
2015-01-01
We deliver a call to arms for probabilistic numerical methods: algorithms for numerical tasks, including linear algebra, integration, optimization and solving differential equations, that return uncertainties in their calculations. Such uncertainties, arising from the loss of precision induced by numerical calculation with limited time or hardware, are important for much contemporary science and industry. Within applications such as climate science and astrophysics, the need to make decisions on the basis of computations with large and complex data has led to a renewed focus on the management of numerical uncertainty. We describe how several seminal classic numerical methods can be interpreted naturally as probabilistic inference. We then show that the probabilistic view suggests new algorithms that can flexibly be adapted to suit application specifics, while delivering improved empirical performance. We provide concrete illustrations of the benefits of probabilistic numeric algorithms on real scientific problems from astrometry and astronomical imaging, while highlighting open problems with these new algorithms. Finally, we describe how probabilistic numerical methods provide a coherent framework for identifying the uncertainty in calculations performed with a combination of numerical algorithms (e.g. both numerical optimizers and differential equation solvers), potentially allowing the diagnosis (and control) of error sources in computations.
SToRM: A numerical model for environmental surface flows
Simoes, Francisco J.
2009-01-01
SToRM (System for Transport and River Modeling) is a numerical model developed to simulate free surface flows in complex environmental domains. It is based on the depth-averaged St. Venant equations, which are discretized using unstructured upwind finite volume methods, and contains both steady and unsteady solution techniques. This article provides a brief description of the numerical approach selected to discretize the governing equations in space and time, including important aspects of solving natural environmental flows, such as the wetting and drying algorithm. The presentation is illustrated with several application examples, covering both laboratory and natural river flow cases, which show the model’s ability to solve complex flow phenomena.
A convergent data completion algorithm using surface integral equations
NASA Astrophysics Data System (ADS)
Boukari, Yosra; Haddar, Houssem
2015-03-01
We propose and analyze a data completion algorithm, based on the representation of the solution in terms of surface integral operators, to solve the Cauchy problem for the Helmholtz or the Laplace equations. The proposed method is non-iterative and intrinsically handles the case of noisy and incompatible data. In order to cope with the ill-posedness of the problem, our formulation is compatible with standard regularization methods associated with linear ill-posed inverse problems and leads to a convergent scheme. We numerically validate our method on different synthetic examples using a Tikhonov regularization.
Multidisciplinary Optimization of Airborne Radome Using Genetic Algorithm
NASA Astrophysics Data System (ADS)
Tang, Xinggang; Zhang, Weihong; Zhu, Jihong
A multidisciplinary optimization scheme of an airborne radome is proposed. The optimization procedure takes into account the structural and the electromagnetic responses simultaneously. The structural analysis is performed with the finite element method using Patran/Nastran, while the electromagnetic analysis is carried out using the Plane Wave Spectrum and Surface Integration technique. The genetic algorithm is employed for the multidisciplinary optimization process. The thicknesses of the multilayer radome wall are optimized to maximize the overall transmission coefficient of the antenna-radome system under the constraint of the structural failure criteria. The proposed scheme and the optimization approach are successfully assessed with an illustrative numerical example.
Efficient computer algebra algorithms for polynomial matrices in control design
NASA Technical Reports Server (NTRS)
Baras, J. S.; Macenany, D. C.; Munach, R.
1989-01-01
The theory of polynomial matrices plays a key role in the design and analysis of multi-input multi-output control and communications systems using frequency domain methods. Examples include coprime factorizations of transfer functions, canonical realizations from matrix fraction descriptions, and the transfer function design of feedback compensators. Typically, such problems abstract in a natural way to the need to solve systems of Diophantine equations or systems of linear equations over polynomials. These and other problems involving polynomial matrices can in turn be reduced to polynomial matrix triangularization procedures, a result which is not surprising given the importance of matrix triangularization techniques in numerical linear algebra. Matrices with entries from a field and Gaussian elimination play a fundamental role in understanding the triangularization process. Polynomial matrices, however, have entries from a ring, for which Gaussian elimination is not defined; triangularization is instead accomplished by what is quite properly called Euclidean elimination. Unfortunately, the numerical stability and sensitivity issues which accompany floating point approaches to Euclidean elimination are not very well understood. New algorithms are presented which circumvent entirely such numerical issues through the use of exact, symbolic methods in computer algebra. The use of such error-free algorithms guarantees that the results are accurate to within the precision of the model data--the best that can be hoped for. Care must be taken in the design of such algorithms due to the phenomenon of intermediate expression swell.
NASA Astrophysics Data System (ADS)
Rahneshin, Vahid; Chierichetti, Maria
2016-09-01
In this paper, a combined numerical and experimental method, called Extended Load Confluence Algorithm, is presented to accurately predict the dynamic response of non-periodic structures when little or no information about the applied loads is available. This approach, which falls into the category of Shape Sensing methods, inputs limited experimental information acquired from sensors to a mapping algorithm that predicts the response at unmeasured locations. The proposed algorithm consists of three major cores: an experimental core for data acquisition, a numerical core based on Finite Element Method for modeling the structure, and a mapping algorithm that improves the numerical model based on a modal approach in the frequency domain. The robustness and precision of the proposed algorithm are verified through numerical and experimental examples. The results of this paper demonstrate that without a precise knowledge of the loads acting on the structure, the dynamic behavior of the system can be predicted in an effective and precise manner after just a few iterations.
NASA Technical Reports Server (NTRS)
Baker, John G.
2009-01-01
Recent advances in numerical relativity have fueled an explosion of progress in understanding the predictions of Einstein's theory of gravity, General Relativity, for the strong field dynamics, the gravitational radiation wave forms, and consequently the state of the remnant produced from the merger of compact binary objects. I will review recent results from the field, focusing on mergers of two black holes.
ERIC Educational Resources Information Center
Sozio, Gerry
2009-01-01
Senior secondary students cover numerical integration techniques in their mathematics courses. In particular, students would be familiar with the "midpoint rule," the elementary "trapezoidal rule" and "Simpson's rule." This article derives these techniques by methods which secondary students may not be familiar with and an approach that…
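For reference, the three quadrature rules named above can be written in a few lines (a standard textbook formulation, not the article's derivation):

```python
def midpoint(f, a, b, n):
    """Midpoint rule: sample each subinterval at its center."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

def trapezoidal(f, a, b, n):
    """Trapezoidal rule: average the endpoint values of each subinterval."""
    h = (b - a) / n
    return h * (0.5 * f(a) + sum(f(a + i * h) for i in range(1, n)) + 0.5 * f(b))

def simpson(f, a, b, n):
    """Simpson's rule (n must be even): weights 1, 4, 2, 4, ..., 4, 1."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))
    return s * h / 3

# Integral of x^2 on [0, 1] is exactly 1/3.
f = lambda x: x * x
print(midpoint(f, 0, 1, 100), trapezoidal(f, 0, 1, 100), simpson(f, 0, 1, 100))
```

Simpson's rule is exact for polynomials up to degree three, while the midpoint and trapezoidal rules carry O(h²) errors of opposite sign.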
Applications of singular value analysis and partial-step algorithm for nonlinear orbit determination
NASA Technical Reports Server (NTRS)
Ryne, Mark S.; Wang, Tseng-Chan
1991-01-01
An adaptive method in which cruise and nonlinear orbit determination problems can be solved using a single program is presented. It involves singular value decomposition augmented with an extended partial step algorithm. The extended partial step algorithm constrains the size of the correction to the spacecraft state and other solve-for parameters. The correction is controlled by an a priori covariance and a user-supplied bounds parameter. The extended partial step method is an extension of the update portion of the singular value decomposition algorithm. It thus preserves the numerical stability of the singular value decomposition method, while extending the region over which it converges. In linear cases, this method reduces to the singular value decomposition algorithm with the full rank solution. Two examples are presented to illustrate the method's utility.
NASA Astrophysics Data System (ADS)
Shirazi, Abolfazl
2016-10-01
This article introduces a new method to optimize finite-burn orbital manoeuvres based on a modified evolutionary algorithm. Optimization is carried out based on conversion of the orbital manoeuvre into a parameter optimization problem by assigning inverse tangential functions to the changes in direction angles of the thrust vector. The problem is analysed using boundary delimitation in a common optimization algorithm. A method is introduced to achieve acceptable values for optimization variables using nonlinear simulation, which results in an enlarged convergence domain. The presented algorithm benefits from high optimality and fast convergence time. A numerical example of a three-dimensional optimal orbital transfer is presented and the accuracy of the proposed algorithm is shown.
Effects of shear elasticity on sea bed scattering: numerical examples.
Ivakin, A N; Jackson, D R
1998-01-01
It is known that marine sediments can support both compressional and shear waves. However, published work on scattering from irregular elastic media has not examined the influence of shear on sea bed scattering in detail. A perturbation model previously developed by the authors for joint roughness-volume scattering is used to study the effects of elasticity for three sea bed types: sedimentary rock, sand with high shear speed, and sand with "normal" shear wave speed. Both bistatic and monostatic cases are considered. For sedimentary rock it is found that shear elasticity tends to increase the importance of volume scattering and decrease the importance of roughness scattering relative to the fluid case. Shear effects are shown to be small for sands.
Numerical Polynomial Homotopy Continuation Method and String Vacua
Mehta, Dhagash
2011-01-01
Finding vacua for the four-dimensional effective theories for supergravity which descend from flux compactifications, and analyzing them according to their stability, is one of the central problems in string phenomenology. Except for some simple toy models, it is, however, difficult to find all the vacua analytically. Recently developed algorithmic methods based on symbolic computer algebra can be of great help in the more realistic models. However, they suffer from serious algorithmic complexities and are limited to small system sizes. In this paper, we review a numerical method called the numerical polynomial homotopy continuation (NPHC) method, first used in the areas of lattice field theories, which by construction finds all of the vacua of a given potential that is known to have only isolated solutions. The NPHC method is known to suffer from no major algorithmic complexities and is embarrassingly parallelizable, and hence its applicability goes way beyond the existing symbolic methods. We first solve a simple toy model as a warm-up example to demonstrate the NPHC method at work. We then show that all the vacua of a more complicated compactified M-theory model, which has an SU(3) structure, can be obtained by using a desktop machine in just about an hour, a feat which was reported to be prohibitively difficult by the existing symbolic methods. Finally, we compare the various technicalities between the two methods.
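The core of homotopy continuation can be sketched on a univariate toy problem: deform a start system with known roots into the target polynomial and follow each root with a Newton corrector. The complex "gamma trick" constant below is a standard device for keeping solution paths from crossing; this sketch is ours and far simpler than the multivariate NPHC solvers the paper reviews.

```python
def poly_eval(coeffs, x):
    """Evaluate a polynomial given coeffs [c0, c1, ...] = c0 + c1*x + ..."""
    v = 0j
    for c in reversed(coeffs):
        v = v * x + c
    return v

def poly_deriv(coeffs):
    return [i * c for i, c in enumerate(coeffs)][1:]

def track(f, g, x0, gamma=0.6 + 0.8j, steps=100, newton_iters=5):
    """Follow one root of H(x,t) = (1-t)*gamma*g(x) + t*f(x) from t=0 to t=1."""
    x = complex(x0)
    for k in range(1, steps + 1):
        t = k / steps
        h = [(1 - t) * gamma * gc + t * fc for gc, fc in zip(g, f)]
        dh = poly_deriv(h)
        for _ in range(newton_iters):          # Newton corrector at this t
            x -= poly_eval(h, x) / poly_eval(dh, x)
    return x

f = [2, -3, 1]    # target: x^2 - 3x + 2, roots 1 and 2
g = [-1, 0, 1]    # start system: x^2 - 1, roots -1 and +1
roots = sorted(track(f, g, s).real for s in (-1, 1))
print(roots)      # near [1.0, 2.0]
```

Because the start system has as many known roots as the target has solutions, every isolated root of the target is reached by some path, which is what makes the method exhaustive and trivially parallel over paths.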
An example-based face relighting
NASA Astrophysics Data System (ADS)
Shim, Hyunjung; Chen, Tsuhan
2012-03-01
In this paper, we propose a new face relighting algorithm powered by a large database of face images captured under various known lighting conditions (the Multi-PIE database). The key insight of our algorithm is that a face can be represented by an assembly of patches from many other faces. The algorithm finds the most similar face patches in the database in terms of the lighting and the appearance. By assembling the matched patches, we can visualize the input face under various lighting conditions. Unlike existing face relighting algorithms, we neither use any kind of face model nor make a physical assumption. Instead, our algorithm is a data-driven approach, synthesizing the appearance of the image patch using the appearance of the example patch. Using a data-driven approach, we can account for various intrinsic facial features, including the non-Lambertian skin properties as well as the hair. Also, our algorithm is insensitive to face misalignment. We demonstrate the performance of our algorithm through face relighting and face recognition experiments. In particular, the synthesized results show that the proposed algorithm can successfully handle various intrinsic features of an input face. Also, from the face recognition experiment, we show that our method is comparable to the most recent face relighting work.
Uniformly stable backpropagation algorithm to train a feedforward neural network.
Rubio, José de Jesús; Angelov, Plamen; Pacheco, Jaime
2011-03-01
Neural networks (NNs) have numerous applications to online processes, but the problem of stability is rarely discussed. This is an extremely important issue because, if the stability of a solution is not guaranteed, the equipment being used can be damaged, which can also cause serious accidents. It is true that in some research papers this problem has been considered, but only for continuous-time NNs. At the same time, there are many systems that are better described in the discrete-time domain, such as a population of animals, the annual expenses of an industry, the interest earned by a bank, or the prediction of the distribution of loads stored every hour in a warehouse. Therefore, it is of paramount importance to consider the stability of discrete-time NNs. This paper makes several important contributions. 1) A theorem is stated and proven which guarantees uniform stability of a general discrete-time system. 2) It is proven that the backpropagation (BP) algorithm with a new time-varying rate is uniformly stable for online identification and that the identification error converges to a small zone bounded by the uncertainty. 3) It is proven that the weights' error is bounded by the initial weights' error, i.e., overfitting is eliminated in the proposed algorithm. 4) The BP algorithm is applied to predict the distribution of loads that a transelevator receives from a trailer and places in the deposits of a warehouse every hour, so that the deposits in the warehouse can be reserved in advance using the prediction results. 5) The BP algorithm is compared with the recursive least square (RLS) algorithm and with the Takagi-Sugeno type fuzzy inference system in the problem of predicting the distribution of loads in a warehouse, showing that the first two are stable while the third is unstable. 6) The BP algorithm is compared with the RLS algorithm and with the Kalman filter algorithm in a synthetic example.
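The flavor of a stable, time-varying learning rate can be seen in a much simpler setting: a single linear neuron identified online with a rate normalized by the input energy. This is a normalized-LMS-style sketch of ours, not the paper's BP algorithm or its stability proof; the normalization keeps each update bounded regardless of the input magnitude.

```python
# Toy discrete-time online identification with a time-varying rate.
import random

rng = random.Random(0)
w_true = [2.0, -1.0]            # unknown plant parameters
w = [0.0, 0.0]                  # identifier weights
for t in range(500):
    x = [rng.uniform(-1, 1), rng.uniform(-1, 1)]
    y = sum(wt * xi for wt, xi in zip(w_true, x))      # plant output
    e = sum(wi * xi for wi, xi in zip(w, x)) - y       # identification error
    alpha = 1.0 / (1.0 + sum(xi * xi for xi in x))     # normalized rate
    w = [wi - alpha * e * xi for wi, xi in zip(w, x)]  # gradient update
print(w)   # approaches [2.0, -1.0]
```

Because alpha shrinks when the input is large, the weight correction can never overshoot by more than the current error allows, which is the intuition behind uniform stability of such updates.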
NASA Astrophysics Data System (ADS)
Agoshkov, V. I.; Lebedev, S. A.; Parmuzin, E. I.
2009-02-01
The problem of variational assimilation of satellite observational data on the ocean surface temperature is formulated and numerically investigated in order to reconstruct surface heat fluxes with the use of the global three-dimensional model of ocean hydrothermodynamics developed at the Institute of Numerical Mathematics, Russian Academy of Sciences (INM RAS), and observational data close to the data actually observed in specified time intervals. The algorithms of the numerical solution to the problem are elaborated and substantiated, and the data assimilation block is developed and incorporated into the global three-dimensional model. Numerical experiments are carried out with the use of the Indian Ocean water area as an example. The data on the ocean surface temperature over the year 2000 are used as observational data. Numerical experiments confirm the theoretical conclusions obtained and demonstrate the expediency of combining the model with a block of assimilating operational observational data on the surface temperature.
Synthesis of Greedy Algorithms Using Dominance Relations
NASA Technical Reports Server (NTRS)
Nedunuri, Srinivas; Smith, Douglas R.; Cook, William R.
2010-01-01
Greedy algorithms exploit problem structure and constraints to achieve linear-time performance. Yet there is still no completely satisfactory way of constructing greedy algorithms. For example, the Greedy Algorithm of Edmonds depends upon translating a problem into an algebraic structure called a matroid, but the existence of such a translation can be as hard to determine as the existence of a greedy algorithm itself. An alternative characterization of greedy algorithms is in terms of dominance relations, a well-known algorithmic technique used to prune search spaces. We demonstrate a process by which dominance relations can be methodically derived for a number of greedy algorithms, including activity selection, and prefix-free codes. By incorporating our approach into an existing framework for algorithm synthesis, we demonstrate that it could be the basis for an effective engineering method for greedy algorithms. We also compare our approach with other characterizations of greedy algorithms.
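The activity-selection example mentioned above shows how a dominance relation justifies a greedy choice (a standard textbook formulation, not the synthesis framework of the paper):

```python
def select_activities(intervals):
    """Greedy activity selection: always take the earliest-finishing
    compatible activity. Dominance relation: among partial schedules of
    equal size, the one whose last activity finishes earliest dominates,
    so all others can be pruned -- sorting by finish time exploits this."""
    chosen, last_finish = [], float("-inf")
    for start, finish in sorted(intervals, key=lambda iv: iv[1]):
        if start >= last_finish:        # compatible with what we have
            chosen.append((start, finish))
            last_finish = finish
    return chosen

acts = [(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (5, 9), (6, 10), (8, 11)]
print(select_activities(acts))   # [(1, 4), (5, 7), (8, 11)]
```

The dominance argument is what makes the single sorted pass sufficient: no discarded partial schedule could ever beat the kept one.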
Dynamics of Quantum Adiabatic Evolution Algorithm for Number Partitioning
NASA Technical Reports Server (NTRS)
Smelyanskiy, Vadim N.; von Toussaint, Udo V.; Timucin, Dogan A.; Clancy, Daniel (Technical Monitor)
2002-01-01
We have developed a general technique to study the dynamics of the quantum adiabatic evolution algorithm applied to random combinatorial optimization problems in the asymptotic limit of large problem size n. We use as an example the NP-complete Number Partitioning problem and map the algorithm dynamics to that of an auxiliary quantum spin glass system with a slowly varying Hamiltonian. We use a Green function method to obtain the adiabatic eigenstates and the minimum excitation gap, g_min = O(n 2^(-n/2)), corresponding to the exponential complexity of the algorithm for Number Partitioning. The key element of the analysis is the conditional energy distribution computed for the set of all spin configurations generated from a given (ancestor) configuration by simultaneous flipping of a fixed number of spins. For the problem in question this distribution is shown to depend on the ancestor spin configuration only via a certain parameter related to the energy of the configuration. As a result, the algorithm dynamics can be described in terms of one-dimensional quantum diffusion in the energy space. This effect provides a general limitation of a quantum adiabatic computation in random optimization problems. Analytical results are in agreement with the numerical simulation of the algorithm.
Tensor networks and the numerical renormalization group
NASA Astrophysics Data System (ADS)
Weichselbaum, Andreas
2012-12-01
The full-density-matrix numerical renormalization group has evolved as a systematic and transparent setting for the calculation of thermodynamical quantities at arbitrary temperatures within the numerical renormalization group (NRG) framework. It directly evaluates the relevant Lehmann representations based on the complete basis sets introduced by Anders and Schiller [Phys. Rev. Lett. 95, 196801 (2005)]. In addition, specific attention is given to the possible feedback from low-energy physics to high energies by the explicit and careful construction of the full thermal density matrix, naturally generated over a distribution of energy shells. Specific examples are given in terms of spectral functions (fdmNRG), time-dependent NRG (tdmNRG), Fermi-golden-rule calculations (fgrNRG) as well as the calculation of plain thermodynamic expectation values. Furthermore, based on the very fact that, by its iterative nature, the NRG eigenstates are naturally described in terms of matrix product states, the language of tensor networks has proven enormously convenient in the description of the underlying algorithmic procedures. This paper therefore also provides a detailed introduction and discussion of the prototypical NRG calculations in terms of their corresponding tensor networks.
NASA Astrophysics Data System (ADS)
Aziz, S.; Matott, L.
2012-12-01
The uncertain parameters of a given environmental model are often inferred from an optimization procedure that seeks to minimize discrepancies between simulated output and observed data. However, optimization search procedures can potentially yield different results across multiple calibration trials. For example, global search procedures like the genetic algorithm and simulated annealing are driven by inherent randomness that can result in variable inter-trial behavior. Despite this potential for variability in search algorithm performance, practitioners are reluctant to run multiple trials of an algorithm because of the added computational burden. As a result, estimated parameters are subject to an unrecognized source of uncertainty that could potentially bias or contaminate subsequent predictive analyses. In this study, a series of numerical experiments were performed to explore the influence of search algorithm uncertainty on parameter estimates. The experiments applied multiple trials of the simulated annealing algorithm to a suite of calibration problems involving watershed rainfall-runoff, groundwater flow, and subsurface contaminant transport. Results suggest that linking the simulated annealing algorithm with an adaptive range-reduction technique can significantly improve algorithm effectiveness while simultaneously reducing inter-trial variability. Therefore these range-reduction procedures appear to be a suitable mechanism for minimizing algorithm variance and improving the consistency of parameter estimates.
New knowledge-based genetic algorithm for excavator boom structural optimization
NASA Astrophysics Data System (ADS)
Hua, Haiyan; Lin, Shuwen
2014-03-01
Because existing genetic algorithms make insufficient use of knowledge to guide the complex optimal search, they fail to effectively solve the excavator boom structural optimization problem. To improve the optimization efficiency and quality, a new knowledge-based real-coded genetic algorithm is proposed. A dual evolution mechanism combining knowledge evolution with the genetic algorithm is established to extract, handle and utilize the shallow and deep implicit constraint knowledge to guide the optimal searching of the genetic algorithm circularly. Based on this dual evolution mechanism, knowledge evolution and population evolution can be connected by knowledge influence operators to improve the configurability of knowledge and genetic operators. Then, new knowledge-based selection, crossover and mutation operators are proposed to integrate the optimal process knowledge and domain culture to guide the excavator boom structural optimization. Eight testing algorithms, which include different genetic operators, are taken as examples to solve the structural optimization of a medium-sized excavator boom. A comparison of the optimization results shows that the algorithm including all the new knowledge-based genetic operators improves the evolutionary rate and searching ability more markedly than the other testing algorithms, which demonstrates the effectiveness of knowledge for guiding the optimal search. The proposed knowledge-based genetic algorithm, combining multi-level knowledge evolution with numerical optimization, provides a new effective method for solving complex engineering optimization problems.
Xie, Jiaquan; Huang, Qingxue; Yang, Xia
2016-01-01
In this paper, we are concerned with nonlinear one-dimensional fractional convection diffusion equations. An effective approach based on Chebyshev operational matrix is constructed to obtain the numerical solution of fractional convection diffusion equations with variable coefficients. The principal characteristic of the approach is the new orthogonal functions based on Chebyshev polynomials to the fractional calculus. The corresponding fractional differential operational matrix is derived. Then the matrix with the Tau method is utilized to transform the solution of this problem into the solution of a system of linear algebraic equations. By solving the linear algebraic equations, the numerical solution is obtained. The approach is tested via examples. It is shown that the proposed algorithm yields better results. Finally, error analysis shows that the algorithm is convergent.
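The operational-matrix/Tau reduction to a linear algebraic system can be illustrated on an integer-order analogue (the paper's fractional operational matrix is not reproduced here; this is my own sketch for the test problem u'(x) = u(x), u(0) = 1 on [-1, 1], whose solution is e^x):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

N = 12                                          # polynomial degree (my choice)
D = np.zeros((N + 1, N + 1))
for j in range(N + 1):                          # build the differentiation operational matrix
    e = np.zeros(N + 1)
    e[j] = 1.0
    d = C.chebder(e)                            # Chebyshev coefficients of T_j'
    D[:len(d), j] = d

A = D - np.eye(N + 1)                           # coefficient equations for u' - u = 0
A[N, :] = C.chebvander(np.array([0.0]), N)[0]   # replace the last (tau) row with u(0) = 1
rhs = np.zeros(N + 1)
rhs[N] = 1.0
coef = np.linalg.solve(A, rhs)                  # the Tau linear algebraic system

x = np.linspace(-1.0, 1.0, 5)
print(np.abs(C.chebval(x, coef) - np.exp(x)).max() < 1e-9)   # True: spectral accuracy
```

The structure mirrors the abstract: an operational matrix turns the differential equation into a linear algebraic system, whose solution gives the expansion coefficients.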
Numerical construction of the Hill functions.
NASA Technical Reports Server (NTRS)
Segethova, J.
1972-01-01
As an aid in the numerical construction of Hill functions and their derivatives, an algorithm using local coordinates and an expansion in Legendre polynomials is proposed. The algorithm is shown to possess sufficient stability, and the orthogonality of the Legendre polynomials simplifies the computation when the Ritz-Galerkin technique is used.
Dynamic evidential reasoning algorithm for systems reliability prediction
NASA Astrophysics Data System (ADS)
Hu, Chang-Hua; Si, Xiao-Sheng; Yang, Jian-Bo
2010-07-01
In this article, the dynamic evidential reasoning (DER) algorithm is applied to forecast reliability in turbocharger engine systems and a reliability prediction model is developed. The focus of this study is to examine the feasibility and validity of the DER algorithm in systems reliability prediction by comparing it with some existing approaches. To build an effective DER forecasting model, the parameters of the prediction model must be set carefully. To solve this problem, a generic nonlinear optimisation model is investigated to search for the optimal parameters of the forecasting model, and then the optimal parameters are adopted to construct the DER forecasting model. Finally, a numerical example is provided to demonstrate the detailed implementation procedures and the validity of the proposed approach in the area of reliability prediction.
An artificial bee colony algorithm for uncertain portfolio selection.
Chen, Wei
2014-01-01
Portfolio selection is an important issue for researchers and practitioners. In this paper, under the assumption that security returns are given by experts' evaluations rather than historical data, we discuss the portfolio adjusting problem, which takes transaction costs and the diversification degree of the portfolio into consideration. Uncertain variables are employed to describe the security returns. In the proposed mean-variance-entropy model, the uncertain mean value of the return is used to measure investment return, the uncertain variance of the return is used to measure investment risk, and the entropy is used to measure the diversification degree of the portfolio. In order to solve the proposed model, a modified artificial bee colony (ABC) algorithm is designed. Finally, a numerical example is given to illustrate the modelling idea and the effectiveness of the proposed algorithm. PMID:25089292
A spectral unaveraged algorithm for free electron laser simulations
Andriyash, I.A.; Lehe, R.; Malka, V.
2015-02-01
We propose and discuss a numerical method to model electromagnetic emission from oscillating relativistic charged particles and its coherent amplification. The developed technique is well suited for free electron laser simulations, but it may also be useful for a wider range of physical problems involving resonant field-particle interactions. The algorithm integrates the unaveraged coupled equations for the particles and the electromagnetic fields in a discrete spectral domain. Using this algorithm, it is possible to perform full three-dimensional or axisymmetric simulations of short-wavelength amplification. In this paper we describe the method, its implementation, and we present examples of free electron laser simulations comparing the results with the ones provided by commonly known free electron laser codes.
A numerical model including PID control of a multizone crystal growth furnace
NASA Astrophysics Data System (ADS)
Panzarella, Charles H.; Kassemi, Mohammad
This paper presents a 2D axisymmetric combined conduction and radiation model of a multizone crystal growth furnace. The model is based on a programmable multizone furnace (PMZF) designed and built at NASA Lewis Research Center for growing high quality semiconductor crystals. A novel feature of this model is a control algorithm which automatically adjusts the power in any number of independently controlled heaters to establish the desired crystal temperatures in the furnace model. The control algorithm eliminates the need for numerous trial and error runs previously required to obtain the same results. The finite element code, FIDAP, used to develop the furnace model, was modified to directly incorporate the control algorithm. This algorithm, which presently uses PID control, and the associated heat transfer model are briefly discussed. Together, they have been used to predict the heater power distributions for a variety of furnace configurations and desired temperature profiles. Examples are included to demonstrate the effectiveness of the PID controlled model in establishing isothermal, Bridgman, and other complicated temperature profiles in the sample. Finally, an example is given to show how the algorithm can be used to change the desired profile with time according to a prescribed temperature-time evolution.
NASA Technical Reports Server (NTRS)
Yee, H. C.; Sweby, P. K.; Griffiths, D. F.
1991-01-01
Spurious stable as well as unstable steady state numerical solutions, spurious asymptotic numerical solutions of higher period, and even stable chaotic behavior can occur when finite difference methods are used to solve nonlinear differential equations (DE) numerically. The occurrence of spurious asymptotes is independent of whether the DE possesses a unique steady state or has additional periodic solutions and/or exhibits chaotic phenomena. The form of the nonlinear DEs and the type of numerical schemes are the determining factor. In addition, the occurrence of spurious steady states is not restricted to the time steps that are beyond the linearized stability limit of the scheme. In many instances, it can occur below the linearized stability limit. Therefore, it is essential for practitioners in computational sciences to be knowledgeable about the dynamical behavior of finite difference methods for nonlinear scalar DEs before the actual application of these methods to practical computations. It is also important to change the traditional way of thinking and practices when dealing with genuinely nonlinear problems. In the past, spurious asymptotes were observed in numerical computations but tended to be ignored because they all were assumed to lie beyond the linearized stability limits of the time step parameter delta t. As can be seen from the study, bifurcations to and from spurious asymptotic solutions and transitions to computational instability not only are highly scheme dependent and problem dependent, but also initial data and boundary condition dependent, and not limited to time steps that are beyond the linearized stability limit.
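The phenomenon is easy to reproduce with a toy example (mine, not from the report): explicit Euler applied to u' = u(1-u), whose only stable steady state is u = 1, develops a spurious period-2 numerical asymptote once the time step exceeds the linearized stability limit delta t = 2, even though the ODE has no periodic solution.

```python
def euler_asymptote(dt, u0=0.2, n=2000, keep=8):
    """Iterate explicit Euler for u' = u(1 - u) and return the distinct tail values."""
    u = u0
    for _ in range(n - keep):
        u = u + dt * u * (1.0 - u)     # explicit Euler step
    tail = []
    for _ in range(keep):
        u = u + dt * u * (1.0 - u)
        tail.append(round(u, 6))
    return sorted(set(tail))

print(euler_asymptote(0.5))   # [1.0]: converges to the true steady state
print(euler_asymptote(2.3))   # two values: a spurious period-2 asymptote
```

Below the stability limit the orbit settles on the genuine steady state; above it, the discrete map bifurcates while the underlying ODE is unchanged, which is exactly the scheme-dependent behavior the abstract warns about.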
Parallel algorithms for matrix computations
Plemmons, R.J.
1990-01-01
The present conference on parallel algorithms for matrix computations encompasses both shared-memory systems and distributed-memory systems, as well as combinations of the two, to provide an overall perspective on parallel algorithms for both dense and sparse matrix computations in solving systems of linear equations, dense or structured problems related to least-squares computations, eigenvalue computations, singular-value computations, and rapid elliptic solvers. Specific issues addressed include the influence of parallel and vector architectures on algorithm design, computations for distributed-memory architectures such as hypercubes, solutions for sparse symmetric positive definite linear systems, symbolic and numeric factorizations, and triangular solutions. Also addressed are reference sources for parallel and vector numerical algorithms, sources for machine architectures, and sources for programming languages.
PSLQ: An Algorithm to Discover Integer Relations
Bailey, David H.; Borwein, J. M.
2009-04-03
Let x = (x_1, x_2, ..., x_n) be a vector of real or complex numbers. x is said to possess an integer relation if there exist integers a_i, not all zero, such that a_1 x_1 + a_2 x_2 + ... + a_n x_n = 0. By an integer relation algorithm, we mean a practical computational scheme that can recover the vector of integers a_i, if it exists, or can produce bounds within which no integer relation exists. As we will see in the examples below, an integer relation algorithm can be used to recognize a computed constant in terms of a formula involving known constants, or to discover an underlying relation between quantities that can be computed to high precision. At the present time, the most effective algorithm for integer relation detection is the 'PSLQ' algorithm of mathematician-sculptor Helaman Ferguson [10, 4]. Some efficient 'multi-level' implementations of PSLQ, as well as a variant of PSLQ that is well-suited for highly parallel computer systems, are given in [4]. PSLQ constructs a sequence of integer-valued matrices B_n that reduces the vector y = x B_n, until either the relation is found (as one of the columns of B_n), or else precision is exhausted. At the same time, PSLQ generates a steadily growing bound on the size of any possible relation. When a relation is found, the size of the smallest entry of the vector y abruptly drops to roughly 'epsilon' (i.e. 10^(-p), where p is the number of digits of precision). The size of this drop can be viewed as a 'confidence level' that the relation is real and not merely a numerical artifact - a drop of 20 or more orders of magnitude almost always indicates a real relation. Very high precision arithmetic must be used in PSLQ. If one wishes to recover a relation of length n, with coefficients of maximum size d digits, then the input vector x must be specified to at least nd digits, and one must employ nd-digit floating-point arithmetic. Maple and
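PSLQ itself is intricate, but the notion of an integer relation can be demonstrated with a naive exhaustive search (a toy stand-in, feasible only for tiny dimensions and coefficient ranges; PSLQ scales far beyond this). The golden ratio, which satisfies phi^2 = phi + 1, provides a small test vector.

```python
import itertools
import math

def find_integer_relation(x, max_coeff=5, tol=1e-9):
    """Exhaustive search for small integers a, not all zero, with sum a_i * x_i ≈ 0.
    A brute-force stand-in for an integer relation algorithm such as PSLQ."""
    best = None
    for a in itertools.product(range(-max_coeff, max_coeff + 1), repeat=len(x)):
        if any(a) and abs(sum(ai * xi for ai, xi in zip(a, x))) < tol:
            # keep the relation with the smallest maximum coefficient
            if best is None or max(map(abs, a)) < max(map(abs, best)):
                best = a
    return best

phi = (1 + math.sqrt(5)) / 2                          # phi^2 = phi + 1
print(find_integer_relation([1.0, phi, phi * phi]))   # a relation such as (-1, -1, 1)
```

The found coefficients certify that 1, phi, phi^2 are linearly dependent over the integers, which is exactly the kind of constant recognition described in the abstract.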
A simple suboptimal least-squares algorithm for attitude determination with multiple sensors
NASA Technical Reports Server (NTRS)
Brozenec, Thomas F.; Bender, Douglas J.
1994-01-01
Three-axis attitude determination is equivalent to finding a coordinate transformation matrix which transforms a set of reference vectors fixed in inertial space to a set of measurement vectors fixed in the spacecraft. The attitude determination problem can be expressed as a constrained optimization problem. The constraint is that a coordinate transformation matrix must be proper, real, and orthogonal. A transformation matrix can be thought of as optimal in the least-squares sense if it maps the measurement vectors to the reference vectors with minimal 2-norm errors and meets the above constraint. This constrained optimization problem is known as Wahba's problem. Several algorithms which solve Wahba's problem exactly have been developed and used. These algorithms, while steadily improving, are all rather complicated. Furthermore, they involve such numerically unstable or sensitive operations as matrix determinant, matrix adjoint, and Newton-Raphson iterations. This paper describes an algorithm which minimizes Wahba's loss function, but without the constraint. When the constraint is ignored, the problem can be solved by a straightforward, numerically stable least-squares algorithm such as QR decomposition. Even though the algorithm does not explicitly take the constraint into account, it still yields a nearly orthogonal matrix for most practical cases; orthogonality only becomes corrupted when the sensor measurements are very noisy, on the same order of magnitude as the attitude rotations. The algorithm can be simplified if the attitude rotations are small enough so that the approximation sin(theta) approximately equals theta holds. We then compare the computational requirements for several well-known algorithms. For the general large-angle case, the QR least-squares algorithm is competitive with all other know algorithms and faster than most. If attitude rotations are small, the least-squares algorithm can be modified to run faster, and this modified algorithm is
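A minimal sketch of the unconstrained least-squares idea on hypothetical test data (mine, not the paper's): fit a 3x3 matrix mapping reference vectors to noisy measurements with a plain least-squares solve, then check that it comes out nearly orthogonal, as the abstract claims for low-noise cases.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical true attitude: rotation about the z axis by 30 degrees
th = np.deg2rad(30.0)
C = np.array([[np.cos(th), -np.sin(th), 0.0],
              [np.sin(th),  np.cos(th), 0.0],
              [0.0,         0.0,        1.0]])

R = rng.standard_normal((6, 3))                     # reference vectors (rows)
B = R @ C.T + 1e-4 * rng.standard_normal((6, 3))    # noisy measurements b_i = C r_i + noise

# Unconstrained least squares: minimize sum ||C_hat r_i - b_i||^2 over all 3x3 matrices.
# In row form B ≈ R C_hat^T, a plain linear least-squares problem (QR-based internally).
C_hat = np.linalg.lstsq(R, B, rcond=None)[0].T

print(np.abs(C_hat - C).max() < 1e-2)                    # True: close to the true attitude
print(np.abs(C_hat @ C_hat.T - np.eye(3)).max() < 1e-2)  # True: nearly orthogonal at low noise
```

No orthogonality constraint was imposed, yet the estimate is close to a proper rotation because the noise is far smaller than the attitude rotation itself.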
Algorithms for propagating uncertainty across heterogeneous domains
Cho, Heyrim; Yang, Xiu; Venturi, D.; Karniadakis, George E.
2015-12-30
We address an important research area in stochastic multi-scale modeling, namely the propagation of uncertainty across heterogeneous domains characterized by partially correlated processes with vastly different correlation lengths. This class of problems arises very often when computing stochastic PDEs and particle models with stochastic/stochastic domain interaction but also with stochastic/deterministic coupling. The domains may be fully embedded, adjacent or partially overlapping. The fundamental open question we address is the construction of proper transmission boundary conditions that preserve global statistical properties of the solution across different subdomains. Often, the codes that model different parts of the domains are black-box and hence a domain decomposition technique is required. No rigorous theory or even effective empirical algorithms have yet been developed for this purpose, although interfaces defined in terms of functionals of random fields (e.g., multi-point cumulants) can overcome the computationally prohibitive problem of preserving sample-path continuity across domains. The key idea of the different methods we propose relies on combining local reduced-order representations of random fields with multi-level domain decomposition. Specifically, we propose two new algorithms: The first one enforces the continuity of the conditional mean and variance of the solution across adjacent subdomains by using Schwarz iterations. The second algorithm is based on PDE-constrained multi-objective optimization, and it allows us to set more general interface conditions. The effectiveness of these new algorithms is demonstrated in numerical examples involving elliptic problems with random diffusion coefficients, stochastically advected scalar fields, and nonlinear advection-reaction problems with random reaction rates.
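The deterministic core of the Schwarz iteration idea can be sketched on a toy problem (my example, not the paper's stochastic setting): alternating Dirichlet solves on two overlapping subdomains for -u'' = 1 on (0, 1), exchanging interface values until the pieces agree.

```python
import numpy as np

n = 101
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
u = np.zeros(n)
i1, i2 = 60, 40      # subdomain 1 = [0, x[i1]], subdomain 2 = [x[i2], 1], overlap [0.4, 0.6]

def solve_dirichlet(a, b, ua, ub):
    """Second-order FD solve of -u'' = 1 on interior points a+1..b-1 with given BCs."""
    m = b - a - 1
    A = (np.diag(2.0 * np.ones(m))
         + np.diag(-np.ones(m - 1), 1)
         + np.diag(-np.ones(m - 1), -1))
    rhs = h * h * np.ones(m)
    rhs[0] += ua
    rhs[-1] += ub
    return np.linalg.solve(A, rhs)

for _ in range(30):  # classical alternating Schwarz sweeps
    u[1:i1] = solve_dirichlet(0, i1, 0.0, u[i1])          # subdomain 1, BC from subdomain 2
    u[i2 + 1:n - 1] = solve_dirichlet(i2, n - 1, u[i2], 0.0)  # subdomain 2, BC from subdomain 1

exact = 0.5 * x * (1.0 - x)
print(np.abs(u - exact).max() < 1e-6)   # True: geometric convergence thanks to the overlap
```

The geometric convergence rate depends on the overlap width; the paper's contribution is choosing *what* to match at the interfaces (conditional mean and variance of a random field) rather than pointwise values.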
An overview of SuperLU: Algorithms, implementation, and userinterface
Li, Xiaoye S.
2003-09-30
We give an overview of the algorithms, design philosophy, and implementation techniques in the software SuperLU, for solving sparse unsymmetric linear systems. In particular, we highlight the differences between the sequential SuperLU (including its multithreaded extension) and parallel SuperLU_DIST. These include the numerical pivoting strategy, the ordering strategy for preserving sparsity, the ordering in which the updating tasks are performed, the numerical kernel, and the parallelization strategy. Because of the scalability concern, the parallel code is drastically different from the sequential one. We describe the user interfaces of the libraries, and illustrate how to use the libraries most efficiently depending on some matrix characteristics. Finally, we give some examples of how the solver has been used in large-scale scientific applications, and the performance achieved.
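As a usage illustration: SciPy's `splu` wraps the sequential SuperLU library, so the typical factor-once/solve-many interface can be exercised in a few lines (a 1D Poisson matrix serves as hypothetical test data here).

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 100
main = 2.0 * np.ones(n)
off = -1.0 * np.ones(n - 1)
A = sp.diags([off, main, off], [-1, 0, 1], format="csc")  # sparse 1D Poisson matrix

lu = spla.splu(A)              # sparse LU factorization via SuperLU
b = np.ones(n)
x = lu.solve(b)                # cheap triangular solves reuse the factorization

print(np.allclose(A @ x, b))   # True: residual at machine-precision level
```

Factoring once and calling `lu.solve` repeatedly for multiple right-hand sides is the efficiency pattern the abstract alludes to when discussing how to use the libraries depending on matrix characteristics.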
Conditional Convergence of Numerical Series
ERIC Educational Resources Information Center
Gomez, E.; Plaza, A.
2002-01-01
One of the most astonishing properties when studying numerical series is that the sum is not commutative, that is the sum may change when the order of its elements is altered. In this note an example is given of such a series. A well-known mathematical proof is given and a MATLAB[C] program used for different rearrangements of the series…
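The note's program is in MATLAB; an equivalent sketch in Python (my own) shows the rearrangement effect numerically. The alternating harmonic series sums to ln 2, while taking two positive terms per negative term shifts the sum to (3/2) ln 2, a classical consequence of Riemann's rearrangement theorem for conditionally convergent series.

```python
import math

def alternating_harmonic(n_terms):
    """Partial sum of 1 - 1/2 + 1/3 - 1/4 + ... (converges to ln 2)."""
    return sum((-1) ** (k + 1) / k for k in range(1, n_terms + 1))

def rearranged(n_blocks):
    """Two positive terms, then one negative, repeated: converges to (3/2) ln 2."""
    s, odd, even = 0.0, 1, 2
    for _ in range(n_blocks):
        s += 1.0 / odd + 1.0 / (odd + 2) - 1.0 / even
        odd += 4
        even += 2
    return s

print(round(alternating_harmonic(10**6), 4))   # → 0.6931, i.e. ln 2
print(round(rearranged(10**6), 4))             # → 1.0397, i.e. 1.5 * ln 2
```

Same terms, different order, different sum: the non-commutativity the note describes.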
2006-05-09
The Monte Carlo example programs VARHATOM and DMCATOM are two small, simple FORTRAN programs that illustrate the use of the Monte Carlo Mathematical technique for calculating the ground state energy of the hydrogen atom.
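Since the FORTRAN programs themselves are not reproduced here, a comparable toy variational Monte Carlo sketch (my own, in Python) conveys the technique: sample |psi|^2 with Metropolis steps and average the local energy, in atomic units with trial wavefunction psi = exp(-alpha*r).

```python
import math
import random

def vmc_hydrogen(alpha, n_steps=20000, step=0.5, seed=1):
    """Toy variational Monte Carlo for the hydrogen atom ground state.
    Local energy for psi = exp(-alpha*r) is E_L = -alpha^2/2 + (alpha - 1)/r."""
    rng = random.Random(seed)
    x, y, z = 1.0, 0.0, 0.0
    e_sum = 0.0
    for _ in range(n_steps):
        r = math.sqrt(x * x + y * y + z * z)
        xn = x + rng.uniform(-step, step)
        yn = y + rng.uniform(-step, step)
        zn = z + rng.uniform(-step, step)
        rn = math.sqrt(xn * xn + yn * yn + zn * zn)
        # Metropolis: accept with probability |psi_new/psi_old|^2
        if rng.random() < math.exp(-2.0 * alpha * (rn - r)):
            x, y, z, r = xn, yn, zn, rn
        e_sum += -0.5 * alpha**2 + (alpha - 1.0) / r
    return e_sum / n_steps

print(vmc_hydrogen(1.0))            # exactly -0.5: zero variance at the true ground state
print(round(vmc_hydrogen(0.8), 2))  # above -0.5, consistent with the variational principle
```

At alpha = 1 the trial function is the exact ground state, so every sampled local energy equals -1/2 Hartree, illustrating the zero-variance property that makes this a nice teaching example.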
Numerical wave propagation in ImageJ.
Piedrahita-Quintero, Pablo; Castañeda, Raul; Garcia-Sucerquia, Jorge
2015-07-20
An ImageJ plugin for numerical wave propagation is presented. The plugin provides ImageJ, the well-known software for image processing, with the capability of computing numerical wave propagation by the use of angular spectrum, Fresnel, and Fresnel-Bluestein algorithms. The plugin enables numerical wave propagation within the robust environment provided by the complete set of built-in tools for image processing available in ImageJ. The plugin can be used for teaching and research purposes. We illustrate its use to numerically recreate Poisson's spot and Babinet's principle, and in the numerical reconstruction of digitally recorded holograms from millimeter-sized and pure phase microscopic objects.
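A minimal angular spectrum propagator (a sketch of the algorithm, not the plugin's Java code) is a pair of FFTs around a transfer-function multiply. For band-limited input all sampled frequencies are propagating, so the step is unitary and total intensity is conserved; the aperture radius, wavelength and sampling below are my own illustrative choices.

```python
import numpy as np

def angular_spectrum(field, wavelength, dx, z):
    """Propagate a sampled complex field a distance z: FFT, multiply by the
    free-space transfer function exp(i*kz*z), inverse FFT."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2.0 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)      # evanescent components are dropped
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Hypothetical setup: a 0.4 mm radius circular aperture, 633 nm illumination
n, dx, wl = 512, 10e-6, 633e-9
x = (np.arange(n) - n // 2) * dx
X, Y = np.meshgrid(x, x)
aperture = (X**2 + Y**2 < (0.4e-3) ** 2).astype(complex)

out = angular_spectrum(aperture, wl, dx, z=0.05)
power_in = float(np.sum(np.abs(aperture) ** 2))
power_out = float(np.sum(np.abs(out) ** 2))
print(round(power_out / power_in, 6))        # → 1.0: the propagation is unitary here
```

The same propagator, applied to an opaque disk instead of an aperture, reproduces the Poisson spot the abstract mentions.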
Seismic-acoustic finite-difference wave propagation algorithm.
Preston, Leiph; Aldridge, David Franklin
2010-10-01
An efficient numerical algorithm for treating earth models composed of fluid and solid portions is obtained via straightforward modifications to a 3D time-domain finite-difference algorithm for simulating isotropic elastic wave propagation.
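The flavor of such a time-domain finite-difference scheme can be shown in 1D (a minimal acoustic-only sketch of mine, far simpler than the 3D seismic-acoustic code): a leapfrog update that is stable when the CFL condition c*dt/dx <= 1 holds and blows up when it is violated.

```python
import numpy as np

def propagate(cfl, nx=400, nt=600):
    """Leapfrog finite differences for the 1D acoustic wave equation u_tt = c^2 u_xx
    (cfl = c*dt/dx), with homogeneous Dirichlet ends and an impulse at the center."""
    u_prev = np.zeros(nx)
    u = np.zeros(nx)
    u[nx // 2] = 1.0
    for _ in range(nt):
        lap = np.zeros(nx)
        lap[1:-1] = u[:-2] - 2.0 * u[1:-1] + u[2:]   # discrete second derivative
        u_prev, u = u, 2.0 * u - u_prev + cfl**2 * lap
    return u

stable = propagate(0.5)     # satisfies the CFL condition: bounded wave field
unstable = propagate(1.05)  # violates it: the solution grows without bound
print(float(np.abs(stable).max()) < 10.0)
print(not np.isfinite(unstable).all() or float(np.abs(unstable).max()) > 1e6)
```

Both prints are True; the stability constraint is the basic design consideration any such finite-difference wave code, fluid or solid, must respect.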
Spurious Numerical Solutions Of Differential Equations
NASA Technical Reports Server (NTRS)
Lafon, A.; Yee, H. C.
1995-01-01
Paper presents detailed study of spurious steady-state numerical solutions of differential equations that contain nonlinear source terms. Main objectives of this study are (1) to investigate how well numerical steady-state solutions of model nonlinear reaction/convection boundary-value problem mimic true steady-state solutions and (2) to relate findings of this investigation to implications for interpretation of numerical results from computational-fluid-dynamics algorithms and computer codes used to simulate reacting flows.
Robust algorithms for solving stochastic partial differential equations
Werner, M.J.; Drummond, P.D.
1997-04-01
A robust semi-implicit central partial difference algorithm for the numerical solution of coupled stochastic parabolic partial differential equations (PDEs) is described. This can be used for calculating correlation functions of systems of interacting stochastic fields. Such field equations can arise in the description of Hamiltonian and open systems in the physics of nonlinear processes, and may include multiplicative noise sources. The algorithm can be used for studying the properties of nonlinear quantum or classical field theories. The general approach is outlined and applied to a specific example, namely the quantum statistical fluctuations of ultra-short optical pulses in χ(2) parametric waveguides. This example uses a non-diagonal coherent state representation, and correctly predicts the sub-shot noise level spectral fluctuations observed in homodyne detection measurements. It is expected that the methods used will be applicable to higher-order correlation functions and other physical problems as well. A stochastic differencing technique for reducing sampling errors is also introduced. This involves solving nonlinear stochastic parabolic PDEs in combination with a reference process, which uses the Wigner representation in the example presented here. A computer implementation on MIMD parallel architectures is discussed. 27 refs., 4 figs.
Learning about Functions through Learner-Generated Examples
ERIC Educational Resources Information Center
Dinkelman, Martha O.; Cavey, Laurie O.
2015-01-01
In many mathematics classrooms, the teacher provides "worked examples" to demonstrate how students should perform certain algorithms or processes. Some students find it difficult to generalize from the examples that teachers provide and cannot apply what they have learned in new situations (Watson and Mason 2002). Instead, teachers might…
Ovtchinnikov, Evgueni E.; Xanthis, Leonidas S.
2000-01-01
We present a methodology for the efficient numerical solution of eigenvalue problems of full three-dimensional elasticity for thin elastic structures, such as shells, plates and rods of arbitrary geometry, discretized by the finite element method. Such problems are solved by iterative methods, which, however, are known to suffer from slow convergence or even convergence failure when the thickness is small. In this paper we show an effective way of resolving this difficulty by invoking a special preconditioning technique associated with the effective dimensional reduction algorithm (EDRA). As an example, we present an algorithm for computing the minimal eigenvalue of a thin elastic plate and we show both theoretically and numerically that it is robust with respect to both the thickness and discretization parameters, i.e. the convergence does not deteriorate with diminishing thickness or mesh refinement. This robustness is a sine qua non for the efficient computation of large-scale eigenvalue problems for thin elastic structures. PMID:10655469
Model reduction algorithms for optimal control and importance sampling of diffusions
NASA Astrophysics Data System (ADS)
Hartmann, Carsten; Schütte, Christof; Zhang, Wei
2016-08-01
We propose numerical algorithms for solving optimal control and importance sampling problems based on simplified models. The algorithms combine model reduction techniques for multiscale diffusions and stochastic optimization tools, with the aim of reducing the original, possibly high-dimensional problem to a lower dimensional representation of the dynamics, in which only a few relevant degrees of freedom are controlled or biased. Specifically, we study situations in which either a reaction coordinate onto which the dynamics can be projected is known, or situations in which the dynamics shows strongly localized behavior in the small noise regime. No explicit assumptions about small parameters or scale separation have to be made. We illustrate the approach with simple, but paradigmatic numerical examples.
NASA Astrophysics Data System (ADS)
Zhong, Wei; Su, Ruiyi; Gui, Liangjin; Fan, Zijie
2016-06-01
This article proposes a method called the cooperative coevolutionary genetic algorithm with independent ground structures (CCGA-IGS) for the simultaneous topology and sizing optimization of discrete structures. An IGS strategy is proposed to enhance the flexibility of the optimization by offering two separate design spaces and to improve the efficiency of the algorithm by reducing the search space. The CCGA is introduced to divide a complex problem into two smaller subspaces: the topological and sizing variables are assigned into two subpopulations which evolve in isolation but collaborate in fitness evaluations. Five different methods were implemented on 2D and 3D numeric examples to test the performance of the algorithms. The results demonstrate that the performance of the algorithms is improved in terms of accuracy and convergence speed with the IGS strategy, and the CCGA converges faster than the traditional GA without loss of accuracy.
A Simple Two Aircraft Conflict Resolution Algorithm
NASA Technical Reports Server (NTRS)
Chatterji, Gano B.
1999-01-01
Conflict detection and resolution methods are crucial for distributed air-ground traffic management in which the crew in the cockpit, dispatchers in operation control centers, and air traffic controllers in the ground-based air traffic management facilities share information and participate in the traffic flow and traffic control functions. This paper describes a conflict detection and a conflict resolution method. The conflict detection method predicts the minimum separation and the time-to-go to the closest point of approach by assuming that both aircraft will continue to fly at their current speeds along their current headings. The conflict resolution method described here is motivated by the proportional navigation algorithm. It generates speed and heading commands to rotate the line-of-sight either clockwise or counter-clockwise for conflict resolution. Once the aircraft achieve a positive range-rate and no further conflict is predicted, the algorithm generates heading commands to turn the aircraft back to their nominal trajectories. The speed commands are set to the optimal pre-resolution speeds. Six numerical examples are presented to demonstrate the conflict detection and resolution method.
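The constant-velocity detection geometry described above can be sketched directly: with both aircraft extrapolated along their current velocities, the time-to-go to the closest point of approach minimizes the relative-position norm. This is a generic 2D sketch with illustrative names, not Chatterji's code.

```python
import math

def conflict_detect(p1, v1, p2, v2):
    """Predict the minimum separation and the time-to-go to the closest
    point of approach (CPA), assuming both aircraft hold their current
    speed and heading.  Positions and velocities are 2D tuples in any
    consistent units."""
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]   # relative position
    wx, wy = v2[0] - v1[0], v2[1] - v1[1]   # relative velocity
    w2 = wx * wx + wy * wy
    if w2 == 0.0:                            # identical velocities: separation constant
        return math.hypot(rx, ry), 0.0
    t_cpa = max(0.0, -(rx * wx + ry * wy) / w2)   # clamp to the future
    dmin = math.hypot(rx + wx * t_cpa, ry + wy * t_cpa)
    return dmin, t_cpa
```

A conflict is then flagged when `dmin` falls below the required separation within the look-ahead horizon.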
A Simple Two Aircraft Conflict Resolution Algorithm
NASA Technical Reports Server (NTRS)
Chatterji, Gano B.
2006-01-01
Conflict detection and resolution methods are crucial for distributed air-ground traffic management in which the crew in the cockpit, dispatchers in operation control centers, and air traffic controllers in the ground-based air traffic management facilities share information and participate in the traffic flow and traffic control functions. This paper describes a conflict detection and a conflict resolution method. The conflict detection method predicts the minimum separation and the time-to-go to the closest point of approach by assuming that both aircraft will continue to fly at their current speeds along their current headings. The conflict resolution method described here is motivated by the proportional navigation algorithm, which is often used for missile guidance during the terminal phase. It generates speed and heading commands to rotate the line-of-sight either clockwise or counter-clockwise for conflict resolution. Once the aircraft achieve a positive range-rate and no further conflict is predicted, the algorithm generates heading commands to turn the aircraft back to their nominal trajectories. The speed commands are set to the optimal pre-resolution speeds. Six numerical examples are presented to demonstrate the conflict detection and conflict resolution methods.
An Algorithm for the Mixed Transportation Network Design Problem.
Liu, Xinyu; Chen, Qun
2016-01-01
This paper proposes an optimization algorithm, the dimension-down iterative algorithm (DDIA), for solving a mixed transportation network design problem (MNDP), which is generally expressed as a mathematical program with equilibrium constraints (MPEC). The upper level of the MNDP aims to optimize the network performance via both the expansion of existing links and the addition of new candidate links, whereas the lower level is a traditional Wardrop user equilibrium (UE) problem. The idea of the proposed solution algorithm (DDIA) is to reduce the dimensions of the problem. A group of variables (discrete/continuous) is fixed to optimize another group of variables (continuous/discrete) alternately; the problem is thus transformed into solving a series of CNDPs (continuous network design problems) and DNDPs (discrete network design problems) repeatedly until it converges to the optimal solution. The advantage of the proposed algorithm is that its solution process is very simple and easy to apply. Numerical examples show that for the MNDP without a budget constraint, the optimal solution can be found within a few iterations with DDIA. For the MNDP with a budget constraint, however, the result depends on the selection of initial values, which leads to different (local) optimal solutions. Some thoughts are given on how to derive meaningful initial values, such as by considering the budgets of new and reconstruction projects separately.
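The alternating fix-and-optimize idea behind DDIA can be sketched on a toy mixed problem: fix the discrete variables and solve a continuous subproblem, then fix the continuous variable and solve a discrete subproblem, and repeat until neither group changes. The objective, variable ranges, and closed-form continuous solve below are illustrative assumptions, not the paper's network model.

```python
from itertools import product

def ddia_style_alternation(max_iter=50):
    """Toy alternating scheme in the spirit of DDIA on
    min over y in {0,1}^2, x in R of f(x, y)."""
    def f(x, y):
        s = y[0] + 2 * y[1]
        return (x - 2.0)**2 + (x - s)**2 + 0.3 * sum(y)

    y = (0, 0)
    x = 0.0
    for _ in range(max_iter):
        s = y[0] + 2 * y[1]
        x_new = (2.0 + s) / 2.0                    # continuous subproblem (closed form)
        y_new = min(product((0, 1), repeat=2),     # discrete subproblem (enumeration)
                    key=lambda cand: f(x_new, cand))
        if y_new == y and abs(x_new - x) < 1e-12:
            break                                  # neither group moved: converged
        x, y = x_new, y_new
    return x, y, f(x, y)
```

In the real MNDP the two subproblems are a CNDP and a DNDP rather than a one-line formula and a four-candidate enumeration, but the stopping logic is the same.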
A convergent hybrid decomposition algorithm model for SVM training.
Lucidi, Stefano; Palagi, Laura; Risi, Arnaldo; Sciandrone, Marco
2009-06-01
Training of support vector machines (SVMs) requires solving a linearly constrained convex quadratic problem. In real applications, the number of training data may be huge and the Hessian matrix cannot be stored. To take this issue into account, a common strategy consists in using decomposition algorithms which at each iteration operate only on a small subset of variables, usually referred to as the working set. Training time can be significantly reduced by using a caching technique that allocates some memory space to store the columns of the Hessian matrix corresponding to the variables recently updated. The convergence properties of a decomposition method can be guaranteed by means of a suitable selection of the working set, and this can limit the possibility of exploiting the information stored in the cache. We propose a general hybrid algorithm model which combines the capability of producing a globally convergent sequence of points with a flexible use of the information in the cache. As an example of a specific realization of the general hybrid model, we describe an algorithm based on a particular strategy for exploiting the information deriving from a caching technique. We report the results of computational experiments performed by simple implementations of this algorithm. The numerical results point out the potential of the approach.
Locomotive assignment problem with train precedence using genetic algorithm
NASA Astrophysics Data System (ADS)
Noori, Siamak; Ghannadpour, Seyed Farid
2012-07-01
This paper aims to study the locomotive assignment problem, which is very important for railway companies in view of the high cost of operating locomotives. This problem is to determine the minimum cost assignment of homogeneous locomotives located in some central depots to a set of pre-scheduled trains in order to provide sufficient power to pull the trains from their origins to their destinations. These trains have different degrees of priority for servicing, and the high class of trains should be serviced earlier than others. This problem is modeled as a vehicle routing and scheduling problem where trains, representing the customers, are supposed to be serviced in pre-specified hard/soft fuzzy time windows. A two-phase approach is used in which, in the first phase, the multi-depot locomotive assignment is converted to a set of single depot problems, and after that, each single depot problem is solved heuristically by a hybrid genetic algorithm. In the genetic algorithm, various heuristics and efficient operators are used in the evolutionary search. The suggested algorithm is applied to a medium-sized numerical example to check the capabilities of the model and algorithm. Moreover, some of the results are compared with solutions produced by a branch-and-bound technique to determine the validity and quality of the model. Results show that the suggested approach is effective with respect to solution quality and time.
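The evolutionary loop at the core of any such hybrid GA (selection, crossover, mutation over generations) can be sketched in a few lines. The bitstring encoding, OneMax fitness, and parameter values below are purely illustrative; the paper's hybrid GA adds routing-specific heuristics and operators on top of a loop of this shape.

```python
import random

def genetic_search(fitness, n_bits, pop_size=30, gens=60, p_mut=0.02, seed=1):
    """Minimal generational GA: tournament selection, one-point
    crossover, and per-bit flip mutation on binary chromosomes."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]

    def tournament():
        # pick two individuals at random and keep the fitter one
        a, b = rng.sample(pop, 2)
        return a if fitness(a) >= fitness(b) else b

    for _ in range(gens):
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = tournament(), tournament()
            cut = rng.randrange(1, n_bits)         # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [bit ^ (rng.random() < p_mut)  # bit-flip mutation
                     for bit in child]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)
```

On the trivial OneMax objective (maximize the number of ones) the loop reliably drives the population close to the all-ones string, which makes it a convenient smoke test.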
Comprehensive eye evaluation algorithm
NASA Astrophysics Data System (ADS)
Agurto, C.; Nemeth, S.; Zamora, G.; Vahtel, M.; Soliz, P.; Barriga, S.
2016-03-01
In recent years, several research groups have developed automatic algorithms to detect diabetic retinopathy (DR) in individuals with diabetes (DM), using digital retinal images. Studies have indicated that diabetics have 1.5 times the annual risk of developing primary open angle glaucoma (POAG) as do people without DM. Moreover, DM patients have 1.8 times the risk for age-related macular degeneration (AMD). Although numerous investigators are developing automatic DR detection algorithms, there have been few successful efforts to create an automatic algorithm that can detect other ocular diseases, such as POAG and AMD. Consequently, our aim in the current study was to develop a comprehensive eye evaluation algorithm that not only detects DR in retinal images, but also automatically identifies glaucoma suspects and AMD by integrating other personal medical information with the retinal features. The proposed system is fully automatic and provides the likelihood of each of the three eye diseases. The system was evaluated on two datasets of 104 and 88 diabetic cases. For each eye, we used two non-mydriatic digital color fundus photographs (macula and optic disc centered) and, when available, information about age, duration of diabetes, cataracts, hypertension, gender, and laboratory data. Our results show that the combination of multimodal features can increase the AUC by up to 5%, 7%, and 8% in the detection of AMD, DR, and glaucoma, respectively. Marked improvement was achieved when laboratory results were combined with retinal image features.
A High-Order Finite-Volume Algorithm for Fokker-Planck Collisions in Magnetized Plasmas
Xiong, Z; Cohen, R H; Rognlien, T D; Xu, X Q
2007-04-18
A high-order finite volume algorithm is developed for the Fokker-Planck Operator (FPO) describing Coulomb collisions in strongly magnetized plasmas. The algorithm is based on a general fourth-order reconstruction scheme for an unstructured grid in the velocity space spanned by parallel velocity and magnetic moment. The method provides density conservation and high-order-accurate evaluation of the FPO independent of the choice of the velocity coordinates. As an example, a linearized FPO in constant-of-motion coordinates, i.e. the total energy and the magnetic moment, is developed using the present algorithm combined with a cut-cell merging procedure. Numerical tests include the Spitzer thermalization problem and the return to isotropy for distributions initialized with velocity space loss cones. Utilization of the method for a nonlinear FPO is straightforward but requires evaluation of the Rosenbluth potentials.
Tomasz Plawski, J. Hovater
2010-09-01
A digital low level radio frequency (RF) system typically incorporates either a heterodyne or direct sampling technique, followed by fast ADCs, then an FPGA, and finally a transmitting DAC. This universal platform opens up the possibilities for a variety of control algorithm implementations. The foremost concern for an RF control system is cavity field stability, and to meet the required quality of regulation, the chosen control system needs to have sufficient feedback gain. In this paper we will investigate the effectiveness of the regulation for three basic control system algorithms: I&Q (In-phase and Quadrature), Amplitude & Phase and digital SEL (Self Exciting Loop) along with the example of the Jefferson Lab 12 GeV cavity field control system.
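The relationship between the I&Q and Amplitude & Phase representations of the cavity field, which underlies two of the three control algorithms compared above, can be sketched as a coordinate conversion. The error-computation helper, setpoint names, and sign conventions are illustrative assumptions, not the Jefferson Lab implementation.

```python
import math

def iq_to_amplitude_phase(i, q):
    """Convert in-phase/quadrature samples of the cavity field into the
    amplitude and phase on which an A&P loop regulates."""
    return math.hypot(i, q), math.atan2(q, i)

def ap_error(i, q, set_amp, set_phase):
    """Amplitude and phase errors a proportional A&P loop would act on;
    the phase error is wrapped into (-pi, pi] to avoid 2*pi jumps."""
    amp, ph = iq_to_amplitude_phase(i, q)
    dphi = (ph - set_phase + math.pi) % (2 * math.pi) - math.pi
    return set_amp - amp, -dphi
```

An I&Q loop regulates the two Cartesian components directly instead, which keeps the loop gains linear but couples amplitude and phase disturbances.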
NASA Astrophysics Data System (ADS)
Vaucouleur, Sebastien
2011-02-01
We introduce code query by example for customisation of evolvable software products in general and of enterprise resource planning systems (ERPs) in particular. The concept is based on an initial empirical study on practices around ERP systems. We motivate our design choices based on those empirical results, and we show how the proposed solution helps with respect to the infamous upgrade problem: the conflict between the need for customisation and the need for upgrade of ERP systems. We further show how code query by example can be used as a form of lightweight static analysis, to detect automatically potential defects in large software products. Code query by example as a form of lightweight static analysis is particularly interesting in the context of ERP systems: it is often the case that programmers working in this field are not computer science specialists but more of domain experts. Hence, they require a simple language to express custom rules.
Cuba: Multidimensional numerical integration library
NASA Astrophysics Data System (ADS)
Hahn, Thomas
2016-08-01
The Cuba library offers four independent routines for multidimensional numerical integration: Vegas, Suave, Divonne, and Cuhre. The four algorithms work by very different methods, yet all can integrate vector integrands and have very similar Fortran, C/C++, and Mathematica interfaces. Their invocation is very similar, making it easy to cross-check results by substituting one method for another. For further safeguarding, the output is supplemented by a chi-square probability which quantifies the reliability of the error estimate.
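Cuba's routines are adaptive and far more sophisticated, but the estimate-plus-reliability style of output can be sketched with plain Monte Carlo over the unit hypercube. This is a generic illustration with assumed names, not Cuba's API.

```python
import math, random

def mc_integrate(f, dim, n, seed=0):
    """Plain Monte Carlo estimate of the integral of f over the unit
    hypercube [0,1]^dim, returned together with a standard-error
    estimate computed from the sample variance."""
    rng = random.Random(seed)
    total = total_sq = 0.0
    for _ in range(n):
        fx = f([rng.random() for _ in range(dim)])
        total += fx
        total_sq += fx * fx
    mean = total / n
    var = max(0.0, total_sq / n - mean * mean)
    return mean, math.sqrt(var / n)
```

The adaptive routines refine where the integrand varies most and report a chi-square probability across iterations; the flat sampler above reports only the single-run standard error.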
Dynamical Approach Study of Spurious Numerics in Nonlinear Computations
NASA Technical Reports Server (NTRS)
Yee, H. C.; Mansour, Nagi (Technical Monitor)
2002-01-01
The last two decades have been an era when computation is ahead of analysis and when very large scale practical computations are increasingly used in poorly understood multiscale complex nonlinear physical problems and non-traditional fields. Ensuring a higher level of confidence in the predictability and reliability (PAR) of these numerical simulations could play a major role in furthering the design, understanding, affordability and safety of our next generation air and space transportation systems, and systems for planetary and atmospheric sciences, and in understanding the evolution and origin of life. The need to guarantee PAR becomes acute when computations offer the ONLY way of solving these types of data limited problems. Employing theory from nonlinear dynamical systems, some building blocks to ensure a higher level of confidence in PAR of numerical simulations have been revealed by the author and world expert collaborators in relevant fields. Five building blocks with supporting numerical examples were discussed. The next step is to utilize knowledge gained by including nonlinear dynamics, bifurcation and chaos theories as an integral part of the numerical process. The third step is to design integrated criteria for reliable and accurate algorithms that cater to the different multiscale nonlinear physics. This includes but is not limited to the construction of appropriate adaptive spatial and temporal discretizations that are suitable for the underlying governing equations. In addition, a multiresolution wavelets approach for adaptive numerical dissipation/filter controls for high speed turbulence, acoustics and combustion simulations will be sought. These steps are cornerstones for guarding against spurious numerical solutions that are solutions of the discretized counterparts but are not solutions of the underlying governing equations.
Parallel algorithm development
Adams, T.F.
1996-06-01
Rapid changes in parallel computing technology are causing significant changes in the strategies being used for parallel algorithm development. One approach is simply to write computer code in a standard language like FORTRAN 77, with the expectation that the compiler will produce executable code that will run in parallel. The alternatives are: (1) to build explicit message passing directly into the source code; or (2) to write source code without explicit reference to message passing or parallelism, but use a general communications library to provide efficient parallel execution. Application of these strategies is illustrated with examples of codes currently under development.
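The explicit message-passing strategy (1) can be sketched in miniature with workers that send partial results back over a queue; the queue stands in for a real message-passing library such as MPI, and the worker/driver names are illustrative.

```python
import queue
import threading

def worker(rank, chunk, out):
    """Each worker computes its partial sum and 'sends' it back as a
    (rank, value) message on the shared queue."""
    out.put((rank, sum(chunk)))

def parallel_sum(data, nworkers=4):
    """Scatter the data round-robin, run the workers, then gather and
    combine their messages -- the basic message-passing pattern."""
    out = queue.Queue()
    chunks = [data[i::nworkers] for i in range(nworkers)]
    threads = [threading.Thread(target=worker, args=(r, chunks[r], out))
               for r in range(nworkers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(out.get()[1] for _ in range(nworkers))
```

Strategy (2) would hide the queue and thread management behind a communications library so the source code contains no explicit parallelism at all.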
Numerical modeling of the gas lift process in gas lift wells
NASA Astrophysics Data System (ADS)
Temirbekov, N. M.; Turarov, A. K.; Baigereyev, D. R.
2016-06-01
In this paper, one-dimensional and two-dimensional axisymmetric motion of gas, liquid, and a gas-liquid mixture in a gas-lift well is studied. Numerical simulation of the one-dimensional model of the gas-lift process is considered, where the movement in a gas-lift well is described by partial differential equations of hyperbolic type. Difference schemes for the gas-lift process model are developed on a nonuniform grid that condenses in subdomains with large gradients of the solution. The results of the proposed algorithm are illustrated with the example of a real well.
NASA Astrophysics Data System (ADS)
Shapiro, A.; Fedorovich, E.; Gibbs, J. A.
2015-03-01
An analytical solution of the Boussinesq equations for the motion of a viscous stably stratified fluid driven by a surface thermal forcing with large horizontal gradients (step changes) is obtained. The solution can be used to verify that computer codes for Boussinesq fluid system simulations are free of errors in formulation of wall boundary conditions, and to evaluate the relative performances of competing numerical algorithms. Because the solution pertains to flows driven by a surface thermal forcing, one of its main applications may be for testing the no-slip, impermeable wall boundary conditions for the pressure Poisson equation. Examples of such tests are presented.
New formulations of monotonically convergent quantum control algorithms
NASA Astrophysics Data System (ADS)
Maday, Yvon; Turinici, Gabriel
2003-05-01
Most numerical simulations in quantum (bilinear) control have used one of the monotonically convergent algorithms of Krotov (introduced by Tannor et al.) or of Zhu and Rabitz. However, until now no explicit relationship has been revealed between the two algorithms that would help in understanding their common properties. Within this framework, we propose in this paper a unified formulation that comprises both algorithms and that extends to a new class of monotonically convergent algorithms. Numerical results show that the newly derived algorithms behave as well as (and sometimes better than) the well-known algorithms cited above.
NASA Technical Reports Server (NTRS)
Vardi, A.
1984-01-01
The minimax representation min t s.t. f_i(x) - t <= 0 for all i is examined. An active set strategy is designed with three sets of functions: active, semi-active, and non-active. This technique helps prevent the zigzagging which often occurs when an active set strategy is used. Some of the inequality constraints are handled with slack variables. A trust region strategy is also used, in which at each iteration there is a sphere around the current point in which the local approximation of the function is trusted. The algorithm is implemented in a computer program, and numerical results are provided.
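The epigraph reformulation min t s.t. f_i(x) - t <= 0 can be illustrated with a brute-force scan on a tiny example: at each candidate x, the smallest feasible t is simply max_i f_i(x). This is a crude stand-in for, not a version of, the report's trust-region/active-set algorithm.

```python
def minimax_via_epigraph(fs, grid):
    """Solve min_x max_i f_i(x) by scanning a 1D grid: for each x the
    smallest t satisfying f_i(x) - t <= 0 for all i is max_i f_i(x),
    so the minimax problem reduces to minimizing that value."""
    best_x, best_t = None, float("inf")
    for x in grid:
        t = max(f(x) for f in fs)   # smallest feasible epigraph value at x
        if t < best_t:
            best_x, best_t = x, t
    return best_x, best_t
```

For f_1(x) = x^2 and f_2(x) = (x - 2)^2 the two curves cross at x = 1, so the minimax value is 1, which the scan recovers exactly when the grid contains x = 1.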
A Unifying Probability Example.
ERIC Educational Resources Information Center
Maruszewski, Richard F., Jr.
2002-01-01
Presents an example from probability and statistics that ties together several topics including the mean and variance of a discrete random variable, the binomial distribution and its particular mean and variance, the sum of independent random variables, the mean and variance of the sum, and the central limit theorem. Uses Excel to illustrate these…
Gaining Algorithmic Insight through Simplifying Constraints.
ERIC Educational Resources Information Center
Ginat, David
2002-01-01
Discusses algorithmic problem solving in computer science education, particularly algorithmic insight, and focuses on the relevance and effectiveness of the heuristic simplifying constraints which involves simplification of a given problem to a problem in which constraints are imposed on the input data. Presents three examples involving…
Retrieval, Analysis, and Display of Numeric Data.
ERIC Educational Resources Information Center
Berger, Mary C.; Wanger, Judith
1982-01-01
This introduction to online numeric database systems describes the types of databases associated with such systems, shows the major functions which they can perform (retrieval, analysis, display), and identifies the major characteristics of user interfaces. Examples of numeric database use are appended. (EJS)
Nonclassicality thresholds for multiqubit states: Numerical analysis
Gruca, Jacek; Zukowski, Marek; Laskowski, Wieslaw; Kiesel, Nikolai; Wieczorek, Witlef; Weinfurter, Harald; Schmid, Christian
2010-07-15
States that strongly violate Bell's inequalities are required in many quantum-informational protocols as, for example, in cryptography, secret sharing, and the reduction of communication complexity. We investigate families of such states with a numerical method which allows us to reveal nonclassicality even without direct knowledge of Bell's inequalities for the given problem. An extensive set of numerical results is presented and discussed.
Derivative Free Gradient Projection Algorithms for Rotation
ERIC Educational Resources Information Center
Jennrich, Robert I.
2004-01-01
A simple modification substantially simplifies the use of the gradient projection (GP) rotation algorithms of Jennrich (2001, 2002). These algorithms require subroutines to compute the value and gradient of any specific rotation criterion of interest. The gradient can be difficult to derive and program. It is shown that using numerical gradients…
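The substitution of numerical for analytic gradients can be sketched with a generic central-difference routine: the user codes only the criterion value, and the gradient is approximated componentwise. This is a textbook sketch, not the GP-specific code from the article.

```python
def numerical_gradient(f, x, h=1e-6):
    """Central-difference approximation of the gradient of f at x,
    one coordinate at a time: (f(x + h e_i) - f(x - h e_i)) / (2h)."""
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        g.append((f(xp) - f(xm)) / (2 * h))
    return g
```

The O(h^2) truncation error of the central difference is usually far below the tolerance of a rotation algorithm's line search, which is why the substitution works in practice.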
Tutorial examples for uncertainty quantification methods.
De Bord, Sarah
2015-08-01
This report details the work accomplished during my 2015 SULI summer internship at Sandia National Laboratories in Livermore, CA. During this internship, I worked on multiple tasks with the common goal of making uncertainty quantification (UQ) methods more accessible to the general scientific community. As part of my work, I created a comprehensive numerical integration example to incorporate into the user manual of a UQ software package. Further, I developed examples involving heat transfer through a window to incorporate into tutorial lectures that serve as an introduction to UQ methods.
Bahşı, Ayşe Kurt; Yalçınbaş, Salih
2016-01-01
In this study, the Fibonacci collocation method based on the Fibonacci polynomials is presented to solve fractional diffusion equations with variable coefficients. The fractional derivatives are described in the Caputo sense. The method is derived by expanding the approximate solution in Fibonacci polynomials; with this expansion, the fractional diffusion equation can be reduced to a set of linear algebraic equations. Also, an error estimation algorithm based on the residual functions is presented for this method. The approximate solutions are improved by using this error estimation algorithm. If the exact solution of the problem is not known, the absolute error function of the problem can be approximately computed by using the Fibonacci polynomial solution. By using this error estimation function, we can find improved solutions which are more efficient than direct numerical solutions. Numerical examples, figures, tables, and comparisons are presented to show the efficiency and usability of the proposed method. PMID:27610294
NASA Astrophysics Data System (ADS)
Steinert, Bastian; Perscheid, Michael; Beck, Martin; Lincke, Jens; Hirschfeld, Robert
Enhancing and maintaining a complex software system requires detailed understanding of the underlying source code. Gaining this understanding by reading source code is difficult. Since software systems are inherently dynamic, it is complex and time consuming to imagine, for example, the effects of a method’s source code at run-time. The inspection of software systems during execution, as encouraged by debugging tools, contributes to source code comprehension. Leveraged by test cases as entry points, we want to make it easy for developers to experience selected execution paths in their code by debugging into examples. We show how links between test cases and application code can be established by means of dynamic analysis while executing regular tests.
NASA Astrophysics Data System (ADS)
Krasnikov, S. D.; Kuznetsov, E. B.
2016-09-01
Numerical continuation of solution through certain singular points of the curve of the set of solutions to a system of nonlinear algebraic or transcendental equations with a parameter is considered. Bifurcation points of codimension two and three are investigated. Algorithms and computer programs are developed that implement the procedure of discrete parametric continuation of the solution and find all branches at simple bifurcation points of codimension two and three. Corresponding theorems are proved, and each algorithm is rigorously justified. A novel algorithm for the estimation of errors of tangential vectors at simple bifurcation points of a finite codimension m is proposed. The operation of the computer programs is demonstrated by test examples, which allows one to estimate their efficiency and confirm the theoretical results.
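The basic continuation step underlying such procedures (march the parameter, reconverge with Newton from the previous solution) can be sketched for a scalar equation. This sketch handles no singular points at all; the paper's branch-switching machinery at bifurcation points is not reproduced, and the names are illustrative.

```python
def continue_solution(f, dfdx, x0, lambdas, tol=1e-12):
    """Natural-parameter continuation for f(x, lambda) = 0: for each
    parameter value, run a Newton corrector started from the solution
    at the previous parameter value, and record the branch."""
    x, branch = x0, []
    for lam in lambdas:
        for _ in range(50):                      # Newton corrector
            step = f(x, lam) / dfdx(x, lam)
            x -= step
            if abs(step) < tol:
                break
        branch.append((lam, x))
    return branch
```

Natural-parameter continuation fails where dfdx vanishes (folds and bifurcation points), which is exactly where the paper's tangent-vector analysis and branch enumeration take over.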
Development and application of unified algorithms for problems in computational science
NASA Technical Reports Server (NTRS)
Shankar, Vijaya; Chakravarthy, Sukumar
1987-01-01
A framework is presented for developing computationally unified numerical algorithms for solving nonlinear equations that arise in modeling various problems in mathematical physics. The concept of computational unification is an attempt to encompass efficient solution procedures for computing various nonlinear phenomena that may occur in a given problem. For example, in Computational Fluid Dynamics (CFD), a unified algorithm will be one that allows for solutions to subsonic (elliptic), transonic (mixed elliptic-hyperbolic), and supersonic (hyperbolic) flows for both steady and unsteady problems. The objectives are: development of superior unified algorithms emphasizing accuracy and efficiency aspects; development of codes based on selected algorithms leading to validation; application of mature codes to realistic problems; and extension/application of CFD-based algorithms to problems in other areas of mathematical physics. The ultimate objective is to achieve integration of multidisciplinary technologies to enhance synergism in the design process through computational simulation. Specific unified algorithms are presented for a hierarchy of gas dynamics equations, together with their applications to two other areas: electromagnetic scattering, and laser-materials interaction accounting for melting.
LC-Grid: a linear global contact search algorithm for finite element analysis
NASA Astrophysics Data System (ADS)
Chen, Hu; Lei, Zhou; Zang, Mengyan
2014-11-01
Contact searching is computationally intensive and its memory requirement is highly demanding; it is therefore important to develop an efficient contact search algorithm with a small memory footprint. In this paper, we propose an efficient global contact search algorithm with linear complexity, in terms of both computational cost and memory requirement, for the finite element analysis of contact problems. This algorithm is named LC-Grid (Lei devised the algorithm and Chen implemented it). The contact space is decomposed: all contact nodes and segments are first mapped onto layers, then onto rows, and lastly onto cells. At each mapping level, the linked-list technique is used for efficient storage and retrieval of contact nodes and segments. Contact detection is performed in each non-empty cell along the non-empty rows of each non-empty layer, moving to the next non-empty layer once a layer is completed. The use of a migration strategy makes the algorithm insensitive to mesh size. The computational cost and memory requirement of the algorithm are investigated and numerically verified to be linearly proportional to the number of contact segments. In addition, the ideal ranges of two significant scale factors, the cell size and the buffer zone, which strongly affect computational efficiency, are determined via an illustrative example.
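The cell-mapping idea behind such grid-based contact search can be sketched in a few lines. This is an illustrative sketch only: it uses a flat dictionary of 3-D cells standing in for the paper's layer/row/cell hierarchy and linked lists, and the one-cell buffer zone and function names are assumptions, not the LC-Grid implementation.

```python
from collections import defaultdict

def build_cell_grid(points, cell_size):
    """Map each point to an integer cell key; a dict-of-lists stands in
    for the linked-list storage described in the abstract."""
    grid = defaultdict(list)
    for idx, (x, y, z) in enumerate(points):
        key = (int(x // cell_size), int(y // cell_size), int(z // cell_size))
        grid[key].append(idx)
    return grid

def candidate_pairs(points, cell_size):
    """Collect point pairs that share a cell or touch a neighbouring cell
    (a one-cell 'buffer zone'). For roughly uniform meshes the cost is
    linear in the number of points, since each point visits O(1) cells."""
    grid = build_cell_grid(points, cell_size)
    pairs = set()
    for (cx, cy, cz), members in grid.items():
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dz in (-1, 0, 1):
                    for j in grid.get((cx + dx, cy + dy, cz + dz), []):
                        for i in members:
                            if i < j:
                                pairs.add((i, j))
    return pairs
```

Only the nearby candidates returned here would then go through exact geometric contact detection.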
Numerical Propulsion System Simulation
NASA Technical Reports Server (NTRS)
Naiman, Cynthia
2006-01-01
The NASA Glenn Research Center, in partnership with the aerospace industry, other government agencies, and academia, is leading the effort to develop an advanced multidisciplinary analysis environment for aerospace propulsion systems called the Numerical Propulsion System Simulation (NPSS). NPSS is a framework for performing analysis of complex systems. The initial development of NPSS focused on the analysis and design of airbreathing aircraft engines, but the resulting NPSS framework may be applied to any system, for example: aerospace, rockets, hypersonics, power and propulsion, fuel cells, ground based power, and even human system modeling. NPSS provides increased flexibility for the user, which reduces the total development time and cost. It is currently being extended to support the NASA Aeronautics Research Mission Directorate Fundamental Aeronautics Program and the Advanced Virtual Engine Test Cell (AVETeC). NPSS focuses on the integration of multiple disciplines such as aerodynamics, structure, and heat transfer with numerical zooming on component codes. Zooming is the coupling of analyses at various levels of detail. NPSS development includes capabilities to facilitate collaborative engineering. The NPSS will provide improved tools to develop custom components and to use capability for zooming to higher fidelity codes, coupling to multidiscipline codes, transmitting secure data, and distributing simulations across different platforms. These powerful capabilities extend NPSS from a zero-dimensional simulation tool to a multi-fidelity, multidiscipline system-level simulation tool for the full development life cycle.
Asynchronous Event-Driven Particle Algorithms
Donev, A
2007-02-28
We present in a unifying way the main components of three examples of asynchronous event-driven algorithms for simulating physical systems of interacting particles. The first example, hard-particle molecular dynamics (MD), is well-known. We also present a recently-developed diffusion kinetic Monte Carlo (DKMC) algorithm, as well as a novel event-driven algorithm for Direct Simulation Monte Carlo (DSMC). Finally, we describe how to combine MD with DSMC in an event-driven framework, and discuss some promises and challenges for event-driven simulation of realistic physical systems.
NASA Astrophysics Data System (ADS)
Henderson, Michael
1997-08-01
The Numerical Analysis Objects project (NAO) is a project in the Mathematics Department of IBM's TJ Watson Research Center. While there are plenty of numerical tools available today, it is not an easy task to combine them into a custom application. NAO is directed at the dual problems of building applications from a set of tools, and creating those tools. There are several "reuse" projects, which focus on the problems of identifying and cataloging tools. NAO is directed at the specific context of scientific computing. Because the type of tools is restricted, problems such as tools with incompatible data structures for input and output, and dissimilar interfaces to tools which solve similar problems can be addressed. The approach we've taken is to define interfaces to those objects used in numerical analysis, such as geometries, functions and operators, and to start collecting (and building) a set of tools which use these interfaces. We have written a class library (a set of abstract classes and implementations) in C++ which demonstrates the approach. Besides the classes, the class library includes "stub" routines which allow the library to be used from C or Fortran, and an interface to a Visual Programming Language. The library has been used to build a simulator for petroleum reservoirs, using a set of tools for discretizing nonlinear differential equations that we have written, and includes "wrapped" versions of packages from the Netlib repository. Documentation can be found on the Web at "http://www.research.ibm.com/nao". I will describe the objects and their interfaces, and give examples ranging from mesh generation to solving differential equations.
Large scale tracking algorithms.
Hansen, Ross L.; Love, Joshua Alan; Melgaard, David Kennett; Karelitz, David B.; Pitts, Todd Alan; Zollweg, Joshua David; Anderson, Dylan Z.; Nandy, Prabal; Whitlow, Gary L.; Bender, Daniel A.; Byrne, Raymond Harry
2015-01-01
Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.
Reliable numerical computation in an optimal output-feedback design
NASA Technical Reports Server (NTRS)
Vansteenwyk, Brett; Ly, Uy-Loi
1991-01-01
A reliable algorithm is presented for the evaluation of a quadratic performance index and its gradients with respect to the controller design parameters. The algorithm is a part of a design algorithm for optimal linear dynamic output-feedback controller that minimizes a finite-time quadratic performance index. The numerical scheme is particularly robust when it is applied to the control-law synthesis for systems with densely packed modes and where there is a high likelihood of encountering degeneracies in the closed-loop eigensystem. This approach through the use of an accurate Pade series approximation does not require the closed-loop system matrix to be diagonalizable. The algorithm was included in a control design package for optimal robust low-order controllers. Usefulness of the proposed numerical algorithm was demonstrated using numerous practical design cases where degeneracies occur frequently in the closed-loop system under an arbitrary controller design initialization and during the numerical search.
Numerical taxonomy on data: Experimental results
Cohen, J.; Farach, M.
1997-12-01
The numerical taxonomy problems associated with most of the optimization criteria described above are NP-hard [3, 5, 1, 4]. In earlier work, the first positive result for numerical taxonomy was presented: if e is the distance to the closest tree metric under the L∞ norm, i.e., e = min_T L∞(T − D), then it is possible to construct a tree T such that L∞(T − D) ≤ 3e; that is, a 3-approximation algorithm for this problem. We will refer to this algorithm as the Single Pivot (SP) heuristic.
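The L∞ fitting objective above is easy to state in code. The sketch below computes the L∞ error between a dissimilarity matrix and a candidate tree metric, together with the subdominant ultrametric (the largest ultrametric lying below D, obtained from minimax path weights), which is a classical baseline for such fits; the function names are illustrative and this is not the SP heuristic itself.

```python
def linf_error(D, T):
    """L-infinity distance between an input dissimilarity matrix D and a
    candidate (tree or ultra-) metric T, both square matrices."""
    n = len(D)
    return max(abs(D[i][j] - T[i][j]) for i in range(n) for j in range(n))

def subdominant_ultrametric(D):
    """Largest ultrametric below D: U[i][j] is the minimax path weight
    between i and j, computed by a Floyd-Warshall-style recursion."""
    n = len(D)
    U = [row[:] for row in D]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                U[i][j] = min(U[i][j], max(U[i][k], U[k][j]))
    return U
```

For example, with D = [[0,2,4],[2,0,6],[4,6,0]] the long edge D[1][2] = 6 is reduced to the minimax value 4, and the resulting L∞ error is 2.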
Magnetic Resonance Image Example Based Contrast Synthesis
Roy, Snehashis; Carass, Aaron; Prince, Jerry L.
2013-01-01
The performance of image analysis algorithms applied to magnetic resonance images is strongly influenced by the pulse sequences used to acquire the images. Algorithms are typically optimized for a targeted tissue contrast obtained from a particular implementation of a pulse sequence on a specific scanner. There are many practical situations, including multi-institution trials, rapid emergency scans, and scientific use of historical data, where the images are not acquired according to an optimal protocol or the desired tissue contrast is entirely missing. This paper introduces an image restoration technique that recovers images with both the desired tissue contrast and a normalized intensity profile. This is done using patches in the acquired images and an atlas containing patches of the acquired and desired tissue contrasts. The method is an example-based approach relying on sparse reconstruction from image patches. Its performance is demonstrated using several examples, including image intensity normalization, missing tissue contrast recovery, automatic segmentation, and multimodal registration. These examples demonstrate potential practical uses and also illustrate limitations of our approach. PMID:24058022
Propagation of numerical noise in particle-in-cell tracking
NASA Astrophysics Data System (ADS)
Kesting, Frederik; Franchetti, Giuliano
2015-11-01
Particle-in-cell (PIC) is the most used algorithm to perform self-consistent tracking of intense charged particle beams. It is based on depositing macroparticles on a grid, and subsequently solving on it the Poisson equation. It is well known that PIC algorithms have intrinsic limitations, as they introduce numerical noise. Although not significant for short-term tracking, this becomes important in simulations for circular machines over millions of turns as it may induce artificial diffusion of the beam. In this work, we present a modeling of numerical noise induced by PIC algorithms, and discuss its influence on particle dynamics. The combined effect of particle tracking and noise created by PIC algorithms leads to correlated or decorrelated numerical noise. For decorrelated numerical noise we derive a scaling law for the simulation parameters, allowing an estimate of artificial emittance growth. Lastly, the effect of correlated numerical noise is discussed, and a mitigation strategy is proposed.
Intelligent perturbation algorithms for space scheduling optimization
NASA Technical Reports Server (NTRS)
Kurtzman, Clifford R.
1991-01-01
Intelligent perturbation algorithms for space scheduling optimization are presented in the form of the viewgraphs. The following subject areas are covered: optimization of planning, scheduling, and manifesting; searching a discrete configuration space; heuristic algorithms used for optimization; use of heuristic methods on a sample scheduling problem; intelligent perturbation algorithms are iterative refinement techniques; properties of a good iterative search operator; dispatching examples of intelligent perturbation algorithm and perturbation operator attributes; scheduling implementations using intelligent perturbation algorithms; major advances in scheduling capabilities; the prototype ISF (industrial Space Facility) experiment scheduler; optimized schedule (max revenue); multi-variable optimization; Space Station design reference mission scheduling; ISF-TDRSS command scheduling demonstration; and example task - communications check.
A generalized memory test algorithm
NASA Technical Reports Server (NTRS)
Milner, E. J.
1982-01-01
A general algorithm for testing digital computer memory is presented. The test checks that (1) every bit can be cleared and set in each memory word, and (2) bits are not erroneously cleared and/or set elsewhere in memory at the same time. The algorithm can be applied to any size memory block and any size memory word. It is concise and efficient, requiring very few cycles through memory. For example, a test of 16-bit-word-size memory requires only 384 cycles through memory. Approximately 15 seconds were required to test a 32K block of such memory, using a microcomputer having a cycle time of 133 nanoseconds.
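The two properties checked by the test can be illustrated with a naive walking-one sketch over a Python list standing in for a memory block. Note this is not the report's algorithm, which achieves the same coverage in far fewer cycles; the sketch simply makes the two checked properties concrete, at O(n²) cost.

```python
def memory_test(mem, word_bits):
    """Naive walking-one test over 'mem' (a list standing in for a
    memory block). For every bit of every word: set the bit, verify it
    reads back (property 1), verify no other word changed (property 2),
    then clear it and verify the clear took effect."""
    n = len(mem)
    for w in range(n):
        mem[w] = 0
    for bit in range(word_bits):
        pattern = 1 << bit
        for w in range(n):
            mem[w] = pattern
            if mem[w] != pattern:
                return False          # bit cannot be set in this word
            for other in range(n):
                if other != w and mem[other] != 0:
                    return False      # write disturbed another word
            mem[w] = 0
            if mem[w] != 0:
                return False          # bit cannot be cleared
    return True
```

On an ordinary list the test of course passes; a stuck or coupled bit in real hardware would trip one of the three checks.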
Dikin-type algorithms for dextrous grasping force optimization
Buss, M.; Faybusovich, L.; Moore, J.B.
1998-08-01
One of the central issues in dextrous robotic hand grasping is to balance external forces acting on the object and at the same time achieve grasp stability and minimum grasping effort. A companion paper shows that the nonlinear friction-force limit constraints on grasping forces are equivalent to the positive definiteness of a certain matrix subject to linear constraints. Further, compensation of the external object force is also a linear constraint on this matrix. Consequently, the task of grasping force optimization can be formulated as a problem with semidefinite constraints. In this paper, two versions of strictly convex cost functions, one of them self-concordant, are considered. These are twice-continuously differentiable functions that tend to infinity at the boundary of positive definiteness. For the general class of such cost functions, Dikin-type algorithms are presented. It is shown that the proposed algorithms guarantee convergence to the unique solution of the semidefinite programming problem associated with dextrous grasping force optimization. Numerical examples demonstrate the simplicity of implementation, the good numerical properties, and the optimality of the approach.
Compression algorithm for multideterminant wave functions.
Weerasinghe, Gihan L; Ríos, Pablo López; Needs, Richard J
2014-02-01
A compression algorithm is introduced for multideterminant wave functions which can greatly reduce the number of determinants that need to be evaluated in quantum Monte Carlo calculations. We have devised an algorithm with three levels of compression, the least costly of which yields excellent results in polynomial time. We demonstrate the usefulness of the compression algorithm for evaluating multideterminant wave functions in quantum Monte Carlo calculations, whose computational cost is reduced by factors of between about 2 and over 25 for the examples studied. We have found evidence of sublinear scaling of quantum Monte Carlo calculations with the number of determinants when the compression algorithm is used.
Algorithm to search for genomic rearrangements
NASA Astrophysics Data System (ADS)
Nałecz-Charkiewicz, Katarzyna; Nowak, Robert
2013-10-01
The aim of this article is to discuss the issue of comparing nucleotide sequences in order to detect chromosomal rearrangements (for example, in the study of the genomes of two cucumber varieties, Polish and Chinese). Two basic algorithms for detecting rearrangements are described: the Smith-Waterman algorithm, and a new method of searching for genetic markers in combination with the Knuth-Morris-Pratt algorithm. A computer program in client-server architecture was developed. The properties of the algorithms were examined on the Escherichia coli and Arabidopsis thaliana genomes, and are being prepared for the comparison of the two cucumber varieties, Polish and Chinese. The results are promising and further work is planned.
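The Knuth-Morris-Pratt algorithm mentioned above finds all occurrences of a marker in a sequence in linear time by precomputing a failure table, so the text pointer never backs up. A minimal self-contained version:

```python
def kmp_search(text, pattern):
    """Knuth-Morris-Pratt: return the start index of every occurrence of
    pattern in text, in O(len(text) + len(pattern)) time."""
    if not pattern:
        return []
    # failure[i] = length of the longest proper prefix of pattern[:i+1]
    # that is also a suffix of it
    failure = [0] * len(pattern)
    k = 0
    for i in range(1, len(pattern)):
        while k and pattern[i] != pattern[k]:
            k = failure[k - 1]
        if pattern[i] == pattern[k]:
            k += 1
        failure[i] = k
    hits, k = [], 0
    for i, c in enumerate(text):
        while k and c != pattern[k]:
            k = failure[k - 1]
        if c == pattern[k]:
            k += 1
        if k == len(pattern):
            hits.append(i - k + 1)    # full match ends here
            k = failure[k - 1]        # allow overlapping matches
    return hits
```

For genomic markers the linear running time matters: the text (a chromosome) can be tens of megabases long.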
Thermostat algorithm for generating target ensembles.
Bravetti, A; Tapias, D
2016-02-01
We present a deterministic algorithm called contact density dynamics that generates any prescribed target distribution in the physical phase space. Akin to the famous model of Nosé and Hoover, our algorithm is based on a non-Hamiltonian system in an extended phase space. However, the equations of motion in our case follow from contact geometry and we show that in general they have a similar form to those of the so-called density dynamics algorithm. As a prototypical example, we apply our algorithm to produce a Gibbs canonical distribution for a one-dimensional harmonic oscillator. PMID:26986320
NASA Astrophysics Data System (ADS)
Radev, Dimitar; Lokshina, Izabella
2010-11-01
The paper examines self-similar (or fractal) properties of real communication network traffic data over a wide range of time scales. These self-similar properties are very different from the properties of traditional models based on Poisson and Markov-modulated Poisson processes. Advanced fractal models of sequential generators and fixed-length sequence generators, together with efficient algorithms used to simulate the self-similar behavior of IP network traffic data, are developed and applied. Numerical examples are provided, and simulation results are obtained and analyzed.
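One standard way to see why Poisson models fail here is to generate approximately self-similar traffic by aggregating on/off sources with heavy-tailed (Pareto) period lengths, loosely following the classical on/off construction. This sketch is not one of the paper's generators; the parameters and function names are illustrative.

```python
import random

def pareto(alpha, xm=1.0, rng=random):
    """Heavy-tailed Pareto sample (>= xm); alpha in (1, 2) gives infinite
    variance, the ingredient that produces long-range dependence."""
    u = 1.0 - rng.random()            # in (0, 1], avoids division by zero
    return xm / u ** (1.0 / alpha)

def onoff_traffic(n_sources, n_slots, alpha=1.4, seed=0):
    """Aggregate many on/off sources whose on and off period lengths are
    Pareto-distributed; the superposition is approximately self-similar,
    unlike Poisson traffic which smooths out under aggregation."""
    rng = random.Random(seed)
    load = [0] * n_slots
    for _ in range(n_sources):
        t, on = 0, rng.random() < 0.5
        while t < n_slots:
            length = int(pareto(alpha, rng=rng)) + 1
            if on:
                for s in range(t, min(t + length, n_slots)):
                    load[s] += 1
            t += length
            on = not on
    return load
```

Plotting such a series at several aggregation levels shows burstiness persisting across time scales, which is the self-similar signature the abstract refers to.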
A parallel algorithm for thermo-hydro-mechanical analysis of deforming porous media
NASA Astrophysics Data System (ADS)
Wang, X.; Gawin, D.; Schrefler, B. A.
1996-11-01
In this paper, a parallel Newton-Raphson algorithm with domain decomposition is developed to solve fully coupled heat, water and gas flow in deformable porous media. The model makes use of the modified effective stress concept together with the capillary pressure relationship. Phase change and latent heat transfer are also taken into account. The chosen macroscopic field variables are displacement, capillary pressure, gas pressure and temperature. The parallel program is developed on a cluster of workstations. The PVM (Parallel Virtual Machine) system is used to handle communications among networked workstations. An implementation of this parallel method on workstations is discussed, the speedup and efficiency of this method being demonstrated by numerical examples.
An efficient parallel algorithm for three-dimensional analysis of subsidence above gas reservoirs
NASA Astrophysics Data System (ADS)
Schrefler, B. A.; Wang, X.; Salomoni, V. A.; Zuccolo, G.
1999-09-01
In this paper an efficient parallel algorithm to solve a three-dimensional problem of subsidence above exploited gas reservoirs is presented. The parallel program is developed on a cluster of workstations. The parallel virtual machine (PVM) system is used to handle communications among networked workstations. The method has advantages such as numbering of the finite element mesh in an arbitrary manner, simple programming organization, smaller core requirements and shorter computation times. An implementation of this parallel method on workstations is discussed, the speed-up and efficiency of the method being demonstrated by a numerical example.
Flux-split algorithms for flows with non-equilibrium chemistry and vibrational relaxation
NASA Technical Reports Server (NTRS)
Grossman, B.; Cinnella, P.
1990-01-01
The present consideration of numerical computation methods for gas flows with nonequilibrium chemistry thermodynamics gives attention to an equilibrium model, a general nonequilibrium model, and a simplified model based on vibrational relaxation. Flux-splitting procedures are developed for the fully-coupled inviscid equations encompassing fluid dynamics and both chemical and internal energy-relaxation processes. A fully coupled and implicit large-block structure is presented which embodies novel forms of flux-vector split and flux-difference split algorithms valid for nonequilibrium flow; illustrative high-temperature shock tube and nozzle flow examples are given.
In Praise of Numerical Computation
NASA Astrophysics Data System (ADS)
Yap, Chee K.
Theoretical Computer Science has developed an almost exclusively discrete/algebraic persona. We have effectively shut ourselves off from half of the world of computing: a host of problems in Computational Science & Engineering (CS&E) are defined on the continuum, and, for them, the discrete viewpoint is inadequate. The computational techniques in such problems are well-known to numerical analysis and applied mathematics, but are rarely discussed in theoretical algorithms: iteration, subdivision and approximation. By various case studies, I will indicate how our discrete/algebraic view of computing has many shortcomings in CS&E. We want to embrace the continuous/analytic view, but in a new synthesis with the discrete/algebraic view. I will suggest a pathway, by way of an exact numerical model of computation, that allows us to incorporate iteration and approximation into our algorithms’ design. Some recent results give a peek into what this view of algorithmic development might look like, and its distinctive form suggests the name “numerical computational geometry” for such activities.
Reactive Collision Avoidance Algorithm
NASA Technical Reports Server (NTRS)
Scharf, Daniel; Acikmese, Behcet; Ploen, Scott; Hadaegh, Fred
2010-01-01
The reactive collision avoidance (RCA) algorithm allows a spacecraft to find a fuel-optimal trajectory for avoiding an arbitrary number of colliding spacecraft in real time while accounting for acceleration limits. In addition to spacecraft, the technology can be used for vehicles that can accelerate in any direction, such as helicopters and submersibles. In contrast to existing, passive algorithms that simultaneously design trajectories for a cluster of vehicles working to achieve a common goal, RCA is implemented onboard spacecraft only when an imminent collision is detected, and then plans a collision avoidance maneuver for only that host vehicle, thus preventing a collision in an off-nominal situation for which passive algorithms cannot. An example scenario for such a situation might be when a spacecraft in the cluster is approaching another one, but enters safe mode and begins to drift. Functionally, the RCA detects colliding spacecraft, plans an evasion trajectory by solving the Evasion Trajectory Problem (ETP), and then recovers after the collision is avoided. A direct optimization approach was used to develop the algorithm so it can run in real time. In this innovation, a parameterized class of avoidance trajectories is specified, and then the optimal trajectory is found by searching over the parameters. The class of trajectories is selected as bang-off-bang as motivated by optimal control theory. That is, an avoiding spacecraft first applies full acceleration in a constant direction, then coasts, and finally applies full acceleration to stop. The parameter optimization problem can be solved offline and stored as a look-up table of values. Using a look-up table allows the algorithm to run in real time. Given a colliding spacecraft, the properties of the collision geometry serve as indices of the look-up table that gives the optimal trajectory. For multiple colliding spacecraft, the set of trajectories that avoid all spacecraft is rapidly searched on
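The bang-off-bang parameterization described above can be illustrated with a one-dimensional toy: accelerate at full thrust for t1, coast for t2, then brake at full thrust, and grid-search (t1, t2) for the cheapest evasion achieving a required miss distance at the predicted collision time. This is a hypothetical stand-in for the RCA look-up table, not the flight algorithm; all names and parameters are illustrative.

```python
def bang_off_bang(a, t1, t2, t, x0=0.0, v0=0.0):
    """Position at time t of a vehicle that applies acceleration +a for
    t1 seconds, coasts for t2 seconds, then brakes at -a until it stops."""
    if t <= t1:
        return x0 + v0 * t + 0.5 * a * t * t
    x1 = x0 + v0 * t1 + 0.5 * a * t1 * t1
    v1 = v0 + a * t1
    if t <= t1 + t2:
        return x1 + v1 * (t - t1)
    x2 = x1 + v1 * t2
    dt = min(t - t1 - t2, v1 / a)     # brake only until the vehicle stops
    return x2 + v1 * dt - 0.5 * a * dt * dt

def cheapest_evasion(a_max, obstacle_x, t_c, miss, grid=20):
    """Grid search over (t1, t2): minimize burn time t1 subject to being
    at least 'miss' away from the obstacle at collision time t_c.
    A crude stand-in for the precomputed look-up table of the abstract."""
    best = None
    for i in range(1, grid + 1):
        t1 = t_c * i / grid
        for j in range(grid + 1):
            t2 = t_c * j / grid
            x = bang_off_bang(a_max, t1, t2, t_c)
            if abs(x - obstacle_x) >= miss and (best is None or t1 < best[0]):
                best = (t1, t2)
    return best
```

In the real algorithm this search is done offline over collision-geometry parameters, so the onboard work reduces to a table look-up.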
Numerical vorticity creation based on impulse conservation.
Summers, D M; Chorin, A J
1996-01-01
The problem of creating solenoidal vortex elements to satisfy no-slip boundary conditions in Lagrangian numerical vortex methods is solved through the use of impulse elements at walls and their subsequent conversion to vortex loops. The algorithm is not uniquely defined, due to the gauge freedom in the definition of impulse; the numerically optimal choice of gauge remains to be determined. Two different choices are discussed, and an application to flow past a sphere is sketched. PMID:11607636
An algorithm for the automatic synchronization of Omega receivers
NASA Technical Reports Server (NTRS)
Stonestreet, W. M.; Marzetta, T. L.
1977-01-01
The Omega navigation system and the requirement for receiver synchronization are discussed. A description of the synchronization algorithm is provided. The numerical simulation and its associated assumptions were examined and results of the simulation are presented. The suggested form of the synchronization algorithm and the suggested receiver design values were surveyed. A Fortran implementation of the synchronization algorithm used in the simulation was also included.
Nonlinear dynamics and numerical uncertainties in CFD
NASA Technical Reports Server (NTRS)
Yee, H. C.; Sweby, P. K.
1996-01-01
The application of nonlinear dynamics to improve the understanding of numerical uncertainties in computational fluid dynamics (CFD) is reviewed. Elementary examples in the use of dynamics to explain the nonlinear phenomena and spurious behavior that occur in numerics are given. The role of dynamics in the understanding of long time behavior of numerical integrations and the nonlinear stability, convergence, and reliability of using time-marching approaches for obtaining steady-state numerical solutions in CFD is explained. The study is complemented with spurious behavior observed in CFD computations.
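A classic elementary example of the kind of spurious behavior such reviews discuss: forward Euler applied to the scalar ODE u' = u(1 - u). For small steps the iteration settles on the true steady state u = 1, but for larger steps the discrete map (which is conjugate to the logistic map) develops spurious period-2 numerical attractors that are artifacts of the time-marching scheme, not of the ODE. The function name and step values below are illustrative.

```python
def euler_orbit(dt, u0, n=1000, burn=900):
    """Iterate forward Euler u <- u + dt*u*(1-u) and return the distinct
    rounded values visited after a burn-in: one value means convergence
    to the true steady state, two means a spurious period-2 cycle."""
    u = u0
    orbit = []
    for k in range(n):
        u = u + dt * u * (1.0 - u)
        if k >= burn:
            orbit.append(round(u, 4))
    return sorted(set(orbit))
```

With dt = 1.0 the orbit collapses onto the exact steady state 1.0; with dt = 2.2 the same scheme produces a two-point cycle even though the ODE has no oscillatory behavior at all.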
NASA Astrophysics Data System (ADS)
Gaševic, Dragan; Djuric, Dragan; Devedžic, Vladan
In the previous chapters we introduced the basic concepts of MOF-based languages for developing ontologies, such as the Ontology Definition Metamodel (ODM) and the Ontology UML Profile (OUP). We also discussed mappings between those languages and the OWL language. The purpose of this chapter is to illustrate the use of MOF-based languages for developing real-world ontologies. Here we discuss two different ontologies that we developed in different domains. The first example is a Petri net ontology that formalizes the representation of Petri nets, a well-known tool for modeling, simulation, and analysis of systems and processes. This Petri net ontology overcomes the syntactic constraints of the present XML-based standard for sharing Petri net models, namely the Petri Net Markup Language.
NASA Technical Reports Server (NTRS)
Reed, K. W.; Atluri, S. N.
1983-01-01
A new hybrid-stress finite element algorithm, suitable for analyses of large, quasistatic, inelastic deformations, is presented. The algorithm is based upon a generalization of de Veubeke's complementary energy principle. The principal variables in the formulation are the nominal stress rate and spin, and the resulting finite element equations are discrete versions of the equations of compatibility and angular momentum balance. The algorithm produces true rates (time derivatives), as opposed to 'increments'. The result is a complete separation of the boundary value problem (for stress rate and velocity) and the initial value problem (for total stress and deformation); hence, their numerical treatments are essentially independent. After a fairly comprehensive discussion of the numerical treatment of the boundary value problem, we launch into a detailed examination of the numerical treatment of the initial value problem, covering the topics of efficiency, stability and objectivity. The paper closes with a set of examples, finite homogeneous deformation problems, which serve to bring out important aspects of the algorithm.
NASA Technical Reports Server (NTRS)
Knight, Norman F., Jr.; Rankin, Charles C.
2006-01-01
This document summarizes the STructural Analysis of General Shells (STAGS) development effort, STAGS performance for selected demonstration problems, and STAGS application problems illustrating selected advanced features available in the STAGS Version 5.0. Each problem is discussed including selected background information and reference solutions when available. The modeling and solution approach for each problem is described and illustrated. Numerical results are presented and compared with reference solutions, test data, and/or results obtained from mesh refinement studies. These solutions provide an indication of the overall capabilities of the STAGS nonlinear finite element analysis tool and provide users with representative cases, including input files, to explore these capabilities that may then be tailored to other applications.
Estimation of region of attraction for polynomial nonlinear systems: a numerical method.
Khodadadi, Larissa; Samadi, Behzad; Khaloozadeh, Hamid
2014-01-01
This paper introduces a numerical method to estimate the region of attraction for polynomial nonlinear systems using sum of squares programming. This method computes a local Lyapunov function and an invariant set around a locally asymptotically stable equilibrium point. The invariant set is an estimation of the region of attraction for the equilibrium point. In order to enlarge the estimation, a subset of the invariant set defined by a shape factor is enlarged by solving a sum of squares optimization problem. In this paper, a new algorithm is proposed to select the shape factor based on the linearized dynamic model of the system. The shape factor is updated in each iteration using the computed local Lyapunov function from the previous iteration. The efficiency of the proposed method is shown by a few numerical examples.
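The invariant-sublevel-set idea can be illustrated without the sum-of-squares machinery by a much cruder Monte Carlo stand-in: take a candidate Lyapunov function V, sample the state space, and shrink the level c until no sampled point inside {V < c} has V̇ ≥ 0. The system, candidate V, and sampling scheme below are illustrative assumptions, not the paper's method.

```python
import random

def f(x1, x2):
    # Polynomial vector field x' = -x + x^3 (componentwise); the true
    # region of attraction of the origin is the open square (-1, 1)^2.
    return (-x1 + x1 ** 3, -x2 + x2 ** 3)

def roa_level(n_samples=20000, seed=1):
    """Estimate the largest c with {V < c} invariant for the candidate
    Lyapunov function V = x1^2 + x2^2: whenever a sampled point inside
    the current sublevel set has Vdot >= 0, cut c down below it."""
    rng = random.Random(seed)
    c = 4.0                           # start with a generous sublevel set
    for _ in range(n_samples):
        x1 = rng.uniform(-2.0, 2.0)
        x2 = rng.uniform(-2.0, 2.0)
        v = x1 * x1 + x2 * x2
        if 1e-9 < v < c:
            f1, f2 = f(x1, x2)
            if 2 * x1 * f1 + 2 * x2 * f2 >= 0:   # Vdot along trajectories
                c = v
    return c
```

For this system the exact answer is c = 1 (the sublevel set first touches the boundary of {V̇ < 0} at (±1, 0)); the sampled estimate approaches it from above. The SOS approach of the paper replaces the sampling with a certificate that holds everywhere, which is what makes the estimate an actual invariant set.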
Numerical solution of stochastic differential equations with Poisson and Lévy white noise
NASA Astrophysics Data System (ADS)
Grigoriu, M.
2009-08-01
A fixed time step method is developed for integrating stochastic differential equations (SDE’s) with Poisson white noise (PWN) and Lévy white noise (LWN). The method for integrating SDE’s with PWN has the same structure as that proposed by Kim [Phys. Rev. E 76, 011109 (2007)], but is established by using different arguments. The integration of SDE’s with LWN is based on a representation of Lévy processes by sums of scaled Brownian motions and compound Poisson processes. It is shown that the numerical solutions of SDE’s with PWN and LWN converge weakly to the exact solutions of these equations, so that they can be used to estimate not only marginal properties but also distributions of functionals of the exact solutions. Numerical examples are used to demonstrate the applications and the accuracy of the proposed integration algorithms.
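The fixed-step structure for the Poisson white noise case can be sketched generically: per step, advance the drift, then add a Poisson-distributed number of independent jump marks. This is a hedged illustration of the general idea only, not the scheme of the paper (or of Kim's); the Gaussian marks and all parameters are assumptions.

```python
import math
import random

def sde_pwn_path(drift, jump_rate, jump_scale, x0, dt, n_steps, seed=0):
    """Fixed-time-step integration of dX = drift(X) dt + dC(t), where C
    is a compound Poisson process (Poisson white noise): in each step,
    draw the number of arrivals and add their i.i.d. Gaussian marks."""
    rng = random.Random(seed)
    x = x0
    path = [x0]
    L = math.exp(-jump_rate * dt)
    for _ in range(n_steps):
        x += drift(x) * dt
        # Knuth's method: k ~ Poisson(jump_rate * dt)
        k, p = 0, rng.random()
        while p > L:
            k += 1
            p *= rng.random()
        for _ in range(k):
            x += rng.gauss(0.0, jump_scale)
        path.append(x)
    return path
```

For the Lévy white noise case, the abstract's representation suggests superposing such compound Poisson increments with scaled Brownian increments in the same loop.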
Adaptive Routing Algorithm in Wireless Communication Networks Using Evolutionary Algorithm
NASA Astrophysics Data System (ADS)
Yan, Xuesong; Wu, Qinghua; Cai, Zhihua
At present, mobile communications traffic routing designs are complicated because more and more systems are inter-connecting with one another. For example, mobile communication in wireless networks has two routing design conditions to consider, i.e. circuit switching and packet switching. The difficulty in packet-switching routing design lies in its use of high-speed transmission links and its dynamic routing nature. In this paper, an evolutionary algorithm is used to determine the best solution and the shortest communication paths. We developed a genetic optimization process that helps network planners find the best solutions, or the best paths for the routing table in wireless communication networks, easily and quickly. The experimental results show that the evolutionary algorithm not only obtains good solutions, but also has a more predictable running time compared to a sequential genetic algorithm.
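As a rough illustration of evolutionary routing, the sketch below evolves next-hop tables on a tiny directed graph with link costs. The encoding, the genetic operators, and the graph itself are assumptions made for the example, not the authors' design.

```python
import random

random.seed(7)

# Toy directed graph: node -> list of (next_node, link_cost). A chromosome
# is a next-hop choice per node; a route is decoded by following the choices.
graph = {0: [(1, 4), (2, 2)], 1: [(3, 5)], 2: [(3, 1), (4, 7)],
         3: [(5, 3)], 4: [(5, 1)]}
src, dst = 0, 5

def decode_cost(genes):
    node, cost = src, 0
    for _ in range(len(graph) + 1):        # bounded walk; loops penalised
        nxt, w = graph[node][genes[node]]
        cost += w
        if nxt == dst:
            return cost
        node = nxt
    return 10**6                           # never reached the destination

def random_genes():
    return {n: random.randrange(len(nb)) for n, nb in graph.items()}

pop = [random_genes() for _ in range(30)]
for _ in range(50):
    pop.sort(key=decode_cost)
    elite = pop[:10]                       # truncation selection + elitism
    children = []
    for _ in range(20):
        a, b = random.sample(elite, 2)
        child = {n: random.choice([a[n], b[n]]) for n in graph}  # crossover
        if random.random() < 0.2:                                # mutation
            n = random.choice(list(graph))
            child[n] = random.randrange(len(graph[n]))
        children.append(child)
    pop = elite + children

best = min(pop, key=decode_cost)           # shortest route 0->2->3->5, cost 6
```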
Multiresolution strategies for the numerical solution of optimal control problems
NASA Astrophysics Data System (ADS)
Jain, Sachin
There exist many numerical techniques for solving optimal control problems, but less work has been done on making these algorithms run faster and more robustly. The main motivation of this work is to solve optimal control problems accurately in a fast and efficient way. Optimal control problems are often characterized by discontinuities or switchings in the control variables. One way of accurately capturing the irregularities in the solution is to use a high resolution (dense) uniform grid. This requires a large amount of computational resources both in terms of CPU time and memory. Hence, in order to accurately capture any irregularities in the solution with limited computational resources, one can refine the mesh locally in the region close to an irregularity instead of refining the mesh uniformly over the whole domain. Therefore, a novel multiresolution scheme for data compression has been designed which is shown to outperform similar data compression schemes. Specifically, we have shown that the proposed approach results in fewer grid points compared to a common multiresolution data compression scheme. The validity of the proposed mesh refinement algorithm has been verified by solving several challenging initial-boundary value problems for evolution equations in 1D. The examples have demonstrated the stability and robustness of the proposed algorithm. The algorithm adapted dynamically to any existing or emerging irregularities in the solution by automatically allocating more grid points to the region where the solution exhibited sharp features and fewer points to the region where the solution was smooth. Thereby, the computational time and memory usage have been reduced significantly, while maintaining an accuracy equivalent to the one obtained using a fine uniform mesh. Next, a direct multiresolution-based approach for solving trajectory optimization problems is developed. The original optimal control problem is transcribed into a
NASA Technical Reports Server (NTRS)
Korkin, S.; Lyapustin, A.
2012-01-01
The Levenberg-Marquardt algorithm [1, 2] provides a numerical iterative solution to the problem of minimization of a function over a space of its parameters. In our work, the Levenberg-Marquardt algorithm retrieves optical parameters of a thin (single scattering) plane parallel atmosphere irradiated by a collimated, infinitely wide monochromatic beam of light. A black ground surface is assumed. Computational accuracy, sensitivity to the initial guess and the presence of noise in the signal, and other properties of the algorithm are investigated in scalar (using intensity only) and vector (including polarization) modes. We consider an atmosphere that contains a mixture of coarse and fine fractions. Following [3], the fractions are simulated using the Henyey-Greenstein model. Though not realistic, this assumption is very convenient for tests [4, p.354]. In our case it yields an analytical evaluation of the Jacobian matrix. Assuming the MISR geometry of observation [5] as an example, the average scattering cosines and the ratio of coarse and fine fractions, the atmosphere optical depth, and the single scattering albedo are the five parameters to be determined numerically. In our implementation of the algorithm, the system of five linear equations is solved using the fast Cramer's rule [6]. A simple subroutine developed by the authors makes the algorithm independent of external libraries. All Fortran 90/95 codes discussed in the presentation will be available immediately after the meeting from sergey.v.korkin@nasa.gov by request.
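The core Levenberg-Marquardt iteration can be sketched on a toy two-parameter retrieval: an exponential model with an analytical Jacobian standing in for the five-parameter atmospheric problem, and `np.linalg.solve` standing in for the fast Cramer's rule of the abstract. Model, data, and starting guess are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical two-parameter model y = p0 * exp(-p1 * t)
def model(p, t):
    return p[0] * np.exp(-p[1] * t)

def jacobian(p, t):            # analytical Jacobian d(model)/dp
    e = np.exp(-p[1] * t)
    return np.column_stack([e, -p[0] * t * e])

t = np.linspace(0.0, 2.0, 50)
p_true = np.array([2.0, 1.5])
y = model(p_true, t) + rng.normal(0.0, 0.01, t.size)   # noisy synthetic data

p, mu = np.array([1.0, 0.5]), 1e-3                     # initial guess, damping
for _ in range(50):
    r = y - model(p, t)
    J = jacobian(p, t)
    # Levenberg-Marquardt step: (J'J + mu*I) delta = J'r
    delta = np.linalg.solve(J.T @ J + mu * np.eye(2), J.T @ r)
    if np.sum((y - model(p + delta, t))**2) < np.sum(r**2):
        p, mu = p + delta, mu * 0.5    # accept: move toward Gauss-Newton
    else:
        mu *= 2.0                      # reject: behave more like gradient descent
```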
Simulation of multicorrelated random processes using the FFT algorithm
NASA Technical Reports Server (NTRS)
Wittig, L. E.; Sinha, A. K.
1975-01-01
A technique for the digital simulation of multicorrelated Gaussian random processes is described. This technique is based upon generating discrete frequency functions which correspond to the Fourier transform of the desired random processes, and then using the fast Fourier transform (FFT) algorithm to obtain the actual random processes. The main advantage of this method of simulation over other methods is computation time; it appears to be more than an order of magnitude faster than present methods of simulation. One of the main uses of multicorrelated simulated random processes is in solving nonlinear random vibration problems by numerical integration of the governing differential equations. The response of a nonlinear string to a distributed noise input is presented as an example.
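A compact version of the spectral simulation idea: populate Fourier coefficients from a target one-sided PSD with random phases, then invert with a single FFT. The flat band-limited spectrum is an illustrative assumption; with this scaling, the sample variance of each realization matches the integral of the PSD exactly, by discrete orthogonality.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate a stationary Gaussian-like process with a prescribed one-sided
# PSD S(f) using one inverse real FFT (illustrative flat band spectrum).
N, dt = 4096, 0.01
df = 1.0 / (N * dt)
f = np.arange(1, N // 2) * df            # strictly positive frequencies
S = np.where(f < 20.0, 1e-3, 0.0)        # one-sided PSD [units^2 / Hz]

# Random phases; amplitude sqrt(2*S*df) per mode so that var(x) = sum(S*df)
phases = rng.uniform(0.0, 2 * np.pi, f.size)
Z = np.zeros(N // 2 + 1, dtype=complex)
Z[1:N // 2] = (N / 2) * np.sqrt(2.0 * S * df) * np.exp(1j * phases)

x = np.fft.irfft(Z, n=N)                 # one realization, length N, real
```

The random-phase construction is asymptotically Gaussian by the central limit theorem over many modes; the paper's scheme may differ in detail (e.g., random amplitudes).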
Exploring Neutrino Oscillation Parameter Space with a Monte Carlo Algorithm
NASA Astrophysics Data System (ADS)
Espejel, Hugo; Ernst, David; Cogswell, Bernadette; Latimer, David
2015-04-01
The χ2 (or likelihood) function for a global analysis of neutrino oscillation data is first calculated as a function of the neutrino mixing parameters. A computational challenge is to obtain the minima or the allowed regions for the mixing parameters. The conventional approach is to calculate the χ2 (or likelihood) function on a grid for a large number of points, and then marginalize over the likelihood function. As the number of parameters increases with the number of neutrinos, making the calculation numerically efficient becomes necessary. We implement a new Monte Carlo algorithm (D. Foreman-Mackey, D. W. Hogg, D. Lang and J. Goodman, Publications of the Astronomical Society of the Pacific, 125 306 (2013)) to determine its computational efficiency at finding the minima and allowed regions. We examine a realistic example to compare the historical and the new methods.
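A plain Metropolis random walk on a toy two-parameter χ² surface illustrates the Monte Carlo exploration idea. The Gaussian surface, the parameter values, and the step sizes are assumptions for the example; the cited sampler (emcee) instead uses affine-invariant ensemble moves.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy chi^2 surface over two "mixing parameters" (illustrative minima/widths)
def chi2(p):
    return ((p[0] - 0.30) / 0.02)**2 + ((p[1] - 2.5e-3) / 2.0e-4)**2

p = np.array([0.25, 2.0e-3])             # starting point away from the minimum
step = np.array([0.01, 1.0e-4])          # proposal widths per parameter
c_old = chi2(p)
samples = []
for _ in range(20000):
    q = p + step * rng.normal(size=2)
    c_new = chi2(q)
    # Metropolis rule: accept with probability exp(-(chi2_new - chi2_old)/2)
    if c_new < c_old or rng.uniform() < np.exp(-(c_new - c_old) / 2.0):
        p, c_old = q, c_new
    samples.append(p.copy())
samples = np.array(samples)[5000:]       # discard burn-in
```

Marginalizing then reduces to histogramming `samples` instead of evaluating χ² on a dense grid.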
Disruptive Innovation in Numerical Hydrodynamics
Waltz, Jacob I.
2012-09-06
We propose the research and development of a high-fidelity hydrodynamic algorithm for tetrahedral meshes that will lead to a disruptive innovation in the numerical modeling of Laboratory problems. Our proposed innovation has the potential to reduce turnaround time by orders of magnitude relative to Advanced Simulation and Computing (ASC) codes; reduce simulation setup costs by millions of dollars per year; and effectively leverage Graphics Processing Unit (GPU) and future Exascale computing hardware. If successful, this work will lead to a dramatic leap forward in the Laboratory's quest for a predictive simulation capability.
Visualizing output for a data learning algorithm
NASA Astrophysics Data System (ADS)
Carson, Daniel; Graham, James; Ternovskiy, Igor
2016-05-01
This paper details the process we went through to visualize the output for our data learning algorithm. We have been developing a hierarchical self-structuring learning algorithm based around the general principles of the LaRue model. One example of a proposed application of this algorithm would be traffic analysis, chosen because it is conceptually easy to follow and there is a significant amount of already existing data and related research material with which to work. While we chose the tracking of vehicles for our initial approach, it is by no means the only target of our algorithm. Flexibility is the end goal; however, we still need somewhere to start. To that end, this paper details our creation of the visualization GUI for our algorithm, the features we included, and the initial results we obtained from our algorithm running a few of the traffic-based scenarios we designed.
Example based lesion segmentation
NASA Astrophysics Data System (ADS)
Roy, Snehashis; He, Qing; Carass, Aaron; Jog, Amod; Cuzzocreo, Jennifer L.; Reich, Daniel S.; Prince, Jerry; Pham, Dzung
2014-03-01
Automatic and accurate detection of white matter lesions is a significant step toward understanding the progression of many diseases, like Alzheimer's disease or multiple sclerosis. Multi-modal MR images are often used to segment T2 white matter lesions that can represent regions of demyelination or ischemia. Some automated lesion segmentation methods describe the lesion intensities using generative models, and then classify the lesions with some combination of heuristics and cost minimization. In contrast, we propose a patch-based method, in which lesions are found using examples from an atlas containing multi-modal MR images and corresponding manual delineations of lesions. Patches from subject MR images are matched to patches from the atlas and lesion memberships are found based on patch similarity weights. We experiment on 43 subjects with MS, whose scans show various levels of lesion-load. We demonstrate significant improvement in Dice coefficient and total lesion volume compared to a state of the art model-based lesion segmentation method, indicating more accurate delineation of lesions.
Some nonlinear space decomposition algorithms
Tai, Xue-Cheng; Espedal, M.
1996-12-31
Convergence of a space decomposition method is proved for a general convex programming problem. The space decomposition refers to methods that decompose a space into sums of subspaces, which could be a domain decomposition or a multigrid method for partial differential equations. Two algorithms are proposed. Both can be used for linear as well as nonlinear elliptic problems and they reduce to the standard additive and multiplicative Schwarz methods for linear elliptic problems. Two "hybrid" algorithms are also presented. They converge faster than the additive one and have better parallelism than the multiplicative method. Numerical tests with a two level domain decomposition for linear, nonlinear and interface elliptic problems are presented for the proposed algorithms.
Modified OMP Algorithm for Exponentially Decaying Signals
Kazimierczuk, Krzysztof; Kasprzak, Paweł
2015-01-01
A group of signal reconstruction methods, referred to as compressed sensing (CS), has recently found a variety of applications in numerous branches of science and technology. However, the condition of the applicability of standard CS algorithms (e.g., orthogonal matching pursuit, OMP), i.e., the existence of a strictly sparse representation of a signal, is rarely met. Thus, dedicated algorithms for solving particular problems have to be developed. In this paper, we introduce a modification of OMP motivated by the nuclear magnetic resonance (NMR) application of CS. The algorithm is based on the fact that the NMR spectrum consists of Lorentzian peaks and matches a single Lorentzian peak in each of its iterations. Thus, we propose the name Lorentzian peak matching pursuit (LPMP). We also consider a modification of the algorithm that introduces allowed positions of the Lorentzian peaks' centers. Our results show that the LPMP algorithm outperforms other CS algorithms when applied to exponentially decaying signals. PMID:25609044
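The greedy peak-matching idea can be sketched with a dictionary of exponentially decaying cosines, the time-domain counterpart of Lorentzian lines. The frequency grid, decay constant, and two-component test signal are illustrative assumptions, not LPMP itself.

```python
import numpy as np

# Dictionary of unit-norm decaying cosines (time-domain Lorentzians)
n = 512
t = np.arange(n)
freqs = np.linspace(0.0, np.pi, 256, endpoint=False)
decay = np.exp(-t / 150.0)                         # assumed common decay rate
D = decay[:, None] * np.cos(t[:, None] * freqs[None, :])
D /= np.linalg.norm(D, axis=0)                     # unit-norm atoms

true_idx = [40, 130]                               # two hidden components
y = 1.0 * D[:, 40] + 0.6 * D[:, 130]

# OMP-style greedy loop: pick the best-matching atom, then re-fit by least
# squares over all selected atoms (the orthogonal update).
residual, chosen = y.copy(), []
for _ in range(2):
    k = int(np.argmax(np.abs(D.T @ residual)))     # best-matching atom
    chosen.append(k)
    coef, *_ = np.linalg.lstsq(D[:, chosen], y, rcond=None)
    residual = y - D[:, chosen] @ coef
```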
Performance Comparison Of Evolutionary Algorithms For Image Clustering
NASA Astrophysics Data System (ADS)
Civicioglu, P.; Atasever, U. H.; Ozkan, C.; Besdok, E.; Karkinli, A. E.; Kesikoglu, A.
2014-09-01
Evolutionary computation tools are able to process real-valued numerical sets in order to extract a suboptimal solution of a designed problem. Data clustering algorithms have been intensively used for image segmentation in remote sensing applications. Despite the wide usage of evolutionary algorithms for data clustering, their clustering performances have been scarcely studied by using clustering validation indexes. In this paper, the recently proposed evolutionary algorithms (i.e., Artificial Bee Colony Algorithm (ABC), Gravitational Search Algorithm (GSA), Cuckoo Search Algorithm (CS), Adaptive Differential Evolution Algorithm (JADE), Differential Search Algorithm (DSA) and Backtracking Search Optimization Algorithm (BSA)) and some classical image clustering techniques (i.e., k-means, FCM and SOM networks) have been used to cluster images and their performances have been compared by using four clustering validation indexes. Experimental test results showed that evolutionary algorithms give more reliable cluster-centers than classical clustering techniques, but their convergence time is quite long.
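For reference, the classical k-means baseline used in such comparisons fits in a few lines; the two well-separated synthetic "pixel" clusters below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(8)

# Two synthetic, well-separated 2-D clusters standing in for pixel features
a = rng.normal([0.0, 0.0], 0.2, (100, 2))
b = rng.normal([3.0, 3.0], 0.2, (100, 2))
X = np.vstack([a, b])

# Plain Lloyd's k-means with k = 2, initialized from random data points
centers = X[rng.choice(len(X), 2, replace=False)]
for _ in range(20):
    # assign each sample to its nearest center, then recompute the centers
    labels = np.argmin(np.linalg.norm(X[:, None] - centers[None], axis=2), axis=1)
    centers = np.array([X[labels == k].mean(axis=0) if np.any(labels == k)
                        else centers[k] for k in range(2)])
```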
Fontana, W.
1990-12-13
In this paper, complex adaptive systems are defined by a self-referential loop in which objects encode functions that act back on these objects. A model for this loop is presented. It uses a simple recursive formal language, derived from the lambda-calculus, to provide a semantics that maps character strings into functions that manipulate symbols on strings. The interaction between two functions, or algorithms, is defined naturally within the language through function composition, and results in the production of a new function. An iterated map acting on sets of functions and a corresponding graph representation are defined. Their properties are useful to discuss the behavior of a fixed size ensemble of randomly interacting functions. This "function gas", or "Turing gas", is studied under various conditions, and evolves cooperative interaction patterns of considerable intricacy. These patterns adapt under the influence of perturbations consisting of the addition of new random functions to the system. Different organizations emerge depending on the availability of self-replicators.
Baczewski, Andrew D; Bond, Stephen D
2013-07-28
Generalized Langevin dynamics (GLD) arise in the modeling of a number of systems, ranging from structured fluids that exhibit a viscoelastic mechanical response, to biological systems, and other media that exhibit anomalous diffusive phenomena. Molecular dynamics (MD) simulations that include GLD in conjunction with external and/or pairwise forces require the development of numerical integrators that are efficient, stable, and have known convergence properties. In this article, we derive a family of extended variable integrators for the Generalized Langevin equation with a positive Prony series memory kernel. Using stability and error analysis, we identify a superlative choice of parameters and implement the corresponding numerical algorithm in the LAMMPS MD software package. Salient features of the algorithm include exact conservation of the first and second moments of the equilibrium velocity distribution in some important cases, stable behavior in the limit of conventional Langevin dynamics, and the use of a convolution-free formalism that obviates the need for explicit storage of the time history of particle velocities. Capability is demonstrated with respect to accuracy in numerous canonical examples, stability in certain limits, and an exemplary application in which the effect of a harmonic confining potential is mapped onto a memory kernel.
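The extended-variable idea can be sketched for a one-term Prony kernel K(t) = (c/tau) exp(-t/tau): the memory force becomes an auxiliary Ornstein-Uhlenbeck variable driven by the velocity, so no velocity history is stored. The free-particle setup and plain Euler-Maruyama stepping below are simplifications of the LAMMPS integrator, chosen only to exhibit the convolution-free structure and the equilibrium check ⟨v²⟩ = kT/m.

```python
import numpy as np

rng = np.random.default_rng(4)

# Free-particle GLE with one-term Prony kernel, written as the extended
# system  v' = z/m,  z' = -z/tau - (c/tau) v + noise,  where the noise
# amplitude sqrt(2*kT*c)/tau enforces fluctuation-dissipation.
m, kT, c, tau = 1.0, 1.0, 1.0, 0.5
dt, n_steps, n_part = 0.01, 5000, 4000

v = rng.normal(0.0, np.sqrt(kT / m), n_part)   # Maxwellian start
z = np.zeros(n_part)                           # auxiliary memory force
sig = np.sqrt(2.0 * kT * c) / tau
for _ in range(n_steps):
    v = v + (z / m) * dt
    z = z + (-(z / tau) - (c / tau) * v) * dt \
          + sig * np.sqrt(dt) * rng.normal(size=n_part)
```

At stationarity this linear system gives ⟨v²⟩ = kT/m and ⟨z²⟩ = kT·c/tau exactly; the simulation recovers the first up to discretization and Monte Carlo error.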
A hybrid likelihood algorithm for risk modelling.
Kellerer, A M; Kreisheimer, M; Chmelevsky, D; Barclay, D
1995-03-01
The risk of radiation-induced cancer is assessed through the follow-up of large cohorts, such as atomic bomb survivors or underground miners who have been occupationally exposed to radon and its decay products. The models relate to the dose, age and time dependence of the excess tumour rates, and they contain parameters that are estimated in terms of maximum likelihood computations. The computations are performed with the software package EPI-CURE, which contains the two main options of person-by-person regression or of Poisson regression with grouped data. The Poisson regression is most frequently employed, but there are certain models that require an excessive number of cells when grouped data are used. One example involves computations that account explicitly for the temporal distribution of continuous exposures, as they occur with underground miners. In past work such models had to be approximated, but it is shown here that they can be treated explicitly in a suitably reformulated person-by-person computation of the likelihood. The algorithm uses the familiar partitioning of the log-likelihood into two terms, L1 and L0. The first term, L1, represents the contribution of the 'events' (tumours). It needs to be evaluated in the usual way, but constitutes no computational problem. The second term, L0, represents the event-free periods of observation. It is, in its usual form, unmanageable for large cohorts. However, it can be reduced to a simple form, in which the number of computational steps is independent of cohort size. The method requires less computing time and computer memory, but more importantly it leads to more stable numerical results by obviating the need for grouping the data. The algorithm may be most relevant to radiation risk modelling, but it can facilitate the modelling of failure-time data in general. PMID:7604154
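The L1/L0 partition can be illustrated with a constant-hazard toy cohort: L1 sums over the events, while L0 collapses to a single term in aggregate person-time, which is why its cost does not grow with cohort size. The hazard, follow-up scheme, and cohort size are illustrative assumptions, not the paper's models.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy cohort with constant hazard h and administrative censoring:
#   log L = L1 + L0,  L1 = D * log(h)  (events),  L0 = -h * PT  (event-free
# exposure). L0 depends on the data only through total person-time PT.
h_true, n = 0.02, 5000
censor = rng.uniform(5.0, 40.0, n)           # follow-up limits per person
t_event = rng.exponential(1.0 / h_true, n)
time = np.minimum(t_event, censor)           # observed follow-up
event = t_event < censor                     # True if a tumour was observed

D, PT = event.sum(), time.sum()              # events and total person-time

def log_lik(h):
    return D * np.log(h) - h * PT            # L1 + L0

h_hat = D / PT                               # analytic maximiser of log_lik
```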
Numerical methods: Analytical benchmarking in transport theory
Ganapol, B. D.
1988-01-01
Numerical methods applied to reactor technology have reached a high degree of maturity. Certainly one- and two-dimensional neutron transport calculations have become routine, with several programs available on personal computer and the most widely used programs adapted to workstation and minicomputer computational environments. With the introduction of massive parallelism and as experience with multitasking increases, even more improvement in the development of transport algorithms can be expected. Benchmarking an algorithm is usually not a very pleasant experience for the code developer. Proper algorithmic verification by benchmarking involves the following considerations: (1) conservation of particles, (2) confirmation of intuitive physical behavior, and (3) reproduction of analytical benchmark results. By using today's computational advantages, new basic numerical methods have been developed that allow a wider class of benchmark problems to be considered.
Integral and integrable algorithms for a nonlinear shallow-water wave equation
NASA Astrophysics Data System (ADS)
Camassa, Roberto; Huang, Jingfang; Lee, Long
2006-08-01
An asymptotic higher-order model of wave dynamics in shallow water is examined in a combined analytical and numerical study, with the aim of establishing robust and efficient numerical solution methods. Based on the Hamiltonian structure of the nonlinear equation, an algorithm corresponding to a completely integrable particle lattice is implemented first. Each "particle" in the particle method travels along a characteristic curve. The resulting system of nonlinear ordinary differential equations can have solutions that blow up in finite time. We isolate the conditions for global existence and prove l1-norm convergence of the method in the limit of zero spatial step size and infinite particles. The numerical results show that this method captures the essence of the solution without using an overly large number of particles. A fast summation algorithm is introduced to evaluate the integrals of the particle method so that the computational cost is reduced from O(N^2) to O(N), where N is the number of particles. The method possesses some analogies with point vortex methods for 2D Euler equations. In particular, near singular solutions exist and singularities are prevented from occurring in finite time by mechanisms akin to those in the evolution of vortex patches. The second method is based on integro-differential formulations of the equation. Two different algorithms are proposed, based on different ways of extracting the time derivative of the dependent variable by an appropriately defined inverse operator. The integro-differential formulations reduce the order of spatial derivatives, thereby relaxing the stability constraint and allowing large time steps in an explicit numerical scheme. In addition to the Cauchy problem on the infinite line, we include results on the study of the nonlinear equation posed in the quarter (space-time) plane. We discuss the minimum number of boundary conditions required for solution uniqueness and illustrate this with numerical
Ten years of Nature Physics: Numerical models come of age
NASA Astrophysics Data System (ADS)
Gull, E.; Millis, A. J.
2015-10-01
When Nature Physics celebrated 20 years of high-temperature superconductors, numerical approaches were on the periphery. Since then, new ideas implemented in new algorithms are leading to new insights.
SCORPIUS algorithm benchmarks on the image understanding architecture machine
NASA Astrophysics Data System (ADS)
Bogdanowicz, Julius F.; Nash, J. Gregory; Shu, David B.
1992-04-01
Many Hughes tactical and strategic programs need high performance image processing. For example, photo-interpretation applications can require up to four orders of magnitude speedup over conventional computer architectures. Therefore, parallel processing systems are needed to help close the processing gap. Vision applications can usually be decomposed into three levels of processing called high, intermediate, and low level vision. Each processing level typically requires different types of numeric/symbolic computation, processing task granularities, and communications bandwidths. No parallel processing system is commercially available that is optimized for the entire range of computations. To meet these processing challenges, the image understanding architecture (IUA) has been developed by Hughes in collaboration with the University of Massachusetts. The IUA is a heterogeneous, hierarchical, associative parallel processor that is organized in three levels corresponding to the vision problem. Its lowest level consists of a large content addressable array parallel processor. This array of 'per pixel' bit serial processors is used for fixed point, low level numeric, and symbolic computations. The middle level is an interface communications array processor (ICAP). ICAP is an array of digital signal processing chips from the TI TMS320Cx line, used for high speed number crunching. The highest level is the symbolic processing array. It is an array of general purpose microprocessors in which the artificial intelligence content of the image understanding software resides. A set of benchmarks from the DARPA/ORD-sponsored SCORPIUS program was developed using the IUA. The set of algorithms included low level image processing as well as high level matching algorithms. Benchmark performance on the second generation IUA hardware is over four orders of magnitude faster than equivalent algorithms implemented on a DEC VAX 8650. The first generation hardware is operational. Development
NASA Technical Reports Server (NTRS)
Merceret, Francis; Lane, John; Immer, Christopher; Case, Jonathan; Manobianco, John
2005-01-01
The contour error map (CEM) algorithm and the software that implements the algorithm are means of quantifying correlations between sets of time-varying data that are binarized and registered on spatial grids. The present version of the software is intended for use in evaluating numerical weather forecasts against observational sea-breeze data. In cases in which observational data come from off-grid stations, it is necessary to preprocess the observational data to transform them into gridded data. First, the wind direction is gridded and binarized so that D(i,j;n) is the input to CEM based on forecast data and d(i,j;n) is the input to CEM based on gridded observational data. Here, i and j are spatial indices representing 1.25-km intervals along the west-to-east and south-to-north directions, respectively; and n is a time index representing 5-minute intervals. A binary value of D or d = 0 corresponds to an offshore wind, whereas a value of D or d = 1 corresponds to an onshore wind. CEM includes two notable subalgorithms: One identifies and verifies sea-breeze boundaries; the other, which can be invoked optionally, performs an image-erosion function for the purpose of attempting to eliminate river-breeze contributions in the wind fields.
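The binarization step described above can be sketched as follows. The coastline orientation, the onshore wind sector, and the 2x2 grids are assumptions for illustration; the agreement score at the end is a crude gridpoint comparison, not the full CEM contour analysis.

```python
import numpy as np

# Binarise gridded wind direction into offshore (0) / onshore (1), as in the
# CEM preprocessing. Assume a north-south coastline with the sea to the east,
# so winds blowing FROM the east (45..135 deg, meteorological convention)
# arrive onshore; everything else is treated as offshore.
def binarize_onshore(wind_dir_deg):
    return ((wind_dir_deg >= 45.0) & (wind_dir_deg <= 135.0)).astype(int)

forecast_dir = np.array([[90.0, 100.0], [270.0, 80.0]])   # source of D(i,j;n)
observed_dir = np.array([[85.0, 240.0], [260.0, 95.0]])   # source of d(i,j;n)

D = binarize_onshore(forecast_dir)
d = binarize_onshore(observed_dir)
agreement = np.mean(D == d)      # fraction of grid points that agree
```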
Solution algorithm of a quasi-Lambert's problem with fixed flight-direction angle constraint
NASA Astrophysics Data System (ADS)
Luo, Qinqin; Meng, Zhanfeng; Han, Chao
2011-04-01
A two-point boundary value problem of the Kepler orbit similar to Lambert's problem is proposed. The problem is to find a Kepler orbit that will travel through the initial and final points in a specified flight time given the radial distances of the two points and the flight-direction angle at the initial point. The Kepler orbits that meet the geometric constraints are parameterized via the universal variable z introduced by Bate. The formula for flight time of the orbits is derived. The admissible interval of the universal variable and the variation pattern of the flight time are explored intensively. A numerical iteration algorithm based on the analytical results is presented to solve the problem. A large number of randomly generated examples are used to test the reliability and efficiency of the algorithm.
Use of the particle swarm optimization algorithm for second order design of levelling networks
NASA Astrophysics Data System (ADS)
Yetkin, Mevlut; Inal, Cevat; Yigit, Cemal Ozer
2009-08-01
The weight problem in geodetic networks can be dealt with as an optimization procedure. This classic problem of geodetic network optimization is also known as second-order design. The basic principles of geodetic network optimization are reviewed. Then the particle swarm optimization (PSO) algorithm is applied to a geodetic levelling network in order to solve the second-order design problem. PSO, which is an iterative-stochastic search algorithm in swarm intelligence, emulates the collective behaviour of bird flocking, fish schooling or bee swarming, to converge probabilistically to the global optimum. Furthermore, it is a powerful method because it is easy to implement and computationally efficient. Second-order design of a geodetic levelling network using PSO yields a practically realizable solution. It is also suitable for non-linear matrix functions that are very often encountered in geodetic network optimization. The fundamentals of the method and a numeric example are given.
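A minimal PSO kernel of the kind applied here can be sketched in a few lines. The quadratic objective over positive weights is a stand-in for the geodetic second-order design criterion, and the inertia/acceleration coefficients are common defaults, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical "ideal" observation-weight vector; the objective simply
# penalises distance from it (a placeholder for the real design criterion).
target = np.array([2.0, 1.0, 3.0, 1.5])

def objective(w):
    return np.sum((w - target)**2)

n_part, n_dim, iters = 30, 4, 200
w_inertia, c1, c2 = 0.72, 1.49, 1.49          # common PSO coefficient choices
pos = rng.uniform(0.0, 5.0, (n_part, n_dim))  # candidate weight vectors
vel = np.zeros((n_part, n_dim))
pbest = pos.copy()
pbest_val = np.array([objective(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(iters):
    r1, r2 = rng.uniform(size=(2, n_part, n_dim))
    # velocity update: inertia + cognitive (pbest) + social (gbest) pulls
    vel = w_inertia * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 1e-6, None)      # keep weights strictly positive
    vals = np.array([objective(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()
```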
NASA Technical Reports Server (NTRS)
Acikmese, Ahmet Behcet; Carson, John M., III
2006-01-01
A robustly stabilizing MPC (model predictive control) algorithm for uncertain nonlinear systems is developed that guarantees resolvability. With resolvability, initial feasibility of the finite-horizon optimal control problem implies future feasibility in a receding-horizon framework. The control consists of two components: (i) a feed-forward part and (ii) a feedback part. Feed-forward control is obtained by online solution of a finite-horizon optimal control problem for the nominal system dynamics. The feedback control policy is designed off-line based on a bound on the uncertainty in the system model. The entire controller is shown to be robustly stabilizing with a region of attraction composed of initial states for which the finite-horizon optimal control problem is feasible. The controller design for this algorithm is demonstrated on a class of systems with uncertain nonlinear terms that have norm-bounded derivatives and derivatives in polytopes. An illustrative numerical example is also provided.
NASA Technical Reports Server (NTRS)
Acikmese, Behcet A.; Carson, John M., III
2005-01-01
A robustly stabilizing MPC (model predictive control) algorithm for uncertain nonlinear systems is developed that guarantees the resolvability of the associated finite-horizon optimal control problem in a receding-horizon implementation. The control consists of two components: (i) a feed-forward part and (ii) a feedback part. Feed-forward control is obtained by online solution of a finite-horizon optimal control problem for the nominal system dynamics. The feedback control policy is designed off-line based on a bound on the uncertainty in the system model. The entire controller is shown to be robustly stabilizing with a region of attraction composed of initial states for which the finite-horizon optimal control problem is feasible. The controller design for this algorithm is demonstrated on a class of systems with uncertain nonlinear terms that have norm-bounded derivatives, and derivatives in polytopes. An illustrative numerical example is also provided.
FBP Algorithms for Attenuated Fan-Beam Projections
You, Jiangsheng; Zeng, Gengsheng L.; Liang, Zhengrong
2005-01-01
A filtered backprojection (FBP) reconstruction algorithm for attenuated fan-beam projections has been derived based on Novikov’s inversion formula. The derivation uses a common transformation between parallel-beam and fan-beam coordinates. The filtering is shift-invariant. Numerical evaluation of the FBP algorithm is presented as well. As a special application, we also present a shift-invariant FBP algorithm for fan-beam SPECT reconstruction with uniform attenuation compensation. Several other fan-beam reconstruction algorithms are also discussed. In the attenuation-free case, our algorithm reduces to the conventional fan-beam FBP reconstruction algorithm. PMID:16570111
Hedging rule for reservoir operations: 2. A numerical model
NASA Astrophysics Data System (ADS)
You, Jiing-Yun; Cai, Ximing
2008-01-01
Optimization models for reservoir operation analysis usually use a heuristic algorithm to search for the hedging rule. This paper presents a method that derives a hedging rule from theoretical analysis (J.-Y. You and X. Cai, 2008) with an explicit two-period Markov hydrology model, a particular form of nonlinear utility function, and a given inflow probability distribution. The unique procedure is to embed hedging rule derivation based on the marginal utility principle into reservoir operation simulation. The simulation method embedded with the optimization principle for hedging rule derivation avoids both the inaccuracy caused by trial and error with traditional simulation models and the computational difficulty ("curse of dimensionality") of optimization models. Results show utility improvement with the hedging policy compared to the standard operation policy (SOP), considering factors such as reservoir capacity, inflow level and uncertainty, price elasticity, and discount rate. Following the theoretical analysis presented in the companion paper, the condition for hedging application (the starting and ending water availability for hedging) is reexamined with the numerical example; the probabilistic performance of hedging and SOP regarding water supply reliability is compared; and some findings from the theoretical analysis are verified numerically.
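The SOP-versus-hedging comparison can be sketched with a toy single-reservoir simulation. The linear rationing rule, the concave (square-root) utility, and the synthetic inflow statistics below are illustrative assumptions, not the authors' model:

```python
import numpy as np

def simulate(inflows, capacity, demand, hedge=False, s0=0.5):
    """Simulate one reservoir under SOP or a simple linear hedging rule.

    SOP releases min(demand, available water); the hedging rule rations
    releases when water availability falls below a trigger level, saving
    water for future periods.  With a concave single-period utility,
    smoothing deliveries across periods can raise total utility.
    """
    storage = s0 * capacity
    utility = 0.0
    for q in inflows:
        available = storage + q
        if hedge and available < 1.5 * demand:
            # Ration: release a fraction of availability (hypothetical rule).
            release = min(demand, (2.0 / 3.0) * available)
        else:
            release = min(demand, available)          # standard operation policy
        storage = min(available - release, capacity)  # spill above capacity
        utility += np.sqrt(release)                   # concave utility
    return utility

rng = np.random.default_rng(0)
inflows = rng.gamma(shape=2.0, scale=0.5, size=200)   # synthetic inflows
u_sop = simulate(inflows, capacity=2.0, demand=1.0, hedge=False)
u_hedge = simulate(inflows, capacity=2.0, demand=1.0, hedge=True)
```

Comparing `u_sop` and `u_hedge` across inflow realizations mimics, in miniature, the paper's utility comparison between SOP and hedging.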
Libration Orbit Mission Design: Applications of Numerical & Dynamical Methods
NASA Technical Reports Server (NTRS)
Bauer, Frank (Technical Monitor); Folta, David; Beckman, Mark
2002-01-01
Sun-Earth libration point orbits serve as excellent locations for scientific investigations. These orbits are often selected to minimize environmental disturbances and maximize observing efficiency. Trajectory design in support of libration orbits is ever more challenging as more complex missions are envisioned in the next decade. Trajectory design software must be further enabled to incorporate better understanding of the libration orbit solution space and thus improve the efficiency and expand the capabilities of current approaches. The Goddard Space Flight Center (GSFC) is currently supporting multiple libration missions. This end-to-end support consists of mission operations, trajectory design, and control. It also includes algorithm and software development. The recently launched Microwave Anisotropy Probe (MAP) and upcoming James Webb Space Telescope (JWST) and Constellation-X missions are examples of the use of improved numerical methods for attaining constrained orbital parameters and controlling their dynamical evolution at the collinear libration points. This paper presents a history of libration point missions, a brief description of the numerical and dynamical design techniques including software used, and a sample of future GSFC mission designs.
Numerical study of Taylor bubbles with adaptive unstructured meshes
NASA Astrophysics Data System (ADS)
Xie, Zhihua; Pavlidis, Dimitrios; Percival, James; Pain, Chris; Matar, Omar; Hasan, Abbas; Azzopardi, Barry
2014-11-01
The Taylor bubble is a single long bubble which nearly fills the entire cross section of a liquid-filled circular tube. This type of bubble flow regime often occurs in gas-liquid slug flows in many industrial applications, including oil-and-gas production, chemical and nuclear reactors, and heat exchangers. The objective of this study is to investigate the fluid dynamics of Taylor bubbles rising in a vertical pipe filled with oils of extremely high viscosity (mimicking the ``heavy oils'' found in the oil-and-gas industry). A modelling and simulation framework is presented here which can modify and adapt anisotropic unstructured meshes to better represent the underlying physics of bubble rise and reduce the computational effort without sacrificing accuracy. The numerical framework consists of a mixed control-volume and finite-element formulation, a ``volume of fluid''-type method for the interface capturing based on a compressive control volume advection method, and a force-balanced algorithm for the surface tension implementation. Numerical examples of some benchmark tests and the dynamics of Taylor bubbles are presented to show the capability of this method. EPSRC Programme Grant, MEMPHIS, EP/K0039761/1.
NUMERICAL METHODS FOR THE SIMULATION OF HIGH INTENSITY HADRON SYNCHROTRONS.
LUCCIO, A.; D'IMPERIO, N.; MALITSKY, N.
2005-09-12
Numerical algorithms for PIC simulation of beam dynamics in a high intensity synchrotron on a parallel computer are presented. We introduce numerical solvers of the Laplace-Poisson equation in the presence of walls, and algorithms to compute tunes and Twiss functions in the presence of space charge forces. The working code for the simulation presented here is SIMBAD, which can be run stand-alone or as part of the UAL (Unified Accelerator Libraries) package.
Numerical recipes for mold filling simulation
Kothe, D.; Juric, D.; Lam, K.; Lally, B.
1998-07-01
Has the ability to simulate the filling of a mold progressed to a point where an appropriate numerical recipe achieves the desired results? If results are defined to be topological robustness, computational efficiency, quantitative accuracy, and predictability, all within a computational domain that faithfully represents complex three-dimensional foundry molds, then the answer unfortunately remains no. Significant interfacial flow algorithm developments have occurred over the last decade, however, that could bring this answer closer to maybe. These developments have been both evolutionary and revolutionary, and will continue to transpire for the near future. Might they become useful numerical recipes for mold filling simulations? Quite possibly. Recent progress in algorithms for interface kinematics and dynamics, linear solution methods, computer science issues such as parallelization and object-oriented programming, high resolution Navier-Stokes (NS) solution methods, and unstructured mesh techniques must all be pursued as possible paths toward higher fidelity mold filling simulations. A detailed exposition of these algorithmic developments is beyond the scope of this paper, hence the authors choose to focus here exclusively on algorithms for interface kinematics. These interface tracking algorithms are designed to model the movement of interfaces relative to a reference frame such as a fixed mesh. Current interface tracking algorithm choices are numerous, so is any one best suited for mold filling simulation? Although a clear winner is not (yet) apparent, pros and cons are given in the following brief, critical review. Highlighted are those outstanding interface tracking algorithm issues the authors feel can hamper the reliable modeling of today's foundry mold filling processes.
Fast proximity algorithm for MAP ECT reconstruction
NASA Astrophysics Data System (ADS)
Li, Si; Krol, Andrzej; Shen, Lixin; Xu, Yuesheng
2012-03-01
We arrived at the fixed-point formulation of the total variation maximum a posteriori (MAP) regularized emission computed tomography (ECT) reconstruction problem and proposed an iterative alternating scheme to numerically calculate the fixed point. We theoretically proved that our algorithm converges to a unique solution. Because the obtained algorithm exhibits slow convergence, we further developed the proximity algorithm in the transformed image space, i.e. the preconditioned proximity algorithm. We used the bias-noise curve method to select optimal regularization hyperparameters for both our algorithm and expectation maximization with total variation regularization (EM-TV). We showed in numerical experiments that our proposed algorithms, with an appropriately selected preconditioner, outperformed the conventional EM-TV algorithm in many critical aspects, such as comparatively very low noise and bias for the Shepp-Logan phantom. This has major ramifications for nuclear medicine, because clinical implementation of our preconditioned fixed-point algorithms might result in very significant radiation dose reduction in medical applications of emission tomography.
Five-dimensional Janis-Newman algorithm
NASA Astrophysics Data System (ADS)
Erbin, Harold; Heurtier, Lucien
2015-08-01
The Janis-Newman algorithm has been shown to be successful in finding new stationary solutions of four-dimensional gravity. Attempts at a generalization to higher dimensions have already succeeded for the restricted cases with only one angular momentum. In this paper we propose an extension of this algorithm to five dimensions with two angular momenta -- using the prescription of Giampieri -- through two specific examples: the Myers-Perry and BMPV black holes. We also discuss possible extensions of our prescriptions to other dimensions and to the maximal number of angular momenta, and show how dimensions higher than six appear to be much more challenging to treat within this framework. Nonetheless this general algorithm provides a unification of the formulation in d = 3, 4, 5 of the Janis-Newman algorithm, from which several examples are exposed, including the BTZ black hole.
A local anisotropic adaptive algorithm for the solution of low-Mach transient combustion problems
NASA Astrophysics Data System (ADS)
Carpio, Jaime; Prieto, Juan Luis; Vera, Marcos
2016-02-01
A novel numerical algorithm for the simulation of transient combustion problems at low Mach and moderately high Reynolds numbers is presented. These problems are often characterized by the existence of a large disparity of length and time scales, resulting in the development of directional flow features, such as slender jets, boundary layers, mixing layers, or flame fronts. This makes local anisotropic adaptive techniques quite advantageous computationally. In this work we propose a local anisotropic refinement algorithm using, for the spatial discretization, unstructured triangular elements in a finite element framework. For the time integration, the problem is formulated in the context of semi-Lagrangian schemes, introducing the semi-Lagrange-Galerkin (SLG) technique as a better alternative to the classical semi-Lagrangian (SL) interpolation. The good performance of the numerical algorithm is illustrated by solving a canonical laminar combustion problem: the flame/vortex interaction. First, a premixed methane-air flame/vortex interaction with simplified transport and chemistry description (Test I) is considered. Results are found to be in excellent agreement with those in the literature, proving the superior performance of the SLG scheme when compared with the classical SL technique, and the advantage of using anisotropic adaptation instead of uniform meshes or isotropic mesh refinement. As a more realistic example, we then conduct simulations of non-premixed hydrogen-air flame/vortex interactions (Test II) using a more complex combustion model which involves state-of-the-art transport and chemical kinetics. In addition to the analysis of the numerical features, this second example allows us to perform a satisfactory comparison with experimental visualizations taken from the literature.
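The backward-characteristic idea behind semi-Lagrangian schemes can be illustrated in one dimension: trace each grid node back along the characteristic and interpolate the previous solution at the departure point. The sketch below uses linear interpolation (standing in for the Galerkin projection of the SLG scheme) on a constant-speed periodic advection problem; grid, speed, and step sizes are arbitrary choices:

```python
import numpy as np

# 1D semi-Lagrangian advection of u_t + a u_x = 0 on a periodic domain.
nx, a, dt, nsteps = 200, 1.0, 0.02, 50
x = np.linspace(0.0, 1.0, nx, endpoint=False)
dx = x[1] - x[0]
u = np.exp(-200.0 * (x - 0.3) ** 2)   # initial Gaussian pulse
u0 = u.copy()

for _ in range(nsteps):
    xd = (x - a * dt) % 1.0           # departure points (exact for constant a)
    j = np.floor(xd / dx).astype(int) % nx
    w = (xd - j * dx) / dx            # linear interpolation weight
    u = (1.0 - w) * u[j] + w * u[(j + 1) % nx]

# After nsteps the pulse has travelled a*dt*nsteps = 1.0, i.e. once around
# the periodic domain, so u should closely match the initial condition.
err = np.max(np.abs(u - u0))
```

Note the unconditional stability with respect to the CFL number, one reason semi-Lagrangian time integration pairs well with locally refined meshes.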
High-resolution algorithms for the Navier-Stokes equations for generalized discretizations
NASA Astrophysics Data System (ADS)
Mitchell, Curtis Randall
Accurate finite volume solution algorithms for the two-dimensional Navier-Stokes equations and the three-dimensional Euler equations for both structured and unstructured grid topologies are presented. Results for two-dimensional quadrilateral and triangular elements and three-dimensional tetrahedral elements are provided. Fundamental to the solution algorithm is a technique for generating multidimensional polynomials which model the spatial variation of the flow variables. Cell-averaged data are used to reconstruct pointwise distributions of the dependent variables. The reconstruction errors are evaluated on triangular meshes. The implementation of the algorithm is unique in that three reconstructions are performed for each cell face in the domain. Two of the reconstructions are used to evaluate the inviscid fluxes and correspond to the right and left interface states needed for the solution of a Riemann problem. The third reconstruction is used to evaluate the viscous fluxes. The gradient terms that appear in the viscous fluxes are formed by simply differentiating the polynomial. By selecting the appropriate cell control volumes, centered, upwind, and upwind-biased stencils are possible. Numerical calculations in two dimensions include solutions to elliptic boundary value problems, Ringleb's flow, an inviscid shock reflection, a flat plate boundary layer, and a shock-induced separation over a flat plate. Three-dimensional results include the ONERA M6 wing. All of the unstructured grids were generated using an advancing front mesh generation procedure. Modifications to the three-dimensional grid generator were necessary to discretize the surface grids for bodies with high curvature. In addition, mesh refinement algorithms were implemented to improve the surface grid integrity. Examples include a Glasair fuselage, a High Speed Civil Transport, and the ONERA M6 wing. The role of reconstruction as applied to adaptive remeshing is discussed and a new first order error
Two numerical models for landslide dynamic analysis
NASA Astrophysics Data System (ADS)
Hungr, Oldrich; McDougall, Scott
2009-05-01
Two microcomputer-based numerical models (Dynamic ANalysis (DAN) and its three-dimensional extension DAN3D) have been developed and extensively used for analysis of landslide runout, specifically for the purposes of practical landslide hazard and risk assessment. The theoretical basis of both models is a system of depth-averaged governing equations derived from the principles of continuum mechanics. Original features developed specifically during this work include: an open rheological kernel; explicit use of tangential strain to determine the tangential stress state within the flowing sheet, which is both more realistic and beneficial to the stability of the model; orientation of principal tangential stresses parallel with the direction of motion; inclusion of the centripetal forces corresponding to the true curvature of the path in the motion direction; and the use of very simple and highly efficient free surface interpolation methods. Both models yield similar results when applied to the same sets of input data. Both algorithms are designed to work within the semi-empirical framework of the "equivalent fluid" approach. This approach requires selection of material rheology and calibration of input parameters through back-analysis of real events. Although approximate, it facilitates simple and efficient operation while accounting for the most important characteristics of extremely rapid landslides. The two models have been verified against several controlled laboratory experiments with known physical basis. A large number of back-analyses of real landslides of various types have also been carried out. One example is presented. Calibration patterns are emerging, which give a promise of predictive capability.
Large space structures control algorithm characterization
NASA Technical Reports Server (NTRS)
Fogel, E.
1983-01-01
Feedback control algorithms are developed for sensor/actuator pairs on large space systems. These algorithms have been sized in terms of (1) floating point operation (FLOP) demands; (2) storage for variables; and (3) input/output data flow. FLOP sizing (per control cycle) was done as a function of the number of control states and the number of sensor/actuator pairs. Storage for variables and I/O sizing was done for specific structure examples.
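The kind of FLOP sizing described above can be sketched with a back-of-envelope cost model for one control cycle of a discrete-time estimator-plus-feedback law with n states and m sensor/actuator pairs. The formulas below (a multiply-add pair counted as 2 FLOPs, precomputed gain matrices) are illustrative assumptions, not the report's actual models:

```python
# FLOPs per control cycle for a hypothetical linear estimator + feedback law
# with n states and m sensor/actuator pairs (dense arithmetic assumed).

def flops_per_cycle(n, m):
    predict = 2 * n * n          # state propagation: x <- A x
    innovation = 2 * m * n + m   # residual: y - C x
    correction = 2 * n * m + n   # update: x <- x + K (y - C x)
    control = 2 * m * n          # actuator command: u = -F x
    return predict + innovation + correction + control

flops_10_4 = flops_per_cycle(10, 4)   # 10 control states, 4 pairs
```

The quadratic growth in the number of states (the `predict` term) is what dominates sizing once the state dimension is large compared to the number of sensor/actuator pairs.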
NASA Astrophysics Data System (ADS)
Liliana Gheorghian, Mariana
2014-05-01
beginning of the XXI century" with the participation of several schools in the country in 2009 and 2011. The papers presented were diverse and gave examples of various teaching experiences and scientific information. Topics by the teachers: The impact of tourism on the environment, Tornadoes, Natural science and environmental education in school, Air Pollution and health, Ecological education of children from primary school, The effects of electromagnetic radiation, Formation of an ecological mentality using chemistry, Why should we protect water, Environmental education, Education for the future, SOS Nature, Science in the twenty-first century, etc. Topics by students: Nature- the palace of thermal phenomena, Life depends on heat, Water Mysteries, Global Heating, The Mysterious universe, etc. In March 2013 our school hosted an interesting exchange of ideas on environmental issues between our students and those from Bulgaria, Poland and Turkey, during a symposium of the Comenius multilateral project "Conserving Nature". In order to present the results of protecting nature in their communities, two projects "Citizen" qualified in the Program Civitas in the autumn of 2013. "The Battle" continues both in nature and in classrooms, in order to preserve the environment.
Numerical modeling of Waianae Harbor
Mader, C.L.; Lucas, S.
1985-01-01
The Waianae harbor problem is an example of the use of numerical modeling techniques available at JTRE of the University of Hawaii to assist in the evaluation of oceanographic fluid dynamic flow problems. The numerical techniques are available to assist in the modeling of many problems of interest to the Hawaii Ocean Experiment. One application that has received considerable effort is the formation, propagation, and run-up of tsunami waves. The interaction of tsunami waves with the island chain is an important problem that needs more study. The models can be used to study storm surge interaction with the Hawaii islands and current and circulation around and through the islands. It is important that the modeling not be limited to the usual nonlinear shallow-water models, since they are inappropriate for many of the problems of interest to the Hawaii Ocean Experiment. 6 references, 5 figures.
Relative performance of algorithms for autonomous satellite orbit determination
NASA Technical Reports Server (NTRS)
Tapley, B. D.; Peters, J. G.; Schutz, B. E.
1981-01-01
Limited word size in contemporary microprocessors causes numerical problems in autonomous satellite navigation applications. Numerical error introduced in navigation computations performed on small wordlength machines can cause divergence of sequential estimation algorithms. To insure filter reliability, square root algorithms have been adopted in many applications. The optimal navigation algorithm requires a careful match of the estimation algorithm, dynamic model, and numerical integrator. In this investigation, the relationship of several square root filters and numerical integration methods is evaluated to determine their relative performance for satellite navigation applications. The numerical simulations are conducted using the Phase I GPS constellation to determine the orbit of a LANDSAT-D type satellite. The primary comparison is based on computation time and relative estimation accuracy.
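The square-root idea that motivates such filters can be sketched in a few lines: propagate a factor S with P = S S^T through the time update via a QR factorization, so the computed covariance stays symmetric positive semidefinite even under roundoff. This is a generic array-algorithm sketch, not one of the specific filters compared in the investigation:

```python
import numpy as np

def sqrt_time_update(S, F, Qs):
    """Square-root covariance time update: P -> F P F^T + Q.

    S and Qs are square-root factors (P = S S^T, Q = Qs Qs^T).  QR of the
    stacked pre-array gives the updated factor without ever forming P.
    """
    M = np.vstack([(F @ S).T, Qs.T])   # (2n, n) pre-array
    R = np.linalg.qr(M, mode="r")      # (n, n) post-array
    return R.T                         # R^T R = M^T M = F P F^T + Q

n = 3
rng = np.random.default_rng(1)
F = np.eye(n) + 0.1 * rng.standard_normal((n, n))  # illustrative dynamics
P = np.eye(n)
Qs = 0.1 * np.eye(n)                               # process-noise factor
S = np.linalg.cholesky(P)

S_new = sqrt_time_update(S, F, Qs)
# Verify against the directly propagated covariance.
max_diff = np.max(np.abs(S_new @ S_new.T - (F @ P @ F.T + Qs @ Qs.T)))
```

Because only the factor is stored, the effective precision of the covariance roughly doubles, which is exactly what matters on short-wordlength flight processors.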
A GROUP FINDING ALGORITHM FOR MULTIDIMENSIONAL DATA SETS
Sharma, Sanjib; Johnston, Kathryn V. E-mail: kvj@astro.columbia.ed
2009-09-20
We describe a density-based hierarchical group finding algorithm capable of identifying structures and substructures of any shape and density in multidimensional data sets where each dimension can be a numeric attribute with arbitrary measurement scale. This has applications in a wide variety of fields, from finding structures in galaxy redshift surveys, to identifying halos and subhalos in N-body simulations, to group finding in Local Group chemodynamical data sets. In general, clustering schemes require an a priori definition of a metric (a non-negative function that gives the distance between two points in a space), and the quality of clustering depends upon this choice. The general practice is to use a constant global metric, which is optimal only if the clusters in the data are self-similar. For complex data configurations even the most finely tuned constant global metric turns out to be suboptimal. Moreover, the correct choice of metric becomes increasingly important as the number of dimensions increases. To address these problems, we present an entropy-based binary space partitioning algorithm which uses a locally adaptive metric for each data point. The metric is employed to calculate the density at each point and a list of its nearest neighbors, and this information is then used to form a hierarchy of groups. Finally, the ratio of maximum to minimum density of points in a group is used to estimate the significance of the groups. Setting a threshold on this significance can effectively screen out groups arising due to Poisson noise and helps organize the groups into meaningful clusters. For a data set of N points, the algorithm requires only O(N) space and O(N (log N)^3) time, which makes it well suited for analyzing large data sets. As an example, we apply the algorithm to identify structures in a simulated stellar halo using the full six-dimensional phase space coordinates.
Algorithm for in-flight gyroscope calibration
NASA Technical Reports Server (NTRS)
Davenport, P. B.; Welter, G. L.
1988-01-01
An optimal algorithm for the in-flight calibration of spacecraft gyroscope systems is presented. Special consideration is given to the selection of the loss function weight matrix in situations in which the spacecraft attitude sensors provide significantly more accurate information in pitch and yaw than in roll, such as will be the case in the Hubble Space Telescope mission. The results of numerical tests that verify the accuracy of the algorithm are discussed.
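The role of the loss-function weight matrix can be sketched with a small weighted least-squares estimate: when pitch and yaw measurements are far more accurate than roll, inverse-variance weighting down-weights the noisy roll residuals. The measurement model, noise levels, and bias vector below are illustrative assumptions, not the paper's calibration model:

```python
import numpy as np

rng = np.random.default_rng(2)
x_true = np.array([0.5, -0.2, 0.1])    # hypothetical gyro-bias vector
H = rng.standard_normal((30, 3))       # illustrative measurement matrix
# Every third measurement is a noisy "roll-like" channel.
sigma = np.where(np.arange(30) % 3 == 0, 1.0, 0.01)
y = H @ x_true + sigma * rng.standard_normal(30)

# Weighted least squares: minimize (y - H x)^T W (y - H x)
# with W the inverse measurement covariance (the loss-function weights).
W = np.diag(1.0 / sigma**2)
x_hat = np.linalg.solve(H.T @ W @ H, H.T @ W @ y)
err = np.linalg.norm(x_hat - x_true)
```

Replacing W with the identity (unweighted fit) lets the noisy channels corrupt the estimate, which is the situation the weight-matrix selection in the paper is designed to avoid.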
Supercomputers and biological sequence comparison algorithms.
Core, N G; Edmiston, E W; Saltz, J H; Smith, R M
1989-12-01
Comparison of biological (DNA or protein) sequences provides insight into molecular structure, function, and homology and is increasingly important as the available databases become larger and more numerous. One method of increasing the speed of the calculations is to perform them in parallel. We present the results of initial investigations using two dynamic programming algorithms on the Intel iPSC hypercube and the Connection Machine as well as an inexpensive, heuristically-based algorithm on the Encore Multimax.
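Dynamic programming sequence comparison of the kind parallelized in such studies reduces, in its simplest global-alignment form, to the Needleman-Wunsch recurrence. A serial sketch with illustrative scoring parameters (the hypercube implementations distribute the anti-diagonals of this same table):

```python
# Minimal Needleman-Wunsch global alignment score (serial, for clarity).
def nw_score(a, b, match=1, mismatch=-1, gap=-2):
    rows, cols = len(a) + 1, len(b) + 1
    d = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        d[i][0] = i * gap                    # leading gaps in b
    for j in range(cols):
        d[0][j] = j * gap                    # leading gaps in a
    for i in range(1, rows):
        for j in range(1, cols):
            s = match if a[i - 1] == b[j - 1] else mismatch
            d[i][j] = max(d[i - 1][j - 1] + s,  # align a[i-1] with b[j-1]
                          d[i - 1][j] + gap,    # gap in b
                          d[i][j - 1] + gap)    # gap in a
    return d[-1][-1]

score_same = nw_score("GATTACA", "GATTACA")
score_diff = nw_score("GATTACA", "GCATGCU")
```

Each cell depends only on its left, upper, and upper-left neighbors, so cells on the same anti-diagonal are independent, which is what makes the algorithm parallelizable on machines like the iPSC hypercube.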
GPU Accelerated Event Detection Algorithm
2011-05-25
Smart grids require new algorithmic approaches as well as parallel formulations. One of the critical components is the prediction of changes and detection of anomalies within the power grid. The state-of-the-art algorithms are not suited to handle the demands of streaming data analysis: (i) there is a need for event detection algorithms that can scale with the size of the data; (ii) a need for algorithms that can not only handle the multi-dimensional nature of the data, but also model both spatial and temporal dependencies in the data, which, for the most part, are highly nonlinear; and (iii) a need for algorithms that can operate in an online fashion with streaming data. The GAEDA code is a new online anomaly detection technique that takes into account the spatial, temporal, and multi-dimensional aspects of the data set. The basic idea behind the proposed approach is (a) to convert a multi-dimensional sequence into a univariate time series that captures the changes between successive windows extracted from the original sequence using singular value decomposition (SVD), and then (b) to apply known anomaly detection techniques for univariate time series. A key challenge for the proposed approach is to make the algorithm scalable to huge datasets by adopting techniques from perturbation theory and incremental SVD analysis. We used recent advances in tensor decomposition techniques, which reduce computational complexity, to monitor the change between successive windows and detect anomalies in the same manner as described above. We therefore propose to develop parallel solutions on many-core systems such as GPUs, because these algorithms involve a large number of numerical operations and are highly data-parallelizable.
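Steps (a) and (b) of the windowed-SVD idea can be sketched serially: slide a window over a multivariate series, take the leading singular value of each window as a univariate summary, then flag windows whose summary deviates strongly from a robust baseline. Window size, threshold, and data below are illustrative choices, not GAEDA's:

```python
import numpy as np

rng = np.random.default_rng(3)
T, k, w = 300, 5, 20                   # samples, channels, window length
X = rng.standard_normal((T, k))        # synthetic multivariate stream
X[200:210] *= 8.0                      # injected anomaly

# (a) univariate summary: leading singular value of each sliding window
summary = np.array([np.linalg.svd(X[t:t + w], compute_uv=False)[0]
                    for t in range(T - w)])

# (b) univariate anomaly detection: robust median/MAD threshold
med = np.median(summary)
mad = np.median(np.abs(summary - med))
flags = np.where(summary > med + 6.0 * mad)[0]
anomaly_detected = any(180 <= t <= 210 for t in flags)
```

Recomputing a full SVD per window, as here, is the O(w k^2)-per-step cost that incremental SVD and perturbation-theory updates are meant to avoid at scale.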
Numerical algorithms for finite element computations on concurrent processors
NASA Technical Reports Server (NTRS)
Ortega, J. M.
1986-01-01
The work of several graduate students relating to the NASA grant is briefly summarized. One student has worked on a detailed analysis of the so-called ijk forms of Gaussian elimination and Cholesky factorization on concurrent processors. Another student has worked on the vectorization of the incomplete Cholesky conjugate gradient method on the CYBER 205. Two more students implemented various versions of Gaussian elimination and Cholesky factorization on the FLEX/32.
Numerical algorithms for finite element computations on arrays of microprocessors
NASA Technical Reports Server (NTRS)
Ortega, J. M.
1981-01-01
The development of a multicolored successive over relaxation (SOR) program for the finite element machine is discussed. The multicolored SOR method uses a generalization of the classical Red/Black grid point ordering for the SOR method. These multicolored orderings have the advantage of allowing the SOR method to be implemented as a Jacobi method, which is ideal for arrays of processors, but still enjoy the greater rate of convergence of the SOR method. The program solves a general second order self adjoint elliptic problem on a square region with Dirichlet boundary conditions, discretized by quadratic elements on triangular regions. For this general problem and discretization, six colors are necessary for the multicolored method to operate efficiently. The specific problem that was solved using the six color program was Poisson's equation; for Poisson's equation, three colors are necessary but six may be used. In general, the number of colors needed is a function of the differential equation, the region and boundary conditions, and the particular finite element used for the discretization.
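The classical Red/Black ordering that the multicolored method generalizes can be sketched on the 5-point Poisson stencil: red points depend only on black neighbors and vice versa, so each half-sweep updates all points of one color simultaneously, Jacobi-style, while retaining the SOR convergence rate. A minimal finite-difference sketch (the paper's setting is quadratic finite elements with six colors):

```python
import numpy as np

n = 32                                   # interior grid points per side
h = 1.0 / (n + 1)
f = np.ones((n, n))                      # -Laplace(u) = 1, u = 0 on boundary
u = np.zeros((n + 2, n + 2))             # padded with Dirichlet zeros
omega = 2.0 / (1.0 + np.sin(np.pi * h))  # optimal SOR factor, model problem

ii, jj = np.meshgrid(np.arange(1, n + 1), np.arange(1, n + 1), indexing="ij")
for sweep in range(200):
    for color in (0, 1):                 # red half-sweep, then black
        mask = (ii + jj) % 2 == color
        nbrs = u[ii + 1, jj] + u[ii - 1, jj] + u[ii, jj + 1] + u[ii, jj - 1]
        gs = 0.25 * (nbrs + h * h * f)   # Gauss-Seidel target value
        u[1:-1, 1:-1][mask] = ((1 - omega) * u[1:-1, 1:-1][mask]
                               + omega * gs[mask])

# Residual of the discrete Poisson equation (should be near zero).
res = (4 * u[1:-1, 1:-1] - u[2:, 1:-1] - u[:-2, 1:-1]
       - u[1:-1, 2:] - u[1:-1, :-2]) - h * h * f
res_norm = np.max(np.abs(res))
```

All points of one color are updated in a single vectorized statement, the property that maps each half-sweep onto an array of processors.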
Evaluating numerical ODE/DAE methods, algorithms and software
NASA Astrophysics Data System (ADS)
Soderlind, Gustaf; Wang, Lina
2006-01-01
Until recently, the testing of ODE/DAE software has been limited to simple comparisons and benchmarking. The process of developing software from a mathematically specified method is complex: it entails constructing control structures and objectives, selecting iterative methods and termination criteria, choosing norms and many more decisions. Most software constructors have taken a heuristic approach to these design choices, and as a consequence two different implementations of the same method may show significant differences in performance. Yet it is common to try to deduce from software comparisons that one method is better than another. Such conclusions are not warranted, however, unless the testing is carried out under true ceteris paribus conditions. Moreover, testing is an empirical science and as such requires a formal test protocol; without it conclusions are questionable, invalid or even false. We argue that ODE/DAE software can be constructed and analyzed by proven, "standard" scientific techniques instead of heuristics. The goals are computational stability, reproducibility, and improved software quality. We also focus on different error criteria and norms, and discuss modifications to DASPK and RADAU5. Finally, some basic principles of a test protocol are outlined and applied to testing these codes on a variety of problems.
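The ceteris paribus principle can be illustrated with a toy harness: two integrators run under an identical protocol (same problem, same step sizes, same error norm), so observed differences are attributable to the method alone. The problem and norm here are illustrative, not the DASPK/RADAU5 tests of the paper:

```python
import numpy as np

def euler_step(f, t, y, h):
    return y + h * f(t, y)

def rk4_step(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def global_error(step, f, y0, t_end, n, exact):
    """Fixed protocol: n equal steps, error measured in the same norm."""
    t, y = 0.0, y0
    h = t_end / n
    for _ in range(n):
        y = step(f, t, y, h)
        t += h
    return abs(y - exact)

f = lambda t, y: -y                       # y' = -y, y(0) = 1, y(1) = e^{-1}
errs = {name: [global_error(s, f, 1.0, 1.0, n, np.exp(-1.0))
               for n in (10, 20, 40)]
        for name, s in (("euler", euler_step), ("rk4", rk4_step))}

# Observed convergence orders from successive halvings of h.
order_euler = np.log2(errs["euler"][0] / errs["euler"][1])
order_rk4 = np.log2(errs["rk4"][0] / errs["rk4"][1])
```

Because everything except the stepping formula is held fixed, the recovered orders (about 1 and 4) are a property of the methods, which is exactly the attribution a formal test protocol is meant to guarantee.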
Extremal polynomials and methods of optimization of numerical algorithms
Lebedev, V I
2004-10-31
Chebyshev-Markov-Bernstein-Szegő polynomials C_n(x), extremal on [-1,1] with weight functions w(x) = (1+x)^α (1-x)^β / √(S_l(x)), where α, β = 0, 1/2 and S_l(x) = ∏_{k=1}^{m} (1 - c_k T_{l_k}(x)) > 0, are considered. A universal formula for their representation in trigonometric form is presented. Optimal distributions of the nodes of weighted interpolation and explicit quadrature formulae of Gauss, Markov, Lobatto, and Radau types are obtained for integrals with weight p(x) = w^2(x)(1-x^2)^{-1/2}. The parameters of optimal Chebyshev iterative methods, reducing the error optimally by comparison with the initial error defined in another norm, are determined. For each stage of the Fedorenko-Bakhvalov method, iteration parameters are determined which take account of the results of the previous calculations. Chebyshev filters with weight are constructed. Iterative methods for the solution of equations containing compact operators are studied.
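An elementary instance of such optimal node distributions is the unweighted case: interpolating the Runge function at Chebyshev nodes keeps the maximum error small, while equispaced nodes diverge. This sketch illustrates only that basic extremal property, not the weighted nodes of the paper:

```python
import numpy as np

runge = lambda x: 1.0 / (1.0 + 25.0 * x**2)
n = 14                                        # polynomial degree
x_fine = np.linspace(-1.0, 1.0, 2001)         # evaluation grid for max error

def interp_max_err(nodes):
    # Degree-n interpolation through n+1 nodes, max error on [-1, 1].
    coeffs = np.polynomial.polynomial.polyfit(nodes, runge(nodes), n)
    vals = np.polynomial.polynomial.polyval(x_fine, coeffs)
    return np.max(np.abs(vals - runge(x_fine)))

x_equi = np.linspace(-1.0, 1.0, n + 1)
# Chebyshev nodes: roots of T_{n+1}, clustered toward the endpoints.
x_cheb = np.cos((2 * np.arange(n + 1) + 1) * np.pi / (2 * (n + 1)))

err_equi = interp_max_err(x_equi)
err_cheb = interp_max_err(x_cheb)
```

The endpoint clustering of the Chebyshev nodes is the simplest case of the optimal distributions whose weighted generalizations the paper derives.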
Airplane numerical simulation for the rapid prototyping process
NASA Astrophysics Data System (ADS)
Roysdon, Paul F.
Airplane Numerical Simulation for the Rapid Prototyping Process is a comprehensive research investigation into the most up-to-date methods for airplane development and design. Uses of modern engineering software tools, like MATLAB and Excel, are presented with examples of batch and optimization algorithms which combine the computing power of MATLAB with robust aerodynamic tools like XFOIL and AVL. The resulting data are demonstrated in the development and use of a full nonlinear six-degrees-of-freedom simulator. The applications for this numerical tool-box vary from unmanned aerial vehicles to first-order analysis of manned aircraft. A blended-wing-body airplane is used for the analysis to demonstrate the flexibility of the code, from classic wing-and-tail configurations to less common configurations like the blended wing body. This configuration has been shown to have superior aerodynamic performance -- in contrast to classic wing-and-tube-fuselage counterparts -- and reduced sensitivity to aerodynamic flutter as well as potential for increased engine noise abatement. Of course, without a classic tail elevator to damp the nose-up pitching moment, and a vertical tail rudder to damp the yaw and possible rolling aerodynamics, the challenges in lateral roll and yaw stability, as well as pitching moment, are not insignificant. This thesis work applies the tools necessary to perform airplane development and optimization on a rapid basis, demonstrating the strength of this tool through examples and comparison of the results to similar airplane performance characteristics published in the literature.
User's guide for the frequency domain algorithms in the LIFE2 fatigue analysis code
Sutherland, H.J.; Linker, R.L.
1993-10-01
The LIFE2 computer code is a fatigue/fracture analysis code that is specialized to the analysis of wind turbine components. The numerical formulation of the code uses a series of cycle count matrices to describe the cyclic stress states imposed upon the turbine. However, many structural analysis techniques yield frequency-domain stress spectra and a large body of experimental loads (stress) data is reported in the frequency domain. To permit the analysis of this class of data, a Fourier analysis is used to transform a frequency-domain spectrum to an equivalent time series suitable for rainflow counting by other modules in the code. This paper describes the algorithms incorporated into the code and their numerical implementation. Example problems are used to illustrate typical inputs and outputs.
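The core transformation described above, synthesizing an equivalent time series from a frequency-domain stress spectrum so it can be rainflow counted, can be sketched with a random-phase inverse FFT. This is a generic illustration under an assumed PSD normalization, not the LIFE2 implementation; the function name is hypothetical:

```python
import numpy as np

def psd_to_time_series(psd, df, seed=0):
    """Synthesize one realization of a time series whose one-sided
    power spectral density matches `psd` (values at frequencies k*df).
    Random phases are drawn uniformly; the inverse real FFT then
    yields a real-valued series suitable for rainflow counting."""
    rng = np.random.default_rng(seed)
    amp = np.sqrt(psd * df)                      # spectral-line amplitude per bin
    phases = rng.uniform(0.0, 2.0 * np.pi, len(psd))
    spectrum = amp * np.exp(1j * phases)
    # Inverse real FFT of n spectral lines gives 2*(n-1) real samples
    # (normalization conventions vary between codes).
    return np.fft.irfft(spectrum)

# Example: a flat (white) spectrum with 129 lines -> 256 samples.
series = psd_to_time_series(np.ones(129), df=0.5)
```

Because the phases are random, each seed yields a different but statistically equivalent time series of the same spectrum.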
A nonlinear model reference adaptive inverse control algorithm with pre-compensator
NASA Astrophysics Data System (ADS)
Xiao, Bin; Yang, Tie-Jun; Liu, Zhi-Gang
2005-12-01
In this paper, reduced-order modeling (ROM) technology and its corresponding linear theory are extended from linear dynamic systems to nonlinear ones, and H ∞ control theory is employed in the frequency domain to design a pre-compensator for the nonlinear system. The adaptive model inverse control (AMIC) theory for nonlinear systems is improved as well, yielding the model reference adaptive inverse control with pre-compensator (PCMRAIC). The aim of the algorithm is to construct an integrated control strategy. As a practical example of the application, a numerical simulation was carried out in MATLAB, and the numerical results are given. The proposed strategy realizes linearizing control of the nonlinear dynamic system and performs well in handling the nonlinearities.
NASA Astrophysics Data System (ADS)
Tichy, Wolfgang; McDonald, Jonathan R.; Miller, Warner A.
2015-01-01
We present a new numerical method for the isometric embedding of 2-geometries specified by their 2-metrics in three-dimensional Euclidean space. Our approach is to directly solve the fundamental embedding equation supplemented by six conditions that fix translations and rotations of the embedded surface. This set of equations is discretized by means of a pseudospectral collocation-point method. The resulting nonlinear system of equations is then solved by a Newton-Raphson scheme. We explain our numerical algorithm in detail. By studying several examples we show that our method converges provided we start the Newton-Raphson scheme from a suitable initial guess. Our novel method is very efficient for smooth 2-metrics.
Atmospheric channel for bistatic optical communication: simulation algorithms
NASA Astrophysics Data System (ADS)
Belov, V. V.; Tarasenkov, M. V.
2015-11-01
Three algorithms for statistical simulation of the impulse response (IR) of the atmospheric optical communication channel are considered: the local-estimate algorithm, the double-local-estimate algorithm, and an algorithm we propose. Using the example of a homogeneous molecular atmosphere, it is demonstrated that the double-local-estimate algorithm and the proposed algorithm are more efficient than the local-estimate algorithm: for small optical path lengths the proposed algorithm is more efficient, and for large optical path lengths the double-local-estimate algorithm is more efficient. Using the proposed algorithm, the communication quality is estimated for a particular case of the atmospheric channel under conditions of intermediate turbidity. The communication quality is characterized by the maximum IR, the time of maximum IR, the integral IR, and the bandwidth of the communication channel. Calculations of these criteria demonstrate that communication is most efficient when the point of intersection of the directions toward the source and the receiver is closest to the source point.
Numerical integration of asymptotic solutions of ordinary differential equations
NASA Technical Reports Server (NTRS)
Thurston, Gaylen A.
1989-01-01
Classical asymptotic analysis of ordinary differential equations derives approximate solutions that are numerically stable. However, the analysis also leads to tedious expansions in powers of the relevant parameter for a particular problem. The expansions are replaced with integrals that can be evaluated by numerical integration. The resulting numerical solutions retain the linear independence that is the main advantage of asymptotic solutions. Examples, including the Falkner-Skan equation from laminar boundary layer theory, illustrate the method of asymptotic analysis with numerical integration.
NASA Astrophysics Data System (ADS)
Taitano, W. T.; Chacón, L.; Simakov, A. N.; Molvig, K.
2015-09-01
In this study, we demonstrate a fully implicit algorithm for the multi-species, multidimensional Rosenbluth-Fokker-Planck equation which is exactly mass-, momentum-, and energy-conserving, and which preserves positivity. Unlike most earlier studies, we base our development on the Rosenbluth (rather than Landau) form of the Fokker-Planck collision operator, which reduces complexity while allowing for an optimal fully implicit treatment. Our discrete conservation strategy employs nonlinear constraints that force the continuum symmetries of the collision operator to be satisfied upon discretization. We converge the resulting nonlinear system iteratively using Jacobian-free Newton-Krylov methods, effectively preconditioned with multigrid methods for efficiency. Single- and multi-species numerical examples demonstrate the advertised accuracy properties of the scheme, and the superior algorithmic performance of our approach. In particular, the discretization approach is numerically shown to be second-order accurate in time and velocity space and to exhibit manifestly positive entropy production. That is, H-theorem behavior is indicated for all the examples we have tested. The solution approach is demonstrated to scale optimally with respect to grid refinement (with CPU time growing linearly with the number of mesh points) and time step (showing very weak dependence of CPU time on time-step size). As a result, the proposed algorithm delivers several orders-of-magnitude speedup vs. explicit algorithms.
Active learning in the presence of unlabelable examples
NASA Technical Reports Server (NTRS)
Mazzoni, Dominic; Wagstaff, Kiri
2004-01-01
We propose a new active learning framework where the expert labeler is allowed to decline to label any example. This may be necessary because the true label is unknown or because the example belongs to a class that is not part of the real training problem. We show that within this framework, popular active learning algorithms (such as Simple) may perform worse than random selection because they make so many queries to the unlabelable class. We present a method by which any active learning algorithm can be modified to avoid unlabelable examples by training a second classifier to distinguish between the labelable and unlabelable classes. We also demonstrate the effectiveness of the method on two benchmark data sets and a real-world problem.
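The key idea, steering queries away from the unlabelable class, can be sketched as follows. To keep the sketch self-contained, the second (labelability) classifier is replaced by a crude distance rule around examples the expert already declined; `next_query` and the radius heuristic are illustrative assumptions, not the paper's method:

```python
import numpy as np

def next_query(X_pool, uncertainty, X_unlab, radius=1.0):
    """Uncertainty sampling that avoids a predicted-unlabelable region.

    uncertainty : per-pool-point informativeness from the main learner
                  (higher = more informative query).
    X_unlab     : examples the expert already declined to label.
    Pool points within `radius` of a known-unlabelable example are
    masked out, a crude stand-in for a trained labelability classifier.
    """
    score = uncertainty.astype(float)
    if len(X_unlab):
        # Distance from each pool point to the nearest unlabelable example.
        d = np.min(np.linalg.norm(
            X_pool[:, None, :] - X_unlab[None, :, :], axis=2), axis=1)
        score[d < radius] = -np.inf   # never query near known-unlabelable points
    return int(np.argmax(score))

X_pool = np.array([[0.0, 0.0], [5.0, 5.0], [0.2, 0.1]])
unc = np.array([0.9, 0.5, 0.8])
X_unlab = np.array([[0.0, 0.0]])
query = next_query(X_pool, unc, X_unlab)  # -> 1 (indices 0 and 2 are masked)
```

A plain uncertainty sampler would have picked index 0 here; the mask redirects the query budget to points the expert can actually label.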
Query by image example: The CANDID approach
Kelly, P.M.; Cannon, M.; Hush, D.R.
1995-02-01
CANDID (Comparison Algorithm for Navigating Digital Image Databases) was developed to enable content-based retrieval of digital imagery from large databases using a query-by-example methodology. A user provides an example image to the system, and images in the database that are similar to that example are retrieved. The development of CANDID was inspired by the N-gram approach to document fingerprinting, where a "global signature" is computed for every document in a database and these signatures are compared to one another to determine the similarity between any two documents. CANDID computes a global signature for every image in a database, where the signature is derived from various image features such as localized texture, shape, or color information. A distance between probability density functions of feature vectors is then used to compare signatures. In this paper, the authors present CANDID and highlight two results from their current research: subtracting a "background" signature from every signature in a database in an attempt to improve system performance when using inner-product similarity measures, and visualizing the contribution of individual pixels in the matching process. These ideas are applicable to any histogram-based comparison technique.
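The signature-and-similarity pipeline can be sketched with a plain gray-level histogram standing in for CANDID's texture/shape/color feature densities. The names `signature` and `similarity`, and the optional background subtraction, are hedged illustrations of the idea, not the actual CANDID code:

```python
import numpy as np

def signature(image, bins=16):
    """Global signature: a normalized gray-level histogram (a stand-in
    for the feature-vector densities CANDID actually uses)."""
    h, _ = np.histogram(image, bins=bins, range=(0.0, 1.0))
    return h / max(h.sum(), 1)

def similarity(sig_a, sig_b, background=None):
    """Normalized inner-product similarity, optionally after subtracting
    a 'background' signature shared by every image in the database."""
    if background is not None:
        sig_a = sig_a - background
        sig_b = sig_b - background
    denom = np.linalg.norm(sig_a) * np.linalg.norm(sig_b)
    return float(sig_a @ sig_b / denom) if denom else 0.0

rng = np.random.default_rng(1)
img_a = rng.random((32, 32))
img_b = rng.random((32, 32))
s = similarity(signature(img_a), signature(img_b))
```

An image compared with itself scores 1.0, and subtracting a database-wide background signature emphasizes whatever makes each image distinctive under the inner-product measure.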
Two New PRP Conjugate Gradient Algorithms for Minimization Optimization Models
Yuan, Gonglin; Duan, Xiabin; Liu, Wenjie; Wang, Xiaoliang; Cui, Zengru; Sheng, Zhou
2015-01-01
Two new PRP conjugate gradient algorithms are proposed in this paper based on two modified PRP conjugate gradient methods: the first algorithm is proposed for solving unconstrained optimization problems, and the second algorithm is proposed for solving nonlinear equations. The first method incorporates two kinds of information: function values and gradient values. Both methods possess the following good properties: (1) β_k ≥ 0; (2) the search direction has the trust-region property without the use of any line search method; (3) the search direction has the sufficient descent property without the use of any line search method. Under some suitable conditions, we establish the global convergence of the two algorithms. We conduct numerical experiments to evaluate our algorithms. The numerical results indicate that the first algorithm is effective and competitive for solving unconstrained optimization problems and that the second algorithm is effective for solving large-scale nonlinear equations. PMID:26502409
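Property (1), β_k ≥ 0, corresponds to the well-known PRP+ modification of the classical PRP update, which clips the PRP coefficient at zero. A minimal sketch of that idea follows; it uses a generic Armijo backtracking line search rather than the paper's line-search-free constructions, so it is not the authors' method:

```python
import numpy as np

def prp_plus(f, grad, x0, tol=1e-6, max_iter=200):
    """Nonlinear conjugate gradient with the PRP+ update: the classical
    PRP beta is clipped at zero so that beta_k >= 0. A simple Armijo
    backtracking search and a steepest-descent restart keep the sketch
    robust; this is a generic illustration only."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        if g @ d >= 0.0:               # safeguard: restart if not a descent direction
            d = -g
        t = 1.0
        while f(x + t * d) > f(x) + 1e-4 * t * (g @ d) and t > 1e-12:
            t *= 0.5                   # Armijo backtracking
        x_new = x + t * d
        g_new = grad(x_new)
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))   # PRP+, beta_k >= 0
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# Minimize a separable quadratic with minimum at (1, 2).
f = lambda x: (x[0] - 1.0) ** 2 + 2.0 * (x[1] - 2.0) ** 2
grad_f = lambda x: np.array([2.0 * (x[0] - 1.0), 4.0 * (x[1] - 2.0)])
x_star = prp_plus(f, grad_f, np.zeros(2))
```

The clipping `max(0.0, ...)` is exactly what guarantees β_k ≥ 0; without it, plain PRP can generate ascent directions on non-quadratic problems.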
Algorithms for computing the multivariable stability margin
NASA Technical Reports Server (NTRS)
Tekawy, Jonathan A.; Safonov, Michael G.; Chiang, Richard Y.
1989-01-01
Stability margin for multiloop flight control systems has become a critical issue, especially in highly maneuverable aircraft designs where there are inherent strong cross-couplings between the various feedback control loops. To cope with this issue, we have developed computer algorithms, based on non-differentiable optimization theory, for computing the Multivariable Stability Margin (MSM). The MSM of a dynamical system is the size of the smallest structured perturbation in component dynamics that will destabilize the system. These algorithms have been coded and appear to be reliable. As illustrated by examples, they provide the basis for evaluating the robustness and performance of flight control systems.
Asynchronous Event-Driven Particle Algorithms
Donev, A
2007-08-30
We present, in a unifying way, the main components of three asynchronous event-driven algorithms for simulating physical systems of interacting particles. The first example, hard-particle molecular dynamics (MD), is well-known. We also present a recently-developed diffusion kinetic Monte Carlo (DKMC) algorithm, as well as a novel stochastic molecular-dynamics algorithm that builds on the Direct Simulation Monte Carlo (DSMC). We explain how to effectively combine event-driven and classical time-driven handling, and discuss some promises and challenges for event-driven simulation of realistic physical systems.
Algorithms For Integrating Nonlinear Differential Equations
NASA Technical Reports Server (NTRS)
Freed, A. D.; Walker, K. P.
1994-01-01
Improved algorithms developed for use in numerical integration of systems of nonhomogeneous, nonlinear, first-order, ordinary differential equations. In comparison with prior integration algorithms, these algorithms offer greater stability and accuracy. Several are asymptotically correct, thereby enabling retention of stability and accuracy when large increments of the independent variable are used. Attainable accuracies demonstrated by applying them to systems of nonlinear, first-order differential equations that arise in the study of viscoplastic behavior, the spread of the acquired immune-deficiency syndrome (AIDS) virus, and predator/prey populations.
Evolutionary development of path planning algorithms
Hage, M
1998-09-01
This paper describes the use of evolutionary software techniques for developing both genetic algorithms and genetic programs. Genetic algorithms are evolved to solve a specific problem within a fixed and known environment. While genetic algorithms can evolve to become very optimized for their task, they often are very specialized and perform poorly if the environment changes. Genetic programs are evolved through simultaneous training in a variety of environments to develop a more general controller behavior that operates in unknown environments. Performance of genetic programs is less optimal than that of an algorithm specially bred for an individual environment, but the controller performs acceptably under a wider variety of circumstances. The example problem addressed in this paper is evolutionary development of algorithms and programs for path planning in nuclear environments, such as Chernobyl.
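The "fixed and known environment" genetic-algorithm case can be illustrated with a minimal generational GA (tournament selection, one-point crossover, bit-flip mutation) evolved against a single fixed fitness function. This is a textbook sketch on the OneMax problem, not the path-planning system described above:

```python
import random

def evolve(fitness, genome_len, pop_size=30, gens=50, p_mut=0.05, seed=0):
    """Minimal generational genetic algorithm over bit-string genomes:
    size-2 tournament selection, one-point crossover, bit-flip mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(gens):
        def tourney():
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = tourney(), tourney()
            cut = rng.randrange(1, genome_len)       # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [g ^ (rng.random() < p_mut) for g in child]  # mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# OneMax as a stand-in fitness: maximize the number of 1-bits.
best = evolve(sum, 16)
```

As the abstract notes, such a population becomes highly tuned to its single fitness landscape; swap in a different `fitness` and the evolved solutions transfer poorly.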
MM Algorithms for Geometric and Signomial Programming.
Lange, Kenneth; Zhou, Hua
2014-02-01
This paper derives new algorithms for signomial programming, a generalization of geometric programming. The algorithms are based on a generic principle for optimization called the MM algorithm. In this setting, one can apply the geometric-arithmetic mean inequality and a supporting hyperplane inequality to create a surrogate function with parameters separated. Thus, unconstrained signomial programming reduces to a sequence of one-dimensional minimization problems. Simple examples demonstrate that the MM algorithm derived can converge to a boundary point or to one point of a continuum of minimum points. Conditions under which the minimum point is unique or occurs in the interior of parameter space are proved for geometric programming. Convergence to an interior point occurs at a linear rate. Finally, the MM framework easily accommodates equality and inequality constraints of signomial type. For the most important special case, constrained quadratic programming, the MM algorithm involves very simple updates.
Numerical model representation and validation strategies
Dolin, R.M.; Hefele, J.
1997-10-01
This paper describes model representation and validation strategies for use in numerical tools that define models in terms of topology, geometry, or topography. Examples of such tools include Computer-Assisted Engineering (CAE), Computer-Assisted Manufacturing (CAM), Finite Element Analysis (FEA), and Virtual Environment Simulation (VES) tools. These tools represent either physical objects or conceptual ideas using numerical models for the purpose of posing a question, performing a task, or generating information. Dependence on these numerical representations requires that models be precise, consistent across different applications, and verifiable. This paper describes a strategy for ensuring precise, consistent, and verifiable numerical model representations in a topographic framework. The main assertion put forth is that topographic model descriptions are more appropriate for numerical applications than topological or geometrical descriptions. A topographic model verification and validation methodology is presented.
Distilling the Verification Process for Prognostics Algorithms
NASA Technical Reports Server (NTRS)
Roychoudhury, Indranil; Saxena, Abhinav; Celaya, Jose R.; Goebel, Kai
2013-01-01
The goal of prognostics and health management (PHM) systems is to ensure system safety, and reduce downtime and maintenance costs. It is important that a PHM system is verified and validated before it can be successfully deployed. Prognostics algorithms are integral parts of PHM systems. This paper investigates a systematic process of verification of such prognostics algorithms. To this end, first, this paper distinguishes between technology maturation and product development. Then, the paper describes the verification process for a prognostics algorithm as it moves up to higher maturity levels. This process is shown to be an iterative process where verification activities are interleaved with validation activities at each maturation level. In this work, we adopt the concept of technology readiness levels (TRLs) to represent the different maturity levels of a prognostics algorithm. It is shown that at each TRL, the verification of a prognostics algorithm depends on verifying the different components of the algorithm according to the requirements laid out by the PHM system that adopts this prognostics algorithm. Finally, using simplified examples, the systematic process for verifying a prognostics algorithm is demonstrated as the prognostics algorithm moves up TRLs.
Numerical simulation of steady supersonic flow. [spatial marching
NASA Technical Reports Server (NTRS)
Schiff, L. B.; Steger, J. L.
1981-01-01
A noniterative, implicit, space-marching, finite-difference algorithm was developed for the steady thin-layer Navier-Stokes equations in conservation-law form. The numerical algorithm is applicable to steady supersonic viscous flow over bodies of arbitrary shape. In addition, the same code can be used to compute supersonic inviscid flow or three-dimensional boundary layers. Computed results from two-dimensional and three-dimensional versions of the numerical algorithm are in good agreement with those obtained from more costly time-marching techniques.
NASA Astrophysics Data System (ADS)
Qu, Shan; Zhou, Hui; Liu, Renwu; Chen, Yangkang; Zu, Shaohuan; Yu, Sa; Yuan, Jiang; Yang, Yahui
2016-08-01
In this paper, an improved algorithm is proposed to separate blended seismic data. We formulate the deblending problem as a regularization problem in both the common-receiver domain and the frequency domain. It is suitable for different kinds of coding methods, such as the random time delay discussed in this paper. Two basic approximation frameworks, the iterative shrinkage-thresholding algorithm (ISTA) and the fast iterative shrinkage-thresholding algorithm (FISTA), are compared. We also derive the Lipschitz constant used in both frameworks. In order to achieve faster convergence and higher accuracy, we propose to use a firm-thresholding function as the thresholding function in ISTA and FISTA. Two synthetic blended examples demonstrate that all four variants (ISTA with soft and firm thresholding, FISTA with soft and firm thresholding) are effective, and that FISTA with the firm-thresholding operator exhibits the most robust behavior. Finally, we show one numerically blended field-data example processed by FISTA with the firm-thresholding function.
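The firm-thresholding operator the authors favor interpolates between soft thresholding and the identity, leaving large coefficients unbiased. Below is a sketch of that operator and of an ISTA iteration that uses it; a generic least-squares data term stands in for the deblending regularization problem, so the function names and setup are illustrative assumptions:

```python
import numpy as np

def firm_threshold(x, lam, mu):
    """Firm thresholding: zero below lam, identity above mu, linear in
    between (requires mu > lam). As mu -> infinity it reduces to soft
    thresholding, but unlike the soft operator it does not shrink
    large coefficients."""
    ax = np.abs(x)
    return np.where(ax <= lam, 0.0,
           np.where(ax >= mu, x,
                    np.sign(x) * mu * (ax - lam) / (mu - lam)))

def ista_firm(A, b, lam, mu, n_iter=200):
    """ISTA with the soft operator swapped for firm thresholding.
    The gradient step size is 1/L with L = 2*||A||_2^2, the Lipschitz
    constant of the gradient of ||Ax - b||^2."""
    L = 2.0 * np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = firm_threshold(x - 2.0 * A.T @ (A @ x - b) / L, lam / L, mu / L)
    return x

vals = firm_threshold(np.array([0.5, -2.0, 5.0]), lam=1.0, mu=4.0)
x_hat = ista_firm(np.eye(3), np.array([3.0, 0.05, -2.0]), lam=0.5, mu=1.0)
```

In the toy `ista_firm` run, the tiny coefficient 0.05 is zeroed while 3.0 and -2.0 pass through unshrunk, which is exactly the bias reduction that motivates the firm operator.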
Wire Detection Algorithms for Navigation
NASA Technical Reports Server (NTRS)
Kasturi, Rangachar; Camps, Octavia I.
2002-01-01
In this research we addressed the problem of obstacle detection for low altitude rotorcraft flight. In particular, the problem of detecting thin wires in the presence of image clutter and noise was studied. Wires present a serious hazard to rotorcrafts. Since they are very thin, their detection early enough so that the pilot has enough time to take evasive action is difficult, as their images can be less than one or two pixels wide. Two approaches were explored for this purpose. The first approach involved a technique for sub-pixel edge detection and subsequent post processing, in order to reduce the false alarms. After reviewing the line detection literature, an algorithm for sub-pixel edge detection proposed by Steger was identified as having good potential to solve the considered task. The algorithm was tested using a set of images synthetically generated by combining real outdoor images with computer generated wire images. The performance of the algorithm was evaluated both, at the pixel and the wire levels. It was observed that the algorithm performs well, provided that the wires are not too thin (or distant) and that some post processing is performed to remove false alarms due to clutter. The second approach involved the use of an example-based learning scheme namely, Support Vector Machines. The purpose of this approach was to explore the feasibility of an example-based learning based approach for the task of detecting wires from their images. Support Vector Machines (SVMs) have emerged as a promising pattern classification tool and have been used in various applications. It was found that this approach is not suitable for very thin wires and of course, not suitable at all for sub-pixel thick wires. High dimensionality of the data as such does not present a major problem for SVMs. However it is desirable to have a large number of training examples especially for high dimensional data. The main difficulty in using SVMs (or any other example-based learning
Numerical Boundary Condition Procedures
NASA Technical Reports Server (NTRS)
1981-01-01
Topics include numerical procedures for treating inflow and outflow boundaries, steady and unsteady discontinuous surfaces, far field boundaries, and multiblock grids. In addition, the effects of numerical boundary approximations on stability, accuracy, and convergence rate of the numerical solution are discussed.
NASA Astrophysics Data System (ADS)
Liu, Jianzhou; Zhang, Juan
2011-08-01
In this article, applying the properties of M-matrices and non-negative matrices and utilising eigenvalue inequalities for matrix sums and products, we firstly develop new upper and lower matrix bounds on the solution of the discrete coupled algebraic Riccati equation (DCARE). Secondly, we discuss the existence and uniqueness condition for the solution of the DCARE using the developed upper and lower matrix bounds and a fixed point theorem. Thirdly, a new fixed-point iterative algorithm for the solution of the DCARE is given. Finally, corresponding numerical examples are given to illustrate the effectiveness of the developed results.
Wu, Yu-Shu; Forsyth, Peter A.
2006-04-13
Numerical issues with modeling transport of chemicals or solute in realistic large-scale subsurface systems have been a serious concern, even with the continual progress made in both simulation algorithms and computer hardware in the past few decades. The problem remains and becomes even more difficult when dealing with chemical transport in a multiphase flow system using coarse, multidimensional regular or irregular grids, because of the known effects of numerical dispersion associated with moving plume fronts. We have investigated several total-variation-diminishing (TVD) or flux-limiter schemes by implementing and testing them in the T2R3D code, one of the TOUGH2 family of codes. The objectives of this paper are (1) to investigate the possibility of applying these TVD schemes, using multi-dimensional irregular unstructured grids, and (2) to help select more accurate spatial averaging methods for simulating chemical transport given a numerical grid or spatial discretization. We present an application example to show that such TVD schemes are able to effectively reduce numerical dispersion.
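A one-dimensional minmod-limited advection step illustrates the TVD idea behind the schemes implemented in T2R3D: the limiter suppresses the spurious over- and undershoots at sharp plume fronts that an unlimited second-order scheme would create. This is a toy periodic sketch, not the multidimensional unstructured-grid implementation:

```python
import numpy as np

def minmod(a, b):
    """Minmod limiter: the smaller-magnitude slope when signs agree,
    zero otherwise; the classic TVD slope choice."""
    return np.where(a * b > 0.0,
                    np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def advect_tvd(u, c):
    """One step of 1-D linear advection (periodic, velocity > 0) with a
    minmod-limited second-order upwind flux; c = v*dt/dx is the Courant
    number, 0 < c < 1."""
    du_m = u - np.roll(u, 1)            # backward differences
    du_p = np.roll(u, -1) - u           # forward differences
    slope = minmod(du_m, du_p)
    # Face value reconstructed from the upwind cell plus limited slope.
    u_face = u + 0.5 * (1.0 - c) * slope
    flux = c * u_face
    return u + np.roll(flux, 1) - flux  # conservative update

u1 = advect_tvd(np.array([0.0, 0.0, 1.0, 1.0, 0.0, 0.0]), 0.5)
# -> [0.0, 0.0, 0.5, 1.0, 0.5, 0.0]: the front moves without new extrema
```

The square pulse stays within its original bounds [0, 1] after the step, which is the total-variation-diminishing behavior that tames numerical dispersion at moving fronts.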
Dynamics of student modeling: A theory, algorithms, and application to quantum mechanics
NASA Astrophysics Data System (ADS)
Bao, Lei
A good understanding of how students understand physics is of great importance for developing and delivering effective instruction. This research is an attempt to develop a coherent theoretical and mathematical framework to model the student learning of physics. The theoretical foundation is based on useful ideas from theories in cognitive science, education, and physics education. The emphasis of this research is on the development of a mathematical representation to model the important mental elements and the dynamics of these elements, and on numerical algorithms that allow quantitative evaluations of conceptual learning in physics. In part I, a model-based theoretical framework is proposed. Based on the theory, a mathematical representation and a set of data analysis algorithms are developed. This new method is called Model Analysis, which can be used to obtain quantitative evaluations of student models with data from multiple-choice questions. Two specific algorithms are discussed in detail. The first algorithm is the concentration factor. It measures how student responses on multiple-choice questions are distributed. A significant concentration on certain choices of the questions often implies common student models corresponding to those choices. The second algorithm analyzes student responses to form student model vectors and a student model density matrix. By decomposing the density matrix, we can obtain quantitative evaluations of specific models used by students. Application examples with data from FCI, FMCE, and Wave Test are discussed along with special treatments of the data to deal with the unique features of the different tests. Implications for test design techniques are also discussed with the results from the examples. Based on the theory and algorithms developed in part I, research is conducted to investigate student understandings of quantum mechanics. Common student models on classical prerequisites and important quantum concepts are
Library of Continuation Algorithms
2005-03-01
LOCA (Library of Continuation Algorithms) is scientific software written in C++ that provides advanced analysis tools for nonlinear systems. In particular, it provides parameter continuation algorithms, bifurcation tracking algorithms, and drivers for linear stability analysis. The algorithms are aimed at large-scale applications that use Newton's method for their nonlinear solve.
High order hybrid numerical simulations of two dimensional detonation waves
NASA Technical Reports Server (NTRS)
Cai, Wei
1993-01-01
In order to study multi-dimensional unstable detonation waves, a high order numerical scheme suitable for calculating the detailed transverse wave structures of multidimensional detonation waves was developed. The numerical algorithm uses a multi-domain approach so different numerical techniques can be applied for different components of detonation waves. The detonation waves are assumed to undergo an irreversible, unimolecular reaction A yields B. Several cases of unstable two dimensional detonation waves are simulated and detailed transverse wave interactions are documented. The numerical results show the importance of resolving the detonation front without excessive numerical viscosity in order to obtain the correct cellular patterns.
Algorithm for Compressing Time-Series Data
NASA Technical Reports Server (NTRS)
Hawkins, S. Edward, III; Darlington, Edward Hugo
2012-01-01
An algorithm based on Chebyshev polynomials effects lossy compression of time-series data or other one-dimensional data streams (e.g., spectral data) that are arranged in blocks for sequential transmission. The algorithm was developed for use in transmitting data from spacecraft scientific instruments to Earth stations. In spite of its lossy nature, the algorithm preserves the information needed for scientific analysis. The algorithm is computationally simple, yet compresses data streams by factors much greater than two. The algorithm is not restricted to spacecraft or scientific uses: it is applicable to time-series data in general. The algorithm can also be applied to general multidimensional data that have been converted to time-series data, a typical example being image data acquired by raster scanning. However, unlike most prior image-data-compression algorithms, this algorithm neither depends on nor exploits the two-dimensional spatial correlations that are generally present in images. In order to understand the essence of this compression algorithm, it is necessary to understand that the net effect of this algorithm and the associated decompression algorithm is to approximate the original stream of data as a sequence of finite series of Chebyshev polynomials. For the purpose of this algorithm, a block of data or interval of time for which a Chebyshev polynomial series is fitted to the original data is denoted a fitting interval. Chebyshev approximation has two properties that make it particularly effective for compressing serial data streams with minimal loss of scientific information: The errors associated with a Chebyshev approximation are nearly uniformly distributed over the fitting interval (this is known in the art as the "equal error property"); and the maximum deviations of the fitted Chebyshev polynomial from the original data have the smallest possible values (this is known in the art as the "min-max property").
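The compress/decompress cycle, fitting a Chebyshev series per fitting interval and keeping only the coefficients, can be sketched with NumPy's Chebyshev utilities. Note this sketch uses a least-squares fit rather than the min-max ("equal error property") fit the abstract describes, so it approximates the idea rather than reproducing the flight algorithm:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def compress_block(block, degree):
    """Fit a Chebyshev series to one block (fitting interval) of a time
    series; the degree+1 coefficients are the compressed representation."""
    x = np.linspace(-1.0, 1.0, len(block))
    return C.chebfit(x, block, degree)

def decompress_block(coeffs, n):
    """Reconstruct n samples from the stored Chebyshev coefficients."""
    x = np.linspace(-1.0, 1.0, n)
    return C.chebval(x, coeffs)

# 256 samples compressed to 9 coefficients (~28x fewer numbers).
t = np.linspace(0.0, 1.0, 256)
data = np.sin(2.0 * np.pi * t) + 0.1 * t
coeffs = compress_block(data, degree=8)
recon = decompress_block(coeffs, len(data))
err = np.max(np.abs(recon - data))
```

For smooth blocks the coefficients decay rapidly, so the reconstruction error stays small even at compression factors well above two, which is the behavior the abstract highlights.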
Some Examples of Trapped Surfaces
NASA Astrophysics Data System (ADS)
Bengtsson, Ingemar
2013-03-01
We present some simple pen and paper examples of trapped surfaces in order to help in visualising this key concept of the theory of gravitational collapse. We collect these examples from time-symmetric initial data, 2+1 dimensions, collapsing null shells, and the Vaidya solution.
Rent Seeking: A Textbook Example
ERIC Educational Resources Information Center
Pecorino, Paul
2007-01-01
The author argues that the college textbook market provides a clear example of monopoly seeking as described by Tullock (1967, 1980). This behavior is also known as rent seeking. Because this market is important to students, this example of rent seeking will be of particular interest to them. (Contains 24 notes.)
Constructing Programs from Example Computations.
ERIC Educational Resources Information Center
Bierman, A. W.; Krishnaswamy, R.
This paper describes the construction and implementation of an autoprogramming system. An autoprogrammer is an interactive computer programming system which automatically constructs computer programs from example computations executed by the user. The example calculations are done in a scratch pad fashion at a computer display, and the system…
Orthogonal least squares learning algorithm for radial basis function networks.
Chen, S; Cowan, C N; Grant, P M
1991-01-01
The radial basis function network offers a viable alternative to the two-layer neural network in many applications of signal processing. A common learning algorithm for radial basis function networks is based on first choosing randomly some data points as radial basis function centers and then using singular-value decomposition to solve for the weights of the network. Such a procedure has several drawbacks, and, in particular, an arbitrary selection of centers is clearly unsatisfactory. The authors propose an alternative learning procedure based on the orthogonal least-squares method. The procedure chooses radial basis function centers one by one in a rational way until an adequate network has been constructed. In the algorithm, each selected center maximizes the increment to the explained variance or energy of the desired output and does not suffer numerical ill-conditioning problems. The orthogonal least-squares learning strategy provides a simple and efficient means for fitting radial basis function networks. This is illustrated using examples taken from two different signal processing applications.
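The center-selection loop can be sketched as greedy forward selection with explicit Gram-Schmidt orthogonalization: at each step, the candidate center whose orthogonalized basis column explains the most remaining output energy is chosen. This is a compact illustration of the orthogonal least-squares idea, not the authors' exact formulation:

```python
import numpy as np

def ols_select_centers(X, y, width, n_centers):
    """Greedy orthogonal least-squares selection of Gaussian RBF centers.
    Candidate centers are the data points themselves; each step picks the
    center whose orthogonalized column maximizes the increment to the
    explained energy of the desired output y."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    P = np.exp(-d2 / (2.0 * width ** 2))   # candidate design matrix
    selected, Q = [], []
    r = y.astype(float).copy()             # unexplained part of the output
    for _ in range(n_centers):
        best, best_gain = None, -1.0
        for j in range(P.shape[1]):
            if j in selected:
                continue
            w = P[:, j].copy()
            for q in Q:                    # orthogonalize vs chosen columns
                w -= (q @ w) * q
            nrm = np.linalg.norm(w)
            if nrm < 1e-12:
                continue
            w /= nrm
            gain = (w @ r) ** 2            # explained output energy
            if gain > best_gain:
                best, best_gain, best_w = j, gain, w
        selected.append(best)
        Q.append(best_w)
        r -= (best_w @ r) * best_w         # deflate the residual
    return selected

# y is a Gaussian bump centered on the third data point, so that
# center should be selected first.
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.exp(-((X - 2.0) ** 2).sum(axis=1) / (2.0 * 0.5 ** 2))
sel = ols_select_centers(X, y, width=0.5, n_centers=1)
```

Because every candidate is orthogonalized against the columns already chosen, redundant centers contribute little gain and are skipped, which is how the procedure sidesteps the ill-conditioning of random center selection.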
Simplified method for numerical modeling of fiber lasers.
Shtyrina, O V; Yarutkina, I A; Fedoruk, M P
2014-12-29
A simplified numerical approach to modeling of dissipative dispersion-managed fiber lasers is examined. We present a new numerical iteration algorithm for finding the periodic solutions of the system of nonlinear ordinary differential equations describing the intra-cavity dynamics of the dissipative soliton characteristics in dispersion-managed fiber lasers. We demonstrate that results obtained using the simplified model are in good agreement with full numerical modeling based on the corresponding partial differential equations.
A parallel second-order adaptive mesh algorithm for incompressible flow in porous media.
Pau, George S H; Almgren, Ann S; Bell, John B; Lijewski, Michael J
2009-11-28
In this paper, we present a second-order accurate adaptive algorithm for solving multi-phase, incompressible flow in porous media. We assume a multi-phase form of Darcy's law with relative permeabilities given as a function of the phase saturation. The remaining equations express conservation of mass for the fluid constituents. In this setting, the total velocity, defined to be the sum of the phase velocities, is divergence free. The basic integration method is based on a total-velocity splitting approach in which we solve a second-order elliptic pressure equation to obtain a total velocity. This total velocity is then used to recast component conservation equations as nonlinear hyperbolic equations. Our approach to adaptive refinement uses a nested hierarchy of logically rectangular grids with simultaneous refinement of the grids in both space and time. The integration algorithm on the grid hierarchy is a recursive procedure in which coarse grids are advanced in time, fine grids are advanced multiple steps to reach the same time as the coarse grids and the data at different levels are then synchronized. The single-grid algorithm is described briefly, but the emphasis here is on the time-stepping procedure for the adaptive hierarchy. Numerical examples are presented to demonstrate the algorithm's accuracy and convergence properties and to illustrate the behaviour of the method.
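The recursive time-stepping procedure described here, with coarse grids advanced first and fine grids subcycled to catch up, has a simple control-flow skeleton. The sketch below shows only that structure; the integrator and synchronization steps are stubs that record the call order, not the paper's solver:

```python
calls = []  # records the order of grid advances and synchronizations

def step_single_grid(level_name, t, dt):
    """Placeholder for the single-grid integrator (pressure solve + hyperbolic update)."""
    calls.append(("step", level_name, round(t, 10), round(dt, 10)))

def synchronize(coarse, fine):
    """Placeholder for averaging fine data down and applying flux corrections."""
    calls.append(("sync", coarse, fine))

def advance(level, t, dt, names, max_level, ref_ratio=2):
    """Recursive AMR time stepping: take one step on this level, subcycle the
    next finer level ref_ratio times to reach the same time, then synchronize."""
    step_single_grid(names[level], t, dt)
    if level < max_level:
        fine_dt = dt / ref_ratio
        for k in range(ref_ratio):
            advance(level + 1, t + k * fine_dt, fine_dt, names, max_level, ref_ratio)
        synchronize(names[level], names[level + 1])
```

With two levels and a refinement ratio of 2, one coarse step triggers two fine steps followed by a synchronization, matching the recursion described in the abstract.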
A Parallel Second-Order Adaptive Mesh Algorithm for Incompressible Flow in Porous Media
Pau, George Shu Heng; Almgren, Ann S.; Bell, John B.; Lijewski, Michael J.
2008-04-01
In this paper we present a second-order accurate adaptive algorithm for solving multiphase, incompressible flows in porous media. We assume a multiphase form of Darcy's law with relative permeabilities given as a function of the phase saturation. The remaining equations express conservation of mass for the fluid constituents. In this setting the total velocity, defined to be the sum of the phase velocities, is divergence-free. The basic integration method is based on a total-velocity splitting approach in which we solve a second-order elliptic pressure equation to obtain a total velocity. This total velocity is then used to recast component conservation equations as nonlinear hyperbolic equations. Our approach to adaptive refinement uses a nested hierarchy of logically rectangular grids with simultaneous refinement of the grids in both space and time. The integration algorithm on the grid hierarchy is a recursive procedure in which coarse grids are advanced in time, fine grids are advanced multiple steps to reach the same time as the coarse grids and the data at different levels are then synchronized. The single-grid algorithm is described briefly, but the emphasis here is on the time-stepping procedure for the adaptive hierarchy. Numerical examples are presented to demonstrate the algorithm's accuracy and convergence properties and to illustrate the behavior of the method.
Gerchberg-Saxton algorithm applied to a translational-variant optical setup.
Amézquita-Orozco, Ricardo; Mejía-Barbosa, Yobani
2013-08-12
The standard Gerchberg-Saxton (GS) algorithm is normally used to find the phase (measured on two different parallel planes) of a propagating optical field (usually far-field propagation), given that the irradiance information on those planes is known. This is mostly used to calculate the modulation function of a phase mask so that, when illuminated by a plane wave, it produces a known far-field irradiance distribution, or equivalently, to calculate the phase mask to be used in a Fourier optical system so that the desired pattern is obtained on the image plane. There are some extensions of the GS algorithm that can be used when the transformations that describe the optical setup are non-unitary, for example the Yang-Gu algorithm, but these are usually demonstrated on non-unitary translational-invariant optical systems. In this work a practical approach to using the GS algorithm is presented, in which ray tracing together with the Huygens-Fresnel principle is used to obtain the transformations that describe the optical system, so that the calculation can be made when the field is propagated through a translational-variant optical system (TVOS) of arbitrary complexity. Some numerical results are shown for a system where a microscope objective composed of 5 lenses is used. PMID:23938827
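The standard two-plane GS iteration for the translation-invariant (FFT-linked) case, which this paper generalizes, can be sketched as follows; the array sizes, random initialization, and error metric are illustrative:

```python
import numpy as np

def gerchberg_saxton(amp_in, amp_out, n_iter=200, seed=0):
    """Standard GS phase retrieval between two planes linked by an FFT.

    Alternately enforces the known amplitude on each plane while keeping
    the current phase estimate; returns the recovered input-plane phase.
    """
    rng = np.random.default_rng(seed)
    field = amp_in * np.exp(1j * rng.uniform(0, 2 * np.pi, amp_in.shape))
    for _ in range(n_iter):
        far = np.fft.fft2(field)                        # propagate to far field
        far = amp_out * np.exp(1j * np.angle(far))      # impose far-field amplitude
        field = np.fft.ifft2(far)                       # propagate back
        field = amp_in * np.exp(1j * np.angle(field))   # impose input amplitude
    return np.angle(field)

def farfield_error(amp_in, amp_out, phase):
    """RMS mismatch between achieved and target far-field amplitudes."""
    achieved = np.abs(np.fft.fft2(amp_in * np.exp(1j * phase)))
    return np.sqrt(np.mean((achieved - amp_out) ** 2))
```

In the translational-variant setting addressed by the paper, the two `fft2`/`ifft2` calls would be replaced by the forward and adjoint transformations obtained from ray tracing plus the Huygens-Fresnel principle; the alternating-projection structure is unchanged.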
Instructive discussion of an effective block algorithm for baryon-baryon correlators
NASA Astrophysics Data System (ADS)
Nemura, Hidekatsu
2016-10-01
We describe an approach for the efficient calculation of a large number of four-point correlation functions for various baryon-baryon (BB) channels, which are the primary quantities for studying the nuclear and hyperonic nuclear forces from lattice quantum chromodynamics. Using the four-point correlation function of a proton-Λ system as a specific example, we discuss how an effective block algorithm significantly reduces the number of iterations. The effective block algorithm is applied to calculate 52 channels of the four-point correlation functions from nucleon-nucleon to Ξ-Ξ, in order to study the complete set of isospin symmetric BB interactions. The elapsed times measured for hybrid parallel computation on BlueGene/Q demonstrate that the performance of the present algorithm is reasonable for various combinations of the number of OpenMP threads and the number of MPI nodes. The numerical results are compared with the results obtained using the unified contraction algorithm for all computed sites of the 52 four-point correlators.
NASA Astrophysics Data System (ADS)
Machnes, S.; Sander, U.; Glaser, S. J.; de Fouquières, P.; Gruslys, A.; Schirmer, S.; Schulte-Herbrüggen, T.
2011-08-01
For paving the way to novel applications in quantum simulation, computation, and technology, increasingly large quantum systems have to be steered with high precision. It is a typical task amenable to numerical optimal control to turn the time course of pulses, i.e., piecewise constant control amplitudes, iteratively into an optimized shape. Here, we present a comparative study of optimal-control algorithms for a wide range of finite-dimensional applications. We focus on the most commonly used algorithms: GRAPE methods which update all controls concurrently, and Krotov-type methods which do so sequentially. Guidelines for their use are given and open research questions are pointed out. Moreover, we introduce a unifying algorithmic framework, DYNAMO (dynamic optimization platform), designed to provide the quantum-technology community with a convenient matlab-based tool set for optimal control. In addition, it gives researchers in optimal-control techniques a framework for benchmarking and comparing newly proposed algorithms with the state of the art. It allows a mix-and-match approach with various types of gradients, update and step-size methods as well as subspace choices. Open-source code including examples is made available at http://qlib.info.
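A minimal single-qubit example illustrates the "concurrent update" idea behind GRAPE-type methods: every piecewise-constant amplitude is updated in the same iteration. This sketch is not from DYNAMO; for simplicity the gradient is taken by finite differences rather than the analytic GRAPE gradient, and the Hamiltonian, step sizes, and iteration counts are illustrative:

```python
import numpy as np
from scipy.linalg import expm

SX = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)  # Pauli-x

def propagator(u, dt):
    """Product of piecewise-constant propagators for H_k = u_k * sigma_x."""
    U = np.eye(2, dtype=complex)
    for uk in u:
        U = expm(-1j * dt * uk * SX) @ U
    return U

def gate_fidelity(u, dt, target):
    """Phase-insensitive gate overlap |tr(target^dag U)|^2 / d^2."""
    return abs(np.trace(target.conj().T @ propagator(u, dt))) ** 2 / 4.0

def grape_pulse(target, n_slices=10, dt=0.1, iters=200, lr=1.0, eps=1e-6, seed=0):
    """GRAPE-style gradient ascent: all control amplitudes updated concurrently."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(0.1, 0.5, n_slices)    # small random pulse to break the saddle at u = 0
    for _ in range(iters):
        f0 = gate_fidelity(u, dt, target)
        grad = np.empty_like(u)
        for k in range(n_slices):
            up = u.copy()
            up[k] += eps
            grad[k] = (gate_fidelity(up, dt, target) - f0) / eps
        u += lr * grad                     # concurrent update of every slice
    return u, gate_fidelity(u, dt, target)
```

For a target X gate the optimal pulse satisfies sum(u) * dt = pi/2 (up to global phase), which the ascent recovers; a Krotov-type method would instead sweep through the slices sequentially, updating each amplitude before recomputing the propagators.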
Machnes, S.; Sander, U.; Glaser, S. J.; Schulte-Herbrueggen, T.; Fouquieres, P. de; Gruslys, A.; Schirmer, S.
2011-08-15
Numerical modeling of polar mesocyclones generation mechanisms
NASA Astrophysics Data System (ADS)
Sergeev, Dennis; Stepanenko, Victor
2013-04-01
Polar mesocyclones, commonly referred to as polar lows, remain of great interest due to their complicated dynamics. These mesoscale vortices are small, short-lived objects that form over the observation-sparse high-latitude oceans, and therefore their evolution can hardly be observed or predicted numerically. The origin of polar mesoscale cyclones is still a matter of uncertainty, though recent numerical investigations [3] have revealed a strong dependence of polar mesocyclone development on the magnitude of baroclinicity. Nevertheless, most previous studies focused on individual polar lows (so-called case studies), with too many factors affecting them simultaneously. None of the earlier studies suggested a clear picture of polar mesocyclone generation within an idealized experiment, where it is possible to look deeper into each single physical process. The present paper concentrates on the initial triggering mechanism of the polar mesocyclone. As reported by many researchers, some mesocyclones are formed by surface forcing, namely an uneven distribution of heat fluxes. That feature is common at ice boundaries [2], where an intense air stream flows from the cold ice surface to the warm sea surface. The resulting conditions, shallow baroclinicity and strong surface heat fluxes, provide an arising polar mesocyclone with a source of potential energy that is converted into the kinetic energy of the vortex. It is shown in this paper that different surface characteristics, including thermal parameters and, for example, the shape of the ice edge, determine the initial phase of a polar low life cycle. Moreover, it is shown which initial atmospheric state is most favorable for the formation of a new polar mesocyclone or for maintaining and reinforcing an existing one. The study is based on an idealized high-resolution (~2 km) numerical experiment in which baroclinicity, stratification, initial wind profile and disturbance, surface
Saccomani, Maria Pia; Audoly, Stefania; Bellu, Giuseppina; D'Angiò, Leontina
2010-04-01
DAISY (Differential Algebra for Identifiability of SYstems) is a recently developed computer algebra software tool which can be used to automatically check global identifiability of (linear and) nonlinear dynamic models described by differential equations involving polynomial or rational functions. Global identifiability is a fundamental prerequisite for model identification which is important not only for biological or medical systems but also for many physical and engineering systems derived from first principles. Lack of identifiability implies that the parameter estimation techniques may not fail, but any numerical estimates obtained will be meaningless. The software does not require understanding of the underlying mathematical principles and can be used by researchers in applied fields with a minimum of mathematical background. We illustrate the DAISY software by checking the a priori global identifiability of two benchmark nonlinear models taken from the literature. The analysis of these two examples includes comparison with other methods and demonstrates how identifiability analysis is simplified by this tool. We then illustrate the identifiability analysis of two further examples, including discussion of some specific aspects related to the role of observability and knowledge of initial conditions in testing identifiability and to the computational complexity of the software. The main focus of this paper is not on the description of the mathematical background of the algorithm, which has been presented elsewhere, but on illustrating its use and on some of its more interesting features. DAISY is available on the web site http://www.dei.unipd.it/~pia/.
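DAISY itself works symbolically, via differential algebra, and decides global identifiability. By way of contrast, a crude numerical surrogate for *local* identifiability is a rank test of the finite-difference output-sensitivity matrix; the sketch below (example models, tolerances, and the choice of observed state are all illustrative) shows how rank deficiency flags a non-identifiable parameterization:

```python
import numpy as np
from scipy.integrate import solve_ivp

def sensitivity_rank(model, times, theta0, x0, eps=1e-4, tol=1e-4):
    """Numerical rank of the output-sensitivity matrix at theta0.

    The observed output is taken to be the first state. Full column rank
    suggests local identifiability at theta0; rank deficiency means some
    parameter combinations cannot be distinguished from the output.
    """
    def output(theta):
        sol = solve_ivp(lambda t, x: model(t, x, theta), (0.0, times[-1]), x0,
                        t_eval=times, rtol=1e-10, atol=1e-12)
        return sol.y[0]
    base = output(theta0)
    S = np.empty((len(times), len(theta0)))
    for j in range(len(theta0)):
        tp = np.array(theta0, dtype=float)
        tp[j] += eps
        S[:, j] = (output(tp) - base) / eps   # finite-difference sensitivity column
    return np.linalg.matrix_rank(S, tol=tol)

def lumped(t, x, th):
    """Only a + b enters: a and b are not separately identifiable."""
    a, b = th
    return [-(a + b) * x[0]]

def affine(t, x, th):
    """Both a and b are identifiable from x(t)."""
    a, b = th
    return [-a * x[0] + b]
```

Unlike DAISY's symbolic test, this check is local (valid only near theta0) and numerical, but it conveys the same message: an unidentifiable model yields parameter estimates that fit the data yet are meaningless individually.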
Saccomani, Maria Pia; Audoly, Stefania; Bellu, Giuseppina; D’Angiò, Leontina
2010-01-01
PMID:20185123